---
abstract: |
  Population migration is valuable information that supports sound decisions in urban-planning strategy, massive investment, and many other fields. For instance, inter-city migration is posterior evidence of whether a government's constraints on population work, and inter-community immigration might be prior evidence of a real estate price hike. With timely data, it is also possible to compare which city is more attractive to people when cities release different new regulations, and to compare the customers of different real estate development groups: where they come from, and where they will probably go. Unfortunately, such data were not available. In this paper, leveraging the data generated by the positioning team at Didi, we propose a novel approach that monitors population migration in a timely manner, from the community scale to the provincial scale. Migration can be detected within a week; it could be detected faster, but the weekly setting is chosen for statistical purposes. A monitoring system was developed and then applied nationwide in China, and some observations derived from the system are presented in this paper. This new method of migration perception originates from the insight that nowadays people mostly move together with their personal Access Point (AP), also known as a WiFi hotspot. Assuming that the ratio of AP movement to population migration is constant, comparative analysis of population migration becomes feasible. More exact quantitative research could also be done with small-sample studies and model regression. Processing the data involves many steps: eliminating the impact of pseudo-migration APs, for instance pocket WiFi devices and second-hand traded routers; distinguishing the movement of households from the movement of companies; identifying the relocation of an AP by its fingerprint clusters; etc.
author:
- |
  Renyu Zhao\
  \
  \
  G\
  \
  \
date: 29 March 2016
title: 'Monitoring Chinese Population Migration on a Consecutive Weekly Basis from the Intra-city Scale to the Inter-province Scale with Didi's Big Data'
---

Population Migration, Big Data, Access Point, Machine Learning

Introduction
============

Migration plays a key role in the complex process of urbanization and globalization. From states to regions, it impacts politics and economics. With a grasp of migration in China, for instance, one can better understand what is going on along with the process of rapid urbanization, and how the policies enacted by governments work, or why they do not work[@chinamove][@chinaubb]. There are many interesting studies on population migration in China: some compared intra-provincial migration with inter-provincial migration[@intraprovince], some explored the full picture of population migration at the provincial level[@interprovince][@provincial], and some investigated people moving within a city[@intracity]. These studies draw a vivid picture of the process of people moving from one place to another in China, trying to explain and exploit what lies behind the phenomenon, which is helpful in understanding what has happened and what is happening in China. What they have in common, however, is that the data they analyze come exclusively from the national census taken by the State Statistical Bureau of China. The census is carried out only about every 10 years, which makes such analyses dated and unable to catch new phenomena in this rapidly changing period, not to mention that conducting such a national census consumes dramatic resources. Before we exploited Didi's big data, this detailed but lagging source was the only data available for demographers and urban planners[@updata][@provincial].
Recently Baidu revealed a heat map visualizing the Chinese New Year migration[^1]; it is built by consecutively recording the positions of Baidu Map's users and aggregating them for demonstration[@baidu]. This feature is adept at demonstrating such real-time massive migration; unfortunately, when it comes to changes of permanent residence, it lacks proper evidence. On one hand, all traveling and tourism are mixed into the data, and users might break the session, or even uninstall the map application and switch to another, so that no more data is available. What's more, the social-class distribution of Baidu Map's users might vary over time, which makes the statistics unstable. In this work, we propose an efficient and statistically stable way to perceive and record population migration, and demonstrate some findings in which urban planners and investment decision makers might be interested. As aforementioned, people nowadays usually move with their private AP, and APs are universally detectable; therefore, as long as we are able to scan WiFi signals everywhere every day and distinguish home APs from others, the movement of people can be sensed. Figure \[fig:bjsig\] shows the WiFi signal density Didi scanned on a normal day: the deeper the red, the more signals scanned in the corresponding area. Mostly the signals concentrate on the streets, because open space is more suitable for receiving signals. As almost all of the area on the map is covered, most APs can be scanned.

![Demonstration of scanned WiFi signal density in the northern part of downtown Beijing[]{data-label="fig:bjsig"}](bjsig.png){width="8cm"}

The following section introduces in detail how we process the data and how the monitoring system works at Didi; sections of exploited data follow, and in the end some interesting future work is introduced.
How Didi's data is built into the monitoring system
===================================================

The objective of this system is not only to monitor migration, but also to support data mining and further research in different aspects, so it is designed as an open data API at the fundamental level. The system, as shown in Figure \[fig:system\], first cleans the collected AP data to remove noise, then distinguishes family APs from other sources with the help of a classification model built on heterogeneous data, and yields family-to-location pairs. Afterwards, in the analysis process, if for example population migration from January 2016 to March 2016 is of interest, the weekly data in these two months are aggregated by consecutive right-join operations, followed by a left join of the monthly data in time order.

![The framework of Didi's migration monitoring system[]{data-label="fig:system"}](system.png){width="8cm"}

Remove noises
-------------

Pure data cleaning is another topic in the field of data mining; it mainly resolves data-quality problems, so we leave it out of this paper. Some more work remains once clean data is obtained. One such noise source is second-hand trade, which is also observed as an AP moving from one place to another. We therefore monitored the major online second-hand trade platforms (2.taobao.com, www.ganji.com, www.58.com), which cover most second-hand trade in China. It turns out that router-related transactions, accumulated on a monthly basis, amount to less than 0.01% of the movements in our system, so we regard their effect on the overall analysis as negligible: real movements far outnumber the AP movements caused by second-hand trade. When it comes to inter-city migration, second-hand AP trade is rarer still.
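The weekly-to-monthly aggregation described above can be sketched with joins over weekly AP-location snapshots. The following is a minimal illustration with two snapshots; the column names (`ap_id`, `geohash`) and the toy records are our own assumptions, not Didi's actual schema.

```python
import pandas as pd

# Hypothetical weekly snapshots: one row per residential AP with its location.
week1 = pd.DataFrame({"ap_id": ["a", "b", "c"], "geohash": ["g1", "g2", "g3"]})
week2 = pd.DataFrame({"ap_id": ["a", "b", "c"], "geohash": ["g1", "g2", "g4"]})

# A right join keeps APs still observed in the later week, carrying
# each AP's location history along; chaining such joins week by week
# builds the monthly family-to-location pairs.
monthly = week1.merge(week2, on="ap_id", how="right", suffixes=("_w1", "_w2"))

# An AP whose location changed between snapshots is a migration candidate.
moved = monthly[monthly["geohash_w1"] != monthly["geohash_w2"]]
print(moved["ap_id"].tolist())  # ['c']: AP "c" moved from g3 to g4
```

In the real pipeline the snapshots would first pass the noise-removal and residential-AP classification steps described in this section.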
Note that more delicate analysis is required in special cases where only a few samples exist for a certain origin or destination of migration; in such cases, second-hand trade should be examined so as to eliminate its possible impact. The major noise comes from the relocation of companies, which contributes a lot to the observed AP migration. We deal with this problem with a classification model built on heterogeneous data. Plenty of heterogeneous data is ready to serve as features in the classification model. For instance, we are able to assign each AP to a building (as a probability distribution), in some cases even to a specific floor of a building; then, with the help of a reverse-geocoding service, the properties of the building are obtained as features for the classification model.

Model training
--------------

In practice, we built a Gradient Boosting Decision Tree (GBDT) classification model. For the training set $\{(x_i, y_i)\}_{i=1}^n$ and a differentiable loss function $L(y, F(x))$, the gradient boosting method updates the model by iteratively fitting the pseudo-residuals $r$[@gbdt]: $r_{im} = -\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F(x)=F_{m-1}(x)} \quad \mbox{for } i=1,\ldots,n.$ at each step $m$, until convergence. The features are listed in Table \[tab:feature\], and the number of stumps is set to 4, based on cross-validation experiments.

  **Feature description**                  **type**
  ---------------------------------------- ----------
  Property 1 type                          nominal
  Property 1 probability                   float
  ...                                      ...
  Property x type                          nominal
  Property x probability                   float
  history of connected terminals           int
  accumulated connected AP count           int
  simultaneous max connected AP count      int
  connected records day-night percentage   float

  : GBDT model feature and type

There are 4 types of property: 0 - office building, 1 - residential building, 2 - mixture, 3 - uncertain. The day-night percentage is defined as: sum of counts at 10:00-16:00 / sum of counts at 22:00-6:00. \[tab:feature\]

The model is supervised, and its target is to determine whether an AP is residential or not. A core task is therefore to label data for training. In our case we designed a method that uses user sessions to batch-label training data. Designating residential APs as positive samples and non-residential APs as negative samples, we split user sessions by time and location so as to identify positive and negative samples. For example, if an AP is connected by a terminal that called a car service after 9 pm and left the next morning, it is labeled positive; if it is connected after 9 am until leaving in the evening, it is labeled negative. Feature data are then aligned with the labels and randomly shuffled. The top 2 most likely buildings are selected for each AP (i.e., x=2 in the feature table). In this way the training set $\{(x_i, y_i)\}_{i=1}^n$ is ready. The CART algorithm is employed to build the stumps, and a gradually decreasing learning rate $\gamma$ is introduced as a regularization term to avoid over-fitting. The cross-validation result shows a precision of around 97.0%.

AP print conformity checking algorithm
--------------------------------------

Normally we judge whether an AP has moved by comparing the conformity of its fingerprints at two time-stamps.
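The pseudo-residual update used in the model training subsection can be illustrated with a minimal from-scratch boosting loop that fits depth-one stumps to the residuals of a squared loss (for which $r = y - F(x)$). The single feature and the labeling rule below are synthetic stand-ins for the paper's features, not Didi's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-in for one Table [tab:feature] feature: the day-night
# percentage; residential APs tend to have few daytime connections.
x = rng.uniform(0.0, 3.0, n)
y = (x < 1.0).astype(float)  # 1 = residential, 0 = non-residential

def fit_stump(x, r):
    """One-split regression stump minimizing squared error on residuals r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for t in np.quantile(x, np.linspace(0.01, 0.99, 99)):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

F = np.full(n, y.mean())  # F_0: constant initial model
gamma = 0.3               # learning rate (shrinkage) as regularization
for m in range(50):
    r = y - F             # pseudo-residuals of the squared loss
    t, cl, cr = fit_stump(x, r)
    F += gamma * np.where(x <= t, cl, cr)

accuracy = ((F > 0.5) == (y > 0.5)).mean()
print(accuracy > 0.95)
```

A production model would, as in the paper, use CART to grow the stumps over all features and pick the threshold on the boosted score via cross-validation.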
The most frequently used algorithms include the Pearson correlation coefficient, cosine similarity, the Mahalanobis distance, etc.; some measure distance, which is the counterpart of similarity, and some rectify the data for normalization purposes. In this case we turn the prints into simple vectors (e.g. $A$, $B$) and calculate the cosine similarity. In practice, most moved APs end up far from their original place, so the similarity naturally becomes zero and the threshold is easy to set.

$$\text{sim} = \cos(\theta) = {\mathbf{A} \cdot \mathbf{B} \over \|\mathbf{A}\| \|\mathbf{B}\|} = \frac{ \sum\limits_{i=1}^{n}{A_i B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{A_i^2}} \sqrt{\sum\limits_{i=1}^{n}{B_i^2}} }$$

Multi-scale migration observation
=================================

In this section, we present several results yielded by the migration data between September and December 2015, generated by the aforementioned monitoring system. More than a million migrations were gathered in this period.

Inter-provincial and Intra-provincial migration
-----------------------------------------------

There are some noticeable points in inter-provincial and intra-provincial migration:

- Most provinces face net emigration; the accumulation phenomenon is significant.
- Migration flows between cities are interactive: flows in both directions are of comparable magnitude.
- The amount of intra-province migration also conforms with the economic activity of the area.
- Beijing is special among the economic zones.

Figure \[fig:prio\] shows the net immigration (immigration - emigration) density of each province, and the top 6 and bottom 4 provinces are regularized and listed in Table \[tab:prio\]. In the figure, the denser the red, the higher the net immigration count; the denser the blue, the higher the net emigration count.
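The fingerprint conformity check defined by the cosine-similarity formula above can be sketched as follows. The fingerprint vectors are hypothetical signal strengths of neighbouring APs aligned on a common neighbour list (0 meaning a neighbour was not seen), not real scan data.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical fingerprints at two time-stamps.
before = np.array([0.9, 0.7, 0.4, 0.0, 0.0])
same   = np.array([0.8, 0.6, 0.5, 0.0, 0.0])   # same place, later scan
moved  = np.array([0.0, 0.0, 0.0, 0.8, 0.6])   # disjoint neighbour set

print(round(cosine_similarity(before, same), 2))   # 0.99: AP did not move
print(cosine_similarity(before, moved))            # 0.0: AP has moved
```

As noted above, a relocated AP usually shares no neighbours with its old fingerprint, so the similarity collapses to zero and a loose threshold suffices.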
![Net immigration density[]{data-label="fig:prio"}](prio.png){width="8cm"}

  **Province**   **Regularized net immigration count**
  -------------- ---------------------------------------
  GUANGDONG      1.00
  BEIJING        0.69
  JIANGSU        0.58
  ZHEJIANG       0.22
  SHANGHAI       0.20
  LIAONING       0.19
  GUANGXI        -0.29
  HEBEI          -0.32
  HUNAN          -0.37
  SICHUAN        -0.51

  : Top 6 and bottom 4 provinces in net immigration

\[tab:prio\]

Observing the figure, it is clear that the 80/20 rule fits immigration/emigration as well as the migration counts by destination: it looks as if people from all provinces are moving to three provinces: Guangdong, Beijing and Jiangsu. As to cities, Table \[tab:ctio\] shows the same situation as the provincial data: 30.2% of the cities have positive net immigration, and the listed top 8 cities account for 72.4% of all positive net immigration counts. This table might be a perfect explanation of the rocketing real estate prices in Shenzhen and Suzhou during the last half year; all other cities are far behind in net population immigration. As for Beijing, the table clearly shows that people are still flowing in, which runs against the urban plan enacted by the government. However, this data is a static snapshot; it is possible that the policies actually work, only slowly. Continuous observation is therefore necessary: if the policies to cut down the capital's population work, we should see a modest periodic decrease in the net immigration count. The next section demonstrates the use of periodical monitoring data.
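Net immigration counts like those in the tables can be derived from raw origin-destination records. The sketch below uses made-up records and an illustrative regularization by the maximum net count, mirroring how the "regularized" columns are presented.

```python
import pandas as pd

# Hypothetical migration records: one row per detected AP move.
moves = pd.DataFrame({
    "origin":      ["HEBEI", "HUNAN", "SICHUAN", "BEIJING", "GUANGXI"],
    "destination": ["BEIJING", "GUANGDONG", "GUANGDONG", "GUANGDONG", "BEIJING"],
})

immigration = moves.groupby("destination").size()
emigration = moves.groupby("origin").size()

# Net immigration = immigration - emigration over the union of regions.
net = immigration.sub(emigration, fill_value=0)

# Regularize by the largest net immigration count, as in Table [tab:prio].
regularized = net / net.max()
print(regularized.sort_values(ascending=False))
```

With real data the same group-and-subtract step applies at the province, city, district, or community scale.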
  **City**   **Regularized net immigration**
  ---------- ---------------------------------
  SHENZHEN   1.00
  SUZHOU     0.91
  BEIJING    0.75
  CHENGDU    0.46
  HANGZHOU   0.40
  FOSHAN     0.37
  WUHAN      0.30
  SHANGHAI   0.22
  NANPING    -0.08
  SHAOYANG   -0.08
  MEISHAN    -0.09
  ZIGONG     -0.09
  NEIJIANG   -0.13
  NANCHONG   -0.16

  : Top 8 and bottom 6 cities in net immigration

\[tab:ctio\]

Although Beijing ranks second in net immigration among all provinces and cities, its counts of immigration, emigration and intra-provincial migration are of the same magnitude as each other, and nearly an order of magnitude larger than the net immigration count. The destinations of people moving from Beijing and the origins of people moving to Beijing are also consistent, as shown in Table \[tab:bjio\].

  **Source**   **Cnt.**   **Destination**   **Cnt.**
  ------------ ---------- ----------------- ----------
  SHANGHAI     1.00       SHANGHAI          1.00
  SHENZHEN     0.71       CHENGDU           0.68
  CHENGDU      0.70       LANGFANG          0.67
  TIANJIN      0.63       TIANJIN           0.66
  HANGZHOU     0.57       SHENZHEN          0.63
  SUZHOU       0.54       HANGZHOU          0.60
  WUHAN        0.51       GUANGZHOU         0.53
  LANGFANG     0.48       WUHAN             0.50
  GUANGZHOU    0.45       SUZHOU            0.42

  : Beijing's top 9 immigration sources and top 9 emigration destinations

\[tab:bjio\]

Intra-province migration can also reflect economic vitality; Figure \[fig:intropr\] shows the intra-province migration density.

![Intra-province migration density[]{data-label="fig:intropr"}](intrapr.png){width="8cm"}

If nearby cities are combined into integrated groups, three groups dominate in China: the Beijing Area (Beijing and Tianjin in this paper), the Yangtze River Delta (Shanghai, Suzhou, Hangzhou in this paper), and the Pearl River Delta (Guangzhou, Shenzhen, Foshan, Dongguan in this paper). Table \[tab:area\] clearly shows that most migration happens between these economic giants. Another significant phenomenon is that although interaction within the Beijing Area is the least active, the city of Beijing individually bears almost all of the inflow to this area.
  **Direction**                             **Cnt.**
  ----------------------------------------- ----------
  Intra- Pearl River Delta                  1.00
  Intra- Yangtze River Delta                0.52
  Intra- Beijing Area                       0.27
  Pearl River Delta - Yangtze River Delta   0.22
  Yangtze River Delta - Pearl River Delta   0.22
  Yangtze River Delta - Beijing Area        0.21
  Beijing Area - Yangtze River Delta        0.20
  Pearl River Delta - Beijing Area          0.15
  Beijing Area - Pearl River Delta          0.15

  : Top 9 migration counts with city groups integrated

\[tab:area\]

Intra-city level migration
--------------------------

When it comes to intra-city analysis, we study intra-district migration as well as the process of people moving from one community to another. We still use the joined migration data from September to December 2015: data in this period is fresh, covers more than a bi-monthly comparison, and avoids both the summer and winter holidays. Table \[tab:inbj\] shows the top 5 net immigration destination districts and the top 5 net emigration origin districts in Beijing. Haidian District attracted the most people in the 3-month period. The system informs us that although the government's policy encourages a transition to the southeast, to connect more tightly with Tianjin and Hebei, people are moving in the opposite direction. This, again, calls for consecutive monitoring to confirm or falsify.

  **District**   **Regularized migration count**
  -------------- ---------------------------------
  HAIDIAN        1.00
  CHAOYANG       0.58
  FENGTAI        0.21
  XICHENG        0.20
  DONGCHENG      0.10
  CHANGPING      -0.22
  SHIJINGSHAN    -0.24
  SHUNYI         -0.33
  DAXING         -0.35
  TONGZHOU       -0.72

  : Top 5 net immigration destination and top 5 net emigration origin districts in Beijing

\[tab:inbj\]

In the last part of this section, we present cases of certain communities in Beijing, to see where the new inhabitants come from and where the former residents leave for.
It is noticeable that at this scale inter-city moves still account for most of the migration; we select moves within the capital city for a better presentation of inter-community moving processes. Figure \[fig:shangdi\] shows intra-city migration whose destination or origin is Shangdi, a combination of three communities (Shangdi Dongli, Shangdi Xili, Shangdi Jiayuan) in Haidian District. It is a relatively newly developed area, and many internet companies are located not far from this place. Some people moved from the same origin, some to the same destination; in total there are more than thirty cases, both inbound and outbound, in this 3-month period. In only very few cases is the counterpart located downtown; most counterpart places are rural. Although both graphs look like radial patterns, close examination reveals that they differ in their centers of gravity: the emigration destinations lie notably to the northwest of the immigration origins, matching the phenomenon in the larger picture that people are moving from southeast to northwest.

![Immigration to/Emigration from Shangdi[]{data-label="fig:shangdi"}](shangdiv.png){width="8cm"}

Figure \[fig:ganjiakou\] is another example. In this case our target is Ganjiakou, an old central area in downtown Beijing where some state ministries are located, along with many residential communities built in the last century. The inbound and outbound transitions are in typical contrast with the Shangdi example: immigration comes from the northwest, and emigration is bound for the southeast. This is in accordance with the government's action of moving city administrations and municipal offices to the southeast.

![Immigration to/Emigration from Ganjiakou[]{data-label="fig:ganjiakou"}](ganjiakouv.png){width="8cm"}

Consecutive periodical migration observation
============================================

Static data is better at demonstration and mostly yields posterior knowledge.
Consecutive data not only show dynamic changes, but also provide evidence for predicting the future; many time-series machine learning algorithms are at hand for this purpose. If the target is to monitor population migration, transition data generated from real-time positioning faces a challenge: it cannot distinguish business travel, tourism and other activities, and it is individual-based, whereas families play the principal part in migration. Our system fits all the needs of this purpose. In this paper we take the net immigration count in Beijing as an example to illustrate consecutive monthly migration observation. In Figure \[fig:period\], the red line is the monthly net immigration count in Beijing, with the net immigration of October 2015 as baseline and ten thousand as the unit of magnitude; the blue line is the monthly total migration count as background; the purple line is the ratio of the two.

![Net immigration in Beijing for six consecutive months[]{data-label="fig:period"}](period.png){width="8cm"}

The sudden rise and fall in the chart happen to coincide with the last wave of the real estate price hike since the beginning of 2016, as well as the calm-down after the government regulation of March 2016. It is worth pondering why these two series fit each other with no lag in either direction, since, as common sense suggests, many real estate transactions are not purely for habitation. With our data, a further study of this phenomenon will soon be feasible.

Conclusions and future work
===========================

In this paper, we demonstrated a consecutive migration monitoring system spanning the inter-province scale to the intra-city scale. The data is updated weekly in accumulation, and several machine learning models are built and trained to separate population migration from noise. In observing data drawn from the system, many interesting phenomena are witnessed.
Some of the transitions fit our intuition, while many cases run against the policy makers' will. At the macro scale, with the help of this system, the government would be able to make better regulations, plan cities better, and examine whether the original purposes are realized. At the micro scale, the system helps people choose better habitation and make better investment choices. Still, the power of the system is far from fully exploited, and there are many directions for future work. On the machine learning side, prediction models would be an effective way to help make decisions, and deviations from predictions can be exploited to find outliers caused by certain events. A classification model identifying whether inhabitants own or rent their home would contribute features to real-estate-market-related models. On the data processing side, further study should be based on economic zones, no matter whether as small as a combination of communities or as big as a delta zone, rather than on administrative divisions. The data can also be employed to examine urban planning theories on target cities, for instance the shrinking city theory, the central place theory, etc. [1]{} S. M. Bao, O. B. Bodvarsson, J. W. Hou, and Y. Zhao. Interprovincial migration in china: the effects of investment and migrant networks. 2007. J. Elith, J. R. Leathwick, and T. Hastie. A working guide to boosted regression trees. , 77(4):802–813, 2008. C. C. Fan. Economic opportunities and internal migration: A case study of guangdong province, china\*. , 48(1):28–45, 1996. C. C. Fan. Interprovincial migration, population redistribution, and regional development in china: 1990 and 2000 census comparisons. , 57(2):295–311, 2005. C. C. Fan. China on the move: Migration, the state, and the household. , 196:924–956, 2008. Z. Liang and Z. Ma. China’s floating population: new evidence from the 2000 census. , 30(3):467–488, 2004. S. Martinuzzi, W. A. Gould, and O. M. R. Gonz[á]{}lez.
Land development, land use, and urban sprawl in puerto rico integrating remote sensing and population census data. , 79(3):288–297, 2007. S. Yusuf and T. Saich. . World Bank Publications, 2008. J. Zhou, H. Pei, and H. Wu. Early warning of human crowds based on query data from baidu map: Analysis based on shanghai stampede. , 2016. [^1]: http://qianxi.baidu.com/
---
abstract: 'A measurement of beauty hadron production at mid-rapidity in proton-lead collisions at a nucleon-nucleon centre-of-mass energy $\sqrt{s_{\rm NN}}=5.02$ TeV is presented. The semi-inclusive decay channel of beauty hadrons into J/$\psi$ is considered, where the J/$\psi$ mesons are reconstructed in the dielectron decay channel at mid-rapidity down to transverse momenta of 1.3 GeV/$c$. The $\Pbottom\APbottom$ production cross section at mid-rapidity, ${\mathrm{d}}\sigma_{\Pbeauty\APbeauty}/{\mathrm{d}}y$, and the total cross section extrapolated over the full phase space, $\sigma_{\rm b\bar{b}}$, are obtained. This measurement is combined with results on inclusive J/$\psi$ production to determine the prompt J/$\psi$ cross sections. The results in p-Pb collisions are then scaled to expectations from pp collisions at the same centre-of-mass energy to derive the nuclear modification factor $R_{\rm pPb}$, and compared to models to study possible nuclear modifications of the J/$\psi$ production induced by cold nuclear matter effects. $R_{\rm pPb}$ is found to be smaller than unity at low [$p_{\rm T}$]{} for both J/$\psi$ coming from beauty hadron decays and prompt J/$\psi$.'
bibliography:
- 'biblio.bib'
title: 'Prompt and non-prompt J/$\psi$ production and nuclear modification at mid-rapidity in p-Pb collisions at ${\bf \sqrt{{\it s}_{\text{NN}}}= 5.02}$ TeV'
---

Acknowledgements {#acknowledgements .unnumbered}
================

The ALICE Collaboration {#app:collab}
=======================
--- abstract: 'Increasingly better observations of resolved protoplanetary disks show a wide range of conditions in which planets can be formed. Many transitional disks show gaps in their radial density structure, which are usually interpreted as signatures of planets. It has also been suggested that observed inhomogeneities in transitional disks are indicative of dust traps which may help the process of planet formation. However, it is yet to be seen if the configuration of fully evolved exoplanetary systems can yield information about the later stages of their primordial disks. We use synthetic exoplanet population data from Monte Carlo simulations of systems forming under different density perturbation conditions, which are based on current observations of transitional disks. The simulations use a core instability, oligarchic growth, dust trap analytical model that has been benchmarked against exoplanetary populations.' --- Introduction ============ Planet-forming disks can have either smooth or density perturbed profiles ([@Marel_etal15 van der Marel et al. 2015]). Gas-depleted, density perturbed disks are often called transitional disks. Gaps, cavities, or radial perturbations in such disks sometimes show dust traps in which planet formation may be taking place. Such dust traps may be caused by pressure bumps, which are often theorized as a consequence of the presence of an already-formed planet in the disk ([@Pinilla_etal11 Pinilla et al. 2011]). In this work, we are interested in the impact of disk density perturbations in planetary populations while making no assumptions about the exact mechanism that causes such inhomogeneities in the disk. In order to explore the link between transitional disks and exoplanetary systems, we intend to compare synthetic and observed populations of exoplanetary systems. However, instead of looking at individual cases ([@Raymond_2018 Raymond et al. 
2018]) we take a Bayesian inference approach to reconstruct probability distributions of general properties of 3000+ simulated synthetic planetary systems. We thus study the effect of radial density perturbations in the disk structure on the formation of exoplanetary systems. For many transitional disks, a radial density perturbation can be described as a succession of over-dense and under-dense regions which appear as the radial distance $r$ changes ([@Pinilla_etal11 Pinilla et al. 2011]), $$\label{ec:sigmap} \Sigma_p(r)=\Sigma(r)\left(1+A\cos\left(2\pi\frac{r}{fH(r)}\right)\right)\ .$$ Here $\Sigma(r)$ is the surface density distribution ([@Miguel_etal11 Miguel et al. 2011]), $A$ is the amplitude of the perturbation, $f$ is the length scale of the perturbation and $H(r)$ is the scale height of the disk. Here we consider $A=0$ for smooth disks, and $A=0.3$ for transitional disks. Synthetic planetary systems =========================== Population synthesis models are often used for modeling individual exoplanetary systems ([@Raymond_2018 Raymond et al. 2018]) in order to estimate properties of individual planets in the system. We extend this method for 3000 synthetic planetary systems formed in smooth and transitional disks (following the perturbation recipe described above). Each system has initial conditions drawn from prior probability distributions on stellar mass, disk mass and radial extent, stability, metallicity, and gas dissipation timescale from [@Miguel_etal11 Miguel et al. (2011)]. This planet population synthesis framework is also described as part of the review in [@Benz_etal14 Benz et al. (2014)]. The result of each simulation is a system with orbital data like planetary semi-major axis and mass (solid+gas). We calculate consolidated quantities that summarize general properties of each simulated system. 
Thus, we use quantities like the number of terrestrial and giant planets, total terrestrial planetary mass, average terrestrial planet mass, total planetary mass/disk mass ratio, and what we refer to here as the center of mass. This center of mass is not the barycenter of the system but rather the first moment of the mass distribution of planets with respect to their semi-major axes. With the consolidated results per system, we can reconstruct the posterior probability distribution of the simulated quantities mentioned above. This reconstructed posterior can be used to make educated predictions for existing exoplanetary systems based on known properties like stellar mass, metallicity, planetary masses and semi-major axis distributions, etc. Results ======= As an initial benchmark, we looked at the resulting distributions of the location of the center of mass and the total planetary mass/disk mass ratio for systems formed in smooth disks. Figure \[fig:check1\] shows that systems with giant planets are spread further inwards (closer to the star) than systems that only formed terrestrial planets, due to giant planet migration. On the other hand, more of the original disk mass goes into forming planets in systems with giant planets than in terrestrial-planet-only systems (Figure \[fig:check2\]). Both of these results are expected from exoplanet and planet-forming disk observations. The resulting synthetic planetary systems were divided between systems with giant planets (20-25% of all systems) and systems with terrestrial planets only. We benchmarked our results on the distribution of planetary systems with respect to metallicity. Figure \[fig:Metalhist\] shows that systems with giant planets tend to form in metal-rich systems, both in our simulations and in observations from `exoplanet.eu`. Comparatively, most exoplanetary systems discovered to date tend to be distributed uniformly around the solar value in the HARPS 2011 catalog ([@Mayor_etal11 Mayor et al.
2011]), which we used as a prior for our simulations. For the same initial disk mass, transitional disks form more systems with giant planets than smooth disks (Figure \[fig:SvsT\_1\]). This is due to transitional disks having over-dense regions that favor planet growth. Figure \[fig:NGP\] shows the distribution of stellar mass vs. center of mass for synthetic and observed planetary systems. The 1$\sigma$ and 2$\sigma$ contours show that there is a significant overlap in parameter space between synthetic and observed planetary systems. For exoplanetary systems with a low center of mass the overlap breaks down, which is likely a result of observational bias. ![Marginalized distribution of exoplanetary systems metallicities for synthetic systems with giant planets (green), observed systems with giant planets (red) and from the prior used in the simulation (blue) from [@Mayor_etal11 Mayor et al. (2011)]. The distributions are approximated by a Kernel Density Estimation (solid lines).[]{data-label="fig:Metalhist"}](metalhist.png){width="60.00000%"} Conclusions =========== Our comparison between smooth and transitional disks shows that transitional disks favor the formation of giant planets at lower disk masses than smooth disks. Benchmarking of the results of our simulations show a very good parameter space overlap between synthetic and observed exoplanetary systems. The method of approximating a posterior probability distribution for exoplanetary parameters from our simulations can be used to make educated predictions that direct future surveys of observed exoplanetary systems. ![Marginalized posterior distribution of parent disk masses for synthetic systems with giant planets for smooth and transitional disks. The dashed black lines show lower and upper estimations for the minimum mass solar nebula. The distributions are approximated by a Kernel Density Estimation (solid lines).[]{data-label="fig:SvsT_1"}](main_gp.png){width="70.00000%"} ![Center of mass vs. 
stellar mass for observed systems (red) and synthetic systems (blue) formed in smooth disks (left) and in transitional disks (right). The 1$\sigma$ (solid line) and 2$\sigma$ (dashed line) contours were obtained using a Kernel Density Estimation for each set of data points.[]{data-label="fig:NGP"}](GPN1.png){width="80.00000%"} 2014, in H. Beuther, R.S. Klessen, C.P. Dullemond, & T. Henning (eds.) *Protostars and Planets VI*, Univ. of Arizona Press, 944 [Hughes, A.M., Duchêne, G., & Matthews, B.C.]{} 2018, *Annu. Rev. Astron. Astrophys.*, 56, 541 [Mayor, M., Marmier, M., Lovis, C., et al.]{} 2011, *arXiv:1109.2497* [Miguel, Y., Guilera, O.M., & Brunini, A.]{} 2011, *MNRAS*, 417, 314 [Pinilla, P., Birnstiel, T., Ricci, L., Dullemond, C.P., Uribe, A.L., Testi, L., & Natta, A.]{} 2012, *A&A*, 538, A114 [Pinilla, P., Tazzari, M., Pascucci, I., et al.]{} 2018, *ApJ*, 859, 32 [Pinilla, P., van der Marel, N., Pérez, L.M., van Dishoeck, E.F., Andrews, S., Birnstiel, T., Herczeg, G., Pontoppidan, K.M., & van Kempen, T.]{} 2015, *A&A*, 584, A16 [Raymond, S.N., Boulet, T., Izidoro, A., Esteves, L., & Bitsch, B.]{} 2018, *MNRAS*, 479, L81 [van der Marel, N., van Dishoeck, E.F., Bruderer, S., Andrews, S.M., Pontoppidan, K.M., Herczeg, G.J., van Kempen, T., & Miotello, A.]{} 2016, *A&A*, 585, A58 [van der Marel, N., van Dishoeck, E.F., Bruderer, S., Pérez, L., & Isella, A.]{} 2015, *A&A*, 579, A106
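The figure captions above describe distributions "approximated by a Kernel Density Estimation (solid lines)". As an illustrative sketch only (the paper does not state the kernel or bandwidth; Gaussian kernels and a Scott's-rule bandwidth are assumptions), a one-dimensional KDE can be written as:

```python
import math
import random

def gaussian_kde(samples, x):
    """1D kernel density estimate at point x with Gaussian kernels.

    Bandwidth: Scott's rule h = std * n**(-1/5) -- an assumption, since the
    paper does not state which bandwidth rule was used."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    h = std * n ** (-1 / 5)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# Smooth a toy sample, e.g. mock metallicities scattered around the solar value
random.seed(0)
mock = [random.gauss(0.0, 0.2) for _ in range(500)]
print(gaussian_kde(mock, 0.0))  # density near the peak
```

In practice a library routine such as `scipy.stats.gaussian_kde` does the same job with more robust bandwidth handling.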
--- abstract: 'We propose and theoretically investigate an unambiguous Bell measurement of atomic qubits assisted by multiphoton states. The atoms interact resonantly with the electromagnetic field inside two spatially separated optical cavities in a Ramsey-type interaction sequence. The qubit states are postselected by measuring the photonic states inside the resonators. We show that if one is able to project the photonic field onto two coherent states on opposite sides of phase space, an unambiguous Bell measurement can be implemented. Thus our proposal may provide a core element for future components of quantum information technology such as a quantum repeater based on coherent multiphoton states, atomic qubits and matter-field interaction.' author: - 'Juan Mauricio Torres [^1]' - József Zsolt Bernád - Gernot Alber title: Unambiguous atomic Bell measurement assisted by multiphoton states --- Introduction {#Intro} ============ Establishing well-controlled entanglement between spatially separated quantum systems is essential for quantum communication [@Briegel98; @Duer99]. At its core a quantum repeater employs entanglement which is generated and distributed among intermediary nodes positioned not too far from each other. Entanglement purification [@Bennett; @Deutsch] enables the distillation of a high-fidelity state from a large number of low-fidelity entangled pairs, and with the help of entanglement swapping procedures [@Zuk] the two end points of a repeater become entangled. There are many different implementation proposals for quantum repeaters, utilizing completely different systems and entanglement distribution protocols [@Sangouard]. A promising approach towards these schemes is to require some compatibility with existing optical communication networks. The proposal of van Loock et al. [@vanLoock1; @vanLoock2; @vanLoock3; @vanLoock4] is such an approach, in which the repeater scheme employs coherent multiphoton states.
These proposals assume a dispersive interaction between the atomic qubits and a single mode of the radiation field, which imposes limitations on the photonic postselection. It was shown that these limitations can be overcome in the case of resonant atom-field interactions [@Bernad1; @Bernad2], and this was demonstrated for one building block of a repeater, namely entanglement generation between spatially separated neighbouring nodes. A natural extension of this approach is to propose resonant atom-field interaction based schemes also for the other building blocks. In the case of entanglement swapping a complete atomic Bell measurement is required. Bell measurements also play a central role in entanglement-assisted quantum teleportation [@Bennett93] and in superdense coding [@Bennett92]. In the case of photonic qubits theoretical proposals [@Knill; @Pittman; @Munro] have been made and experimental realizations have already been carried out [@Kim; @Schuck]. However, for atomic qubits there are still experimental difficulties which hinder implementations of complete Bell measurements, in which projections onto all four Bell states can be accomplished. There exist experimental proposals that rely on the application of a controlled NOT gate [@Pellizzari; @Lloyd]. These proposals have the drawback that experimental implementations of two-qubit gates still struggle to attain high fidelity [@Schmidt-Kaler; @Isenhower; @Noelleke]. This implies that the fidelity of the generated Bell states is also affected [@Isenhower]. A proposal focusing specifically on a non-invasive atomic Bell measurement with high fidelity is still missing. In previous work we introduced a protocol to project onto one Bell state with high fidelity [@Torres2014], based on atomic qubits which interact sequentially with coherent field states prepared in two cavities. The field states emerging after the interactions are postselected by balanced homodyne photodetection.
In this paper we expand our previous work to accomplish the projection onto all four Bell states provided the protocol is successful. Thus we introduce an unambiguous Bell measurement of two atomic qubits with the help of coherent multiphoton field states. We demonstrate that the ability to implement field projections onto two coherent states on opposite sides of phase space makes it possible to realize an unambiguous Bell measurement. Our protocol has a finite probability of error depending on the initial states of the atoms. This is due to the imperfect overlap of the field contributions with coherent states. Nevertheless, it is an unambiguous protocol, as there are four successful events that lead to postselection of four different Bell states. The scheme is based on basic properties of the two-atom Tavis-Cummings model [@Tavis] and on resonant matter-field interactions which are already under experimental investigation [@Casabone1; @Casabone2; @Reimann; @Nussmann2005]. These considerations make our scheme compatible with a quantum repeater or a quantum relay based on coherent multiphoton states, atomic qubits and resonant matter-field interaction. Our proposal demonstrates that scenarios involving the two-atom Tavis-Cummings model are rich enough to enable future Bell measurement implementations. The paper is organized as follows. In Sec. \[Model\] we introduce the theoretical model and analyse the solutions of the field state with the aid of the Wigner function in phase space. Furthermore, we provide approximate solutions of the global time dependent state vector that facilitate the analysis of the system. In Sec. \[Bell\] we present a scheme to perform an unambiguous Bell measurement provided one is able to project a single-mode photonic field onto coherent states. In Sec. \[Discussion\] we provide a numerical analysis of the fidelity of the projected Bell states and discuss general features of the protocol.
Details of our calculations are presented in Appendices \[appendix\] and \[appendixoverlap\]. Theoretical model {#Model} ================= Basic equations --------------- In this section we recapitulate basic features of the two-atom Tavis-Cummings model [@Tavis]. This model has been considered previously to study the dynamics of entanglement [@Torres2014; @Jarvis; @Rodrigues; @Kim2002; @Tessier]. The model describes the interaction between two atoms $A$ and $B$ and a single mode of the radiation field with frequency $\omega$. The two identical atoms have ground states ${\ensuremath{| {0} \rangle}}_i$ and excited states ${\ensuremath{| {1} \rangle}}_i$ ($i \in\{A,B\}$) separated by an energy difference of $\hbar \omega$. In the dipole and rotating-wave approximation the Hamiltonian in the interaction picture is given by $$\begin{aligned} \hat{H}&= \hbar g\sum_{i=A,B}\left( \hat{\sigma}^+_i\hat{a}+ \hat{\sigma}^-_i\hat{a}^\dagger\right) \label{Hamilton}\end{aligned}$$ where $\hat{\sigma}^+_i={\ensuremath{| {1} \rangle}}{\ensuremath{\langle {0} |}}_i$ and $\hat{\sigma}^-_i={\ensuremath{| {0} \rangle}}{\ensuremath{\langle {1} |}}_i$ are the atomic raising and lowering operators ($i \in\{A,B\}$), and $\hat{a}$ ($\hat{a}^\dagger$) is the annihilation (creation) operator of the single mode field. The coupling between the atoms and the field is characterized by the vacuum Rabi frequency $2g$. The time evolution of the system can be evaluated for an initial pure state as $${\ensuremath{| {\Psi_t} \rangle}}=e^{-i\hat Ht/\hbar}{\ensuremath{| {\Psi_0} \rangle}}. 
\label{EPsi}$$ We are interested in the case where the atoms and the cavity are assumed to be prepared in the product state $${\ensuremath{| {\Psi_0} \rangle}}= \Big(c^-{\ensuremath{| {\Psi^-} \rangle}}+ c^+{\ensuremath{| {\Psi^+} \rangle}}+d_\phi^-{\ensuremath{| {\Phi^-_\phi} \rangle}}+ d_\phi^+{\ensuremath{| {\Phi^+_\phi} \rangle}} \Big) {\ensuremath{| {\alpha} \rangle}}, \label{initial}$$ with the radiation field considered initially in a coherent state [@Glauber; @Perelomov] $$\begin{aligned} {\ensuremath{| {\alpha} \rangle}}=\sum_{n=0}^\infty e^{-\frac{|\alpha|^2}{2}} \frac{\alpha^n}{\sqrt{n!}} {\ensuremath{| {n} \rangle}}, \quad\alpha=\sqrt{\overline n}\,e^{i\phi}, \label{coherentstate}\end{aligned}$$ with mean photon number $\bar n$ and photon-number states ${\ensuremath{| {n} \rangle}}$. The parameters $c^\pm$ and $d_\phi^\pm$ are the initial probability amplitudes of the orthonormal Bell states $$\begin{aligned} {\ensuremath{| {\Psi^\pm} \rangle}}&=\tfrac{1}{\sqrt2}\left({\ensuremath{| {0,1} \rangle}}\pm{\ensuremath{| {1,0} \rangle}}\right), \nonumber\\ {\ensuremath{| {\Phi^\pm_\phi} \rangle}}&=\tfrac{1}{\sqrt2}\left( e^{-i\phi}{\ensuremath{| {0,0} \rangle}}\pm e^{i\phi}{\ensuremath{| {1,1} \rangle}} \right), \label{bellstates}\end{aligned}$$ with the atomic states ${\ensuremath{| {i,j} \rangle}}={\ensuremath{| {i} \rangle}}_A{\ensuremath{| {j} \rangle}}_B$ ($i,j \in \{0,1\}$). We have chosen an atomic orthonormal basis containing the states ${\ensuremath{| {\Psi^\pm} \rangle}}$ as the state ${\ensuremath{| {\Psi^-} \rangle}}{\ensuremath{| {n} \rangle}}$ is an invariant state of the system. This is explained in Appendix \[appendix\] where we present the full solution of the temporal state vector. The other two Bell states ${\ensuremath{| {\Phi_\phi^\pm} \rangle}}$ depend on the initial phase $e^{i\phi}$ of the coherent state. 
They appear naturally in the Tavis-Cummings model due to the exchange of excitations between atoms and cavity, and are involved in an approximate solution of the state vector that facilitates the analysis of the dynamics. Before showing the detailed form of our solution, let us give an overview of the dynamical features that impose relevant time scales in the system. Collapse and revival phenomena ------------------------------ The collapse and revival phenomena of the Jaynes-Cummings model and of the two-atom Tavis-Cummings model play an essential role in the quantum information protocols presented in Refs. [@Bernad1; @Torres2014; @Jarvis]. This behaviour was first found in the time dependent atomic population in the Jaynes-Cummings model [@Eberly1980] when the field is initially prepared in a coherent state: The populations display Rabi oscillations that cease after a collapse time $t_c$ and reappear at a revival time $t_r$. In the case of the two-atom Tavis-Cummings model, the collapse and revival times of the Rabi oscillations are given by $$\begin{aligned} t_r=\frac{\pi}{g}\sqrt{4\bar n+2}, \quad t_c=\frac{1}{\sqrt2 g}. \label{revcoltime}\end{aligned}$$ These time scales have been previously introduced and can be found, for instance, in Refs. [@Jarvis; @Eberly1980]. As they play an essential role in the dynamics of the system it is convenient to introduce the rescaled time $$\begin{aligned} \tau =t/t_r=\frac{tg}{\pi\sqrt{4\bar n+2}}. \label{tau}\end{aligned}$$ ![\[wigfig\] Wigner function of the cavity field after the interaction with two two-level atoms at times $\tau=1/4$ (top) and $\tau=1/2$ (bottom) with the rescaled time of Eq. .
The initial states of the atoms are defined by the parameters: $c^-=0.5554$, $c^+=0.3213+i0.5004$, $d^-_\phi=-0.2053+i0.3726$, $d^+_\phi=0.1046+i0.3819$, and the parameter $\alpha=\sqrt{36.16}e^{i1.37}$ characterizes the initial coherent state.](wighalf "fig:"){width="43.00000%"} ![\[wigfig\] Wigner function of the cavity field after the interaction with two two-level atoms at times $\tau=1/4$ (top) and $\tau=1/2$ (bottom) with the rescaled time of Eq. . The initial states of the atoms are defined by the parameters: $c^-=0.5554$, $c^+=0.3213+i0.5004$, $d^-_\phi=-0.2053+i0.3726$, $d^+_\phi=0.1046+i0.3819$, and the parameter $\alpha=\sqrt{36.16}e^{i1.37}$ characterizes the initial coherent state.](wig "fig:"){width="43.00000%"} Let us explain these phenomena by visualizing the phase space of the radiation field with the aid of the Wigner function [@Schleich; @Risken] $$W_t(\beta,\beta^*) = \frac{1}{\pi^2} \int {\rm Tr}\left\{\hat{\varrho}_t \, e^{\zeta \hat{a}^\dagger-\zeta^* \hat{a}}\right\} e^{\beta\zeta^*-\beta^*\zeta} d^2\zeta, \label{Wignerf}$$ with the complex numbers $\beta$ and $\zeta$. The operator $ \hat{\varrho}_t ={\rm Tr}_{\rm atoms}\left\{{\ensuremath{| {\Psi_{t}} \rangle}}{\ensuremath{\langle {\Psi_{t}} |}}\right\}$ is the density matrix of the field obtained by taking the partial trace over the atomic degrees of freedom of the full density matrix corresponding to the state vector in Eq. . In Fig. \[wigfig\] we show the Wigner function after interaction times $\tau=1/4$ in the top panel and $\tau=1/2$ in the bottom panel. The circular shape corresponds to the initial coherent state ${\ensuremath{| {\alpha} \rangle}}$. This contribution to the field remains stationary as long as there is an initial contribution of the state ${\ensuremath{| {\Psi^-} \rangle}}$. The reason is that ${\ensuremath{| {\Psi^-} \rangle}}{\ensuremath{| {n} \rangle}}$ is an invariant state of the system.
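For the coherent-state contribution, the Wigner transform of Eq. can be evaluated through the equivalent displaced-parity form $W(\beta)=\tfrac{2}{\pi}{\rm Tr}[\hat{\varrho}\,\hat D(\beta)(-1)^{\hat a^\dagger \hat a}\hat D^\dagger(\beta)]$, which for $\hat{\varrho}={\ensuremath{| {\alpha} \rangle}}{\ensuremath{\langle {\alpha} |}}$ resums to the Gaussian $\tfrac{2}{\pi}e^{-2|\beta-\alpha|^2}$, i.e. the circular peak seen in Fig. \[wigfig\]. A minimal numerical check (not part of the original analysis):

```python
import math

# Displaced-parity form of the Wigner function: for a coherent state
# |alpha>, W(beta) = (2/pi) * sum_n (-1)^n |<n|alpha - beta>|^2, where the
# photon statistics of |alpha - beta> are Poissonian with mu = |alpha-beta|^2.
def wigner_coherent(mu, nmax=300):
    # log-domain Poisson weights avoid overflow of mu**n / n!
    return (2 / math.pi) * sum(
        (-1) ** n * math.exp(-mu + n * math.log(mu) - math.lgamma(n + 1))
        for n in range(nmax))

# The alternating series resums to the Gaussian (2/pi) * exp(-2*mu):
for mu in (0.5, 2.0, 6.0):
    print(mu, wigner_coherent(mu), (2 / math.pi) * math.exp(-2 * mu))
```

The maximum value $2/\pi$ at $\beta=\alpha$ and the Gaussian fall-off reproduce the stationary circular component of the plotted Wigner functions.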
There are two other contributions to the field that rotate around the origin. In the top panel of Fig. \[wigfig\] it can be noticed that for an interaction time $\tau=1/4$ they have completed a quarter of a cycle. At the bottom, the situation at interaction time $\tau=1/2$ is shown, where half a rotation has been completed. The interference fringes between the field contributions signify that there are coherent superpositions of these states of the field. The behaviour of the field state in phase space explains the phenomena: Rabi oscillations cease (collapse) when the field contributions are well separated, e.g. at time $\tau=1/4$, and revive when the field contributions overlap, e.g. at $\tau=1/2$, or at the main revival at $\tau=1$ when all the field constituents coincide at the position of the initial coherent state. Approximation of the state vector --------------------------------- The full solution to the time dependent state vector of the two-atom Tavis-Cummings model has already been presented in previous work, see for instance [@Kim2002; @Torres2010]. Coherent state approximations have also been considered in the past [@Torres2014; @Jarvis; @Rodrigues; @Gea-Banacloche]. In this context, the eigenfrequencies of the Hamiltonian that depend on the photon number $n$ are expanded in a first order Taylor series around the mean photon number $\bar n$. However, the coherent state description is accurate only for times well below the revival time. In this work we go beyond the coherent state approximation by considering second order contributions of the eigenfrequencies around $\bar n$.
The details can be found in the Appendix \[appendix\] where it is shown that the time dependent state vector of the system can be approximated by $$\begin{aligned} {\ensuremath{| {\Psi^{\rm A}_\tau} \rangle}}= & \frac{1}{N_\tau}\left( c^-{\ensuremath{| {\Psi^-} \rangle}}+ d_\phi^- {\ensuremath{| {\Phi^-_{\phi}} \rangle}} \right) {\ensuremath{| {\alpha} \rangle}}+ \nonumber\\ & \frac{c^+-d^+_{\phi}}{2N_\tau} \left( {\ensuremath{| {\Psi^+} \rangle}}-{\ensuremath{| {\Phi^+_{\phi+2\pi\tau}} \rangle}} \right) {\ensuremath{| {\alpha_\tau^+} \rangle}}+ \nonumber\\ & \frac{c^++d^+_{\phi}}{2N_\tau} \left( {\ensuremath{| {\Psi^+} \rangle}} +{\ensuremath{| {\Phi^+_{\phi-2\pi\tau}} \rangle}} \right) {\ensuremath{| {\alpha_\tau^-} \rangle}}, \label{Psit}\end{aligned}$$ with the photonic states $${\ensuremath{| {\alpha_\tau^\pm} \rangle}}= \sum_{n=0}^\infty \frac{\alpha^ne^{-\frac{|\alpha|^2}{2}}}{\sqrt{n!}} e^{\pm i 2\pi\tau\left[\bar n+1+n-\frac{(n-\bar n)^2}{4\bar n +2} \right]}{\ensuremath{| {n} \rangle}} \label{PhotonicStates}$$ and with the normalization factor $$\begin{aligned} N_\tau=\Big(1&+ {\rm Re}[ (c^++d^+_\phi)^\ast (c^+-d^+_\phi) {\langle{\alpha^-_\tau}|{\alpha^+_\tau}\rangle} ]\sin^2(2\pi\tau) \nonumber\\ &+2 {\rm Re}[d^-_\phi (d^{+}_\phi)^\ast]{\rm Im} [{\langle{\alpha^-_\tau}|{\alpha}\rangle}] \sin(2\pi\tau) \nonumber\\ &+2{\rm Im}[(c^{+})^\ast d^-_\phi]{\rm Re} [{\langle{\alpha^-_\tau}|{\alpha}\rangle}] \sin (2\pi\tau)\Big)^{1/2}. \label{normalization}\end{aligned}$$ The quantity ${\langle{\alpha^-_\tau}|{\alpha}\rangle}$ is evaluated in Appendix \[appendixoverlap\] and an approximate expression is given in Eq. . In order to test the validity of Eq. we have considered the fidelity $F(\tau)=|{\langle{\Psi^{\rm A}_\tau}|{\Psi_{t_r\tau}}\rangle}|^2$ of the approximated state vector with respect to the exact result given in Eq. . In Fig.
\[totalfidelity\] we have plotted the results of numerical evaluations of the fidelity $F(\tau)$ for different values of the mean photon number $\bar n$. It can be noticed that the validity of this approximation improves with increasing mean photon number $\bar n$. In the Appendix \[appendix\] it is discussed that our approximation is valid provided the condition $\tau\ll \sqrt{\bar n}/2\pi$ is fulfilled. The form of the solution given in Eq. allows a simple analysis of the dynamics. It is written in terms of an orthonormal atomic basis of Bell states and is therefore suitable for the analysis of the atomic entanglement. In particular, it is interesting to note that for an initial state without a contribution of the state ${\ensuremath{| {\Phi^-_\phi} \rangle}}$, i.e. $d_\phi^-=0$, a photonic projection that discriminates the state ${\ensuremath{| {\alpha} \rangle}}$ from the states ${\ensuremath{| {\alpha_\tau^\pm} \rangle}}$ can postselect the atomic Bell state ${\ensuremath{| {\Psi^-} \rangle}}$. In Ref. [@Torres2014] we studied this Bell state projection and found that its implementation imposes a flexible restriction on the interaction time: it has to be below the revival time and above the collapse time given in Eq. . In the following we concentrate on a more specific interaction time. We analyze the dynamics at the specific interaction time $\tau=1/2$. This analysis will allow us to introduce in Sec. \[Bell\] a protocol to perform the four Bell state projections. ![\[totalfidelity\] Fidelity of the total state of Eq. with respect to the exact solution given by Eq. as a function of the time $\tau$ in Eq. scaled in terms of the revival time $t_r$: Five curves are presented for different values of the mean photon number $\bar n$ as described in the legend. The rest of the parameters are the same as in Fig. \[wigfig\].
](fidelR){width="48.00000%"} Basic dynamical features at scaled interaction time $\tau=1/2$ -------------------------------------------------------------- There are two main reasons for studying in detail the case with scaled interaction time $\tau=1/2$. The first one is that the time dependent atomic states in Eq. coincide, i.e. $$\begin{aligned} {\ensuremath{| {\Phi_{\phi+\pi}^\pm} \rangle}}={\ensuremath{| {\Phi_{\phi-\pi}^\pm} \rangle}}=-{\ensuremath{| {\Phi_\phi^\pm} \rangle}}. \label{property1}\end{aligned}$$ The second reason is that the photonic states ${\ensuremath{| {\alpha_{1/2}^\pm} \rangle}}$ have completed half a rotation in phase space and lie on the opposite side of phase space from the initial coherent state ${\ensuremath{| {\alpha} \rangle}}$, thereby overlapping with the coherent state ${\ensuremath{| {-\alpha} \rangle}}$. This means that at this interaction time and for $|\alpha|\gg 1$, the initial photonic state ${\ensuremath{| {\alpha} \rangle}}$ can be approximately distinguished from the other two states ${\ensuremath{| {\alpha_{1/2}^\pm} \rangle}}$. However, the states ${\ensuremath{| {\alpha_{1/2}^\pm} \rangle}}$ overlap significantly with each other. This can be noticed in Fig. \[wigfig\] where we have plotted the Wigner function. The circular shape corresponds to the initial coherent state ${\ensuremath{| {\alpha} \rangle}}$, while the distorted ellipses on the opposite side of phase space correspond to the states ${\ensuremath{| {\alpha^\pm_{1/2}} \rangle}}$. To distinguish these two components of the field it is convenient to conceive an experiment that is able to project the field state onto the coherent states ${\ensuremath{| {\pm\alpha} \rangle}}$. In order to study the projection onto the states ${\ensuremath{| {\pm \alpha} \rangle}}$, one has to evaluate their overlap with the photonic states of the state vector in Eq. .
First we consider the overlaps that can be neglected for large values of $\bar n$, namely $$\begin{aligned} {\langle{\alpha}|{-\alpha}\rangle}= e^{-2\bar n},\quad {\langle{\alpha}|{\alpha_{1/2}^\pm}\rangle}\propto e^{-\frac{2\pi^2}{4+\pi^2}\bar n}. \label{overlapsimple}\end{aligned}$$ The explicit form of the overlap ${\langle{\alpha}|{\alpha_{1/2}^\pm}\rangle}$ is given in Eq. of the Appendix \[appendixoverlap\] where its approximation is also evaluated. The nonvanishing overlaps in the limit of large mean photon number are ${\langle{\alpha}|{\alpha}\rangle}=1$ and $$\begin{aligned} {\langle{-\alpha}|{\alpha_{1/2}^\pm}\rangle} &\approx \sqrt{\frac{2}{\sqrt{4+\pi^2}}} e^{\mp i\left(\frac{1}{2}\arctan\frac{\pi}{2}-(\bar n+1)\pi\right)}. \label{overlaptheta}\end{aligned}$$ The expression in Eq. is also evaluated in detail in Appendix \[appendixoverlap\]. This overlap is real valued if the mean photon number $\bar n=|\alpha|^2$ fulfills the relation $$\begin{aligned} \bar n= m +\tfrac{1}{2\pi}\arctan\tfrac{\pi}{2},\quad {\rm with}\quad m\in \mathbb{N}. \label{ncondition}\end{aligned}$$ If the condition in Eq. is fulfilled and if we suppose an initial atomic state with no contribution from the state ${\ensuremath{| {\Phi^+_\phi} \rangle}}$, i.e. $d^+_\phi=0$, then it can be verified that a projection onto the field state ${\ensuremath{| {-\alpha} \rangle}}$ postselects the atoms in the unnormalized atomic Bell state $\sqrt b c^+{\ensuremath{| {\Psi^+} \rangle}}$, with $$\begin{aligned} b=2/\sqrt{4+\pi^2}. \label{factor}\end{aligned}$$ The success probability of this projection is given by $b|c^+|^2$, which is proportional to the initial probability of this particular Bell state ${\ensuremath{| {\Psi^+} \rangle}}$. The factor $b$ is the result of our inability to project perfectly and simultaneously onto both field states ${\ensuremath{| {\alpha^\pm_{1/2}} \rangle}}$.
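The magnitude of the key overlap ${\langle{-\alpha}|{\alpha_{1/2}^\pm}\rangle}$, and hence the factor $b=2/\sqrt{4+\pi^2}$, can be checked directly from the series defining the states ${\ensuremath{| {\alpha_\tau^\pm} \rangle}}$: for $\alpha=\sqrt{\bar n}e^{i\phi}$ the factors $(-1)^n e^{\pm i\pi n}$ multiply to one, so up to a global phase only the quadratic phase survives at $\tau=1/2$. A short numerical sketch (plain Python; $\bar n$ is taken from the figure captions, and any $\bar n\gg 1$ behaves similarly):

```python
import math
import cmath

nbar = 36.16                      # mean photon number used in the figures
kmax = int(nbar + 12 * math.sqrt(nbar))

def poisson(n):
    # photon-number distribution of a coherent state, |<n|alpha>|^2,
    # computed in the log domain to avoid overflow of nbar**n / n!
    return math.exp(n * math.log(nbar) - nbar - math.lgamma(n + 1))

# |<-alpha|alpha_{1/2}^+>| up to a global phase: only the quadratic
# phase of the states |alpha_tau^+-> at tau = 1/2 contributes.
S = sum(poisson(n) * cmath.exp(-1j * math.pi * (n - nbar) ** 2 / (4 * nbar + 2))
        for n in range(kmax + 1))

b = 2 / math.sqrt(4 + math.pi ** 2)
print(abs(S) ** 2, b)             # both close to 0.537
```

The residual difference shrinks as $\bar n$ grows, consistent with the overlap formula being a large-$\bar n$ approximation; the overlap ${\langle{\alpha}|{-\alpha}\rangle}=e^{-2\bar n}$ is negligible at this photon number.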
In the next section we present a protocol that can perform postselection of the four Bell states regardless of the initial state of the atoms. An unambiguous Bell measurement {#Bell} =============================== In this section we introduce a protocol which implements a projection onto the four orthogonal atomic Bell states of Eq. for any given initial condition of the atoms. The scheme we propose requires the atoms to interact with two different cavities, as sketched in Fig. \[scheme\]. The interaction time between the atoms and the electromagnetic field in each cavity is assumed to be $\tau=1/2$. The field in the first (second) cavity has to be prepared in a coherent state ${\ensuremath{| {\alpha} \rangle}}$ (${\ensuremath{| {i\alpha} \rangle}}$). After the interaction with the first cavity the resulting field is projected onto the initial state ${\ensuremath{| {\alpha} \rangle}}$. In case of failure a projection onto the state ${\ensuremath{| {-\alpha} \rangle}}$ is performed. The projection of the field postselects the atoms in a state that has contributions from only two of the Bell states. This postselected atomic state is taken as the initial condition for the interaction with the second cavity prepared in the state ${\ensuremath{| {i\alpha} \rangle}}$. The atoms are assumed to evolve freely for a time $\tau_f$ before interacting with the second cavity. This does not affect the protocol as the free Hamiltonian commutes with the interaction Hamiltonian in Eq. . After the interaction of the atoms with the second cavity, the field in the second cavity is projected onto ${\ensuremath{| {i\alpha} \rangle}}$, and if this fails another projection onto the state ${\ensuremath{| {-i\alpha} \rangle}}$ is performed. With this field state projection, the atoms are finally postselected in a unit-fidelity Bell state. In what follows we discuss in detail all the possible outcomes of the protocol.
There is a finite probability to fail completely when none of the coherent state field projections is successful. This is discussed in Sec. \[Discussion\]. ![\[scheme\] Schematic representation of the proposed atomic Bell measurement: Two atomic qubits interact with the electromagnetic field inside two independent cavities in a Ramsey-type interaction sequence. Different projections on the field states inside the cavities, recorded by detectors ${\rm D}_1$ and ${\rm D}_2$, result in a postselection of atomic Bell states as described in Table \[table\].](schemeboxres){width=".48\textwidth"} Projection onto ${\ensuremath{| {\alpha} \rangle}}$ in the first cavity ----------------------------------------------------------------------- Let us consider a successful projection onto the field state ${\ensuremath{| {\alpha} \rangle}}$ of the first cavity. In this case the atoms are postselected in the state $$\begin{aligned} \frac{1}{\sqrt{P_1}} \left( c^-{\ensuremath{| {\Psi^-} \rangle}}+d_{\phi+\pi/2}^+{\ensuremath{| {\Phi^+_{\phi+\pi/2}} \rangle}} \right) \label{psiat1}\end{aligned}$$ with probability $P_1=|c^-|^2+|d_\phi^-|^2$. To write this state we have also considered the relations $$\begin{aligned} {\ensuremath{| {\Phi_{\phi+\pi/2}^\pm} \rangle}}=-i{\ensuremath{| {\Phi^\mp_{\phi}} \rangle}}, \quad d^\pm_{\phi+\pi/2}=id^\mp_{\phi}. \label{pihalf}\end{aligned}$$ The postselected atomic state of Eq. is taken as initial condition for the interaction with the second cavity prepared in the coherent state ${\ensuremath{| {i\alpha} \rangle}}$ as depicted in Fig. \[scheme\]. Two scenarios are possible for the projection of the field in the second cavity. In the first place we consider a projection onto the coherent state ${\ensuremath{| {i\alpha} \rangle}}$ where the atoms are postselected in the state ${\ensuremath{| {\Psi^-} \rangle}}$ with probability $P_{11}=|c^-|^2/P_1$. This can be verified from Eq. 
as the new initial state does not have a contribution of ${\ensuremath{| {\Phi^-_{\phi+\pi/2}} \rangle}}$. As the projections performed in the first and second cavity are independent events, the state ${\ensuremath{| {\Psi^-} \rangle}}$ can be projected with overall success probability $P_1P_{11}=|c^-|^2$, the initial probability weight of this state before the protocol. The second possibility is to project onto the state ${\ensuremath{| {-i\alpha} \rangle}}$. In that case the atoms are postselected in the state $ {\ensuremath{| {\Phi_{\phi}^-} \rangle}}=i{\ensuremath{| {\Phi_{\phi+3\pi/2}^+} \rangle}} $ provided the condition in Eq. is fulfilled. This can be verified using Eq. with an initial coherent state ${\ensuremath{| {i\alpha} \rangle}}$ and the atoms initially in the state of Eq. that has no contribution of ${\ensuremath{| {\Psi^+} \rangle}}$. The success probability for this event is $P_{10}=b|d_\phi^-|^2/P_1$. Correspondingly the projection onto the atomic state ${\ensuremath{| {\Phi^-_\phi} \rangle}}$ occurs with overall success probability $P_1P_{10}=b|d_\phi^-|^2$. This is proportional, but not equal, to its initial probability weight. The proportionality factor $b$ is given in Eq. and accounts for the imperfect projection onto the states ${\ensuremath{| {i\alpha_{1/2}^\pm} \rangle}}$. [llll]{} Field state in detector ${\rm D}_1$ & Field state in detector ${\rm D}_2$ & Atomic state ${\ensuremath{| {{\rm Bell}} \rangle}}$ & Probability\ ${\ensuremath{| {\alpha} \rangle}}$ &${\ensuremath{| {i\alpha} \rangle}}$ &${\ensuremath{| {\Psi^-} \rangle}}$ & $|c^-|^2$\ ${\ensuremath{| {\alpha} \rangle}}$ &${\ensuremath{| {-i\alpha} \rangle}}$ &${\ensuremath{| {\Phi^-_\phi} \rangle}}$ & $b|d^-_\phi|^2$\ ${\ensuremath{| {-\alpha} \rangle}}$ &${\ensuremath{| {i\alpha} \rangle}}$ &${\ensuremath{| {\Phi^+_\phi} \rangle}}$ & $b|d^+_\phi|^2$\ ${\ensuremath{| {-\alpha} \rangle}}$ &${\ensuremath{| {-i\alpha} \rangle}}$ &${\ensuremath{| {\Psi^+} \rangle}}$ & $b^2|c^+|^2$

: \[table\] Summary of the Bell state protocol assisted by photonic state measurements. The first (second) column indicates the photonic field that has to be selected in the first (second) cavity by detector ${\rm D}_1$ (${\rm D}_2$) in the interaction sequence depicted in Fig. \[scheme\]. The third column indicates the resulting atomic state, with the probability of occurrence given in the last column with $b=2/\sqrt{4+\pi^2}\approx0.537$. The protocol fails with probability $(1-b)(|d^-_\phi|^2+|d^+_\phi|^2)+(1-b^2)|c^+|^2$.

Projection onto ${\ensuremath{| {-\alpha} \rangle}}$ in the first cavity ------------------------------------------------------------------------ Now we consider a successful projection onto the coherent state ${\ensuremath{| {-\alpha} \rangle}}$ in the first cavity. In this situation the atoms are postselected in the state $$\begin{aligned} \sqrt{\frac{ b}{P_0}} \left( c^+{\ensuremath{| {\Psi^+} \rangle}}- d^-_{\phi+\pi/2}{\ensuremath{| {\Phi^-_{\phi+\pi/2}} \rangle}} \right) \label{psiat2}\end{aligned}$$ with probability $P_0=b|c^+|^2+b|d_\phi^+|^2$. We have used the relations in Eqs. and . The normalized state of Eq. is taken as the initial condition for the interaction with the second cavity prepared in the coherent state ${\ensuremath{| {i\alpha} \rangle}}$. There are two scenarios for the projection in the second cavity. First we consider a successful projection onto the state ${\ensuremath{| {i\alpha} \rangle}}$. As the initial state of Eq. does not have any contribution from ${\ensuremath{| {\Psi^-} \rangle}}$, the atoms are postselected in the state $ {\ensuremath{| {\Phi^+_{\phi}} \rangle}}=i{\ensuremath{| {\Phi^-_{\phi+3\pi/2}} \rangle}} $. This occurs with success probability $P_{01}=b|d^+_\phi|^2/P_0$. Thus the state ${\ensuremath{| {\Phi^+_\phi} \rangle}}$ is postselected with an overall success probability $P_0P_{01}=b|d^+_\phi|^2$.
A second possible situation is a projection onto the state ${\ensuremath{| {-i\alpha} \rangle}}$ in the second cavity. In this situation the atoms are postselected in the state ${\ensuremath{| {\Psi^+} \rangle}}$. This can be noted from Eq. , as the initial atomic state of Eq. entering the second cavity does not have any contribution of the state ${\ensuremath{| {\Phi^+_{\phi+\pi/2}} \rangle}}$. The success probability of this event is $P_{00}=b^2|c^+|^2/P_0$. It implies an overall success probability of postselecting state ${\ensuremath{| {\Psi^+} \rangle}}$ of $P_0P_{00}=b^2|c^+|^2$.

Discussion of the protocol {#Discussion}
==========================

![\[fidelityn\] Fidelity $F_{\rm B}$ of the projected atomic Bell states as a function of the initial mean photon number of the fields inside the cavities: The interaction time in both cavities is given by $\tau=1/2$, i.e. half the revival time (see Eq. ), and the rest of the initial conditions are the same as in Fig. \[totalfidelity\]. Each curve corresponds to a different Bell state as explained in the legend. ](fidelbellna2){width="48.00000%"}

Fidelity of the postselected Bell states
----------------------------------------

In order to test our protocol based on the approximations of Eqs. and we have numerically evaluated the fidelity $F_{\rm B}=|{\langle{\rm Bell}|{\psi}\rangle}|^2$ of the resulting Bell states in each of the four possible successful outcomes. The state ${\ensuremath{| {\rm Bell} \rangle}}$ stands for any of the four Bell states of Eq. . The state ${\ensuremath{| {\psi} \rangle}}$ is the exact numerical solution after the protocol and depends on both $\bar n$ and $\tau$. In Fig. \[fidelityn\] we have plotted the fidelity $F_{\rm B}$ for the different Bell states as a function of the mean photon number $\bar n=|\alpha|^2$ of the initial coherent field states ${\ensuremath{| {\alpha} \rangle}}$ and ${\ensuremath{| {i\alpha} \rangle}}$.
Interestingly, the protocol already shows high fidelity (above 0.9) even for small mean photon numbers. The results improve for increasing values of $\bar n$, in accordance with the validity of our approximation for high photon numbers explained in Sec. \[Model\]. The fidelity has a periodic oscillatory behaviour, and maxima are achieved close to the values of $\bar n$ predicted by Eq. , i.e. when $\bar n$ is an integer number plus the constant $\arctan(\pi/2)/(2\pi)\approx 0.16$. A possible error $\delta \bar n$ in the previous value has to fulfill the condition $\delta \bar n\ll 1/\pi$ to ensure a high fidelity of the atomic states. It should be mentioned that in the extreme opposite case, in which both cavities are initially prepared in the vacuum state, i.e. $\bar n=0$, the proposed protocol does not work. According to Eq. the four orthogonal Bell states are paired up with three field states, and in order to filter out all Bell states these field states have to be orthogonal. This requirement can only be fulfilled in the limit of a high mean number of photons.

To test the sensitivity of the protocol with respect to the interaction time we have also evaluated the fidelity $F_{\rm B}$ as a function of the scaled interaction time $\tau$ between the atoms and the cavities. The results with the initial atomic conditions of Fig. \[totalfidelity\] are plotted in Fig. \[fidelity\]. We present the results for an initial coherent state with mean photon number $\bar n= 36+\arctan(\pi/2)/(2\pi)$. The black solid curve represents the fidelity of projecting onto the state ${\ensuremath{| {\Psi^-} \rangle}}$ and it shows a constant unit fidelity in the time interval of the plot. The stability of this result has also been discussed in Ref. [@Torres2014] and is due to the fact that ${\ensuremath{| {\Psi^-} \rangle}}$ is a special invariant atomic state of the two-atom Tavis-Cummings model.
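As a quick numerical sanity check of the constants used in this discussion (our own illustration, not part of the protocol), the optimal offset of $\bar n$, the table constant $b$, and the tolerance $1/\pi$ can be evaluated directly:

```python
import numpy as np

# Offset that n-bar should have beyond an integer for maximal fidelity,
# as quoted in the discussion above.
offset = np.arctan(np.pi / 2) / (2 * np.pi)

# Success-probability constant from the protocol summary table.
b = 2 / np.sqrt(4 + np.pi**2)

# Tolerance on n-bar required for high fidelity: delta n-bar << 1/pi.
delta_nbar_max = 1 / np.pi

print(f"offset ~ {offset:.3f}, b ~ {b:.3f}, tolerance << {delta_nbar_max:.3f}")
```

Both printed values reproduce the numbers quoted in the text ($\approx 0.16$ and $\approx 0.537$).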
The fidelity of the state ${\ensuremath{| {\Phi^+_\phi} \rangle}}$ also shows robustness with respect to the interaction time $\tau$. This is due to the fact that this state is obtained after projecting onto ${\ensuremath{| {i\alpha} \rangle}}$, which is the stationary initial state of the second cavity. Advantages of the two-atom Tavis-Cummings model for generating this particular Bell state have also been mentioned previously in Ref. [@Rodrigues]. The other two fidelities of projections onto states ${\ensuremath{| {\Phi^-_\phi} \rangle}}$ and ${\ensuremath{| {\Psi^+} \rangle}}$ oscillate as a function of $\tau$. In this case the second field projection is performed onto the field state ${\ensuremath{| {-i\alpha} \rangle}}$, and this in turn has to “catch” the time dependent states ${\ensuremath{| {i\alpha_\tau^\pm} \rangle}}$. Therefore, the oscillations originate from the overlap between photonic states ${\langle{-\alpha}|{\alpha^\pm_{\tau}}\rangle}$ that is calculated in Appendix \[appendixoverlap\]. One can estimate that the fidelity $F_{\rm B}$ around $\tau=1/2$ oscillates with frequency $2(\bar n+1)$. The optimal interaction time according to Eq. is $\tau=1/2$, where the absolute value of the overlap attains its maximum. A possible error $\varepsilon$ in the scaled interaction time $\tau=1/2+\varepsilon$ has to be restricted to the condition $|\varepsilon|\ll 1/[4\pi(\bar n +1)]$.

![\[fidelity\] Fidelity $F_{\rm B}$ of the projected atomic Bell states as a function of the scaled interaction time $\tau$ (see Eq. ) on both cavities: The initial conditions are the same as in Fig. \[wigfig\] with $\bar n=36.16$. Each curve corresponds to a different Bell state of Eq. \[bellstates\] as explained in the legend. ](fidelbell2a2){width="48.00000%"}

Experimental constraints
------------------------

Our protocol requires that the pair of atoms interact with two different coherent states.
This could be realized, for instance, by transporting and positioning the atomic qubits in separate cavities. Current experimental realizations report coherent transport and controlled positioning of neutral atoms in optical cavities [@Reimann; @Nussmann2005; @Khudaverdyan; @Brakhane], where a dipole trap is used as a conveyor belt to displace them. Two trapped ions have also been reported to be coupled in a controlled way to an optical resonator [@Casabone1; @Casabone2]. The cavity can be shifted with respect to the ions, allowing one to tune the coupling strength between the ions and the optical cavity. In this setting, instead of transporting the atoms to a different cavity, the same cavity might be shifted to a position where it decouples from the atoms until the measurement is achieved. Then it would have to be prepared and shifted again for a second interaction with the ions.

In our discussion we have not considered losses. The effects of decoherence can be neglected in the strong coupling regime, where the coupling strength $g$ between atoms and cavity is much larger than the spontaneous decay rate of the atoms $\gamma$ and the photon decay rate of the cavity $\kappa$. In fact, in our setting tighter constraints are required due to the specific interaction time $t_r/2\approx\pi\sqrt{\bar n}/g$. More specifically, for the cavities we require $1/\kappa \gg \pi\sqrt{\bar n}/g$ and for the atoms $1/\gamma\gg\pi\sqrt{\bar n}/g$. The experiment by Khudaverdyan et al. [@Khudaverdyan] achieved ratios $g/\kappa=32.5$ and $g/\gamma=5$, which imply that $ \bar n \ll 2.5$. For a single atom interacting with a cavity, the experiment by Birnbaum et al. [@Birnbaum] involves ratios $g/\kappa=8.26$ and $g/\gamma=13.03$, and if it is possible to attain these parameters in a two-atom scenario then the constraint would yield $\bar n\ll 7$. In microwave cavities [@Raimond], the numbers are $g/\kappa\approx 60$ and $g/\gamma\approx 3000$, which lead to the condition $\bar n\ll 360$.
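The photon-number bounds quoted above follow from the two coherence conditions, i.e. $\bar n \ll \left[\min(g/\kappa,\, g/\gamma)/\pi\right]^2$. The following short sketch (our own illustration) reproduces the quoted numbers from the cited experimental ratios:

```python
import numpy as np

def nbar_bound(g_over_kappa, g_over_gamma):
    """Upper bound on the mean photon number implied by the coherence
    conditions 1/kappa >> pi*sqrt(nbar)/g and 1/gamma >> pi*sqrt(nbar)/g."""
    return (min(g_over_kappa, g_over_gamma) / np.pi) ** 2

# Ratios quoted in the text for the three experimental settings.
print(nbar_bound(32.5, 5))      # Khudaverdyan et al.:  nbar << ~2.5
print(nbar_bound(8.26, 13.03))  # Birnbaum et al.:      nbar << ~7
print(nbar_bound(60, 3000))     # microwave cavities:   nbar << ~360
```

In each case the binding constraint is the smaller of the two quality ratios, which is why the atomic decay rate dominates in the first experiment and the cavity decay rate in the second.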
Thus, the coherence requirement of our proposal is within the reach of current experimental capabilities.

We have mentioned that our protocol requires the implementation of projections onto coherent states. We are not aware of an experimental solution to this problem. However, coherent states and the vacuum state are routinely distinguished in current experiments, see e.g. [@Wittmann]. A successful measurement of the vacuum state is achieved when no photons are detected. Therefore, for our purposes it would be sufficient to displace the state of the field in such a way that the field contributions ${\ensuremath{| {\alpha_{1/2}^\pm} \rangle}}$ are close to the vacuum state. This can be achieved by driving the optical cavity with a resonant laser. The Hamiltonian describing this situation in the interaction picture is $\hat V=\hbar(\Omega^\ast \hat a+\Omega \hat a^\dagger)$. Under this interaction the states of the field evolve under the influence of the evolution operator $\hat U_{t_d}=\exp{(-i t_d \hat V/\hbar)}$, which can be identified with the displacement operator $\hat D(\alpha)=\exp{(\alpha \hat a^\dagger- \alpha^\ast \hat a)}$ provided the interaction strength of the laser $\Omega$ and the driving time $t_d$ are adjusted as $\Omega t_d =i\alpha$. In this way one is able to perform the displacement $\hat D(\alpha){\ensuremath{| {-\alpha} \rangle}}={\ensuremath{| {0} \rangle}}$. Finally, we conceive a photodetection of the field with three possible outputs: $1)$ no signal, meaning a projection onto the vacuum state, i.e. ${\ensuremath{| {-\alpha} \rangle}}$ in the undisplaced picture; $2)$ a weak signal, indicating a failure of the protocol; $3)$ a strong signal, which would come from the field state ${\ensuremath{| {2\alpha} \rangle}}$.

Probabilities in the protocol
-----------------------------

A summary of all the possible outcomes of the protocol is given in Table \[table\].
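As a numerical cross-check of the bookkeeping in Table \[table\] (our own illustration), the four success probabilities listed there and the failure probability from the table caption sum to one for any normalized set of initial amplitudes:

```python
import numpy as np

b = 2 / np.sqrt(4 + np.pi**2)  # constant from the protocol summary table
rng = np.random.default_rng(1)

for _ in range(5):
    # random normalized magnitudes |c^-|^2, |c^+|^2, |d^-|^2, |d^+|^2
    p = rng.random(4)
    p /= p.sum()
    cm, cp, dm, dp = p

    success = cm + b * dm + b * dp + b**2 * cp       # table entries
    failure = (1 - b) * (dm + dp) + (1 - b**2) * cp  # caption formula

    assert abs(success + failure - 1) < 1e-12
    # compact form of the total success probability quoted in the text
    assert abs(success - (b + (1 - b) * (cm - b * cp))) < 1e-12
```

The second assertion verifies algebraically that the summed table entries reduce to $P_{\rm T}=b+(1-b)(|c^-|^2-b|c^+|^2)$ once normalization is imposed.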
We note that summing the probabilities of all the successful outcomes of the protocol results in an overall success probability of $P_{\rm T}=b+(1-b)(|c^-|^2-b|c^+|^2)$, which depends on the initial state of the system. The complementary probability $1-P_{\rm T}$ corresponds to events that lead to failure of the protocol. There is a possible failure after a successful projection onto ${\ensuremath{| {\alpha} \rangle}}$ but an unsuccessful projection onto ${\ensuremath{| {-i\alpha} \rangle}}$. This occurs with probability $(1-b)|d^-_\phi|^2$. It also might happen that the projection onto the field state ${\ensuremath{| {-\alpha} \rangle}}$ in the first cavity is unsuccessful. This takes place with probability $(1-b)(|c^+|^2+|d^+_\phi|^2)$. Finally, it is possible that both projections in the first and second cavity fail, with probability $(1-b)b|c^+|^2$. Summing all these failure probabilities leads to $1-P_{\rm T}=(1-b)(|d^-_\phi|^2+|d^+_\phi|^2)+(1-b^2)|c^+|^2$.

Conclusion {#Conclusions}
==========

We have presented a proposal of an unambiguous Bell measurement on two atomic qubits with almost unit fidelity. The theoretical description of the scheme involves the resonant two-atom Tavis-Cummings model and a Ramsey-type sequential interaction of both atoms with single modes of the electromagnetic field in two spatially separated cavities. The first and second cavities are initially prepared in the coherent states ${\ensuremath{| {\alpha} \rangle}}$ and ${\ensuremath{| {i\alpha} \rangle}}$, respectively. The interaction time can be adjusted by controlling the velocities of the two atoms passing through the cavities. Our discussion has concentrated on basic properties of the two-atom Tavis-Cummings model in the limit of high photon numbers. We have derived an approximate solution of the dynamical equation which is expressed as a sum of three terms correlating atomic and field states. A superposition of two atomic Bell states is correlated with the initial coherent state.
Superpositions of the other two Bell states are correlated with two time dependent field states. In phase space these time dependent contributions of the field state overlap on the opposite side to the initial coherent state ${\ensuremath{| {\alpha} \rangle}}$ (${\ensuremath{| {i\alpha} \rangle}}$) in the first (second) cavity at an interaction time of half the revival time. For this reason we have proposed projections onto the two coherent states ${\ensuremath{| {\alpha} \rangle}}$ and ${\ensuremath{| {-\alpha} \rangle}}$ in the first cavity, and ${\ensuremath{| {i\alpha} \rangle}}$ and ${\ensuremath{| {-i\alpha} \rangle}}$ in the second cavity. In order to obtain atomic Bell states with almost unit fidelity, the mean photon number has to be restricted to the condition given in Eq. . Our protocol has a finite error probability due to the imperfect projection onto the time dependent contributions of the field states in the cavities that overlap with ${\ensuremath{| {-\alpha} \rangle}}$ and ${\ensuremath{| {-i\alpha} \rangle}}$. Nevertheless, the four successful events of our protocol summarized in Table \[table\] unambiguously project onto four different Bell states with almost unit fidelity.

In view of current experimental realizations of quantum information protocols in the field of cavity quantum electrodynamics, the scheme discussed in this work requires cutting-edge technology. An experimental implementation would require accurate control of the interaction time and of the average number of photons in the cavity. Furthermore, the coherent evolution of the joint system must be preserved. This imposes the condition that the characteristic times of photon damping in the cavity and of atomic decay have to be much larger than the interaction time, which scales with the square root of the mean photon number in the cavity.
Finally, we point out that the implementation of a von Neumann coherent state projection is, to the best of our knowledge, an open problem that has to be considered in future investigations. If these obstacles are overcome, our proposal offers a key component for quantum information technology such as a multiphoton-based hybrid quantum repeater.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported by the BMBF project Q.com.

Approximations with large mean photon numbers {#appendix}
=============================================

In this Appendix we present the derivation of the time dependent state vector of Eq. . It has been shown in Refs. [@Torres2014; @Torres2010] that the time evolution of any initial state in the form of Eq. can be obtained from the solution of the eigenvalue problem of the two-atom Tavis-Cummings Hamiltonian . The exact solution can be written in the following form $${\ensuremath{| {\Psi_t} \rangle}}= {\ensuremath{| {0,0} \rangle}}{\ensuremath{| {\chi_t^0} \rangle}} +{\ensuremath{| {1,1} \rangle}}{\ensuremath{| {\chi^{1}_t} \rangle}} +{\ensuremath{| {\Psi^+} \rangle}}{\ensuremath{| {\chi^+_t} \rangle}} +c_-{\ensuremath{| {\Psi^-} \rangle}}{\ensuremath{| {\alpha} \rangle}} \label{psi}$$ with the relevant photonic states $$\begin{aligned} {\ensuremath{| {\chi^0_t} \rangle}}&= c_0\,p_0{\ensuremath{| {0} \rangle}}+ \sum_{n=1}^\infty \tfrac{ \sqrt{n} \left( \xi_{n,t}^- -\xi_{n,t}^+\right) +\sqrt{n-1}\xi_{n} }{\sqrt{2n-1}} {\ensuremath{| {n} \rangle}}, \nonumber\\ {\ensuremath{| {\chi^1_t} \rangle}}&=\sum_{n=2}^\infty \tfrac{ \sqrt{n-1} \left( \xi_{n,t}^- -\xi_{n,t}^+\right) -\sqrt{n}\xi_{n} }{\sqrt{2n-1}} {\ensuremath{| {n-2} \rangle}}, \nonumber\\ {\ensuremath{| {\chi^+_t} \rangle}}&= \sum_{n=1}^\infty \left( \xi_{n,t}^-+\xi_{n,t}^+ \right) {\ensuremath{| {n-1} \rangle}}, \label{fieldstates}\end{aligned}$$ and with the aid of the following abbreviations $$\begin{aligned} &\xi_{n,t}^\pm = \frac{e^{\pm i \omega_n t}}{2} \left(
c_+p_{n-1}\mp \tfrac{\sqrt{n}\,c_0 p_n+\sqrt{n-1}\,c_1 p_{n-2}}{\sqrt{2n-1}} \right), \nonumber\\ &\xi_{n} = \frac{\sqrt{n-1}\,c_0 p_n-\sqrt{n}\,c_1 p_{n-2}}{\sqrt{2n-1}},\quad \omega_n=g\sqrt{4n-2}. \nonumber\end{aligned}$$ The coefficients $p_n$ are the initial probability amplitudes of the photon number states ${\ensuremath{| {n} \rangle}}$ of the initial field state ${\ensuremath{| {\alpha} \rangle}}$. The coefficients $c_0$ and $c_1$ are the initial probability amplitudes of the states ${\ensuremath{| {0,0} \rangle}}$ and ${\ensuremath{| {1,1} \rangle}}$ and are related to the probability amplitudes of the state in Eq. by $$\begin{aligned} d_\phi^\pm&= \frac{c_0 e^{i\phi}\pm c_1 e^{-i\phi}}{\sqrt2}. \label{}\end{aligned}$$ The expressions of Eq. can be simplified significantly by taking into account that the field is initially prepared in a coherent state ${\ensuremath{| {\alpha} \rangle}}$ with photonic distribution $p_n=\exp(-\bar n/2+i n\phi)\sqrt{\bar n^{n}/n!}$ and by assuming a large mean photon number $\bar n=|\alpha|^2\gg 1$. In such a case the photonic distribution has the following property $$\begin{aligned} p_n=\sqrt{\frac{\bar n}{n}}e^{i\phi}p_{n-1}\approx e^{i\phi}p_{n-1}. \label{}\end{aligned}$$ Applying this approximation to the states of Eq. we find the following approximations $$\begin{aligned} &{\ensuremath{| {\chi^0_t} \rangle}}\approx \sum_{n=1}^\infty \tfrac{ (c_++d_\phi^+) e^{-i\omega_n t} -(c_+-d_\phi^+) e^{i\omega_n t} +2d_\phi^- }{2\sqrt2} p_{n-1} {\ensuremath{| {n} \rangle}}, \nonumber\\ &{\ensuremath{| {\chi^1_t} \rangle}}\approx\sum_{n=2}^\infty \tfrac{ (c_++d_\phi^+) e^{-i\omega_n t} -(c_+-d_\phi^+) e^{i\omega_n t} -2d_\phi^- }{2\sqrt2} p_{n-1} {\ensuremath{| {n-2} \rangle}}, \nonumber\\ &{\ensuremath{| {\chi^+_t} \rangle}}\approx \sum_{n=1}^\infty \tfrac{ (c_++d_\phi^+) e^{-i\omega_n t} +(c_+-d_\phi^+) e^{i\omega_n t} }{2} p_{n-1} {\ensuremath{| {n-1} \rangle}}.
\nonumber\end{aligned}$$ In order to simplify these expressions we perform a Taylor expansion in the frequencies $\omega_n$ around $\bar n+1$ as $$\begin{aligned} \omega_n/g&\approx\sqrt{4 \bar n+2}+2\frac{n-\bar n-1}{\sqrt{4\bar n+2}} -2\frac{(n-\bar n-1)^2}{(4\bar n+2)^{3/2}}. \label{}\end{aligned}$$ The previous second order expansion is valid provided the third order contribution multiplied by $gt$ is negligible. This imposes the restriction on the interaction time $$t\ll \frac{(4\bar n+2)^{5/2}}{4g \bar{n}^{3/2}} \approx \bar n/g.$$ For the rescaled time $\tau=gt/\pi\sqrt{4\bar n +2}$ used in the main text this implies $\tau\ll\sqrt{\bar n}/2\pi$. In this approximation the field states can be written as $$\begin{aligned} {\ensuremath{| {\chi^0_t} \rangle}}&\approx e^{-i\phi} \tfrac{ (c_++d_\phi^+) {\ensuremath{| {\alpha_t^{-},-1} \rangle}} -(c_+-d_\phi^+) {\ensuremath{| {\alpha_t^{+},-1} \rangle}} +2d_\phi^-{\ensuremath{| {\alpha} \rangle}} }{2\sqrt2}, \nonumber\\ {\ensuremath{| {\chi^1_t} \rangle}}&\approx e^{i\phi} \tfrac{ (c_++d_\phi^+) {\ensuremath{| {\alpha_t^{-},1} \rangle}} -(c_+-d_\phi^+) {\ensuremath{| {\alpha_t^{+},1} \rangle}} -2d_\phi^-{\ensuremath{| {\alpha} \rangle}} }{2\sqrt2}, \nonumber\\ {\ensuremath{| {\chi^+_t} \rangle}}&\approx \tfrac{ (c_++d_\phi^+) {\ensuremath{| {\alpha^{-}_t,0} \rangle}} +(c_+-d_\phi^+) {\ensuremath{| {\alpha^{+}_t,0} \rangle}} }{2}, \label{fieldstates2}\end{aligned}$$ with $$\begin{aligned} {\ensuremath{| {\alpha_t^\pm,j} \rangle}}&= \sum_{n=0}^\infty e^{-\frac{|\alpha|^2}{2}}\frac{\alpha^n}{\sqrt{n!}} e^{\pm i \left(\nu +2\tfrac{n-\bar n+j}{\nu}-2\frac{(n-\bar n+j)^2}{\nu^3} \right)g t} {\ensuremath{| {n} \rangle}} \nonumber\\ &\approx e^{\pm ij2\pi\tau}{\ensuremath{| {\alpha^\pm_\tau} \rangle}}, \label{alphas0}\end{aligned}$$ $j\in \{-1,0,1\}$ and $\nu=\sqrt{4\bar n +2}$. Furthermore, the states ${\ensuremath{| {\alpha_\tau^\pm} \rangle}}$ are defined by Eq. . 
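The quality of this second-order expansion of $\omega_n=g\sqrt{4n-2}$ around $n=\bar n+1$ can be checked numerically (our own sketch, with $g=1$ and $\bar n=36$ as illustrative values):

```python
import numpy as np

g, nbar = 1.0, 36
nu = np.sqrt(4 * nbar + 2)
# photon numbers within a few standard deviations (~sqrt(nbar)) of the mean
n = np.arange(nbar - 12, nbar + 13)

exact = g * np.sqrt(4 * n - 2)
taylor = g * (nu + 2 * (n - nbar - 1) / nu
              - 2 * (n - nbar - 1) ** 2 / nu**3)

# maximum absolute deviation, small compared to omega_n ~ 12 g
print(np.max(np.abs(exact - taylor)))
```

Over this range the expansion deviates from the exact frequencies by well under one percent, consistent with the third-order error estimate used to bound the interaction time.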
We neglected the contribution of $j$ in the quadratic term of the exponent in Eq. . This can be justified given the fact that a Poisson distribution with a high mean value is almost symmetrically centered around its mean, with variance equal to its mean. This implies that the maximal relevant value in the quadratic term is given by $$\begin{aligned} {\rm max}\left\{ \frac{(n-\bar n+j)^2}{\nu}\right\}\approx 2+\frac{2j}{\sqrt{\bar n}}+\frac{j}{2\bar n},\end{aligned}$$ which shows that the contribution of $j=-1,0,1$ to this term is negligible for $\bar n\gg 1$. Finally, using the approximations of Eq. and in Eq. and separating the atomic states accompanying the photonic states ${\ensuremath{| {\alpha_\tau^\pm} \rangle}}$ and ${\ensuremath{| {\alpha} \rangle}}$ yields the result of Eq. .

Evaluation of ${\langle{\alpha}|{\alpha^\pm_\tau}\rangle}$ and ${\langle{-\alpha}|{\alpha^\pm_\tau}\rangle}$ {#appendixoverlap}
=============================================================================================================

![\[overlaps\] Top (bottom) figure: Real part of the overlap ${\langle{\alpha}|{\alpha^+_\tau}\rangle}$ (${\langle{-\alpha}|{\alpha^+_\tau}\rangle}$) as a function of the rescaled time $\tau$. The red curve was evaluated using the exact expression in the first line of Eq. and the black narrow line corresponds to the approximation given in Eq. . The mean photon number is $\bar n=12.16$. ](pa "fig:"){width="48.00000%"} ![
](pb "fig:"){width="48.00000%"} In this appendix we investigate the overlaps between the field states ${\ensuremath{| {\alpha^\pm_\tau} \rangle}}$ and ${\ensuremath{| {\pm\alpha} \rangle}}$ defined in Eqs. and respectively. Using the index $j\in\{-1,1\}$ one can write a single expression for the four overlaps as $$\begin{aligned} &{\langle{j\alpha}|{\alpha_{\tau}^\pm}\rangle}= \sum_{n=0}^{\infty} \frac{\bar n^{n}j^n}{n!e^{\bar n}} e^{ \pm i 2\pi\tau \left[ \bar n+1+n -\frac{(n-\bar n)^2}{4\bar n+2}\right] } \label{overlapapp} \\ &\approx \frac{ e^{\pm i(\bar n +1)2\pi\tau} }{\sqrt{2\pi\bar n}} \sum_{n=-\infty}^\infty e^{ \pm i \pi n\left(2\tau+\frac{1-j}{2}\right) -\frac{(1\pm i\pi\tau)}{2\bar n}(n-\bar n)^2} \nonumber\\ &= \frac{ e^{\pm i(\bar n +1)2\pi\tau} }{\sqrt{1\pm i\pi\tau}} \sum_{n=-\infty}^\infty e^{ \pm i2\pi\bar n\left(\tau+\frac{1-j}{4}\pm n\right) -\frac{2\pi^2 \bar n}{1\pm i\pi\tau}\left(\tau+\frac{j-1}{4}\pm n\right)^2 }. \nonumber\end{aligned}$$ In the second line we have approximated the Poisson distribution by a normal distribution and we have extended the sum to $-\infty$. These approximations are valid in the limit $\bar n\gg 1$. In the third line we have used the Poisson summation formula [@Bellman] which in the case of a Gaussian sum can be expressed as $$\begin{aligned} \sum_{n=-\infty}^\infty e^{i2\pi un-s (n-\bar n)^2}= \sqrt{\frac{\pi}{s}}\sum_{n=-\infty}^\infty e^{i2\pi \bar n(n+u) -\frac{\pi^2}{s}(n+u)^2}, \nonumber\end{aligned}$$ with ${\rm Re}[s]>0$. The last expression in Eq. involves a summation of Gaussian terms with variance $(1+\pi^2\tau^2)/4\pi^2\bar n$. This variance is very small provided the condition $4\bar n\gg \tau^2$ is fulfilled. If this requirement is met, there exists a dominant contribution in the summation that corresponds to the value of $n$ where $|\tau+(1-j)/4\pm n|$ achieves its minimum value. 
This minimum can be evaluated as $$f_j(\tau)= {\rm frac}\left(\tau+\tfrac{1-j}{4}+\tfrac{1}{2}\right)-\tfrac{1}{2},$$ where ${\rm frac}(x)$ denotes the fractional part of $x$. By considering only the dominant term of the last summation in Eq. one can find the following approximation of the overlap between field states $$\begin{aligned} {\langle{j\alpha}|{\alpha^\pm_{\tau}}\rangle}&\approx \frac{ e^{\pm i2\pi[\bar n f_j(\tau)+(\bar n +1)\tau]} }{\sqrt{1\pm i\pi\tau}} e^{-\frac{2\pi^2 \bar n}{1\pm i\pi\tau}[f_j(\tau)]^2}, \label{overlapepsilon}\end{aligned}$$ with $j\in\{-1,1\}$. This result for $\tau=1/2$ and $j=-1$ has been rewritten in polar form in Eq. of the main text, where we used that $f_{-1}(1/2)=0$. In Eq. we have used that $f_1(1/2)=-1/2$. In the top panel of Fig. \[overlaps\] we have plotted the real part of the overlap ${\langle{\alpha}|{\alpha^+_\tau}\rangle}$ as a function of the rescaled time $\tau$. The evaluation of the exact expression is shown in red and the approximation in black. The collapse and revival phenomena are well described by the approximation of the overlap in Eq. . A similar treatment to describe the collapse and revival phenomena in the Jaynes-Cummings model has been presented in Ref. [@Karatsuba]. In the bottom panel of Fig. \[overlaps\] we have plotted the real part of the overlap ${\langle{-\alpha}|{\alpha^+_\tau}\rangle}$.

[99]{} H.-J. Briegel, W. Dür, J. I. Cirac and P. Zoller, Phys. Rev. Lett. [**81**]{}, 5932 (1998). W. Dür, H.-J. Briegel, J. I. Cirac and P. Zoller, Phys. Rev. A [**59**]{}, 169 (1999). C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, Phys. Rev. Lett. [**76**]{}, 722 (1996). D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu, and A. Sanpera, Phys. Rev. Lett. [**77**]{}, 2818 (1996). M. Zukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. [**71**]{}, 4287 (1993). N. Sangouard, C. Simon, H. de Riedmatten, and N. Gisin, Rev. Mod. Phys.
[**83**]{}, 33 (2011). P. van Loock, T. D. Ladd, K. Sanaka, F. Yamaguchi, K. Nemoto, W. J. Munro, and Y. Yamamoto, Phys. Rev. Lett. [**96**]{}, 240501 (2006). T. D. Ladd, P. van Loock, K. Nemoto, W. J. Munro, and Y. Yamamoto, New J. Phys. [**8**]{}, 184 (2006). P. van Loock, N. Lütkenhaus, W. J. Munro, and K. Nemoto, Phys. Rev. A [**78**]{}, 062319 (2008). D. Gonta and P. van Loock, Phys. Rev. A [**88**]{}, 052308 (2013). J. Z. Bernád and G. Alber, Phys. Rev. A [**87**]{}, 012311 (2013). J. Z. Bernád, H. Frydrych, and G. Alber, J. Phys. B [**46**]{}, 235501 (2013). J. M. Torres, J. Z. Bernád and G. Alber, Phys. Rev. A [**90**]{}, 012304 (2014). C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. [**70**]{}, 1895 (1993). C. H. Bennett and S. J. Wiesner, Phys. Rev. Lett. [**69**]{}, 2881 (1992). E. Knill, R. Laflamme and G. Milburn, Nature [**409**]{}, 46 (2001). T. B. Pittman, M. J. Fitch, B. C. Jacobs, and J. D. Franson, Phys. Rev. A [**68**]{}, 032316 (2003). W. J. Munro, K. Nemoto, T. P. Spiller, S. D. Barrett, P. Kok, and R. G. Beausoleil, J. Opt. B: Quantum Semiclass. Opt. [**7**]{}, 135 (2005). Y.-H. Kim, S. P. Kulik and Y. Shih, Phys. Rev. Lett. [**86**]{}, 1370 (2001). C. Schuck, G. Huber, C. Kurtsiefer, and H. Weinfurter, Phys. Rev. Lett. [**96**]{}, 190501 (2006). T. Pellizzari, S. A. Gardiner, J. I. Cirac and P. Zoller, Phys. Rev. Lett. [**75**]{}, 3788 (1995). S. Lloyd, M. S. Shahriar, J. H. Shapiro and P. R. Hemmer, Phys. Rev. Lett. [**87**]{}, 167903 (2001). F. Schmidt-Kaler, H. Häffner, M. Riebe, S. Gulde, G. P. T. Lancaster, T. Deuschle, C. Becher, C. F. Roos, J. Eschner, and R. Blatt, Nature [**422**]{}, 408 (2003). L. Isenhower, E. Urban, X. L. Zhang, A. T. Gill, T. Henage, T. A. Johnson, T. G. Walker, and M. Saffman, Phys. Rev. Lett. [**104**]{}, 010503 (2010). C. Nölleke, A. Neuzner, A. Reiserer, C. Hahn, G. Rempe, and S. Ritter, Phys. Rev. Lett. [**110**]{}, 140403 (2013). M. Tavis and F. W.
Cummings, Phys. Rev. [**170**]{}, 279 (1968). B. Casabone, A. Stute, K. Friebe, B. Brandstätter, K. Schüppert, R. Blatt, and T. E. Northup, Phys. Rev. Lett. [**111**]{}, 100505 (2013). B. Casabone, K. Friebe, B. Brandstätter, K. Schüppert, R. Blatt, and T. E. Northup, Phys. Rev. Lett. [**114**]{}, 023602 (2015). R. Reimann, W. Alt, T. Kampschulte, T. Macha, L. Ratschbacher, N. Thau, S. Yoon, and D. Meschede, Phys. Rev. Lett. [**114**]{}, 023601 (2015). S. Nußmann, M. Hijlkema, B. Weber, F. Rohde, G. Rempe, and A. Kuhn, Phys. Rev. Lett. [**95**]{}, 173602 (2005). R. J. Glauber, Phys. Rev. [**131**]{}, 2766 (1963). A. Perelomov, [*Generalized Coherent States and Their Applications*]{} (Springer-Verlag, Berlin Heidelberg, 1986). C. E. A. Jarvis, D. A. Rodrigues, B. L. Györffy, T. P. Spiller, A. J. Short, and J. F. Annett, New J. Phys. [**11**]{}, 103047 (2009). D. A. Rodrigues, C. E. A. Jarvis, B. L. Györffy, T. P. Spiller and J. F. Annett, J. Phys.: Condens. Matter [**20**]{}, 075211 (2008). M. S. Kim, J. Lee, D. Ahn, and P. L. Knight, Phys. Rev. A [**65**]{}, 040101(R) (2002). T. E. Tessier, I. H. Deutsch, A. Delgado, and I. Fuentes-Guridi, Phys. Rev. A [**68**]{}, 062316 (2003). J. H. Eberly, N. B. Narozhny and J. J. Sanchez-Mondragon, Phys. Rev. Lett. [**44**]{}, 1323 (1980). W. P. Schleich, [*Quantum Optics in Phase Space*]{} (Wiley-VCH, Weinheim, 2001). K. Vogel and H. Risken, Phys. Rev. A [**40**]{}, 2847 (1989). J. M. Torres, E. Sadurni, and T. H. Seligman, J. Phys. A [**43**]{}, 192002 (2010). J. Gea-Banacloche, Phys. Rev. A [**44**]{}, 5913 (1991). M. Khudaverdyan, W. Alt, I. Dotsenko, T. Kampschulte, K. Lenhard, A. Rauschenbeutel, S. Reick, K. Schörner, A. Widera and D. Meschede, New J. Phys. [**10**]{}, 073023 (2008). S. Brakhane, W. Alt, T. Kampschulte, M. Martinez-Dorantes, R. Reimann, S. Yoon, A. Widera, and D. Meschede, Phys. Rev. Lett. [**109**]{}, 173601 (2012). J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. [**73**]{}, 565 (2001). K. M. Birnbaum, A.
Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Nature [**436**]{}, 87 (2005). C. Wittmann, M. Takeoka, K. N. Cassemiro, M. Sasaki, G. Leuchs, and U. L. Andersen, Phys. Rev. Lett. [**101**]{}, 210501 (2008). R. E. Bellman, [*A Brief Introduction to Theta Functions*]{} (Holt, Rinehart and Winston, New York, 1961). A. A. Karatsuba and E. A. Karatsuba, J. Phys. A [**42**]{}, 195304 (2009).

[^1]:
---
abstract: 'Clustering analysis has become a ubiquitous information retrieval tool in a wide range of domains, but a more automatic framework is still lacking. Though internal metrics are the key players towards a successful retrieval of clusters, their effectiveness on real-world datasets remains not fully understood, mainly because of their unrealistic assumptions about the underlying datasets. We hypothesized that capturing [*traces of information gain*]{} between increasingly complex clustering retrievals—[*InfoGuide*]{}—enables an automatic clustering analysis with improved clustering retrievals. We validated the [*InfoGuide*]{} hypothesis by capturing the traces of information gain using the Kolmogorov–Smirnov statistic and comparing the clusters retrieved by [*InfoGuide*]{} against those retrieved by other commonly used internal metrics in artificially generated, benchmark, and real-world datasets. Our results suggested that [*InfoGuide*]{} can enable a more automatic clustering analysis and may be more suitable for retrieving clusters in real-world datasets displaying nontrivial statistical properties.'
author:
- |
    Paulo Rocha^1^, Diego Pinheiro^2^, Martin Cadeiras^2^ and Carmelo Bastos-Filho^1^\
    ^1^ Department of Computer Engineering, University of Pernambuco, Brazil\
    [{phar, carmelofilho}@poli.br]{}\
    ^2^ Department of Internal Medicine, University of California, Davis, US\
    [{pinsilva, mcadeiras}@ucdavis.edu]{}
bibliography:
- 'bib.bib'
title: 'Towards Automatic Clustering Analysis using Traces of Information Gain: The InfoGuide Method'
---

Introduction {#Introduction}
============

Clustering analysis has become ubiquitous in the retrieval of clusters from a plethora of datasets arising from a wide range of domains [@Adolfsson:2019bv], supporting the characterization of the development of personalized medical therapies [@Bakir:2018cb], the understanding of intricate social-economic factors [@Mirowsky:2017fv], and the development of healthcare ranking systems [@Wallace:2019fd]. Since the creation of the first clustering algorithm in 1948 by the botanist and evolutionary biologist Thorvald Sørensen, who was studying biological taxonomy [@sorensen1948method], novel algorithms for clustering retrieval have been proposed [@xu2015comprehensive]. However, a framework for automatic clustering analysis is still lacking, and even determining the optimal number of clusters to be retrieved remains a major methodological issue [@Tibshirani:2001fj]. Given that ground truth labels are inherently absent, clustering retrieval largely relies on internal metrics of clustering quality [@arbelaitz2013extensive]. These metrics are idealized aspects of clustering quality defined a priori and often involve unrealistic assumptions about the datasets [@Tibshirani:2001fj; @rousseeuw1987silhouettes; @calinski1974dendrite]. Nevertheless, clustering algorithms not only, directly or indirectly, retrieve clusters according to these internal metrics [@macqueen1967some; @ward1963], but also have their clustering retrievals subsequently evaluated according to these same metrics.
As a result, different internal metrics often disagree with each other regarding the quality of a specific clustering retrieval [@Tibshirani:2001fj; @xu2015comprehensive]. Another crucial issue is that most of these metrics rely on distances, and the distances along different attributes of a problem may carry different meanings. We hypothesized that capturing the [*traces of information gain*]{} between increasingly complex clustering retrievals—the [*InfoGuide*]{} method—can enable a more automatic clustering analysis. We validated the [*InfoGuide*]{} hypothesis by capturing the traces of information gain using the Kolmogorov–Smirnov statistic and comparing the clusters retrieved by [*InfoGuide*]{} against those retrieved by other commonly used internal metrics over artificially-generated, benchmark, and real-world datasets. Our results suggest that [*InfoGuide*]{} may be more suitable to retrieve clusters in real-world datasets displaying nontrivial statistical properties. Related Work {#Related Work} ============ The application of a clustering algorithm $g$ over a dataset $\mathcal{X}$ to retrieve $k$ groups, a clustering retrieval $C^{(k)}$, can be generally defined as a mapping $g_{k}: \mathcal{X} \rightarrow C^{(k)}$ such that each data point $x \in \mathcal{X}$ is assigned to one of the $k$ clusters $c_i^{(k)} \in C^{(k)}$. Each $c^{(k)}_i$ in $C^{(k)}$ represents a subgroup of the dataset $\mathcal{X}^{(N_i,F)}_i$ as follows: $$\begin{aligned} C^{(k)} = \{c^{(k)}_1, c^{(k)}_2, \dots, c^{(k)}_k\} \enspace , \end{aligned} \label{Eq:C}$$ in which $N_i$ is the number of data points in $c^{(k)}_i$. Clustering algorithms can be classified into different categories according to their assumptions about clustering retrieval, namely, partition, hierarchy, density, distribution, and subspace, to name but a few [@rodriguez2019clustering; @xu2015comprehensive]. 
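As a concrete illustration of the mapping $g_{k}: \mathcal{X} \rightarrow C^{(k)}$, the sketch below instantiates it with three common algorithms. It assumes NumPy and scikit-learn and is purely illustrative, not tied to any particular implementation in this paper:

```python
# Minimal sketch of g_k : X -> C^(k) (illustrative; assumes scikit-learn).
# Each algorithm assigns every data point x in X to one of k clusters.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])

k = 2
retrievals = {
    "kmeans": KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
    "gmm": GaussianMixture(n_components=k, random_state=0).fit_predict(X),
    "ward": AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X),
}
# Every retrieval C^(k) partitions X: each x belongs to exactly one cluster c_i.
```

Each label vector above is one clustering retrieval $C^{(k)}$; the categories discussed next differ in how the assignment is produced, not in the form of the output.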
Despite the differences among categories, the main idea underlying clustering retrieval is that data points belonging to the same cluster should be similar to each other and dissimilar from data points belonging to other clusters [@sorensen1948method]. In general, the similarity between two data points depends on their distance [@sorensen1948method]. The chosen distance (e.g., Euclidean, Manhattan, Mahalanobis) applied to $g_k$ can bias the shape of the groups, resulting in different clustering retrievals even for the same $\mathcal{X}$ and $k$. To properly evaluate the quality of the mapping $g_{k}: \mathcal{X} \rightarrow C^{(k)}$, different internal metrics $m_{k}: C^{(k)} \rightarrow q$ have been proposed, in which $q$ represents a comparable scalar enabling the comparison between different values of $k$ as well as the possibility of finding the optimal number of clusters $\hat{k}$ for ${g_k} \in [k_{min}, k_{max}]$. In the extensive study of Arbelaitz et al., 30 internal metrics were evaluated on a wide range of datasets, demonstrating that most of the metrics determine the quality of clustering retrievals by applying primarily two criteria: the distance between points in the same cluster, described as [*cohesion*]{}, and the distance between different groups, described as [*separation*]{}. These metrics are biased toward favoring sets of groups with both low cohesion (small within-cluster distances) and high separation, and can thus be defined as distance-based internal metrics [@arbelaitz2013extensive]. Conversely, an information-theoretic measure of cluster separability was developed by Gokcay and Principe as a cost function to guide an optimization-based clustering algorithm [@gokcay2002information]. The authors used both artificially-generated and image segmentation datasets. Similarly, Faivishevsky and Goldberger proposed the clusters' mutual information as the maximization objective to be used by a clustering algorithm [@faivishevsky2010nonparametric]. 
The authors demonstrated that an entropy-based approach may be more suitable than a distance-based approach for clustering analysis over both artificially-generated and benchmark datasets. In this paper, we advance an information-theoretic approach to clustering analysis, given the main challenges faced by distance-based metrics, especially in real-world datasets with nontrivial distributions, in which concepts such as averages and distance-based similarities become unrealistic [@gokcay2002information]. In this sense, we proposed the [*InfoGuide*]{} method for automatic clustering analysis, in which an optimal clustering retrieval is based on the information gained between increasingly complex clustering retrievals. Methods {#Methods} ======= Clustering analysis involves the following elements: a dataset, a set of clustering algorithms, and internal and external metrics of clustering retrieval (Fig. \[fig:method\], A). In this work, we proposed the [*InfoGuide*]{} method for automatic clustering analysis using traces of information gain (Fig. \[fig:method\], B). ![image](clustering_framework_infoguide_diagram.png){width="7.1in"} InfoGuide—An Automatic Retrieval of Clusters using Traces of Information Gain ----------------------------------------------------------------------------- The challenge in clustering analysis is retrieving the highest number $\hat{k}$ of [*meaningful*]{} clusters as close to the optimal number $k^*$ of clusters as possible, avoiding both underfitting ($\hat{k} < k^*$) and overfitting ($\hat{k} > k^*$). The definition of a [*meaningful*]{} cluster not only depends on the specific internal metric used but is also affected by the specific clustering algorithm employed. 
Let $C^{(k)}$ and $C^{(k+1)}$ be increasingly complex clustering retrievals with $k$ and $k+1$ clusters, respectively. The [*InfoGuide*]{} method retrieves the smallest number of clusters $\hat{k}$ beyond which no further information gain is obtained between increasingly complex clustering retrievals, as follows: $$\hat{k} = \textrm{smallest k such that } C^{(k+1)} \,{\buildrel d \over =}\, C^{(k)} \enspace , \label{Eq:selectk}$$ in which the clustering retrieval $C^{(k+1)}$ is equivalent to $C^{(k)}$ according to the pairwise equivalencies between their individual clusters as follows: $$\begin{aligned} & C^{(k+1)} \,{\buildrel d \over =}\, C^{(k)} \iff \\ & (\forall c^{(k+1)}_i \in C^{(k+1)}) \; (\exists\, c^{(k)}_j \in C^{(k)}) \; c^{(k+1)}_i \,{\buildrel d \over =}\, c^{(k)}_j \enspace , \label{Eq:C=C} \end{aligned}$$ in which individual clusters $c^{(k+1)}_i$ and $c^{(k)}_j$ are equivalent as follows: $$\begin{aligned} c^{(k+1)}_i \,{\buildrel d \over =}\,c^{(k)}_j \iff (\forall f \in F) \; f_i \,{\buildrel d \over =}\, f_j \enspace , \end{aligned} \label{Eq:f}$$ in which the feature $f$ in $c^{(k+1)}_i$ and $c^{(k)}_j$ is equivalent in distribution. Therefore, the [*InfoGuide*]{} method only considers that the clustering retrieval $C^{(k+1)}$ increases the information gain relative to $C^{(k)}$ when it retrieves novel clusters not already contained in $C^{(k)}$. Otherwise, retrieving a higher number of clusters only results in a more complex model without information gain. In this work, the Kolmogorov-Smirnov ($KS$) statistic was used to quantify the equivalency in distribution between features such that $f_i \,{\buildrel d \over =}\, f_j \equiv KS(f_i, f_j)$. Information gain is thus the statistical evidence that two features may not come from the same statistical distribution, obtained whenever the p-value of the $KS$ test is lower than the statistical significance $\alpha$ after applying the Bonferroni correction for the $F \times (k+1) \times k$ multiple comparisons. 
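The stopping rule above can be sketched in a few lines. The sketch below is our reading of the text (not the authors' released code) and assumes SciPy's two-sample KS test; clusters are passed as $(N_i, F)$ feature matrices:

```python
# Sketch of the InfoGuide equivalence check: C^(k+1) adds no information iff
# every new cluster matches, in distribution, some cluster of C^(k) on all F
# features, under a Bonferroni-corrected two-sample KS test (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def clusters_equivalent(ck, ck1, alpha=0.05):
    """ck, ck1: lists of (N_i, F) arrays for C^(k) and C^(k+1)."""
    F = ck[0].shape[1]
    n_tests = F * len(ck1) * len(ck)          # Bonferroni: F x (k+1) x k tests
    thr = alpha / n_tests
    for ci in ck1:                            # every new cluster must match...
        matches_some = any(
            all(ks_2samp(ci[:, f], cj[:, f]).pvalue >= thr for f in range(F))
            for cj in ck                      # ...some old cluster on all features
        )
        if not matches_some:
            return False                      # novel cluster: information gained
    return True                               # no new information: stop at k
```

The retrieved $\hat{k}$ is then the smallest $k$ for which `clusters_equivalent(C_k, C_k_plus_1)` holds.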
The optimal $\hat{k} \in [k_{min}, k_{max}]$ is the highest $\hat{k}$ that can be obtained over the range $\alpha_u \in (0, \alpha]$. Metrics ------- The [*InfoGuide*]{} method was compared with three commonly used internal metrics that embrace the two main ideas underlying clustering analysis, namely, cohesion and separation. Let $\mathcal{X}^{N}$ be a dataset with $N$ data points and $C^{(k)}$ a clustering retrieval; these internal metrics use the following basic distance calculations: between two data points, $(x_i - x_j)$; between a data point and the estimated value of a group, $(x_i - \langle c^{(k)}_i \rangle)$; and between the estimated values of a group and a dataset, $(\langle c^{(k)}_i \rangle - \langle C^{(k)} \rangle)$. The [*Silhouette*]{} chooses the optimal $\hat{k}$ by maximizing the average difference between the separation and cohesion as follows [@rousseeuw1987silhouettes]: $$SI = \frac{1}{N} \sum\limits_{i=1}^{N} \frac{b_i\; - \;a_i}{max(b_i\;,\;a_i)} \enspace , \label{Eq:silhouette}$$ in which $a_i$ measures the cohesion of a data point $i$ as follows: $$a_i = \frac{1}{N_i - 1} \sum\limits_{j=1, j\neq i}^{N_i} (x_i - x_j) \enspace , \label{Eq:ai}$$ and $b_i$ measures the separation of a data point $i$ from the points belonging to the nearest cluster as follows: $$b_i = \min\limits_{1\leq l \leq k ,\, x_i \notin c^{(k)}_l} \bigg( \frac{1}{N_l} \sum\limits_{j=1}^{N_l} (x_i - x_j) \bigg) \enspace . 
\label{Eq:bi}$$ The [*Calinski-Harabasz (CH) Index*]{} chooses the optimal $\hat{k}$ by maximizing the ratio of the Sum of Squares Between (SSB) to the Sum of Squares Within (SSW) as follows [@calinski1974dendrite]: $$CH = \frac{N - k}{k - 1} \cdot \frac{SSB}{SSW} \enspace , \label{Eq:ch-index}$$ in which $ SSW = \sum\limits_{i=1}^{k}\sum\limits_{j=1}^{N_i} (x_j - \langle c^{(k)}_i\rangle)^2$ is a measure of cohesion and $ SSB = \sum\limits_{i=1}^{k} N_i \cdot (\langle c^{(k)}_i \rangle - \langle C^{(k)}\rangle)^2$ is a measure of separation. The ratio is normalized by the number of data points $N$ and the number of groups $k$ to ensure a similar scale when comparing different numbers of groups. The [*Gap Statistic*]{} compares the $SSW$ of a clustering retrieval from the actual dataset with what would be expected from a clustering retrieval of a uniformly distributed dataset, $SSW_{random}$, as follows [@Tibshirani:2001fj]: $$Gap = \mathbb{E}(\log{(SSW_{random})}) - \log{(SSW)} \enspace , \label{Eq:gap}$$ such that the greater the difference between the random and actual cohesions, the higher the quality of the clustering retrieval. The optimal $\hat{k}$ is chosen as the smallest $k$ for which $Gap(k) \geq Gap(k+1) - S_{k+1}$, in which $S_{k+1}$ is the standard deviation of $\log{(SSW_{random})}$. Experimental Setup ------------------ The [*InfoGuide*]{} method was validated by comparing the quality of its clustering retrievals with those of other internal metrics over artificially-generated, benchmark, and real-world datasets. Three commonly used clustering algorithms with distinct underlying approaches were used: K-Means [@macqueen1967some], Gaussian Mixture Model (GMM) [@rasmussen2000infinite], and Agglomerative-Ward, i.e., hierarchical agglomerative clustering with Ward’s linkage [@ward1963]. For each algorithm, $k \in [k_{min}, k_{max}]$ clusters were repeatedly retrieved $30$ times with $k_{min}=1$ and $k_{max}=11$. 
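For comparison, the distance-based selection of $\hat{k}$ performed by the baseline metrics can be sketched with scikit-learn's implementations of the Silhouette and CH scores. This is an illustration of the selection loop on synthetic blobs, not the experimental code of this paper:

```python
# Sketch: picking k_hat by maximizing two of the distance-based internal
# metrics from this section (illustrative; assumes scikit-learn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, silhouette_score

rng = np.random.default_rng(0)
# Three well-separated Gaussian blobs, so the true k* is 3.
X = np.vstack([rng.normal(m, 0.3, (60, 2)) for m in (0, 3, 6)])

scores = {}
for k in range(2, 8):                       # silhouette requires k >= 2
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = (silhouette_score(X, labels),
                 calinski_harabasz_score(X, labels))

k_hat_si = max(scores, key=lambda k: scores[k][0])   # maximize Silhouette
k_hat_ch = max(scores, key=lambda k: scores[k][1])   # maximize CH index
```

On clean, convex blobs like these, both criteria recover $k^* = 3$; the paper's point is that they degrade on datasets with less idealized statistical structure.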
The optimal clustering retrieval $\hat{C}$ was obtained according to [*InfoGuide*]{} as well as to the Silhouette, Calinski-Harabasz Index, and Gap Statistic. A total of $7,920$ clustering retrievals $C^{(k)}$ were considered, using $8$ datasets $\times$ $30$ trials $\times$ $3$ algorithms $\times$ $|[k_{min}, k_{max}]| = 11$ numbers of clusters. For the artificially-generated and benchmark datasets, for which the ground truth $C^*$ is available, two evaluations were performed: the probability of finding the true $k^*$, $Pr(\langle \hat{k} \rangle = k^*)$, and the Normalized Mutual Information, $NMI(C^*, \hat{C})$, between the retrieved clusters $\hat{C}$ and the ground truth $C^*$. The probability of finding $k^*$ was quantified using the Wilson score, which estimates the population proportion of a binomial distribution in which a success is encoded as $\hat{k} = k^*$. The $NMI$ quantifies the decrease in the entropy of $\hat{C}$ by knowing $C^*$. For the real-world dataset, an external evaluation was performed by quantifying the goodness of fit of a prediction model when the optimal clustering retrieval $\hat{C}$ is included as an additional predictor. In this work, a linear regression was used, and the goal was to compare the different metrics rather than to obtain the best prediction model. To control for model complexity and avoid overfitting, the out-of-sample adjusted $R^2$, $R^2_{adj-out}$, was used. All of the code, datasets, and analysis are available on the Open Science Framework (OSF) repository of this project at <https://doi.org/10.17605/OSF.IO/ZQYNC>. Data ---- Artificially-generated, benchmark, and real-world datasets were used (Table \[tab:data\]). The artificial datasets were reproduced from the previous work of Tibshirani et al. on the Gap Statistic [@Tibshirani:2001fj], in which $5$ datasets were artificially generated according to normally distributed features. For this work, the first dataset was excluded to ensure a fair comparison among the other internal metrics. 
This dataset arbitrarily assumes that only one group exists, and internal metrics such as the Silhouette and the CH index are not intrinsically able to retrieve $\hat{k} = 1$. The benchmark datasets were extracted from the UCI repository [@Dua:2019], which contains, unlike artificially-generated data, datasets with non-normal statistical distributions, often displaying, for instance, a high skewness. In this work, a real-world dataset containing socioeconomic variables at the county level was obtained from the American Community Survey [@acs]. It includes race, education, and income for each county in the United States. A goodness-of-fit measurement of a prediction model was used, in which the number of heart-failure deaths is predicted based on the following associated predictors: the total population size, the number of people with diabetes and obesity, as well as the percentage of the population older than 65 years. The dataset was obtained from the Centers for Disease Control and Prevention [@cdc].

------------ -------------- ------ ----- -------
type         dataset        $N$    $F$   $k^*$
artificial   b              1000   10    3
             c              1000   10    4
             d              1000   10    4
             e              1000   10    2
benchmark    Iris           150    4     3
             Wine           178    13    3
             Wine quality   1599   11    6
real-world   ACS county     3142   21    -
------------ -------------- ------ ----- -------

: \[tab:data\] The characteristics of the datasets.

Results {#Results} ======= Comparison of Clustering Retrieval among Dataset Types ------------------------------------------------------ Quality measures of clustering retrieval quantify the extent to which the retrieved clusters resemble idealized clustering aspects that are often unrealistic when considering the statistical properties underlying the generating process of real-world datasets. Not surprisingly, these measures are largely evaluated over artificially-generated datasets. 
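The Wilson score used above to estimate $Pr(\hat{k} = k^*)$ is a standard interval for a binomial proportion; a minimal sketch follows (the trial counts are hypothetical, chosen only to illustrate the calculation):

```python
# Wilson score interval for the proportion of trials in which k_hat == k*
# (z = 1.96 for a 95% interval). Standard formula; trial counts are made up.
import math

def wilson(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g., 24 of 30 trials recovered the true k*:
lo, hi = wilson(successes=24, n=30)
```

Unlike the naive normal approximation, the Wilson interval stays inside $(0, 1)$ and behaves sensibly for the small trial counts (30 per configuration) used here.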
The clustering retrieval of [*InfoGuide*]{} was compared against the other approaches using both the artificially-generated (Fig. \[fig:prob\_mutual\_types\], left) and benchmark datasets (Fig. \[fig:prob\_mutual\_types\], right). ![Comparison of clusters retrieved from artificially-generated and benchmark datasets according to (left) the probability of finding the true number of clusters $k^*$ and (right) the normalized mutual information between the retrieved $\mathcal{\hat{C}}$ and true $C^*$ clusters. []{data-label="fig:prob_mutual_types"}](types_metrics_prob_norm_mutual_without_a.png){width="\linewidth"} Overall, the correct number of clusters is more likely to be retrieved, and a higher information gain is typically obtained, in the artificially-generated datasets than in the benchmark datasets. [*InfoGuide*]{} not only displays the highest information gain in the benchmark datasets but also displays the smallest decrease in information gain from the artificial to the benchmark datasets. Though the Silhouette and Gap appear to retrieve superior clusters in the artificial datasets, they retrieve the worst clusters in the benchmark datasets. Comparison of Clustering Retrieval among Algorithms --------------------------------------------------- Generally, each clustering algorithm attempts to retrieve clusters that resemble its idealized aspects of clustering quality defined a priori. Therefore, the clustering retrieval of each algorithm was separately compared according to $Pr(\langle k \rangle = k^*)$ and $NMI(C^*, \hat{C})$ using both the artificially-generated (Fig. \[fig:prob\_mutual\_algorithms\], left) and benchmark (Fig. \[fig:prob\_mutual\_algorithms\], right) datasets. ![Comparison of clusters retrieved from (left) artificially-generated and (right) benchmark datasets according to (top) the probability of finding the true number of clusters $k^*$ and (bottom) the normalized mutual information between the retrieved $\mathcal{\hat{C}}$ and true $\mathcal{C}^*$ clusters. 
[]{data-label="fig:prob_mutual_algorithms"}](types_algorithms_metrics_prob_norm_mutual_without_a.png){width="\linewidth"} When Agglomerative-Ward is used, both Gap and Silhouette tend to retrieve the best clusters in the artificial datasets but the worst clusters in the benchmark datasets. Even [*InfoGuide*]{} retrieved its worst clusters when Agglomerative-Ward was used. Interestingly, Agglomerative-Ward is the only deterministic algorithm, and its results may suggest that stochastic components can aid algorithms in navigating complex datasets. When the GMM and K-Means algorithms were used, each metric retrieved comparable clusters from the artificial datasets according to both $Pr(\langle k \rangle = k^*)$ and $NMI$. Using the benchmark datasets, however, [*InfoGuide*]{} retrieved the best clusters when the GMM algorithm was used: compared to the second-best metric, Gap, [*InfoGuide*]{} was two times more likely to retrieve the correct number of clusters and also obtained almost two times more information gain. Comparison of Clustering Retrievals in Real-World Datasets ---------------------------------------------------------- Clustering analysis has been used to find groups in real-world datasets lacking ground truth. To circumvent the absence of ground truth, external validation is commonly used by independently choosing an external dataset of interest that contains metadata associated with all data points within each cluster. ![Results of the linear regression model for the out-of-sample $R^2_{adjusted}$ metric. The predicted value was the rate of heart-failure deaths by county in the US; the clusters found by each algorithm, guided by each metric, using county-level information about race, income, and education, were used to aid the prediction. 
The clusters found by [*InfoGuide*]{} added more information to the model in comparison to the other metrics for the GMM and K-Means algorithms.[]{data-label="fig:model"}](r2_adjusted_outsample.png){width="\linewidth"} Overall, the clusters retrieved by [*InfoGuide*]{} obtained the highest out-of-sample adjusted coefficient of determination $R^2_{adj-out}$ when compared to the other metrics (Fig. \[fig:model\]). The clusters retrieved by [*InfoGuide*]{} were able to explain roughly 3% more of the variation in heart-failure deaths. Though this is a modest improvement, it can correspond to a total of 100 thousand heart-failure deaths incorrectly predicted among the 2.3 million total heart-failure deaths in the US. Conclusions {#Conclusions} =========== More than half a century after the inception of the first clustering algorithm, clustering analysis still lacks a more automatic framework for clustering retrieval based on internal metrics with less unrealistic assumptions. In this work, we proposed the [*InfoGuide*]{} method, which uses traces of information gain for automatic clustering analysis. The results demonstrated that [*InfoGuide*]{} may be more suitable for retrieving clusters in real-world datasets displaying nontrivial statistical properties. In the benchmark and real-world datasets, GMM, the algorithm with the least strict assumptions, obtained the best clustering retrievals. Future work should include a more diverse set of clustering algorithms and datasets from other domains. Although additional validation is needed, the [*InfoGuide*]{} method and the idea of using traces of information gain may become a suitable approach for automatic clustering analysis.
--- address: - 'Center for Nuclear Studies, Department of Physics, The George Washington University, Washington, D.C. 20052, USA' - | Jurusan Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia\ and\ Center for Nuclear Studies, Department of Physics, The George Washington University, Washington, D.C. 20052, USA author: - 'C. Bennhold, H. Haberzettl' - 'T. Mart' title: 'A new resonance in $K^+\Lambda$ electroproduction: the $D_{13}$(1895) and its electromagnetic form factors' --- Introduction ============ The physics of nucleon resonance excitation continues to provide a major challenge to hadronic physics[@nstar] due to the nonperturbative nature of QCD at these energies. While methods like Chiral Perturbation Theory are not applicable to $N^*$ physics, lattice QCD has only recently begun to contribute to this field. Most of the theoretical work on the nucleon excitation spectrum has been performed in the realm of quark models. Models that contain three constituent valence quarks predict a much richer resonance spectrum[@NRQM; @capstick94] than has been observed in $\pi N\to \pi N$ scattering experiments. Quark model studies have suggested that those “missing” resonances may couple strongly to other channels, such as the $K \Lambda$ and $K \Sigma$ channels[@capstick98], or final states involving vector mesons. The Elementary Model ==================== Using new SAPHIR data[@saphir98] we reinvestigate the $p(\gamma, K^+)\Lambda$ process employing an isobar model described in Ref.[@fxlee99]. We are especially interested in a structure around $W = 1900$ MeV, revealed for the first time in the $K^+ \Lambda$ total cross section data. Guided by a recent coupled-channels analysis[@feuster98], the low-energy resonance part of this model includes three states that have been found to have significant decay widths into the $K^+\Lambda$ channel: the $S_{11}$(1650), $P_{11}$(1710), and $P_{13}(1720)$ resonances. 
In order to approximately account for unitarity corrections at tree level, we include energy-dependent widths along with partial branching fractions in the resonance propagators[@fxlee99]. The background part includes the standard Born terms along with the $K^*$(892) and $K_1$(1270) vector meson poles in the $t$-channel. As in Ref.[@fxlee99], we employ the gauge method of Haberzettl[@haberzettl98] to include hadronic form factors. The fit to the data was significantly improved by allowing for separate cut-offs for the background and resonant sectors. For the former, the fits produce a soft value around 800 MeV, leading to a strong suppression of the background terms, while the resonant cut-off is determined to be 1900 MeV.

------------------ ------------ -------------------------- ------------------
Resonance          Status       Extracted                  Quark Model
$S_{11}(1650)$     \*\*\*\*     $-4.83\pm 0.05$            $-4.26\pm 0.98$
$P_{11}(1710)$     \*\*\*       $~~1.03\pm 0.17$           $-0.54\pm 0.12$
$P_{13}(1720)$     \*\*\*\*     $~1.17\pm 0.04$            $-1.29\pm 0.24$
$D_{13}(1895)$     $\dagger$    $~2.29^{+0.72}_{-0.20}$    $-2.72\pm 0.73$
------------------ ------------ -------------------------- ------------------

: \[table\_cc1\] Extracted results for each resonance compared with the quark-model predictions.

Results from Kaon Photoproduction: a new $D_{13}$ State at 1895 MeV =================================================================== Figure \[fig:total\] compares our model described above with the [SAPHIR]{} total cross section data. Our result shows only one peak near threshold and cannot reproduce the data at higher energies without the inclusion of a new resonance with a mass of around 1900 MeV. While there are no 3- or 4-star isospin-1/2 resonances around 1900 MeV in the Particle Data Book, several 2-star states are listed, such as the $P_{13}(1900)$, $F_{17}(1990)$, $F_{15}(2000)$ and $D_{13}(2080)$. On the theoretical side, the constituent quark model by Capstick and Roberts[@capstick94] predicts many new states around 1900 MeV; however, only a few of them have been calculated to have a significant $K \Lambda$ decay width[@capstick98]. 
These are the $[S_{11}]_3$(1945), $[P_{11}]_5$(1975), $[P_{13}]_4$(1950), and $[D_{13}]_3$(1960) states, where the subscript refers to the particular band in which the state is predicted. We have performed fits for each of these possible states, allowing the fit to determine the mass, width and coupling constants of the resonance. While we found that all four states can reproduce the structure at $W$ around 1900 MeV, it is only the $[D_{13}]_3$(1960) state that is predicted to have a large photocoupling along with a sizeable decay width into the $K \Lambda$ channel. Table \[table\_cc1\] presents the remarkable agreement, up to a sign, between the quark model predictions and our extracted results for the $[D_{13}]_3$(1960) state. In our fit, the mass of the $D_{13}$ comes out to be 1895 MeV; we will use this energy to refer to this state below. How reliable are the quark model predictions? Clearly, one test is to confront their predictions with the extracted couplings for the well-established resonances in the low-energy regime of the $p(\gamma, K^+)\Lambda$ reaction, the $S_{11}(1650)$, $P_{11}(1710)$ and $P_{13}(1720)$ excitations. Table \[table\_cc1\] shows that the magnitudes of the extracted partial widths for the $S_{11}(1650)$, $P_{11}(1710)$, and $P_{13}(1720)$ are in good agreement with the quark model. Therefore, even though the amazing quantitative agreement for the decay widths of the $D_{13}$(1895) is probably accidental, we believe the structure in the SAPHIR data is in all likelihood produced by a state with these quantum numbers. Further evidence for this conclusion is found below in our discussion of the recent JLab kaon electroproduction data. As shown in Ref.[@mart99], the difference between the two calculations is much smaller for the differential cross sections. Including the $D_{13}$(1895) does not affect the threshold and low-energy regime while it does improve the agreement at higher energies. 
The difference between the two models can be seen more clearly in Fig.\[fig:dif1\], where the differential cross section is plotted in a three-dimensional form. As shown by the lower part of Fig.\[fig:dif1\], the signal for the missing resonance at $W$ around 1900 MeV is most pronounced in the forward and backward directions. Therefore, in order to see such an effect in the differential cross section, finer angular bins are needed for these two kaon directions. Figure \[fig:plamsig\] shows that the influence of the new state on the recoil polarization is rather small for all angles, which demonstrates that the recoil polarization is not the appropriate observable with which to further study this resonance. On the other hand, the photon asymmetry of $K^+\Lambda$ photoproduction shows larger variations between the two calculations, especially at higher energies. Here the inclusion of the new state leads to a sign change in this observable, a signal that should be easily detectable by experiments with linearly polarized photons. Figure \[fig:cxcz\] shows double polarization observables for an experiment with circularly polarized photons and a polarized recoil. As expected, we find no influence of the $D_{13}$(1895) at threshold. At resonance energies there are again clear differences between the two predictions. Results from Kaon Electroproduction: Electromagnetic Form Factors of the $D_{13}$(1895) ======================================================================================= All previous descriptions of the kaon electroproduction process[@previous] have performed fits to both photo- and electroproduction data simultaneously, in an attempt to provide a better constraint on the coupling constants. This method clearly runs the danger of obscuring, rather than clarifying, the underlying production mechanism. 
For example, anomalous behavior of the response functions in a certain $k^2$ range would be parameterized into the effective coupling constants, rather than be expressed in a particular form factor. Here, we adopt the philosophy used in pion electroproduction over the last decade: we demand that the kaon electroproduction amplitude be fixed at the photon point by the fit to the photoproduction data. Thus, all hadronic couplings, photocouplings and hadronic form factors are fixed; the only remaining freedom comes from the electromagnetic form factors of the included mesons and baryons. Extending our isobar model of Ref.[@fxlee99] to finite $k^2$ requires the introduction of additional contact terms in the Born sector in order to properly incorporate gauge invariance[@haberzettl99]. We choose standard electromagnetic form factors for the nucleon[@gari92]; for the hyperons we use the hybrid vector meson dominance model[@williams97]. We use monopole form factors for the meson resonances, with their cut-offs taken as free parameters, determined to be $\Lambda =1.107$ GeV and $0.525$ GeV for the $K^*$(892) and $K_1$(1270), respectively. That leaves the resonance form factors to be determined, which in principle can be obtained from pion electroproduction. In practice, the quality of the data at higher $W$ has not permitted such an extraction. For the $S_{11}$(1650) state, we use a parameterization given by Ref.[@penner]. For the $P_{11}$(1710), $P_{13}(1720)$ and $D_{13}$(1895) states we adopt the following functional form for their Dirac and Pauli form factors $F_1$ and $F_2$: $$\begin{aligned} F(k^2) &=& \left( 1-\frac{k^2}{\Lambda^2} \right)^{-n} \, ,\end{aligned}$$ with the parameters $\Lambda$ and $n$ to be determined by the kaon electroproduction data. The resulting parameters are listed in Table \[table\_ff\]. 
\[table\_ff\]

------------------ ------------------- ------- ------------------- -------
Resonance          $\Lambda_1$ (GeV)   $n_1$   $\Lambda_2$ (GeV)   $n_2$
$P_{11}(1710)$     $1.37$              $4$     $-$                 $-$
$P_{13}(1720)$     $2.00$              $1$     $2.00$              $3.31$
$D_{13}(1895)$     $0.36$              $4$     $1.21$              $4$
------------------ ------------------- ------- ------------------- -------

: Parameters for the $P_{11}(1710)$, $P_{13}(1720)$, and $D_{13}(1895)$ form factors.

Figure \[fig:jlab\] shows the result of our fit. Clearly, the amplitude that includes the $D_{13}(1895)$ resonance yields much better agreement with the new experimental data[@gabi98] from Hall C at JLab. The model without this resonance produces a transverse cross section which drops monotonically as a function of $-k^2$, while in the longitudinal case this model dramatically underpredicts the data at small momentum transfer. At $W=1.83$ GeV, the data are close in energy to the new state, thus allowing us to study the $-k^2$ dependence of its form factors. The contribution of the Born terms is negligibly small for the transverse cross section but remains sizeable for the longitudinal one. We point out that without the $D_{13}(1895)$ we did not find a reasonable description of the JLab data, even if we provided for maximum flexibility in the functional form of the other resonance form factors. The same holds true if the new resonance is assumed to be an $S_{11}$ or a $P_{11}$ state. Even including an additional $P_{13}$ state around 1900 MeV does not improve the fit to the electroproduction data. It is only with the interference of two form factors given by the coupling structure of a different spin-parity state, viz. $D_{13}$, that a description becomes possible. 
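The fall-off behavior driven by the soft cut-off can be made concrete by evaluating the functional form $F(k^2) = (1 - k^2/\Lambda^2)^{-n}$ with the $D_{13}$(1895) Dirac-term parameters from Table \[table\_ff\] ($\Lambda_1 = 0.36$ GeV, $n_1 = 4$); a minimal numerical sketch, for illustration only:

```python
# Evaluate F(k^2) = (1 - k^2/Lambda^2)^(-n) with the D13(1895) Dirac-term
# parameters Lambda_1 = 0.36 GeV, n_1 = 4 from Table [table_ff]. Here k2 is
# the four-momentum transfer squared in GeV^2, so -k2 > 0 in the spacelike
# (electroproduction) region. Illustrative sketch, not the fitting code.
def form_factor(k2, lam, n):
    return (1.0 - k2 / lam**2) ** (-n)

f_at_0 = form_factor(0.0, lam=0.36, n=4)    # photon point: normalized to 1
f_at_03 = form_factor(-0.3, lam=0.36, n=4)  # -k^2 = 0.3 GeV^2: strongly suppressed
```

With this soft $\Lambda_1$, $F_1$ has already dropped by roughly two orders of magnitude at $-k^2 = 0.3$ GeV$^2$, consistent with the rapid small-$k^2$ fall-off discussed below.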
We therefore find that these new kaon electroproduction data provide additional evidence supporting our suggestion that the quantum numbers of the new state indeed correspond to a $D_{13}$. The form factors extracted for the $D_{13}(1895)$ are shown in Fig.\[fig:form\], in comparison to the Dirac and Pauli form factors of the proton and those of the $\Delta$(1232). While the $F_2(k^2)$ form factors look similar for all three baryons, $F_1(k^2)$ of the $D_{13}(1895)$ resonance falls off dramatically at small $k^2$. It is the behavior of this form factor that leads to the structure of the transverse and longitudinal cross sections at $-k^2 = 0.2 - 0.3$ GeV$^2$; at higher $k^2$ both response functions are dominated by $F_2(k^2)$. The experimental exploration of the small-$k^2$ regime could therefore provide stringent constraints on the extracted form factors. Conclusion ========== We have investigated a structure around $W= 1900$ MeV in the new [SAPHIR]{} total cross section data in the framework of an isobar model and found that the data can be well reproduced by including a new $D_{13}$ resonance with a mass, width and coupling parameters in good agreement with the values predicted by the recent quark model calculation of Ref.[@capstick98]. To further elucidate the role and nature of this state we suggest measurements of the polarized photon asymmetry around $W = 1900$ MeV for the $p(\gamma, K^+)\Lambda$ reaction. Furthermore, we extended our isobar description to kaon electroproduction by allowing only the electromagnetic resonance transition form factors to vary. Employing the new JLab Hall C $p(e, e' K^+)\Lambda$ data at $W=1.83$ GeV, we find that a description of these data is only possible when the new $D_{13}$ state is included in the model. The dominance of this state at these energies allowed us to extract its transition form factors, one of which was found to be dramatically different from the other resonance form factors. 
Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Gregor Penner for providing his parameterization of the $\Delta(1232)$ form factor. This work was supported by the US DOE grant DE-FG02-95ER-40907 (CB and HH) and the University Research for Graduate Education (URGE) grant (TM).

[99]{}

, Washington, D.C., 1997, edited by H. Haberzettl, C. Bennhold, and W. J. Briscoe, $\pi N$ [*Newsletter*]{} [**14**]{}, 1 (1998).

N. Isgur and G. Karl, [*Phys. Lett.*]{} B [**72**]{}, 109 (1977); [*Phys. Rev.*]{} D [**23**]{}, 817 (1981); R. Koniuk and N. Isgur, [*Phys. Rev.*]{} D [**21**]{}, 1868 (1980).

S. Capstick and W. Roberts, [*Phys. Rev.*]{} D [**49**]{}, 4570 (1994).

S. Capstick and W. Roberts, [*Phys. Rev.*]{} D [**58**]{}, 074011 (1998).

Collaboration: M.Q. Tran [*et al.*]{}, [*Phys. Lett.*]{} B [**445**]{}, 20 (1998).

F.X. Lee, T. Mart, C. Bennhold, and L.E. Wright, ‘Quasifree Kaon Photoproduction on Nuclei’, [nucl-th/9907119]{}.

T. Feuster and U. Mosel, [*Phys. Rev.*]{} C [**58**]{}, 457 (1998); [*Phys. Rev.*]{} C [**59**]{}, 460 (1999).

H. Haberzettl, [*Phys. Rev.*]{} C [**56**]{}, 2041 (1997); H. Haberzettl, C. Bennhold, T. Mart, and T. Feuster, [*Phys. Rev.*]{} C [**58**]{}, R40 (1998).

S. Capstick, [*Phys. Rev.*]{} D [**46**]{}, 2864 (1992).

Particle Data Group: C. Caso [*et al.*]{}, [*Eur. Phys. J.*]{} C [**3**]{}, 1 (1998).

ABBHHM Collaboration, [*Phys. Rev.*]{} [**188**]{}, 2060 (1969).

T. Mart and C. Bennhold, ‘Evidence for a missing nucleon resonance in kaon photoproduction’, [nucl-th/9906096]{}.

R.A. Williams, C.-R. Ji, and S.R. Cotanch, [*Phys. Rev.*]{} C [**46**]{}, 1617 (1992); T. Mart, C. Bennhold, and C. E. Hyde-Wright, [*Phys. Rev.*]{} C [**51**]{}, R1074 (1995); J.C. David, C. Fayard, G.H. Lamot, and B. Saghai, [*Phys. Rev.*]{} C [**53**]{}, 2613 (1996).

H. Haberzettl, T. Mart, and C. Bennhold, in preparation.

M.F. Gari and W. Krümpelmann, [*Phys. Rev.*]{} D [**45**]{}, 1817 (1992).

R.A. Williams and T.M. Small, [*Phys. Rev.*]{} C [**55**]{}, 882 (1997).

G. Penner, T. Feuster, and U. Mosel, ‘Pion Electroproduction and Pion Induced Dileptonproduction on the Nucleon’, [nucl-th/9802010]{}.

G. Niculescu [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**81**]{}, 1805 (1998).

P. Brauel [*et al.*]{}, [*Z. Phys.*]{} C [**3**]{}, 101 (1979).
---
abstract: 'These notes were prepared on the occasion of a mini-course given by the author at the “CIMPA Research School - Hamiltonian and Lagrangian Dynamics" (10–19 March 2015 - Salto, Uruguay). The talks were meant as an introduction to the problem of finding periodic orbits of prescribed energy for autonomous Tonelli Lagrangian systems on the twisted cotangent bundle of a closed manifold. In the first part of the lecture notes, we put together in a general theorem old and new results on the subject. In the second part, we focus on an important class of examples: magnetic flows on surfaces. For such systems, we discuss a special method, originally due to Taĭmanov, to find periodic orbits with low energy and we study in detail the stability properties of the energy levels.'
address: 'WWU Münster, Mathematisches Institut, Einsteinstrasse 62, D-48149 Münster, Germany'
author:
- Gabriele Benedetti
bibliography:
- 'school.bib'
title: |
  Lecture notes on closed orbits\
  for twisted autonomous Tonelli Lagrangian flows\
---

Introduction
============

The study of invariant sets plays a crucial role in the understanding of the properties of a dynamical system: it can be used to obtain information on the dynamics both at a local scale, for example to determine the existence of nearby stable motions, and at a global one, for example to detect the presence of chaos. In this regard we refer the reader to the monograph [@Mos73]. In the realm of continuous flows, periodic orbits are the simplest example of invariant sets and, therefore, they represent the first object of study. For systems admitting a Lagrangian formulation, closed orbits have received special consideration in past years, in particular in cases of geometrical or physical significance, such as geodesic flows [@Kli78] or mechanical flows in phase space [@Koz85].
In [@Con06] Contreras formulated a very general theorem about the existence of periodic motions for autonomous Lagrangian systems over compact configuration spaces. Later on, this result was analysed in detail by Abbondandolo, who discussed it in a series of lecture notes [@Abb13]. The purpose of the present work is to give a generalization of such a theorem, based on the recent papers [@Mer10; @AB15b], to systems which admit only a *local* Lagrangian description (Theorem \[thm:main\] below). Among these systems we find the important example of *magnetic flows on surfaces*, which we introduce in Section \[sub:hom\]. We look at them in detail in the last part of the notes: first, we will sketch a method, devised by Taĭmanov in [@Tai93], to find periodic orbits with low energy; second, we will study the *stability* of the energy levels, a purely symplectic property, which has important consequences for the existence of periodic orbits. Let us now begin our study by making precise the general setting in which we work.

Twisted Lagrangian flows over closed manifolds {#sub:twi}
----------------------------------------------

Let $M$ be a closed connected $n$-dimensional manifold and denote by $$\begin{aligned} \pi:TM&\ \longrightarrow\ M\\ (q,v)&\ \longmapsto\ q \end{aligned}\quad\quad\quad\quad \begin{aligned} \pi:T^*M&\ \longrightarrow\ M\\ (q,p)&\ \longmapsto\ q \end{aligned}$$ the tangent and the cotangent bundle projections of $M$. Let us also fix an auxiliary Riemannian metric $g$ on $M$ and let $|\cdot|$ denote the associated norm. Let $\sigma\in\Omega^2(M)$ be a closed $2$-form on $M$ which we refer to as the *magnetic form*. We call *twisted cotangent bundle* the symplectic manifold $(T^*M,\omega_\sigma)$, where $\omega_\sigma:=d\lambda-\pi^*\sigma$.
Here $\lambda$ is the canonical $1$-form defined by $$\lambda_{(q,p)}\ =\ p\circ d_{(q,p)}\pi\,, \quad\quad \forall\, (q,p)\in T^*M\,.$$ If $K:T^*M\rightarrow{{\mathbb R}}$ is a smooth function, we denote by $t\mapsto \Phi^{(K,\sigma)}_t$ the Hamiltonian flow of $K$. It is generated by the vector field $X_{(K,\sigma)}$ defined by $$\omega_\sigma(X_{(K,\sigma)},\,\cdot\,)\ =\ -dK\,.$$ In local coordinates on $T^*M$ such a flow is obtained by integrating the equations $$\left\{\begin{aligned} \dot q&=\ \frac{\partial K}{\partial p}\,,\\ \dot p&=\ -\frac{\partial K}{\partial q}\ +\ \sigma\left(\frac{\partial K}{\partial p},\,\cdot\,\right)\,. \end{aligned} \right.$$ The function $K$ is an integral of motion for $\Phi^{(K,\sigma)}$. Moreover, if $k$ is a regular value of $K$, then the flow lines lying on $\{K=k\}$ are tangent to the $1$-dimensional distribution $\ker\omega_\sigma|_{\{K=k\}}$. This means that if $K':T^*M\rightarrow{{\mathbb R}}$ is another Hamiltonian with a regular value $k'$ such that $\{K'=k'\}=\{K=k\}$, then $\Phi^{(K',\sigma)}$ and $\Phi^{(K,\sigma)}$ coincide up to a *time reparametrization* on the common hypersurface. In other words, there exists a smooth family of diffeomorphisms $\tau_{z}:{{\mathbb R}}\rightarrow{{\mathbb R}}$ parametrized by $z\in\{K'=k'\}=\{K=k\}$ such that $$\tau_{z}(0)\ =\ 0\ \quad\mbox{and}\quad\ \Phi^{(K,\sigma)}_t(z)\ =\ \Phi^{(K',\sigma)}_{\tau_{z}(t)}(z)\,.$$ Hence, there is a bijection between the closed orbits of the two flows on the hypersurface. Let $L:TM\rightarrow{{\mathbb R}}$ be a *Tonelli Lagrangian*.
This means that for every $q\in M$, the restriction $L|_{T_qM}$ is superlinear and strictly convex (see [@Abb13]): $$\label{eq:ton} \begin{aligned} \lim_{|v|\rightarrow+\infty}\frac{L(q,v)}{|v|}\ &=\ +\infty\,,\quad\forall\, q\in M\,,\\ \frac{\partial^2L}{\partial v^2}(q,v)\ &>\ 0\,,\quad\forall\, (q,v)\in TM\,, \end{aligned}$$ where $\frac{\partial^2L}{\partial v^2}(q,v)$ is the Hessian of $L|_{T_qM}$ at $v\in T_qM$. The *Legendre transform* associated to $L$ is the fibrewise diffeomorphism $$\begin{aligned} \mathcal L:TM&\ \longrightarrow\ T^*M\\ (q,v)&\ \longmapsto\ \frac{\partial L}{\partial v}(q,v)\,.\end{aligned}$$ The *Legendre dual* of $L$ is the *Tonelli Hamiltonian* $$\begin{aligned} H:T^*M&\ \longrightarrow\ {{\mathbb R}}\\ (q,p)&\ \longmapsto\ p\Big(\mathcal L^{-1}(q,p)\Big)-L\big(\mathcal L^{-1}(q,p)\big)\,,\end{aligned}$$ which satisfies the analogue of \[eq:ton\] on $T^*M$. For every $k\in{{\mathbb R}}$, let $\Sigma^*_k:=\{H=k\}$. These sets are compact and invariant for $\Phi^{(H,\sigma)}$. As a consequence, such a flow is complete. We can use $\mathcal L$ to pull back to $TM$ the Hamiltonian flow of $H$. Let $\Phi^{(L,\sigma)}$ be the flow on $TM$ defined by conjugation $$\mathcal L\circ\Phi^{(L,\sigma)}\ =\ \Phi^{(H,\sigma)}\circ\mathcal L\,.$$ We call $\Phi^{(L,\sigma)}$ a $\textsf{twisted\ Lagrangian\ flow}$ and we write $X_{(L,\sigma)}$ for its generating vector field. Since $\Phi^{(H,\sigma)}$ is complete, $\Phi^{(L,\sigma)}$ is complete as well. The next proposition shows that the flow $\Phi^{(L,\sigma)}$ is locally a standard Lagrangian flow. Let $U\subset M$ be an open set such that $\sigma|_U=d\theta$ for some $\theta\in\Omega^1(U)$. There holds $$X_{(L-\theta,0)}\ =\ X_{(L,\sigma)}|_U\,,$$ where $L-\theta:TU\rightarrow{{\mathbb R}}$ is the Tonelli Lagrangian defined by $(L-\theta)(q,v)=L(q,v)-\theta_q(v)$ and $X_{(L-\theta,0)}$ is the standard Lagrange vector field of $L-\theta$. The proof of this result follows from the next exercise.
\[exe-EL\] Prove the following generalization of the Euler-Lagrange equations. Consider a smooth curve $\gamma:[0,T]\rightarrow M$. Then, the curve $(\gamma,\dot\gamma)$ is a flow line of $X_{(L,\sigma)}$ if and only if for every open set $W\subset M$ and every linear symmetric connection $\nabla$ on $W$, $$\left(\nabla_{\dot\gamma}\frac{\partial L}{\partial v}\right)(\gamma,\dot\gamma)\ =\ \frac{\partial L}{\partial q}(\gamma,\dot\gamma)\ +\ \sigma_{\gamma}(\dot\gamma,\cdot)$$ at every time $t\in[0,T]$ such that $\gamma(t)\in W$. In the above formula $\frac{\partial L}{\partial q}\in T^*M$ denotes the restriction of the differential of $L$ to the horizontal distribution given by $\nabla$. The magnetic form ----------------- Let $[\sigma]\in H^2(M;{{\mathbb R}})$ denote the cohomology class of $\sigma$. We observe that for any $\theta\in\Omega^1(M)$, there holds $$X_{(L+\theta,\sigma+d\theta)}=X_{(L,\sigma)}\,.$$ Since $L+\theta$ is still a Tonelli Lagrangian, we expect that general properties of the dynamics depend on $\sigma$ only via $[\sigma]$. Moreover, if $\theta\in\Omega^1(M)$ is defined by $\theta_q:=-\frac{\partial L}{\partial v}(q,0)$, then $$\min_{v\in T_qM}\Big(L(q,v)\ +\ \theta_q(v)\Big)\ =\ L(q,0)\ +\ \theta_q(0)\,,\quad\forall\,q\in M\,.$$ Therefore, without loss of generality we assume from now on that $L|_{T_qM}$ attains its minimum at $(q,0)$, for every $q\in M$. We can refine the classification of $\sigma$ given by $[\sigma]$ by looking at the cohomological properties of its lift to the universal cover. Let $\widetilde \sigma$ be the pull-back of $\sigma$ to the universal cover $\widetilde M\rightarrow M$. We say that $\sigma$ is *weakly exact* if $[\widetilde\sigma]=0$. 
This is equivalent to asking that $$\int_{S^2}u^*\sigma\ =\ 0\,,\quad\forall\,u:S^2\longrightarrow M\,.$$ We say that $\sigma$ admits a *bounded weak primitive* if there is $\widetilde\theta\in\Omega^1(\widetilde M)$ such that $d\widetilde\theta=\widetilde\sigma$ and $$\sup_{\widetilde q\in\widetilde M}|\widetilde\theta_{\widetilde q}|\ <\ +\infty\,.$$ In this case we write $[\widetilde\sigma]_b=0$. Notice that both notions that we just introduced depend on $\sigma$ only via $[\sigma]$. \[ex:sup\] If $M$ is a surface and $[\sigma]\neq0$, show that - if $M=S^2$, then $[\widetilde\sigma]\neq0$; - if $M={{\mathbb T}}^2$, then $[\widetilde\sigma]=0$, but $[\widetilde\sigma]_b\neq0$; - if $M\notin\{S^2,{{\mathbb T}}^2\}$, then $[\widetilde\sigma]_b=0$. Using the second point, prove that - if $M={{\mathbb T}}^n$ and $[\sigma]\neq0$, then $[\widetilde\sigma]=0$, but $[\widetilde\sigma]_b\neq0$; - if $M$ is any manifold and $[\widetilde\sigma]_b=0$, then $$\int_{{{\mathbb T}}^2}u^*\sigma\ =\ 0\,,\quad\forall\,u:{{\mathbb T}}^2\longrightarrow M\,.$$ Energy ------ As twisted Lagrangian flows are described by an autonomous Hamiltonian on the twisted cotangent bundle, they possess a natural first integral. It is the Tonelli function $E:TM\rightarrow{{\mathbb R}}$ given by $E:=H\circ\mathcal L$. We call it the *energy* of the system and we write $\Sigma_k:=\{E=k\}$, for every $k\in{{\mathbb R}}$. Let $V:M\rightarrow{{\mathbb R}}$ denote the restriction of $E$ to the zero section and let $e_m(L)$ and $e_0(L)$ denote the minimum and maximum of $V$, respectively. The energy can be written as $$E(q,v)\ =\ \frac{\partial L}{\partial v}(q,v)(v)\ -\ L(q,v)$$ and, for every $q\in M$, we have $$\min_{v\in T_qM}E(q,v)\ =\ E(q,0)\ =\ V(q)\ =\ -L(q,0)\,.$$ Moreover, - $k>e_0(L)$ if and only if $\pi:\Sigma_k\rightarrow M$ is an $S^{n-1}$-bundle (isomorphic to the unit tangent bundle of $M$). - $k<e_m(L)$ if and only if $\Sigma_k=\emptyset$. 
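As a concrete illustration (a routine computation, assuming the mechanical form of the Lagrangian introduced in Example I below), for $L(q,v)=\frac{1}{2}|v|^2-V(q)$ the Legendre dual and the energy take the familiar form:

```latex
% Mechanical Lagrangian: L(q,v) = |v|^2/2 - V(q).
% The Legendre transform is p = g_q(v, .), hence
H(q,p) \,=\, \tfrac{1}{2}|p|^2 \,+\, V(q)\,,
\qquad
E(q,v) \,=\, \frac{\partial L}{\partial v}(q,v)(v) \,-\, L(q,v)
       \,=\, \tfrac{1}{2}|v|^2 \,+\, V(q)\,.
% The restriction of E to the zero section is V, so e_m(L) = min V and
% e_0(L) = max V; for k > e_0(L) the level Sigma_k is the sphere bundle
% { |v| = sqrt(2(k - V(q))) }.
```

In particular, both items of the dichotomy above can be read off directly from the last formula.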
If $q_0\in M$ is a critical point of $V$, then $(q_0,0)$ is a constant periodic orbit of $\Phi^{(L,\sigma)}$ with energy $V(q_0)$.

The Mañé critical value of the universal cover
----------------------------------------------

When $\sigma$ is weakly exact we define the *Mañé critical value* of the universal cover as $$\label{mandef} c(L,\sigma)\ :=\ \inf_{d\widetilde\theta\,=\,\widetilde\sigma}\left(\,\sup_{\widetilde q\in\widetilde M}\widetilde H(\widetilde q,\widetilde\theta_{\widetilde q})\,\right)\ \in\ {{\mathbb R}}\cup\{+\infty\}\,,$$ where $\widetilde H:T^*\widetilde M\rightarrow{{\mathbb R}}$ is the lift of $H$ to $\widetilde M$. This number plays an important role since, as will become apparent from Theorem \[thm:main\] and the examples in Section \[sub:hom\], the dynamics on $\Sigma_k$ changes dramatically when $k$ crosses $c(L,\sigma)$.

\[prp-man\] If $\sigma$ is weakly exact, then

-   $c(L,\sigma)<+\infty$ if and only if $[\widetilde\sigma]_b=0$;
-   $c(L,\sigma)\geq e_0(L)$;
-   if $\sigma=d\theta_0$, where $\theta_0(\cdot)=\mathcal L(\cdot,0)$, then $c(L,\sigma)=e_0(L)$ and the converse is true, provided $e_0(L)=e_m(L)$;
-   given two Tonelli Lagrangians $L_1$ and $L_2$ and two real numbers $k_1$ and $k_2$ such that $\{H_1=k_1\}=\{H_2=k_2\}$, then $$c(L_1,\sigma)\geq k_1\ \Longleftrightarrow\ c(L_2,\sigma)\geq k_2\ \ \mbox{and}\ \ c(L_1,\sigma)\leq k_1\ \Longleftrightarrow\ c(L_2,\sigma)\leq k_2\,.$$

Example I: electromagnetic Lagrangians
--------------------------------------

Let $g$ be a Riemannian metric on $M$ and $V:M\rightarrow{{\mathbb R}}$ be a function. Suppose that the Lagrangian is of *mechanical type*, namely it has the form $$L(q,v)\ =\ \frac{1}{2}|v|^2\ -\ V(q)\,,$$ where $|\cdot|$ is the norm associated to $g$.
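For mechanical Lagrangians of this type, and assuming $\sigma$ weakly exact so that the definition applies, the Mañé critical value specializes to an explicit expression; we record it here as a sketch, obtained by substituting the Legendre dual $H(q,p)=\frac{1}{2}|p|^2+V(q)$ into the definition \[mandef\]:

```latex
% Substituting H(q,p) = |p|^2/2 + V(q) into the definition of c(L,sigma):
c(L,\sigma)
  \,=\, \inf_{d\widetilde\theta\,=\,\widetilde\sigma}\;
        \sup_{\widetilde q\in\widetilde M}
        \Big(\tfrac{1}{2}\big|\widetilde\theta_{\widetilde q}\big|^2
             \,+\, V(\widetilde q)\Big)\,,
% where V is lifted to the universal cover. In particular, c(L,sigma) is
% finite precisely when sigma admits a bounded weak primitive, in
% accordance with the first item of Proposition [prp-man].
```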
In this case we refer to $\Phi^{(L,\sigma)}$ as a *magnetic flow*, since we have the following physical interpretation of this system: it models the motion of a charged particle $\gamma$ moving in $M$ under the influence of a potential $V$ and a stationary magnetic field $\sigma$. Using Exercise \[exe-EL\], the equation of motion reads $$\label{El-sur} \nabla_{\dot{\gamma}}\dot\gamma\ =\ -\nabla V(\gamma)\ +\ Y_\gamma(\dot\gamma)\,,$$ where $\nabla V$ is the gradient of $V$ and, for every $q\in M$, $Y_q:T_qM\rightarrow T_qM$ is defined by $$g_q(Y_q(v_1),v_2)\ =\ \sigma_q(v_1,v_2)\,,\quad\forall\, v_1,v_2\in T_qM\,.$$ Prove that, if $k>\max V$, $\Phi^{(L,\sigma)}|_{\Sigma_k}$ can be described in terms of a purely kinetic system. Namely, define the Jacobi metric $g_k:=\frac{k-V}{k}g$ and the Lagrangian $L_k(q,v):=\frac{1}{2}|v|^2_k$, where $|\cdot|_k$ is the norm induced by $g_k$. Using the Hamiltonian formulation, show that $\Phi^{(L,\sigma)}|_{\{E=k\}}$ is conjugated (up to time reparametrization) to $\Phi^{(L_k,\sigma)}|_{\{E_k=k\}}$, where $E_k$ is the energy function of $L_k$. In the particular case $M=S^2$, magnetic flows describe yet another interesting mechanical system. Consider a rigid body in ${{\mathbb R}}^3$ with a fixed point, moving under the influence of a potential $V$. Suppose that $V$ is invariant under rotations around the axis $\hat z$. We identify the configuration of the rigid body with an element $\psi\in SO(3)$. Since $SO(3)$ is a Lie group, we use left multiplications to get $TSO(3)\simeq SO(3)\times{{\mathbb R}}^3\ni(\psi,\Omega)$, where $\Omega$ is the angular velocity of the body. Thus, we have a Lagrangian system on $SO(3)$ with $L=\frac{1}{2}|\Omega|^2-V(\psi)$ and $\sigma=0$. Here $|\cdot|$ denotes the norm induced by the tensor of inertia of the body. The quotient of $SO(3)$ by the action of the group of rotations around $\hat z$ is a two-sphere.
The quotient map $q:SO(3)\rightarrow S^2$ sends $\psi$ to the unit vector in ${{\mathbb R}}^3$, whose entries are the coordinates of $\hat z$ in the basis determined by $\psi$. By the rotational symmetry, the quantity $\Omega\cdot \hat z$ is an integral of motion. Hence, for every $\omega\in{{\mathbb R}}$, the set $\{\Omega\cdot\hat z=\omega\}\subset TSO(3)$ is invariant under the flow and we have the commutative diagram $$\xymatrix{ \big(\{\Omega\cdot\hat z=\omega\},X_{(L,0)}\big)\ar[r]^-{dq} \ar[d]_{\pi} & \big(TS^2,X_{(L_\omega,\sigma_\omega)}\big) \ar[d]^{\pi} \\ SO(3)\ar[r]^{q} & S^2\,.}$$ The resulting twisted Lagrangian system $(L_\omega,\sigma_\omega)$ on $S^2$ can be described as follows: - $L_\omega(q,v)=\frac{1}{2}|v|^2-V_\omega(q)$, where $|\cdot|$ is the norm associated to a *convex* metric $g$ on $S^2$ (independent of $\omega$) and $V_\omega$ is a potential (depending on $\omega$); - $\sigma_\omega=\omega\cdot\kappa$, where $\kappa$ is the curvature form of $g$ (in particular $\sigma_\omega$ has integral $4\pi\omega$ and, if $\omega\neq0$, it is a symplectic form on $S^2$). The rigid body model presented in this subsection is described in detail in [@Kha79]. We refer the reader to [@Nov82], for other relevant problems in classical mechanics that can be described in terms of twisted Lagrangian systems. Example II: magnetic flows on surfaces {#sub:hom} -------------------------------------- We now specialize further the example of electromagnetic Lagrangians that we discussed in the previous subsection and we consider purely kinetic systems on a closed oriented Riemannian surface $(M,g)$. In this case $$L(q,v)\,:=\ \frac{1}{2}|v|^2\,,$$ and $\sigma=f\cdot\mu$, where $\mu$ is the metric area form and $f:M\rightarrow {{\mathbb R}}$. The magnetic endomorphism can be written as $Y=f\cdot\imath$, where $\imath:TM\rightarrow TM$ is the fibrewise rotation by $\pi/2$. 
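The identity $Y=f\cdot\imath$ can be verified in one line from the definition of $Y$ given in Example I, using the standard relation $\mu_q(v_1,v_2)=g_q(\imath v_1,v_2)$ between the oriented area form and the rotation by $\pi/2$:

```latex
% For sigma = f mu on an oriented surface:
g_q\big(Y_q(v_1),v_2\big) \,=\, \sigma_q(v_1,v_2)
  \,=\, f(q)\,\mu_q(v_1,v_2)
  \,=\, g_q\big(f(q)\,\imath v_1,\,v_2\big)\,,
% and, since g_q is non-degenerate, Y_q = f(q) i for every q in M.
```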
If the surface is isometrically embedded in the Euclidean space ${{\mathbb R}}^3$, $Y$ is the classical Lorentz force. Namely, we have $Y_q(v)=v\times B(q)$, where $\times$ is the cross product of vectors in ${{\mathbb R}}^3$ and $B$ is the vector field $B:M\rightarrow{{\mathbb R}}^3$ perpendicular to $M$ and determined by the equation ${\operatorname{vol}}_{{{\mathbb R}}^3}(B,\,\cdot,\,\cdot\,)=\sigma$, where ${\operatorname{vol}}_{{{\mathbb R}}^3}$ is the Euclidean volume. For purely kinetic systems $E=L$ and, therefore, the solutions of the twisted Euler-Lagrange equations are parametrized by a multiple of the arc length. More precisely, if $(\gamma,\dot\gamma)\subset\Sigma_k$, then $|\dot\gamma|=\sqrt{2k}$. In particular, the solutions with $k=0$ are exactly the constant curves. To characterise the solutions with $k>0$ we write down explicitly the twisted Euler-Lagrange equation \[El-sur\]: $$\label{El-sur2} \nabla_{\dot\gamma}\dot\gamma=f(\gamma)\cdot\imath\dot\gamma\,.$$ We see that $\gamma$ satisfies \[El-sur2\] if and only if $|\dot\gamma|=\sqrt{2k}$ and $$\label{cur-sur} \kappa_\gamma=s\cdot f(\gamma)\,,\quad\quad s\,:=\ \frac{1}{\sqrt{2k}}\,,$$ where $\kappa_\gamma$ is the geodesic curvature of $\gamma$. The advantage of working with Equation \[cur-sur\] is that it is invariant under orientation-preserving reparametrizations. Let us do some explicit computations when the data are homogeneous. Thus, let $g$ be a metric of constant curvature on $M$ and let $\sigma=\mu$. When $M\neq{{\mathbb T}}^2$ we assume, furthermore, that the absolute value of the Gaussian curvature is $1$. By \[cur-sur\], in order to find the trajectories of $\Phi^{(L,\sigma)}$ we need to solve the equation $\kappa_\gamma=s$ for all $s>0$. Denote by $\widetilde M$ the universal cover of $M$. Then, $\widetilde{S^2}=S^2$, $\widetilde{{{\mathbb T}}^2}={{\mathbb R}}^2$ and, if $M$ has genus larger than one, $\widetilde M=\mathbb H$, where $\mathbb H$ is the hyperbolic plane.
Our strategy will be to study the trajectories of the lifted flow and then project them down to $M$. Working on the universal cover is easier since there the problem has a bigger symmetry group. Notice, indeed, that the lifted flow is invariant under the group of orientation preserving isometries ${\operatorname{Iso}}_+(\widetilde M)$. ### The two-sphere Let us fix geodesic polar coordinates $(r,\varphi)\in(0,\pi)\times {{\mathbb R}}/2\pi{{\mathbb Z}}$ around a point $q\in S^2$ corresponding to $r=0$. The metric takes the form $dr^2+(\sin r)^2d\varphi^2$. Let $C_{r}(q)$ be the boundary of the geodesic ball of radius $r$ oriented in the counter-clockwise sense. We compute $\kappa_{C_{r}(q)}=\frac{1}{\tan r}$. Observe that $\tan r$ takes every positive value exactly once for $r\in(0,\pi/2)$. Therefore, if $s>0$, the trajectories of the flow are all supported on $C_{r(s)}(q)$, where $q$ varies in $S^2$ and $$r(s)\ =\ \arctan\frac{1}{s}\,\in\, (0,\pi/2)\,.$$ In particular, all orbits are closed and their period is $$T(s)\ =\ \frac{2\pi s}{\sqrt{s^2+1}}\,.$$ ### The two-torus In this case we readily see that the trajectories of the lifted flow are circles of radius $r(s)=1/s$. In particular, all the orbits are closed and contractible. Their period is $T(s)=2\pi$, hence it is independent of $s$ (or $k$). ### The hyperbolic surface We fix geodesic polar coordinates $(r,\varphi)\in(0,+\infty)\times {{\mathbb R}}/2\pi{{\mathbb Z}}$ around a point $q\in \mathbb H$ corresponding to $r=0$. The metric takes the form $dr^2+(\sinh r)^2d\varphi^2$. Defining $C_r(q)$ as in the case of $S^2$, we find $\kappa_{C_r(q)}=\frac{1}{\tanh r}$. Observe that $\tanh r$ takes all the values in $(0,1)$ exactly once, for $r\in (0,+\infty)$. 
Therefore, if $s\in(1,+\infty)$, the trajectories of the flow are the closed curves $C_{r(s)}(q)$, where $q$ varies in $\mathbb H$ and $$r(s)\ =\ {\operatorname{arc}}\!\tanh\frac{1}{s}\,\in\, (0,+\infty)\,.$$ In particular, for $s$ in this range all periodic orbits are contractible. The formula for the periods now reads $$T(s)=\frac{2\pi s}{\sqrt{s^2-1}}\,.$$ To understand what happens when $s\leq1$, we take the upper half-plane as a model for the hyperbolic plane. Thus, let $\mathbb H=\{\,z=(x,y)\in{{\mathbb C}}\ |\ y>0\,\}$. In these coordinates, the hyperbolic metric has the form $\frac{dx^2+dy^2}{y^2}$ and $${\operatorname{Iso}}_+(\mathbb H)\ =\ \Big\{\,z\mapsto\frac{az+b}{cz+d}\ \,\Big|\ \,a,b,c,d\in{{\mathbb R}}\,, \ ad-bc\,=\,1\,\Big\}\,.$$ We readily see that the affine transformations $z\mapsto az$, with $a>0$, form a subgroup of ${\operatorname{Iso}}_+(\mathbb H)$. This subgroup preserves all the Euclidean rays from the origin and acts transitively on each of them. Hence, we conclude that such rays have constant geodesic curvature. If $\varphi\in(0,\pi)$ is the angle made by such a ray with the $x$-axis, we find that the geodesic curvature of the ray is $\cos\varphi$. In order to carry out this computation, one has to write the metric using Euclidean polar coordinates centered at the origin. Using the whole isometry group, we see that all the segments of circle intersecting $\partial\mathbb H$ at angle $\varphi$ have geodesic curvature $\cos\varphi$. We claim that if $s\in(0,1)$ and $\nu\neq0$ is a free homotopy class of loops of $M$, there is a unique closed curve $\gamma_{s,\nu}$ in the class $\nu$ which has geodesic curvature $s$. The class $\nu$ corresponds to a conjugacy class in $\pi_1(M)$. We identify $\pi_1(M)$ with the set of deck transformations and we let $F:\mathbb H\rightarrow\mathbb H$ be a deck transformation belonging to the given conjugacy class.
By a standard result in hyperbolic geometry, $F$ has two fixed points on $\partial\mathbb H$ (remember, for example, that there exists a geodesic in $\mathbb H$ invariant under $F$). Then, $\gamma_{s,\nu}$ is the projection to $M$ of the unique segment of circle connecting the fixed points of $F$ and making an angle $\varphi=\arccos s$ with $\partial\mathbb H$. The uniqueness of $\gamma_{s,\nu}$ stems from the uniqueness of such a segment of circle. In a similar fashion, we consider the subgroup of ${\operatorname{Iso}}_+(\mathbb H)$ made of the maps $z\mapsto z+b$, with $b\in{{\mathbb R}}$. It preserves the horizontal line $\{y=1\}$ and acts transitively on it. Hence, this curve has constant geodesic curvature. A computation shows that it is equal to $1$, if the curve is oriented by $\partial_x$. Using the whole isometry group, we see that all the circles tangent to $\partial \mathbb H$ have geodesic curvature equal to $1$. Following [@Gin96] we see that there is no closed curve in $M$ with such geodesic curvature. By contradiction, if such a curve existed, then its lift would be preserved by a non-constant deck transformation. We can assume without loss of generality that this lift is the line $\{y=1\}$. We readily see that the only elements in ${\operatorname{Iso}}_+(\mathbb H)$ which preserve $\{y=1\}$ are the horizontal translations. However, no such transformation can be a deck transformation, since it has only one fixed point on $\partial\mathbb H$. Show that in this case $c(L,\sigma)=\frac{1}{2}$.

The Main Theorem
----------------

We are now ready to state the central result of this mini-course.

\[thm:main\] The following four statements hold.

1.  Suppose $[\widetilde\sigma]_b=0$. For every $k>c(L,\sigma)$,
    1.  there exists a closed orbit on $\Sigma_k$ in any non-trivial free homotopy class;
    2.  if $\pi_{d+1}(M)\neq0$ for some $d\geq 1$, there exists a contractible orbit on $\Sigma_k$.
2.  Suppose $[\widetilde\sigma]=0$.
There exists a contractible orbit on $\Sigma_k$, for almost every energy $k\in(e_0(L),c(L,\sigma))$.

3.  Suppose $[\widetilde\sigma]\neq0$. There exists a contractible orbit on $\Sigma_k$, for almost every energy $k\in(e_0(L),+\infty)$.
4.  There exists a contractible orbit on $\Sigma_k$, for almost every $k\in(e_m(L),e_0(L))$.

The set for which existence holds in (2), (3) and (4) contains all the $k$'s for which $\Sigma_k^*$ is a stable hypersurface in $(T^*M,\omega_\sigma)$ (see [@HZ94 *page 122*]). In these notes, we will prove (1), (2) and (3) above by relating closed orbits of the flow to the zeros of a closed $1$-form $\eta_k$ on the space of loops on $M$. We introduce such a form and prove some of its general properties in Section \[sec:act\]. In Section \[sec:min\] we describe an abstract minimax method that we apply in Section \[sec:geo\] to obtain zeros of $\eta_k$ in the specific cases listed in the theorem. A proof of (4) relies on different methods and can be found in [@AB15b]. When $[\sigma]=0$, the theorem was proven by Contreras [@Con06]. Points *(1)* and *(2)*, with the additional hypothesis $[\widetilde\sigma]_b=0$, were proven by Osuna [@Osu05]. Point *(2)* was proven in [@Mer10; @Mer16], for electromagnetic Lagrangians, and in [@AB15b] for general systems. A sketch of the proof of point *(3)* was given in [@Nov82 Section 3] and in [@Koz85 Section 3.2]. It was rigorously established in [@AB15b]. Point *(4)* follows by employing tools in symplectic geometry. For the weakly exact case it can also be proven using a variational approach, as shown in [@Abb13 Section 7]. For Lagrangians of mechanical type and vanishing magnetic form, the existence problem in such an interval has historically received much attention (see [@Koz85 Section 2] and references therein). We close this introduction by defining the notion of stability mentioned in the theorem.
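Before doing so, it may help to test the theorem against the homogeneous hyperbolic example of Section \[sub:hom\], where $L=\frac{1}{2}|v|^2$, $\sigma=\mu$ and, by the last exercise there, $c(L,\sigma)=\frac{1}{2}$ (a remark, assuming that value):

```latex
% Recall s = 1/sqrt(2k), so the threshold s = 1 is exactly k = 1/2 = c(L,sigma).
% k > 1/2 (s < 1): a closed orbit gamma_{s,nu} in every non-trivial free
%                  homotopy class, as predicted by part (1a);
% k < 1/2 (s > 1): all orbits are the contractible circles C_{r(s)}(q),
%                  consistent with part (2);
% k = 1/2 (s = 1): the lifted solutions are horocycles and no closed orbit
%                  exists, so neither (1) nor (2) extends to k = c(L,sigma).
s \,=\, \frac{1}{\sqrt{2k}}\,,\qquad
k \,=\, c(L,\sigma) \;\Longleftrightarrow\; s \,=\, 1\,.
```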
Stable hypersurfaces -------------------- In general, the dynamics on $\Sigma^*_k$ may exhibit very different behaviours as $k$ changes. However, given a regular energy level $\Sigma^*_{k_0}$, in some special cases we can find a new Hamiltonian $H':T^*M\rightarrow{{\mathbb R}}$ such that $\{H'=k'_0\}=\Sigma^*_{k_0}$ and such that $\Phi^{(H',\sigma)}|_{\{H'=k'_0\}}$ and $\Phi^{(H',\sigma)}|_{\{H'=k'\}}$ are conjugated, up to a time reparametrization, provided $k'$ is sufficiently close to $k'_0$. We say that an embedded hypersurface $\imath:\Sigma^*\longrightarrow T^*M$ is $\mathsf{stable}$ in the symplectic manifold $(T^*M,\omega_\sigma)$ if there exists an open neighbourhood $W$ of $\Sigma^*$ and a diffeomorphism $\Psi_W:\Sigma^*\times(-\varepsilon_0,\varepsilon_0)\rightarrow W$ with the property that: - $\Psi_W|_{\Sigma^*\times\{0\}}=\imath$; - the function $H^W:W\rightarrow {{\mathbb R}}$ defined through the commutative diagram $$\xymatrix{ \Sigma^*\times(-\varepsilon_0,\varepsilon_0)\ar[r]^-{\Psi_W} \ar[d]_{{\operatorname{pr}}_2} & W \ar[dl]^-{H^W}\,, \\ (-\varepsilon_0,\varepsilon_0) & }$$ is such that, for every $k\in(-\varepsilon_0,\varepsilon_0)$, $\Phi^{(H^W,\sigma)}|_{\{H^W=0\}}$ and $\Phi^{(H^W,\sigma)}|_{\{H^W=k\}}$ are conjugated by the diffeomorphism $w\mapsto\Psi_W(\imath^{-1}(w),k)$ up to time reparametrization. In this case, the reparametrizing maps $\tau_{(z,k)}$ vary smoothly with $(z,k)\in\Sigma^*\times(-\varepsilon_0,\varepsilon_0)$ and satisfy $\tau_{(z,0)}={\operatorname{Id}}_{{\mathbb R}}$, for all $z\in\Sigma^*$. This implies that there is a bijection between the periodic orbits on $\Sigma^*=\{H^W=0\}$ and those on $\{H^W=k\}$. Thanks to a result of Macarini and G. Paternain [@MP10], if $\Sigma^*$ is the energy level of some Tonelli Hamiltonian, the function $H^W$ can be taken to be Tonelli as well. Suppose that for some $k>e_0(L)$, $\Sigma^*_k$ is stable with stabilizing neighbourhood $W$. 
Up to shrinking $W$, there exists a Tonelli Hamiltonian $H_k:T^*M\rightarrow{{\mathbb R}}$ such that $H^W=H_k$ on $W$. In order to check whether an energy level is stable or not, we give the following necessary and sufficient criterion that can be found in [@CM05 Lemma 2.3]. A hypersurface $\Sigma_k^*$ is stable if and only if there exists $\alpha\in\Omega^1(\Sigma_k^*)$ such that $$\textit{(a)}\ \ d\alpha(X_{(H,\sigma)},\,\cdot\,)\ =\ 0\,,\quad\quad \textit{(b)}\ \ \alpha(X_{(H,\sigma)})(z)\ \neq\ 0\,,\quad\forall\, z\in\Sigma_k^*\,.$$ In this case $\alpha$ is called a $\mathsf{stabilizing\ form}$. The first condition is implied by the following stronger assumption $$\textit{(a')}\ \ d\alpha\ =\ \omega_\sigma|_{\Sigma^*_k}\,.$$ If (a’) and (b) are satisfied we say that $\Sigma^*_k$ is of $\mathsf{contact\ type}$ and we call $\alpha$ a contact form. We distinguish between $\mathsf{positive}$ and $\mathsf{negative}$ contact forms according to the sign of the function $\alpha(X_{(H,\sigma)})$. In Section \[sec:sta\], we give some sufficient criteria for stability for magnetic flows on surfaces. The free period action form {#sec:act} =========================== For the proof of the Main Theorem we need to characterize the periodic orbits on $\Sigma_k$ via a variational principle on a space of loops. To this purpose we have first to adjust $L$. Adapting the Lagrangian ----------------------- Let us introduce a subclass of Tonelli Lagrangians whose fibrewise growth is quadratic. This will enable us to define the action functional on the space of loops with square-integrable velocity. We say that $L$ is $\mathsf{quadratic\ at\ infinity}$ if there exists a metric $g_\infty$ and a potential $V_\infty:M\rightarrow{{\mathbb R}}$ such that $L(q,v)=\frac{1}{2}|v|_\infty^2-V_\infty(q)$ outside a compact set. The next result tells us that, if we look at the dynamics on a fixed energy level, it is not restrictive to assume that the Lagrangian is quadratic at infinity. 
For any fixed $k\in{{\mathbb R}}$, there exists a Tonelli Lagrangian $L_k:TM\rightarrow{{\mathbb R}}$ which is quadratic at infinity and such that $L_k=L$ on $\{E\leq k_0\}$, for some $k_0>k$. By choosing $k_0$ sufficiently large, we can obtain $e_0(L)=e_0(L_k)$ and, if $[\widetilde\sigma]=0$, also $c(L,\sigma)=c(L_k,\sigma)$. From now on, we assume that $L$ is quadratic at infinity. In this case there exist positive constants $C_0$ and $C_1$ such that $$\label{est-quad} C_1|v|^2\ -\ C_0\ \leq\ L(q,v)\ \leq\ C_1|v|^2\ +\ C_0\,,\quad\forall\,(q,v)\in TM\,.$$ An analogous statement holds for the energy.

The space of loops
------------------

We now introduce the space of loops on which the variational principle will be defined. Given $T>0$, we set $$W^{1,2}({{\mathbb R}}/T{{\mathbb Z}},M)\ :=\ \Big\{\,\gamma:{{\mathbb R}}/T{{\mathbb Z}}\rightarrow M\ \Big|\ \gamma \mbox{ is absolutely continuous\,, }\int_0^T|\dot\gamma|^2\,{{\mathrm{d}}}t<\infty\, \Big\}\,.$$ Since we look for periodic orbits of arbitrary period, we want to let $T$ vary among all the positive real numbers ${{\mathbb R}}^+$. This is the same as fixing the parametrization space to ${{\mathbb T}}:={{\mathbb R}}/{{\mathbb Z}}$ and keeping track of the period as an additional variable. Namely, we have the identification $$\begin{aligned} \bigsqcup_{T>0}W^{1,2}({{\mathbb R}}/T{{\mathbb Z}},M)&\ \longrightarrow\ \Lambda\,:=\,W^{1,2}({{\mathbb T}},M)\times{{\mathbb R}}^+\\ \gamma(t)&\ \longmapsto\ \big(x(s):=\gamma(sT),T\big)\,.\end{aligned}$$ Given a free homotopy class $\nu\in[{{\mathbb T}},M]$, we denote by $W^{1,2}_\nu\subset W^{1,2}({{\mathbb T}},M)$ and $\Lambda_\nu\subset\Lambda$ the loops belonging to such a class. We use the symbol $0$ for the class of contractible loops.
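Explicitly, if $\gamma$ has period $T$ and $x(s)=\gamma(sT)$, then $x'(s)=T\,\dot\gamma(sT)$, and the change of variables $t=sT$ gives $$\int_0^T|\dot\gamma(t)|^2\,{{\mathrm{d}}}t\ =\ \int_0^1|\dot\gamma(sT)|^2\,T\,{{\mathrm{d}}}s\ =\ \frac{1}{T}\int_0^1|x'(s)|^2\,{{\mathrm{d}}}s\,.$$ These scaling relations allow one to pass freely between the two parametrizations.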
The set $\Lambda$ is a Hilbert manifold with $T_{(x,T)}\Lambda\simeq T_xW^{1,2}\times{{\mathbb R}}$, where $T_xW^{1,2}\simeq W^{1,2}({{\mathbb T}},x^*(TM))$ is the space of absolutely continuous vector fields along $x$ with square-integrable covariant derivative. The metric on $\Lambda$ is given by $g_\Lambda=g_{W^{1,2}}+{{\mathrm{d}}}T^2$, where $$(g_{W^{1,2}})_x(\xi_1,\xi_2)\ :=\ \int_0^1g_{x(s)}(\xi_1(s),\xi_2(s))\,{{\mathrm{d}}}s\ +\ \int_0^1g_{x(s)}(\xi_1'(s),\xi_2'(s))\,{{\mathrm{d}}}s\,.$$ For any $T_->0$, $W^{1,2}\times[T_-,+\infty)\subset\Lambda$ is a complete metric space. For more details on the space of loops we refer to [@Abb13 Section 2] and [@Kli78].

We end this subsection with two more definitions, which will be useful later on. First, we let $$\frac{\partial}{\partial T}\ \in\ \Gamma(\Lambda)$$ denote the coordinate vector field associated with the variable $T$. Then, if $x\in W^{1,2}$, we let $$e(x)\ :=\ \int^1_0|x'|^2{{\mathrm{d}}}s\quad\quad\mbox{and}\quad\quad \ell(x)\ :=\ \int^1_0|x'|\,{{\mathrm{d}}}s$$ be the $L^2$-*energy* and the *length* of $x$, respectively. We define analogous quantities for $\gamma\in\Lambda$. We readily see that $\ell(x)=\ell(\gamma)$ and $e(x)=Te(\gamma)$. Moreover, $\ell(x)^2\leq e(x)$ holds by the Cauchy-Schwarz inequality.

The action form
---------------

In this subsection, for every $k\in{{\mathbb R}}$, we construct $\eta_k\in\Omega^1(\Lambda)$, which vanishes exactly at the set of periodic orbits on $\Sigma_k$. Such a $1$-form will be made of two pieces: one depending only on $L$ and $k$, and one depending only on $\sigma$. The first piece will be the differential of the function $$\begin{aligned} A_k:\Lambda&\longrightarrow {{\mathbb R}}\\ \gamma&\longmapsto \int_0^T\Big[L(\gamma,\dot\gamma)+k\Big]\,{{\mathrm{d}}}t\ =\ T\cdot\int_0^1\left[L\left(x,\frac{x'}{T}\right)+k\right]{{\mathrm{d}}}s\,.\end{aligned}$$ Such a function is well-defined since $L$ is quadratic at infinity (see \eqref{est-quad}).
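Indeed, the quadratic growth \eqref{est-quad} yields, for every $\gamma=(x,T)\in\Lambda$, $$|A_k(\gamma)|\ \leq\ T\cdot\int_0^1\Big[C_1\,\frac{|x'|^2}{T^2}+C_0+|k|\Big]\,{{\mathrm{d}}}s\ =\ C_1\,\frac{e(x)}{T}\,+\,(C_0+|k|)\,T\ <\ +\infty\,,$$ since $x\in W^{1,2}$ has finite $L^2$-energy.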
It was proven in [@AS09] that $A_k$ is a $C^{1,1}$ function (namely, $A_k$ is differentiable and its differential is locally uniformly Lipschitz-continuous). In order to define the part of $\eta_k$ depending on $\sigma$, we first introduce a differential form $\tau^\sigma\in\Omega^1(W^{1,2})$ called the *transgression* of $\sigma$. It is given by $$\tau^\sigma_x(\xi)\ :=\ \int_0^1\sigma_{x(s)}(\xi(s),x'(s))\,{{\mathrm{d}}}s\,,\quad\forall\,(x,\xi)\in TW^{1,2}\,.$$ Writing $\tau^\sigma$ in local coordinates, one sees that it is locally uniformly Lipschitz. If $u:[0,1]\rightarrow W^{1,2}$ is a path of class $C^1$, then $$\int_0^1u^*\tau^\sigma\ =\ \int_{[0,1]\times{{\mathbb T}}}\hat u^*\sigma\,,$$ where $\hat u:[0,1]\times{{\mathbb T}}\rightarrow M$ is the cylinder given by $\hat u(r,t)=u(r)(t)$. If $u_a:{{\mathbb T}}\rightarrow W^{1,2}$ is a homotopy of closed paths with parameter $a\in[0,1]$, then we get a corresponding homotopy of tori $\hat u_a$. Since $\sigma$ is closed, the integral of $\hat u_a^*\sigma$ on ${{\mathbb T}}^2$ is independent of $a$. We conclude that the integral of $\tau^\sigma$ on $u_a$ does not depend on $a$ either. Namely, $\tau^\sigma$ is a *closed form*.

The $\mathsf{free\ period\ action\ form}$ at energy $k$ is $\eta_k\in\Omega^1(\Lambda)$ defined as $$\eta_k\,:=\ dA_k\ -\ {\operatorname{pr}}^*_{W^{1,2}}\tau^\sigma\,,$$ where ${\operatorname{pr}}_{W^{1,2}}:\Lambda\rightarrow W^{1,2}$ is the natural projection $(x,T)\mapsto x$. The free period action form is closed and its zeros correspond to the periodic orbits of $\Phi^{(L,\sigma)}$ on $\Sigma_k$. The correspondence with periodic orbits follows by computing $\eta_k$ explicitly on $TW^{1,2}\times 0$ and on $\frac{\partial}{\partial T}$.
If $\xi\in TW^{1,2}$, then $$(\eta_k)_\gamma(\xi,0)\ =\ \int_0^T\Big[\frac{\partial L}{\partial q}(\gamma,\dot\gamma)\cdot\xi_T\, +\,\frac{\partial L}{\partial v}(\gamma,\dot\gamma)\cdot\dot\xi_T\,+\,\sigma_\gamma(\dot\gamma,\xi_T)\Big]{{\mathrm{d}}}t\,,$$ where $\xi_T$ is the reparametrization of $\xi$ on ${{\mathbb R}}/T{{\mathbb Z}}$. In the direction of the period we have $$\begin{aligned} \nonumber (\eta_k)_\gamma\left(\frac{\partial}{\partial T}\right)\ =\ d_\gamma A_k\left(\frac{\partial}{\partial T}\right)&\ =\ \int_0^1L\left(x,\frac{x'}{T}\right){{\mathrm{d}}}s\ +\ k\ -\ T\cdot\int_0^1\frac{\partial L}{\partial v}\left(x,\frac{x'}{T}\right)\cdot \frac{x'}{T^2}\,{{\mathrm{d}}}s\\ &\ =\ k-\int_0^1E\left(x,\frac{x'}{T}\right){{\mathrm{d}}}s\label{etakper}\\ \nonumber&\ =\ k-\frac{1}{T}\int_0^TE(\gamma,\dot\gamma)\,{{\mathrm{d}}}t\,.\end{aligned}$$

Vanishing sequences
-------------------

Our strategy to prove existence of periodic orbits will be to construct zeros of $\eta_k$ by approximation. Let $\nu\in[{{\mathbb T}},M]$ be a free homotopy class. A sequence $(\gamma_m)\subset\Lambda_\nu$ is called a $\mathsf{vanishing\ sequence}$ (at level $k$) if $$\lim_{m\rightarrow\infty}\left|\eta_k\right|_{\gamma_m}\ =\ 0\,.$$ A limit point of a vanishing sequence is a zero of $\eta_k$. Thus, the crucial question is: when does a vanishing sequence admit limit points? Clearly, if $T_m\rightarrow 0$ or $T_m\rightarrow+\infty$ the set of limit points is empty. We now see that the opposite implication also holds.
\[lem:van-bou\] If $(\gamma_m)$ is a vanishing sequence, there exists $C>0$ such that $$\label{eq-et} e(x_m)\ \leq\ C\cdot T_m^2\,.$$ We compute $$C_1\cdot\frac{e(x_m)}{T_m^2}-C_0\ \stackrel{\mbox{}^{(\star)}}{\leq}\ \int_0^1E\left(x_m,\frac{x_m'}{T_m}\right){{\mathrm{d}}}s\ =\ k-(\eta_k)_{\gamma_m}\left(\frac{\partial}{\partial T}\right)\ \stackrel{\mbox{}^{(\star\star)}}{\leq}\ k+\sup_m|\eta_k|_{\gamma_m}\,,$$ where in $(\star)$ we used \eqref{est-quad} applied to $E$, and in $(\star\star)$ we used that $$\left|\frac{\partial}{\partial T}\right|\ =\ 1\,.$$ The desired estimate follows by observing that, since the sequence $\big(\,|\eta_k|_{\gamma_m}\big)\subset[0,+\infty)$ is infinitesimal, it is also bounded from above.

\[prp-conv\] If $(\gamma_m)$ is a vanishing sequence and $0<T_-\leq T_m\leq T_+<+\infty$ for some $T_-$ and $T_+$, then $(\gamma_m)$ has a limit point.

By compactness of $[T_-,T_+]$, up to subsequences, $T_m\rightarrow T_\infty>0$. By \eqref{eq-et}, the $L^2$-energy of $x_m$ is uniformly bounded. Thus, $(x_m)$ is uniformly $1/2$-Hölder continuous. By the Arzelà-Ascoli theorem, up to subsequences, $(x_m)$ converges uniformly to a continuous $x_\infty:{{\mathbb T}}\rightarrow M$. Therefore, $x_m$ eventually belongs to a local chart $\mathcal U$ of $W^{1,2}$. In $\mathcal U$, $\eta_k$ can be written as the differential of a standard action functional depending on time (see [@AB15b]), and the same argument as in [@Abb13 Lemma 5.3] for the case $\sigma=0$ implies that $(\gamma_m)$ has a limit point.

In order to construct vanishing sequences we will exploit some geometric properties of $\eta_k$. One of the main ingredients to achieve this goal will be to define a vector field on $\Lambda$ generalizing the negative gradient vector field of the function $A_k$. We introduce it in the next subsection.
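For completeness, the Hölder estimate used in the proof of Proposition \[prp-conv\] is a standard consequence of the Cauchy-Schwarz inequality: a uniform bound $e(x_m)\leq E_0$ yields, for all $s_1<s_2$, $$d\big(x_m(s_1),x_m(s_2)\big)\ \leq\ \int_{s_1}^{s_2}|x_m'(s)|\,{{\mathrm{d}}}s\ \leq\ \sqrt{e(x_m)}\cdot|s_2-s_1|^{1/2}\ \leq\ \sqrt{E_0}\cdot|s_2-s_1|^{1/2}\,,$$ so that $(x_m)$ is equicontinuous and the Arzelà-Ascoli theorem applies.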
The flow of steepest descent
----------------------------

Let $X_k$ denote the vector field on $\Lambda$ defined by $$X_k\,:=\ -\,\frac{\sharp\,\eta_k}{\sqrt{1+|\eta_k|^2}}\,,$$ where $\sharp$ denotes the duality between $1$-forms and vector fields induced by $g_{\Lambda}$. Since $X_k$ is locally uniformly Lipschitz, it gives rise to a flow which we denote by $r\mapsto\Phi^k_r$. For every $\gamma\in\Lambda$, we denote by $u_\gamma:[0,R_\gamma)\rightarrow\Lambda$ the maximal positive flow line starting at $\gamma$. We say that $\Phi^k$ is *positively complete* on a subset $Y\subset \Lambda$ if, for all $\gamma\in\Lambda$, either $R_\gamma=+\infty$ or there exists $R_{\gamma,Y}\in[0,R_\gamma)$ such that $u_\gamma(R_{\gamma,Y})\notin Y$.

Except for the scaling factor $1/\sqrt{1+|\eta_k|^2}$, the vector field $X_k$ is the natural generalization of $-\nabla A_k=-\sharp(dA_k)$ to the case of a non-vanishing magnetic form. We introduce such a scaling so that $|X_k|\leq 1$, and we can give the following characterization of the flow lines $u_\gamma$ with $R_\gamma<+\infty$.

Let $u:[0,R)\rightarrow\Lambda$ be a maximal positive flow line of $X_k$ and for all $r\in[0,R)$ write $u(r)=\gamma(r)=(x(r),T(r))$. If $R<+\infty$, then there exists a sequence $(r_m)_{m\in{{\mathbb N}}}\subset[0,R)$ and a constant $C$ such that $$\lim_{m\rightarrow\infty}r_m=R\,,\quad\ \ \lim_{m\rightarrow\infty}T(r_m)=0\,,\quad\ \ e(x(r_m))\ \leq\ C\cdot T(r_m)^2\,,\ \ \forall\,m\in{{\mathbb N}}\,.$$ By contradiction, we suppose that $0<T_-:=\inf_{[0,R)} T(r)$.
Since $|X_k|\leq1$, $u_\gamma$ is uniformly continuous and, by the completeness of $W^{1,2}\times[T_-,+\infty)$, there exists $$\gamma_\infty\,:=\ \lim_{r\rightarrow R}u(r)\,.$$ By the existence theorem for solutions of ODEs, there exists a neighbourhood $\mathcal B$ of $\gamma_\infty$ and $R_\mathcal B>0$ such that $$\forall\,\gamma\in \mathcal B\,,\quad r\longmapsto\Phi^{k}_r(\gamma)\, \mbox{ exists in }[0,R_\mathcal B]\,.$$ This contradicts the finiteness of $R$ as soon as $r\in[0,R)$ is such that $\gamma(r)\in \mathcal B$ and $R-r<R_\mathcal B$. Therefore, $\inf T=0$. Hence, we find a sequence $r_m\rightarrow R$ such that $T(r_m)\rightarrow 0$ and, for every $m$, $\frac{dT}{dr}(r_m)\leq 0$. The last property implies that $$0\ \geq\ \frac{dT}{dr}(r_m)\ =\ d_{u(r_m)}T(X_k)\ =\ -\frac{\eta_k\left(\frac{\partial}{\partial T}\right)}{\sqrt{1+|\eta_k|^2}}(u(r_m))\,.$$ Finally, using Equation \eqref{etakper} and the estimate \eqref{est-quad}, we have $$0\leq(\eta_k)_{u(r_m)}\left(\frac{\partial}{\partial T}\right)=k-\int_0^1E\left(x(r_m),\frac{x'(r_m)}{T(r_m)}\right){{\mathrm{d}}}s\leq k-C_1\int_0^1\frac{|x'(r_m)|^2}{T(r_m)^2}{{\mathrm{d}}}s+C_0\,.\qedhere$$

The above proposition shows that flow lines whose interval of definition is finite come closer and closer to the subset of constant loops. As we saw in Lemma \[lem:van-bou\], the same is true for vanishing sequences with infinitesimal period. For these reasons, in the next subsection we study the behaviour of the action form on the set of loops with short length.

The subset of short loops
-------------------------

We now define a local primitive for $\eta_k$ close to the subset of constant loops. For $k>e_0(L)$, such a primitive will enjoy some properties that will enable us to apply the minimax theorem of Section \[sec:min\] to prove the Main Theorem. For our arguments we will need estimates which hold uniformly on a compact interval of energies.
Hence, for the rest of this subsection we will suppose that a compact interval $I\subset (e_0(L),+\infty)$ is fixed. Let $M_0\subset W^{1,2}_0$ be the set of constant loops parametrized by ${{\mathbb T}}$ and $M_0\times{{\mathbb R}}^+\subset \Lambda_0$ the set of constant loops with arbitrary period. We readily see that $\tau^\sigma|_{M_0}=0$. Thus, $\eta_k=dA_k|_{M_0\times{{\mathbb R}}^+}$ and $$\label{action-m0} A_k(x,T)\ =\ T\,(k-V(x))\,,\quad\forall\,(x,T)\in M_0\times{{\mathbb R}}^+\,.$$ Now that we have described $\eta_k$ on constant loops, let us see what happens nearby. First, we need the following lemma.

\[def-ret\] There exists $\delta_*>0$ such that $\{\ell<\delta\}\subset W^{1,2}$ deformation retracts onto $M_0$, for all $\delta\leq\delta_*$. Thus, we have $\tau^\sigma|_{\{\ell<\delta_*\}}\, =\, dP^\sigma$, where $$\begin{aligned} P^\sigma:\{\ell<\delta_*\}&\ \longrightarrow\ {{\mathbb R}}\\ x&\ \longmapsto\ \int_{B^2}\hat u_x^*\sigma\,, \end{aligned}$$ and $\hat u_x:B^2\rightarrow M$ is the disc traced by $x$ under the action of the deformation retraction. Furthermore, there exists $C>0$ such that $$\label{psigma} |P^\sigma(x)|\ \leq\ C\cdot \ell(x)^2\,.$$

Choose $\delta<2\rho(g)$, where $\rho(g)$ is the injectivity radius of $g$. With this choice, for each $x\in\{\ell<\delta\}$ and each $s\in{{\mathbb T}}$, there exists a unique geodesic $y_s:[0,1]\rightarrow M$ joining $x(0)$ to $x(s)$. For each $a\in[0,1]$ define $x_a:{{\mathbb T}}\rightarrow M$ by $x_a(s):=y_s(a)$. Taking a smaller $\delta$ if necessary, one can prove that $a\mapsto|x_a'|$ is a non-decreasing family of functions (use normal coordinates at $x(0)$). Thus, $a\mapsto \ell(x_a)$ is non-decreasing as well and $$\begin{aligned} [0,1]\times\{\ell<\delta\}&\longrightarrow \{\ell<\delta\}\\ (a,x)&\longmapsto x_a\end{aligned}$$ yields the desired deformation.
In order to estimate $P^\sigma$ it is enough to bound the area of the deformation disc $\hat u_x$: $${\operatorname{area}}(\hat u_x)\leq\int_0^1{{\mathrm{d}}}a\int_0^1 \left|\frac{dy_s}{da}(a)\right|\cdot|x_a'(s)|\,{{\mathrm{d}}}s\leq \int_0^1{{\mathrm{d}}}a\int_0^1d(x(0),x(s))|x'(s)|\,{{\mathrm{d}}}s\leq \frac{\ell(x)}{2}\ell(x)\,.$$

In view of this lemma, for all $\delta\in(0,\delta_*]$, we define the set $$\mathcal V^{\delta}\, :=\ \{\ell<\delta\}\times{{\mathbb R}}^+\,\subset\,\Lambda_0$$ and the function $S_k:\mathcal V^{\delta_*}\,\longrightarrow\,{{\mathbb R}}$ given by $$S_k\,:=\ A_k\ -\ P^\sigma\circ{\operatorname{pr}}_{W^{1,2}}\,.$$ Such a function is a primitive of $\eta_k$ on $\mathcal V^{\delta_*}$. By \eqref{psigma}, it admits the following upper bound. There exists $C>0$ such that, for every $\gamma\in \mathcal V^{\delta_*}$, there holds $$\label{boundsk} S_k(\gamma)\ \leq\ C\cdot\left(\frac{e(x)}{T}\ +\ T\ +\ \ell(x)^2\right)\,,\quad\forall\, k\in I\,.$$

This result has an immediate consequence for vanishing sequences and flow lines of $\Phi^k$.

\[cor-perbel\] Let $b>0$ and $k\in I$ be fixed. The following two statements hold:

1. if $(\gamma_m)$ is a vanishing sequence such that $\gamma_m\notin\{S_k<b\}$ for all $m\in{{\mathbb N}}$, then $T_m$ is bounded away from zero;

2. the flow $\Phi^k$ is positively complete on the set $\Lambda\setminus\{S_k<b\}$.

We conclude this section by showing that the infimum of $S_k$ on short loops is zero and that it is approximately achieved on constant loops with small period. Furthermore, $S_k$ is bounded away from zero on the set of loops having some fixed positive length.
\[prp-mp\] There exist $\delta_I\leq \delta_*$ and positive numbers $b_I,T_I$ such that, for all $k\in I$, $$\label{mp-k} \textit{(a)}\ \ \inf_{\mathcal V^{\delta_{I}}}S_k\ =\ 0\,,\quad\quad\textit{(b)}\ \ \inf_{\partial \mathcal V^{\delta_{I}}}S_k\ \geq\ b_I\,,\quad\quad\textit{(c)}\ \ \sup_{M_0\times\{T_{I}\}}S_k\ <\ \frac{b_I}{2}\,.$$

Since for all $q\in M$ the function $L|_{T_qM}$ attains its minimum at $(q,0)$, the estimate from below on $L$ obtained in \eqref{est-quad} can be refined to $$L(q,v)\ \geq\ C_1|v|^2\,+\,\min_{q\in M} L(q,0)\ =\ C_1|v|^2\,-\,e_0(L)\,.$$ From this inequality and \eqref{psigma}, we can bound $S_k(\gamma)$ from below: $$\begin{aligned} S_k(\gamma)\ &\geq\ T\cdot\int_0^1\Big[C_1\cdot\frac{|x'|^2}{T^2}-e_0(L)+k\Big]{{\mathrm{d}}}s\,-\,C \cdot\ell(x)^2\\ &\geq\ C_1\cdot\frac{e(x)}{T}+(k-e_0(L))\cdot T\,-\,C\cdot \ell(x)^2\\ &\stackrel{\mbox{}^{(\star)}}{\geq}\ 2\sqrt{C_1(\min I-e_0(L))}\cdot\ell(x)\,-\,C\cdot \ell(x)^2\,,\end{aligned}$$ where in $(\star)$ we made use of the inequality between arithmetic and geometric mean, together with $\ell(x)^2\leq e(x)$. Hence, there exists $\delta_I>0$ sufficiently small, such that the last quantity is positive if $\ell(x)<\delta_I$ and bounded from below by $$b_I\,:=\ 2\sqrt{C_1(\min I-e_0(L))}\cdot\delta_I\,-\,C\cdot \delta_I^2\ >0$$ if $\ell(x)=\delta_I$. This implies Inequality *(b)* in \eqref{mp-k} and that $\inf_{\mathcal V^{\delta_{I}}}S_k\geq0$. To prove that $\inf_{\mathcal V^{\delta_{I}}}S_k\leq0$ and that there exists $T_I$ such that Inequality *(c)* in \eqref{mp-k} holds, we just recall from \eqref{action-m0} that $$\lim_{T\rightarrow0}\sup_{M_0\times\{T\}}S_k\ =\ 0\,.\qedhere$$

In the next section we will prove a minimax theorem for a class of closed $1$-forms on abstract Hilbert manifolds. Such a class will satisfy a general version of the properties we have proved so far for $\eta_k$.

The minimax technique {#sec:min}
=====================

In this section we present an abstract minimax technique which represents the core of the proof of the Main Theorem.
We formulate it in a very general form on a non-empty Hilbert manifold $\mathscr H$.

An abstract theorem {#sub-minimax}
-------------------

We start by setting some notation for homotopy classes of maps from Euclidean balls into $\mathscr H$. Let $d\in{{\mathbb N}}$ and let $\mathscr U$ be a subset of $\mathscr H$. Define $\big[(B^d,\partial B^d),(\mathscr H,\mathscr U)\big]$ as the set of homotopy classes of maps $\gamma:(B^d,\partial B^d)\rightarrow(\mathscr H,\mathscr U)$. By this we mean that the maps send $B^d$ to $\mathscr H$ and $\partial B^d$ to $\mathscr U$, and that the homotopies do the same. The classes $[\gamma]$, where $\gamma$ is such that $\gamma(B^d)\subset \mathscr U$, are called *trivial*. If $\mathscr U'\subset\mathscr U$, we have a map $$i^{\mathscr U'}_{\mathscr U}:\big[(B^d,\partial B^d),(\mathscr H,\mathscr U')\big]\longrightarrow \big[(B^d,\partial B^d),(\mathscr H,\mathscr U)\big]$$ induced by the inclusion $\mathscr U'\subset\mathscr U$. We are now ready to state the main result of this section.

\[thm-for\] Let $\mathscr H$ be a non-empty Hilbert manifold, $\mathscr I=[k_0,k_1]$ be a compact interval and $d\geq1$ an integer.
Let $\alpha_k\in\Omega^1(\mathscr H)$ be a family of Lipschitz-continuous forms parametrized by $k\in\mathscr I$ and such that

- the integral of $\alpha_k$ over contractible loops vanishes;

- $\alpha_k=\alpha_{k_0}+(k-k_0)d\mathscr T$, where $\mathscr T:\mathscr H\rightarrow(0,+\infty)$ is a $C^{1,1}$ function such that $$\sup_{\mathscr H}|d\mathscr T|\ < \ +\infty\,.$$

Define the vector field $$\mathscr X_k\,:=\ -\,\frac{\sharp\, \alpha_k}{\sqrt{1+|\alpha_k|^2}}\,,$$ where $\sharp$ is the metric duality, and suppose that there exists an open set $\mathscr V\subset\mathscr H$ such that:

- there exists $\mathscr S_k:\overline{\mathscr V}\rightarrow{{\mathbb R}}$ satisfying $$\label{eq:prim} d\mathscr S_k\ = \ \alpha_k\,,\quad\quad \mathscr S_k\ =\ \mathscr S_{k_0}\ +\ (k-k_0)\,\mathscr T\,;$$

- there exists a real number $$\label{eq:infbeta} \beta_0\ < \ \inf_{\partial \mathscr V}\mathscr S_{k_0}\ =:\,\beta_{\partial\mathscr V}$$ such that the flow $r\mapsto\Phi^{\mathscr X_k}_r$ is positively complete on the set $\mathscr H\setminus\{\mathscr S_k<\beta_0\}$;

- there exists a set $\mathscr M\subset\{\mathscr S_{k_1}<\beta_0\}$ and a class $\mathscr G\in\big[(B^d,\partial B^d),(\mathscr H,\mathscr M)\big]$ such that $i^{\mathscr M}_{\mathscr V}(\mathscr G)$ is non-trivial.

Then, the following two statements hold true.
First, for all $k\in\mathscr I$, there exists a sequence $(h^k_m)_{m\in{{\mathbb N}}}\subset\mathscr H\setminus\{\mathscr S_k< \beta_0\}$ such that $$\lim_{m\rightarrow\infty}|\alpha_k|_{h^k_m}\ =\ 0\,.$$ Second, there exists a subset $\mathscr I_*\subset\mathscr I$ such that

- $\mathscr I\setminus\mathscr I_*$ is negligible with respect to the $1$-dimensional Lebesgue measure;

- for all $k\in\mathscr I_*$ we have $$\sup_{m\in{{\mathbb N}}}\mathscr T(h^k_m)\ <\ +\infty\,.$$

Moreover, if there exists a $C^{1,1}$-function $\widehat{\mathscr S}_k:\mathscr H\rightarrow{{\mathbb R}}$ which extends $\mathscr S_k$ and satisfies \eqref{eq:prim} on the whole $\mathscr H$, we also have that $$\lim_{m\rightarrow\infty}\widehat{\mathscr S}_k(h^k_m)\ =\ \inf_{\gamma\in\mathscr G}\sup_{\xi\in B^d}\ \widehat{\mathscr S}_k\circ \gamma \,(\xi)\ \geq\ \beta_{\partial \mathscr V}\,.$$

To prove Theorem \[thm:main\]*(1a)* we will also need a version of the minimax theorem for $d=0$, namely when the maps are simply points in $\mathscr H$. We state it here for a single function and not for a $1$-parameter family, since this will be enough for the intended application. For a proof we refer to [@Abb13 Remark 1.10].

\[thm:fun\] Let $\mathscr H$ be a non-empty Hilbert manifold and let $\widehat{\mathscr S}:\mathscr H\rightarrow {{\mathbb R}}$ be a $C^{1,1}$-function bounded from below. Suppose that the flow of the vector field $$\mathscr X\,:=\ -\,\frac{\nabla \widehat{\mathscr S}}{\sqrt{1+|\nabla\widehat{\mathscr S}|^2}}$$ is positively complete on some non-empty sublevel set of $\widehat{\mathscr S}$. Then, there exists a sequence $(h_m)_{m\in{{\mathbb N}}}\subset\mathscr H$ such that $$\lim_{m\rightarrow+\infty}|d_{h_m}\widehat{\mathscr S}|\ =\ 0\,,\quad\quad\lim_{m\rightarrow+\infty}\widehat{\mathscr S}(h_m)\ =\ \inf_{\mathscr H}\widehat{\mathscr S}\,.$$

In the next two subsections we prove Theorem \[thm-for\].
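Before doing so, let us record how these hypotheses are modelled on the free period action form of Section \[sec:act\]: taking $\mathscr H=\Lambda_\nu$, $\alpha_k=\eta_k$ and $\mathscr T(x,T)=T$, the second assumption on the family $\alpha_k$ reads $$\eta_k\ =\ dA_{k_0}\,+\,(k-k_0)\,{{\mathrm{d}}}T\,-\,{\operatorname{pr}}^*_{W^{1,2}}\tau^\sigma\ =\ \eta_{k_0}\,+\,(k-k_0)\,{{\mathrm{d}}}T\,,$$ since $A_k=A_{k_0}+(k-k_0)T$, and $\sup|{{\mathrm{d}}}T|=1$ with respect to the metric $g_\Lambda$.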
First, we introduce some preliminary definitions and lemmas, and then we present the core of the argument.

Preliminary results
-------------------

We start by defining the *variation* of the $1$-form $\alpha_k$ along any path $u:[a_0,a_1]\rightarrow \mathscr H$. It is the real number $$\begin{aligned} \label{deltas} \alpha_k(u)\,:=\ \int_{a_0}^{a_1}\alpha_k\left(\frac{du}{da}\right)(u(a))\,{{\mathrm{d}}}a\,.\end{aligned}$$ We collect the properties of the variation along a path in a lemma.

\[lem:prim\] If $u$ is a path in $\mathscr H$ and $\overline{u}$ is the inverse path, we have $$\alpha_k(\overline{u})\ =\ -\, \alpha_k(u)\,.$$ If $u_1$ and $u_2$ are two paths in $\mathscr H$ such that the ending point of $u_1$ coincides with the starting point of $u_2$, we denote by $u_1\ast u_2$ the concatenation of the two paths and we have $$\alpha_k(u_1\ast u_2)\ =\ \alpha_k(u_1)\ +\ \alpha_k(u_2)\,.$$ If $u$ is a contractible closed path in $\mathscr H$, we have $$\alpha_k(u)\ =\ 0\,.$$ Finally, let $\gamma:Z\rightarrow \mathscr H$ be any smooth map from a Hilbert manifold $Z$ such that there exists a function $\mathscr S_k^\gamma: Z\rightarrow {{\mathbb R}}$ with the property that $$d\,\mathscr S_k^\gamma\ =\ \gamma^*\alpha_k\,.$$ Then, for all paths $z:[a_0,a_1]\rightarrow Z$ we have $$\label{eq:prim2} \alpha_k(\gamma\circ z)\ =\ \mathscr S_k^\gamma(z(a_1))\ -\ \mathscr S_k^\gamma(z(a_0))\,.$$

Let us come back to the statement of Theorem \[thm-for\]. Fix a point $\xi_*\in \partial B^d$ and for every $\gamma\in\mathscr G$ define the unique $\mathscr S_k^\gamma:B^d\rightarrow{{\mathbb R}}$ such that $$\label{eq:prim3} d\,\mathscr S_k^\gamma\ =\ \gamma^*\alpha_k\,,\quad\quad \mathscr S_k^\gamma(\xi_*)\ =\ \mathscr S_k(\gamma(\xi_*))\,.$$ We observe that this is a good definition since $B^d$ is simply connected and $\gamma(\xi_*)$ belongs to the domain of definition of $\mathscr S_k$ as $\gamma\in\mathscr G$.
Moreover, if $\alpha_k$ admits a global primitive $\widehat {\mathscr S}_k$ on $\mathscr H$ extending $\mathscr S_k$, then clearly we have $\mathscr S_k^\gamma=\widehat{\mathscr S}_k\circ\gamma$. Finally, thanks to the previous lemma, for every $\xi\in B^d$ we have the formula $$\label{eq:primvar} \mathscr S^\gamma_k(\xi)\ =\ \mathscr S_k(\gamma(\xi_*))\ +\ \alpha_k(\gamma\circ z_\xi)\,,$$ where $z_\xi:[0,1]\rightarrow B^d$ is any path connecting $\xi_*$ and $\xi$. If $d\neq 1$, then $\mathscr S^\gamma_k$ does not depend on the choice of the point $\xi_*\in \partial B^d$ as $S^{d-1}=\partial B^d$ is connected. On the other hand, if $d=1$ there are two possible choices for $\xi_*$ and the two corresponding primitives of $\gamma^*\alpha_k$ differ by a constant, which depends only on the class $\mathscr G$ and not on $\gamma$.

We define the $\mathsf{minimax\ function}$ $c_{\mathscr G}:\mathscr I\rightarrow{{\mathbb R}}\cup\{-\infty\}$ by $$\begin{aligned} c_{\mathscr G}(k)\,:=\ \inf_{\gamma\in\mathscr G}\ \sup_{\xi\in B^d}\ \mathscr S_k^\gamma(\xi)\,.\end{aligned}$$ In the next lemma we show that $c_{\mathscr G}(k)$ is finite and that, for each $\gamma\in\mathscr G$, the points almost realizing the supremum of the function $\mathscr S_k^\gamma$ lie in the complement of the set $\{\mathscr S_k\,<\,\beta_0\}$.

\[lem:almax\] Let $k\in\mathscr I$ and $\gamma\in\mathscr G$. There holds $$\label{eq:bel} \sup_{B^d}\ \mathscr S^\gamma_k\ \geq\ \beta_{\partial \mathscr V}\,.$$ Moreover, if $\beta_1<\beta_{\partial\mathscr V}$, then $\forall\,\xi\in B^d$ the following implication holds $$\label{eq:almax} \mathscr S_k^\gamma(\xi)\ \geq\ \sup_{B^d}\ \mathscr S_k^\gamma\ -\ (\beta_{\partial\mathscr V}-\beta_1)\quad\xRightarrow{\quad\ \ }\quad \gamma(\xi)\ \notin\ \{\mathscr S_k\,<\,\beta_1\}\,.$$

Since $i^{\mathscr M}_{\mathscr V}(\mathscr G)$ is non-trivial, the set $\{\xi\in B^d\,|\, \gamma(\xi)\in\partial \mathscr V\}$ is non-empty.
Therefore, there exists an element $\widehat\xi$ in this set and a path $z_{\widehat\xi}:[0,1]\rightarrow B^d$ from $\xi_*$ to $\widehat\xi$ such that $\gamma\circ z_{\widehat\xi}|_{[0,1)}\subset \mathscr V$. By \eqref{eq:primvar} and \eqref{eq:prim2} we have $$\mathscr S^\gamma_k(\widehat\xi)\ =\ \mathscr S_k(\gamma(\xi_*))\,+\,\alpha_k(\gamma\circ z_{\widehat\xi})\ =\ \mathscr S_k(\gamma(\xi_*))\,+\,\Big(\mathscr S_k(\gamma(\widehat\xi))-\mathscr S_k(\gamma(\xi_*))\Big)\ =\ \mathscr S_k(\gamma(\widehat\xi))\,,$$ which implies \eqref{eq:bel}, since $\mathscr S_k\geq\mathscr S_{k_0}$ by \eqref{eq:prim} and hence $\mathscr S_k(\gamma(\widehat\xi))\geq\inf_{\partial\mathscr V}\mathscr S_{k_0}=\beta_{\partial\mathscr V}$. In order to prove the second statement we consider $\xi\in B^d$ such that $\gamma(\xi)\in\{\mathscr S_k\,<\beta_1\}$. Without loss of generality, there exists a path $z_{\xi,\widehat\xi}:[0,1]\rightarrow B^d$ from $\xi$ to $\widehat\xi$ such that $\gamma\circ z_{\xi,\widehat\xi}|_{[0,1)}\subset\mathscr V$. Using \eqref{eq:prim2} twice, we compute $$\begin{aligned} \sup_{B^d}\mathscr S^\gamma_k\ \geq\ \mathscr S^\gamma_k(\widehat\xi)\ =\ \mathscr S_k^\gamma(\xi)\,+\,\alpha_k(\gamma\circ z_{\xi,\widehat\xi})\ &=\ \mathscr S_k^\gamma(\xi)\,+\,\Big(\mathscr S_k(\gamma(\widehat\xi))-\mathscr S_k(\gamma(\xi))\Big)\\ &>\ \mathscr S_k^\gamma(\xi)\,+\,\big(\beta_{\partial \mathscr V}-\beta_1\big)\,,\end{aligned}$$ which yields the contrapositive of the implication we had to show.

We now see that, since the family $k\mapsto\alpha_k$ is monotone in the parameter $k$, the same is true for the numbers $c_{\mathscr G}(k)$. If $k_2\leq k_3$ and $\gamma\in\mathscr G$, we have $$\label{fin-dif} \mathscr S^\gamma_{k_3}\ =\ \mathscr S^\gamma_{k_2}\ +\ (k_3-k_2)\,\mathscr T\circ \gamma\,.$$ As a consequence, $c_{\mathscr G}$ is a non-decreasing function.
We observe that $$\begin{aligned} \bullet&\ \ d\big(\mathscr S^\gamma_{k_3}\,-\,\mathscr S^\gamma_{k_2}\big)\ =\ \gamma^*\big(\alpha_{k_3}-\alpha_{k_2}\big)\ =\ \gamma^*\big((k_3-k_2)\, d\mathscr T\big)\\ \bullet&\ \ \mathscr S^\gamma_{k_3}(\xi_*)\,-\, \mathscr S^\gamma_{k_2}(\xi_*)\ =\ \mathscr S_{k_3}(\gamma(\xi_*))\,-\mathscr S_{k_2}(\gamma(\xi_*))\ =\ (k_3-k_2)\,\mathscr T(\gamma(\xi_*))\,.\end{aligned}$$ These two equalities imply that the function $\mathscr S_{k_2}^\gamma+(k_3-k_2)\mathscr T\circ\gamma$ satisfies \eqref{eq:prim3} with $k=k_3$. Since these conditions identify a unique function, Equation \eqref{fin-dif} follows. In particular, we have $\mathscr S^\gamma_{k_2}\leq\mathscr S^\gamma_{k_3}$. Taking the inf-sup of this inequality on $\mathscr G$, we get $c_{\mathscr G}(k_2)\leq c_{\mathscr G}(k_3)$.

We end this subsection by adjusting the vector field $\mathscr X_{k}$ so that its flow becomes positively complete on all of $\mathscr H$. We fix $\beta_1\in(\beta_0,\beta_{\partial \mathscr V})$ and let $\mathscr B:{{\mathbb R}}\rightarrow[0,1]$ be a smooth function that is equal to $0$ on a neighbourhood of $(-\infty,\beta_0]$ and equal to $1$ on a neighbourhood of $[\beta_1,+\infty)$. We set $$\check {\mathscr X}_{k}\,:=\ (\mathscr B\circ\mathscr S_{k})\cdot \mathscr X_{k}\,\in\,\Gamma(\mathscr H)\,.$$ We observe that $$\bullet\ \check {\mathscr X}_{k}\,=\,0\ \ \mbox{on}\ \ \{\mathscr S_{k}<\beta_0\}\,,\quad\quad\bullet\ \check {\mathscr X}_{k}\,=\,\mathscr X_{k}\ \ \mbox{on}\ \ \mathscr H\setminus\{\mathscr S_{k}<\beta_1\}\,,$$ and, hence, the flow $\Phi^{\check {\mathscr X}_{k}}$ is positively complete.

Proof of Theorem \[thm-for\]
----------------------------

Let us define the subset $$\mathscr I_*:=\, \Big\{\,k_*\in[k_0,k_1)\ \Big|\ \exists\, C(k_*)>0\, \mbox{ such that }\, c_{\mathscr G}(k)-c_{\mathscr G}(k_*)\, \leq\, C(k_*)(k-k_*)\,,\ \forall\, k\in[k_*,k_1]\,\Big\}\,.$$ Namely, $\mathscr I_*$ is the set of points at which $c_{\mathscr G}$ is Lipschitz-continuous on the right.
Since $c_{\mathscr G}$ is a non-decreasing real function, by Lebesgue's theorem on the differentiability of monotone functions it is differentiable, and in particular Lipschitz-continuous on the right, at almost every point. In particular, $\mathscr I\setminus\mathscr I_*$ has measure zero. We are now ready to show that

1. for all $k\in\mathscr I$, there exists a vanishing sequence $(h^k_m)_{m\in{{\mathbb N}}}\subset\mathscr H\setminus\{\mathscr S_k<\beta_0\}$ and that

2. for all $k_*\in\mathscr I_*$, such a vanishing sequence can be taken to satisfy $$\sup_{m\in{{\mathbb N}}}\mathscr T(h^{k_*}_m)\ <\ C(k_*)\ +\ 3\,.$$

We will prove only the statement about the vanishing sequences with parameter in $\mathscr I_*$, as the argument can be easily adapted to prove the statement for a general parameter in $\mathscr I$. We assume by contradiction that there exists a positive number $\varepsilon_0$ such that $$|\alpha_{k_*}|\ \geq\ \varepsilon_0\,,\quad \mbox{on}\ \ \{\mathscr T\,<\,C(k_*)+3\}\setminus\{\mathscr S_{k_*}\,<\,\beta_1\}\,.$$ Consider a decreasing sequence $(k_m)_{m\in{{\mathbb N}}}\subset(k_*,k_1]$ such that $k_m\rightarrow k_*$. Set $\delta_m:=k_m-k_*$ and take a corresponding sequence $(\gamma_m)_{m\in{{\mathbb N}}}\subset\mathscr G$ such that $$\sup_{B^d}\mathscr S^{\gamma_m}_{k_m}\ <\ c_{\mathscr G}(k_m)\ +\ \delta_m\,.$$ For every $\xi\in B^d$ we consider the sequence of flow lines $$\begin{aligned} u_m^\xi:[0,1]&\longrightarrow\mathscr H\\ r&\longmapsto \Phi^{\check {\mathscr X}_{k_*}}_r(\gamma_m(\xi))\,.\end{aligned}$$ Correspondingly, for any time parameter $r\in[0,1]$, we get the map $$\gamma^r_m\,:=\ \Phi^{\check {\mathscr X}_{k_*}}_r(\gamma_m)\,.$$ We readily see that $\gamma^r_m|_{\partial B^d}=\gamma_m|_{\partial B^d}$ and $\gamma^r_m\in\mathscr G$. In particular, for every $\xi\in B^d$ and $r\in [0,1]$ the concatenated curve $$\big(\gamma_m\circ z_{\xi}\big)\ \ast\ u^\xi_m|_{[0,r]}\ \ast\ \big(\overline{\gamma^r_m\circ z_\xi}\big)$$ is contractible.
Therefore, Lemma \[lem:prim\] and Equation \eqref{eq:primvar} yield $$\label{deltahomo} \mathscr S_{k_*}^{\gamma^r_m}(\xi)\ =\ \mathscr S_{k_*}^{\gamma_m}(\xi)\ +\ \alpha_{k_*}(u_m^\xi|_{[0,r]})\,.$$ Finally, since $u_m^\xi$ is a flow line, we have $$\label{deltav} \alpha_{k_*}(u_m^\xi|_{[0,r]})=\int_0^r\alpha_{k_*}\left(-\frac{\mathscr B\cdot\sharp\alpha_{k_*}}{\sqrt{1+|\alpha_{k_*}|^2}}\right)\!(u_m^\xi(\rho))\,{{\mathrm{d}}}\rho=-\int_0^r\frac{\mathscr B\cdot|\alpha_{k_*}|^2}{\sqrt{1+|\alpha_{k_*}|^2}}(u_m^\xi(\rho))\,{{\mathrm{d}}}\rho\,.$$ Therefore $\alpha_{k_*}(u_m^\xi|_{[0,r]})\leq0$ and we find that, for every $m\in{{\mathbb N}}$, $$\label{eq:noninc} r\longmapsto \mathscr S^{\gamma^r_m}_{k_*}\quad \mbox{is a non-increasing family of functions on } B^d\,.$$ Let us estimate the supremum of $\mathscr S_{k_*}^{\gamma^r_m}$. When $r=0$, \eqref{fin-dif} and the definition of $\mathscr I_*$ imply: $$\label{supa0} \sup_{B^d}\mathscr S_{k_*}^{\gamma_m}\ \leq\ \sup_{B^d}\mathscr S_{k_m}^{\gamma_m}\ <\ c_{\mathscr G}(k_m)+\delta_m\ \leq\ c_{\mathscr G}(k_*)+(C(k_*)+1)\,\delta_m\,.$$ Thus, by \eqref{eq:noninc} we get, for every $r\in[0,1]$, $$\label{supuma} \sup_{B^d}\mathscr S_{k_*}^{\gamma_m^r}\ <\ c_{\mathscr G}(k_*)+(C(k_*)+1)\,\delta_m\,.$$ If $r\in[0,1]$, we define the sequence of subsets of $B^d$ $$\begin{aligned} J^r_m\,:&=\ \big\{\,\mathscr S_{k_*}^{\gamma^r_m}\ >\ c_{\mathscr G}(k_*)\,-\,\delta_m\,\big\}\,.\end{aligned}$$ Let us take a closer look at these sets.
First, we observe that if $\xi\in J^r_m$, then \[deltahomo\] and \[supa0\] imply that $$\label{eq:vardec} \alpha_{k_*}(u^\xi_m|_{[0,r]})\ >\ c_{\mathscr G}(k_*)-\delta_m\ -\ \big(\,c_{\mathscr G}(k_*)+(C(k_*)+1)\,\delta_m\,\big)\ =\ -\,(C(k_*)+2)\,\delta_m\,.$$ Then, we claim that for $m$ large enough $$\xi\in J^r_m\quad\Longrightarrow\quad \gamma^r_m(\xi)\ \in\ \big\{\mathscr T<C(k_*)+3\big\}\setminus\big\{\mathscr S_{k_*}<\beta_1\big\}\,,\quad\forall\,r\in[0,1]\,.$$ First, we observe that $$\mathscr S_{k_*}^{\gamma_m^r}(\xi)\ >\ c_{\mathscr G}(k_*)\ -\ \delta_m\ \geq\ \sup_{B^d}\mathscr S_{k_*}^{\gamma_m^r}\ -\ (C(k_*)+2)\,\delta_m\,.$$ If $m$ is large enough, then $(C(k_*)+2)\,\delta_m<(\beta_{\partial \mathscr V}-\beta_1)$ and Lemma \[lem:almax\] implies that $\gamma^r_m(\xi)\notin\{\mathscr S_{k_*}<\beta_1\}$. As a by-product we get that $u^\xi_m|_{[0,r]}$ is a genuine flow line of $\Phi^{{\mathscr X}_{k_*}}$. Then, we estimate $\mathscr T(\gamma^r_m(\xi))$. We start by taking $r=0$. In this case, since $\mathscr S_{k_m}=\mathscr S_{k_*}+\delta_m\,\mathscr T$, we get $$\mathscr T(\gamma_m(\xi))\ =\ \frac{\mathscr S_{k_m}^{\gamma_m}(\xi)-\mathscr S_{k_*}^{\gamma_m}(\xi)}{\delta_m}\ <\ \frac{c_{\mathscr G}(k_m)+\delta_m-c_{\mathscr G}(k_*)+\delta_m}{\delta_m}\ <\ C(k_*)+2\,.$$ To prove the inequality for arbitrary $r$ we bound the variation of $\mathscr T$ along $u^\xi_m|_{[0,r]}$ in terms of the action variation: $$\begin{aligned} -\alpha_{k_*}(u_m^\xi|_{[0,r]})\ =\ -\int_0^{r}\alpha_{k_*}\left(\frac{du^\xi_m}{d\rho}\right){{\mathrm{d}}}\rho\ &\geq\ \int_0^{r}\left|\frac{du^\xi_m}{d\rho}\right|^2{{\mathrm{d}}}\rho\\ &\geq\ \frac{1}{r}\left(\int_0^{r}\left|\frac{du^\xi_m}{d\rho}\right|{{\mathrm{d}}}\rho\right)^2\\ &\geq\ \frac{1}{r}\left(\int_0^{r}\frac{1}{1+\sup_\mathscr H|d\mathscr T|}\left|\frac{d(\mathscr T\circ u^\xi_m)}{d\rho}\right|{{\mathrm{d}}}\rho\right)^2\\ &\geq\ \frac{1}{r(1+\sup_\mathscr H|d\mathscr T|)^2}|\mathscr T(u^\xi_m(r))-\mathscr T(u^\xi_m(0))|^2\,.\end{aligned}$$ Using \[eq:vardec\] and rearranging the terms, we get for $m$ large enough
$$|\mathscr T(\gamma^r_m(\xi))-\mathscr T(\gamma_m(\xi))|^2\ \leq\ r\cdot(1+\sup_\mathscr H|d\mathscr T|)^2\cdot (C(k_*)+2)\,\delta_m\ <\ 1\,.$$ Hence, if $m$ is large enough the bound on $\mathscr T$ we were looking for follows from $$\mathscr T(\gamma^r_m(\xi))\ \leq\ \mathscr T(\gamma_m(\xi))\ +\ |\mathscr T(\gamma^r_m(\xi))-\mathscr T(\gamma_m(\xi))|\ <\ (C(k_*)+2)\ +\ 1\,.$$ The claim is thus completely established. The last step to finish the proof of Theorem \[thm-for\] is to show that $J^1_m=\emptyset$ for $m$ large enough. By contradiction, let $\xi\in J^1_m$. Since $\xi\in J^r_m$ for all $r\in[0,1]$, we see that $u_m^\xi$ is a flow line of $\Phi^{\mathscr X_{k_*}}$ contained in $\{\mathscr T<C(k_*)+3\}\setminus\{\mathscr S_{k_*}<\beta_1\}$. Using \[deltav\] and continuing the chain of inequalities in \[eq:vardec\], we find $$-\,(C(k_*)+2)\,\delta_m\ <\ \alpha_{k_*}(u_m^\xi)\ \leq\ -\,\frac{\varepsilon_0^2}{\sqrt{1+\varepsilon_0^2}}$$ (where we used that the real function $w\mapsto \frac{w}{\sqrt{1+w}}$ is increasing). Such an inequality cannot be satisfied for $m$ large, proving that the sets $J^1_m$ become eventually empty. Finally, since $J^1_m=\emptyset$, we obtain that $c_{\mathscr G}(k_*)\leq\sup_{B^d}\mathscr S_{k_*}^{\gamma^1_m}\leq c_{\mathscr G}(k_*)-\delta_m$. This contradiction finishes the proof of Theorem \[thm-for\]. In the next section we will determine when $\eta_k$ satisfies the hypotheses of the abstract theorem we have just proved. Proof of the Main Theorem {#sec:geo} ========================= We now move to the proof of points *(1)*, *(2)*, *(3)* of Theorem \[thm:main\]. In the first preparatory subsection, we will see when the action form is exact. Primitives for $\eta_k$ ----------------------- We know that $\eta_k$ is exact if and only if so is $\tau^\sigma$. The next proposition, whose simple proof we omit, gives necessary and sufficient conditions for the transgression form to be exact.
If $[\widetilde\sigma]\neq0$, then $\tau^\sigma|_{W^{1,2}_\nu}$ is not exact for any $\nu$. If $[\widetilde\sigma]=0$, then $$\begin{aligned} \widehat{P}^\sigma:W^{1,2}_0&\longrightarrow{{\mathbb R}}\\ x&\longmapsto \int_{B^2}\hat u_x^*\sigma\end{aligned}$$ is a primitive for $\tau^\sigma$. Here $\hat u_x$ is any capping disc for $x$. This definition extends the primitive $P^\sigma$, which we constructed on the subset of short loops. If $[\widetilde\sigma]_b=0$, then, given $\nu$ and a reference loop $x_\nu\in W^{1,2}_\nu$, $$\begin{aligned} \widehat{P}^\sigma:W^{1,2}_\nu&\longrightarrow{{\mathbb R}}\\ x&\longmapsto \int_{B^2}\hat u_{x_\nu,x}^*\sigma\end{aligned}$$ is a primitive for $\tau^\sigma$. Here $\hat u_{x_\nu,x}$ is a connecting cylinder from $x_\nu$ to $x$. If we take $x_0$ as a constant loop, the two definitions of $\widehat{P}^\sigma$ coincide on $W^{1,2}_0$. As an exercise, show that if $M={{\mathbb T}}^2$ and $[\sigma]\neq0$, then $\tau^\sigma|_{W^{1,2}_\nu}$ is not exact if $\nu\neq0$. We set $\widehat{S}_k:=A_k-\widehat{P}^\sigma\circ{\operatorname{pr}}_{W^{1,2}}$ in the two cases above where $\widehat{P}^\sigma$ is defined. Theorem A in [@CIPP98] tells us when $\widehat{S}_k$ is bounded from below. \[prp-below\] If $[\widetilde\sigma]=0$, then $\widehat{S}_k:\Lambda_0\rightarrow{{\mathbb R}}$ is bounded from below if and only if $k\geq c(L,\sigma)$. If $[\widetilde\sigma]_b=0$, the same is true for $\widehat{S}_k:\Lambda_\nu\rightarrow{{\mathbb R}}$. Originally the critical value was introduced by Mañé as the infimum of the values of $k$ such that $\widehat{S}_k:\Lambda_0\rightarrow{{\mathbb R}}$ is bounded from below [@Man97; @CDI97]. Thus, the proposition above establishes the equivalence between the more geometric definition given earlier and the original one. As an exercise, prove that $\widehat S_k|_{\Lambda_\nu}$ is bounded from below if and only if $\widehat S_k|_{\Lambda_0}$ is bounded from below if and only if $\widehat S_k|_{\Lambda_0}$ is non-negative.
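Underlying both the exercise above and the period bound proved next is the affine dependence of $\widehat S_k$ on $k$. As a sketch (assuming, as in the free-period setting used so far, the normalization $A_k(\gamma)=A_0(\gamma)+kT$ for $\gamma=(x,T)$), for all $k,k'$ we have
$$\widehat S_k(\gamma)\ -\ \widehat S_{k'}(\gamma)\ =\ A_k(\gamma)\ -\ A_{k'}(\gamma)\ =\ (k-k')\,T\,,$$
since the term $\widehat{P}^\sigma\circ{\operatorname{pr}}_{W^{1,2}}$ does not depend on $k$ and cancels. In particular, $k\mapsto\widehat S_k(\gamma)$ is strictly increasing for every fixed $\gamma$, which is consistent with the monotonicity implicit in Proposition \[prp-below\].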
As a by-product of Proposition \[prp-below\], we can give a criterion guaranteeing that a vanishing sequence for $\eta_k$ has bounded periods, provided $k>c(L,\sigma)$. \[per-bou\] Let $\nu\in[{{\mathbb T}},M]$ and $[\widetilde\sigma]_b=0$. If $k>c(L,\sigma)$ and $b\in{{\mathbb R}}$, then there exists a constant $C(\nu,k,b)$ such that $$\forall\, \gamma\in\Lambda_\nu\,,\quad \widehat{S}_k(\gamma)\ <\ b\ \ \Longrightarrow\ \ T\ <\ C(\nu,k,b)\,.$$ We readily compute $$T\ =\ \frac{\widehat{S}_k(\gamma)-\widehat{S}_{c(L,\sigma)}(\gamma)}{k-c(L,\sigma)}\ \leq\ \frac{b-\inf_{\Lambda_\nu}\widehat{S}_{c(L,\sigma)}}{k-c(L,\sigma)}\ =:\ C(\nu,k,b)\,.\qedhere$$ Non-contractible orbits ----------------------- We now prove the existence of non-contractible orbits as prescribed by the Main Theorem. Let $\nu\in[{{\mathbb T}},M]$ be a non-trivial class, $\sigma$ be a magnetic form such that $[\sigma]_b=0$ and $k>c(L,\sigma)$. Thanks to Proposition \[prp-below\], the infimum of $\widehat{S}_k$ on $\Lambda_\nu$ is finite. Then, we apply Theorem \[thm:fun\] with $\mathscr H=\Lambda_\nu$ and $\widehat{\mathscr S}=\widehat{S}_k$ and we obtain a vanishing sequence $(\gamma_m)_{m\in{{\mathbb N}}}$ such that $\widehat{S}_k(\gamma_m)$ is uniformly bounded. By Corollary \[per-bou\] the sequence of periods is bounded from above. By Corollary \[cor-perbel\] the sequence of periods is also bounded away from zero. Therefore, we can apply Proposition \[prp-conv\] to get a limit point of the sequence. Contractible orbits ------------------- We start by recalling a topological lemma. 
\[prp-top\] If $d\geq1$ and $\delta\leq\delta_*$ (see Lemma \[def-ret\]), there are natural bijections $$\xymatrixcolsep{35pt}\xymatrix{ \displaystyle\frac{\pi_{d+1}(M)}{\pi_1(M)}\ \ar[rd]^{F}\ar[r]&\ [\,S^{d+1},\,M\,]\ar[d]&\\ &\ \big[\,\big(B^d,\partial B^d\big)\,,\,\big(W^{1,2}_0, M_0\big)\,\big]\ \ar[r]^<<<<<{\quad i^{M_0}_{\{\ell<\delta\}}}&\ \big[\,\big(B^d,\partial B^d\big)\,,\,\big(W^{1,2}_0,\{\ell<\delta\}\big)\,\big]\,,}$$ where $\pi_{d+1}(M)/\pi_1(M)$ is the quotient of $\pi_{d+1}(M)$ by the action of $\pi_1(M)$[^1]. The trivial classes on the second line are identified with the class of constant maps in $[S^{d+1},M]$ and with the class of the zero element in $\pi_{d+1}(M)/\pi_1(M)$. The first horizontal map is $\frac{[\hat u]}{\pi_1(M)}\mapsto [\hat u]$. We leave it as an exercise to the reader to show that this map is a bijection. The vertical map sends $[\hat u]$ to $[u]$, where $u$ is defined as follows. Consider the equivalence relation $\sim$ on $B^d\times {{\mathbb T}}$: $$(z_1,s_1)\,\sim\,(z_2,s_2)\quad\quad\Longleftrightarrow\quad\quad (z_1,s_1)\,=\,(z_2,s_2)\quad \vee\quad z_1\,=\,z_2\, \in\, \partial B^d\,.$$ If we interpret $B^d$ as the unit ball in ${{\mathbb R}}^d$ and $S^{d+1}$ as the unit sphere in ${{\mathbb R}}^{d+2}$ we can define the homeomorphism $$\begin{aligned} Q:\frac{B^d\times {{\mathbb T}}}{\sim}&\ \longrightarrow\ S^{d+1}\\ [z,s]&\ \longmapsto\ (z,\sqrt{1-|z|^2}\cdot e^{2\pi is})\,,\end{aligned}$$ where $e^{2\pi is}$ belongs to $S^1\subset{{\mathbb R}}^2$. We set $u(z)(s):=(\hat u\circ Q)([z,s])$. For a proof that the vertical map is well-defined and is a bijection, we refer the reader to [@Kli78 Proposition 2.1.7]. Finally, the second horizontal map is a bijection thanks to Lemma \[def-ret\]. We can now prove the parts of the Main Theorem dealing with contractible orbits. Let $[\widetilde\sigma]_b=0$, $k>c(L,\sigma)$ and fix some non-zero $\mathfrak u\in\pi_{d+1}(M)$, which exists by hypothesis.
We apply Proposition \[prp-mp\] to the trivial interval ${\{k\}}$ and get the positive real numbers $\delta_{\{k\}}$, $b_{\{k\}}$ and $T_{\{k\}}$. Let $$\Gamma_{\mathfrak u}\,:=\ \Big\{\ \gamma=(x,T):\big(B^d,\partial B^d\big)\ \longrightarrow\ \big(\Lambda_0,M_0\times\{T_{\{k\}}\}\big)\ \ \Big|\ \ [x]\in F\big(\mathfrak u/\pi_1(M)\big)\ \Big\}\,.$$ By Proposition \[prp-top\] we see that $\Gamma_{\mathfrak u}\in\big[(B^d,\partial B^d),(\Lambda_0,M_0\times\{T_{\{k\}}\})\big]$ and that $i^{M_0\times\{T_{\{k\}}\}}_{\mathscr V^{\delta_{\{k\}}}}(\Gamma_{\mathfrak u})$ is non-trivial. Therefore, we apply Theorem \[thm-for\] with $$\left[\ \begin{aligned} \mathscr H&=\Lambda_0&\quad \mathscr I&= \{k\}&\quad\widehat{\mathscr S}_k&=\widehat S_k\\ \beta_0&=b_{\{k\}}/2 &\quad \mathscr V&=\mathcal V^{\delta_{\{k\}}}&\quad\mathscr M&=M_0\times\{T_{\{k\}}\} \\ \mathscr G&=\Gamma_{\mathfrak u} & & & & \end{aligned}\ \right]$$ and we obtain a vanishing sequence $(\gamma_m)_{m\in {{\mathbb N}}}$ such that $$\lim_{m\rightarrow+\infty}\widehat{S}_k(\gamma_m)\ =\ c_{\mathfrak u}(k)\,:=\ \inf_{\gamma\in\Gamma_{\mathfrak u}}\ \sup_{B^d}\widehat{S}_k\circ \gamma\ \geq\ b_{\{k\}} \,.$$ The sequence of periods $(T_m)$ is bounded from above by Corollary \[per-bou\]. The sequence $(T_m)$ is also bounded away from zero by Corollary \[cor-perbel\], since $\gamma_m\notin \{S_k< b_{\{k\}}/2\}$ for $m$ large enough. Applying Proposition \[prp-conv\] we obtain a limit point of $(\gamma_m)$. Let $[\widetilde\sigma]=0$ and fix $I=[k_0,k_1]\subset (e_0(L),c(L,\sigma))$. Let $\delta_I$, $b_I$ and $T_I$ be as in Proposition \[prp-mp\]. Fix $\gamma_0\in M_0\times\{T_I\}$ and $\gamma_1\in\Lambda_0$ such that $\widehat{S}_{k_1}(\gamma_1)<0$. Such an element exists thanks to Proposition \[prp-below\]. Let $u_*:[0,1]\rightarrow \Lambda_0$ be some path such that $u_*(0)=\gamma_0$ and $u_*(1)=\gamma_1$ and denote by $[u_*]\in[(B^1,\partial B^1),(\Lambda_0,\{\gamma_0,\gamma_1\})]$ its homotopy class.
By Proposition \[prp-mp\], $\gamma_0$ and $\gamma_1$ belong to different components of $\{\widehat{S}_{k_0}<b_I\}$. Thus, $i^{\{\gamma_0,\gamma_1\}}_{\{\widehat{S}_{k_0}<b_I\}}([u_*])$ is non-trivial. Therefore, we apply Theorem \[thm-for\] with $$\left[\ \begin{aligned} \mathscr H&=\Lambda_0&\quad \mathscr I&= I&\quad\widehat{\mathscr S}_k&=\widehat S_k\\ \beta_0&=b_I/2 &\quad \mathscr V&=\{\widehat{S}_{k_0}<b_I\}&\quad\mathscr M&=\{\gamma_0,\gamma_1\} \\ \mathscr G&=[u_*] & & & & \end{aligned}\ \right]$$ and we get a vanishing sequence $(\gamma^k_m)_{m\in{{\mathbb N}}}$ with bounded periods, for almost every $k\in I$. Moreover, we have $$\lim_{m\rightarrow+\infty}\widehat{S}_k(\gamma^k_m)\ =\ c_{[u_*]}(k)\,:=\ \inf_{u\in[u_*]}\ \sup_{B^1}\widehat{S}_k\circ u\ \geq\ b_I\,.$$ In particular, $\gamma^k_m\notin\{\widehat{S}_k <b_{I}/2\}$ for $m$ large enough. Hence, the periods are bounded away from zero by Corollary \[cor-perbel\]. Now we apply Proposition \[prp-conv\] to get a limit point of $(\gamma^k_m)$. Taking an exhaustion of $(e_0(L),c(L,\sigma))$ by compact intervals, we get a critical point for almost every energy in $(e_0(L),c(L,\sigma))$. Let $[\widetilde\sigma]\neq0$ and fix $I=[k_0,k_1]\subset (e_0(L),+\infty)$. Let $\delta_I$, $b_I$ and $T_I$ be as in Proposition \[prp-mp\]. Since $[\widetilde\sigma]\neq0$, there exists a non-zero $\mathfrak u\in\pi_2(M)$. We set $$\Gamma_{\mathfrak u}\,:=\ \Big\{\ \gamma=(x,T):\big(B^1,\partial B^1\big)\ \longrightarrow\ \big(\Lambda_0,M_0\times\{T_I\}\big)\ \ \Big|\ \ [x]\in F\big(\mathfrak u/\pi_1(M)\big)\ \Big\}\,.$$ By Proposition \[prp-top\] we see that $\Gamma_{\mathfrak u}\in\big[(B^1,\partial B^1),(\Lambda_0,M_0\times\{T_I\})\big]$ and that $i^{M_0\times\{T_I\}}_{\mathcal V^{\delta_I}}(\Gamma_{\mathfrak u})$ is non-trivial.
Therefore, we apply Theorem \[thm-for\] with $$\left[\ \begin{aligned} \mathscr H&=\Lambda_0&\quad \mathscr I&= I&\quad\alpha_k&=\eta_k\\ \beta_0&=b_I/2 &\quad \mathscr V&=\mathcal V^{\delta_I}&\quad\mathscr M&=M_0\times\{T_I\} \\ \mathscr G&=\Gamma_{\mathfrak u} & & & & \end{aligned}\ \right]$$ and we obtain a vanishing sequence $(\gamma^k_m)_{m\in{{\mathbb N}}}\subset \Lambda_0\setminus \{S_k<b_I/2\}$ with bounded periods, for almost every $k\in I$. Since the periods are bounded away from zero by Corollary \[cor-perbel\], Proposition \[prp-conv\] yields a limit point of $(\gamma^k_m)$, for almost every $k\in I$. Taking an exhaustion of $(e_0(L),+\infty)$ by compact intervals, we get a contractible zero of $\eta_k$ for almost every $k>e_0(L)$. Magnetic flows on surfaces I: Taĭmanov minimizers {#sec:tai} ================================================= In this and in the next section we are going to focus on the $2$-dimensional case. Therefore, let us assume that $M$ is a closed connected oriented surface. In this case $H^2(M;{{\mathbb R}})\simeq {{\mathbb R}}$, where the isomorphism is given by integration and we identify $[\sigma]$ with a real number. Up to changing the orientation on $M$, we assume that $[\sigma]\geq0$. For simplicity, we are going to work in the setting of Section \[sub:hom\] and consider only purely kinetic Lagrangians. Namely, we take $L(q,v)=\frac{1}{2}|v|^2$, where $|\cdot|$ is induced by a metric $g$. Since $L$ depends only on $g$, we will use the notation $(g,\sigma)$ where we previously used $(L,\sigma)$. We readily see that $e_m(L)=e_0(L)=0$ and that $c(g,\sigma)=0$ if and only if $\sigma=0$ (see Proposition \[prp-man\]). We recall that the periodic orbits with positive energy are parametrized by a positive multiple of the arc-length. Thus, they are immersed curves in $M$. The space of embedded curves ---------------------------- The space of curves on a $2$-dimensional manifold $M$ has a particularly rich geometric structure.
Observe, indeed, that for $n\geq3$ the curves on $M$ are generically embedded. On the other hand, if $M$ is a surface, intersections between curves and self-intersections are generically stable. Therefore, one can refine the existence problem by looking at periodic orbits having a particular shape (see the beginning of Section 1.1 in [@HS13] and references therein for a precise notion of the shape of a curve on a surface). For example, we consider the following question. For which $k$ and $\nu$ does there exist a *simple* periodic orbit $\gamma\in\Lambda_\nu$ with energy $k>0$? Let us start by investigating the case $\nu=0$. If $\gamma=(x,T)$ is a contractible simple curve, there exists an embedded disc $\hat u:B^2\rightarrow M$ such that $\hat u(e^{2\pi is})=x(s)$. This map yields a path $(u,T)$ in $\Lambda_0$ from a constant path $(x_0,T)$, representing the centre of the disc, to $(x,T)$. Integrating $\eta_k$ along this path and summing the value of $S_k$ at $(x_0,T)$, we get $$\label{int-emb} \int_0^1(u,T)^*\eta_k\ +\ S_k(x_0,T)\ =\ \frac{e(x)}{2T}\ +\ kT\ -\ \int_{B^2}\hat u^*\sigma\,.$$ Since $\hat u$ is an embedding, ${\operatorname{area}}(\hat u)\leq{\operatorname{area}}(M)$ and we find a uniform bound from below $$\label{eq-lb} \int_0^1(u,T)^*\eta_k\ +\ S_k(x_0,T)\ \geq\ 0\ +\ 0\ -\ \sup_M|\sigma|\cdot{\operatorname{area}}(\hat u)\ \geq\ -\sup_M|\sigma|\cdot{\operatorname{area}}(M)\,.$$ This observation gives us the idea of defining a functional on the space of simple contractible loops and looking for its global minima. First, we notice that $\int_{B^2}\hat u^*\sigma$ is invariant under an orientation-preserving change of parametrization. In order to make the whole right-hand side of \[int-emb\] independent of the parametrization, we ask that $(\gamma,\dot\gamma)\in\Sigma_k$.
This implies that $$\sqrt{2k}\cdot T\ =\ \ell(x)\,,\quad\quad e(x)\ =\ \ell(x)^2\,.$$ Substituting in \[int-emb\], we get $$\int_0^1(u,T)^*\eta_k\ +\ S_k(x_0,T)\ =\ \sqrt{2k}\cdot\ell(\partial D)\ -\ \int_D\sigma\ =:\,\mathcal T_k(D)\,,$$ where $$D=[\hat u]\in\mathcal D(M):=\left\{\begin{aligned} \mbox{embeddings } \hat u:B^2\longrightarrow M\,,\hspace{50pt}\\ \mbox{ up to orientation}\mbox{-preserving reparametrizations }\end{aligned}\right\}$$ and $\partial D$ represents the boundary of $D$ oriented in the counter-clockwise sense. We readily see that the critical points of this functional correspond to the periodic orbits we are looking for. If $D$ is a critical point of $\mathcal T_k:\mathcal D(M)\rightarrow{{\mathbb R}}$, then $\partial D$ is the support of a simple contractible periodic orbit with energy $k$. In view of this proposition and the fact that $\mathcal T_k$ is bounded from below, we consider a minimizing sequence $(D_m)_{m\in{{\mathbb N}}}\subset\mathcal D(M)$. However, the sequence $D_m$ might converge to a disc $D_\infty$ which is not embedded. For example, $D_\infty$ might have a self-tangency at some point $q$ on its boundary (see Figure \[pic\]). ![Minimizing sequence for $\mathcal T_k$ on $\mathcal D(M)$[]{data-label="pic"}](fig1){width="3.5in"} However, in this case the support of $D_\infty$ in $M$ can be interpreted as an annulus $A_\infty$ whose two boundary components touch exactly at $q$. Now we can resolve the singularity in the space of annuli and get an embedded annulus $A$ close to $A_\infty$. The key observation is that $\mathcal T_k$ can be extended to the space of annuli and that $$\label{ine-tai} \mathcal T_k(D_\infty)\ =\ \mathcal T_k(A_\infty)\ >\ \mathcal T_k(A)\,.$$ To justify the inequality in the passage above, we observe that $\ell(\partial A)<\ell(\partial A_\infty)$ from classical estimates in Riemannian geometry and that the contribution given by the integral of $\sigma$ is of higher order.
This heuristic argument prompts us to give the following definitions. Let $\mathcal E(M)=\{\mbox{oriented embedded surfaces }\Pi\rightarrow M\}\cup\{\emptyset\}$ and denote by $\mathcal E_+(M)$ and $\mathcal E_-(M)$ the surfaces having the same orientation as $M$ and the opposite orientation, respectively. If $\Pi\in\mathcal E(M)$, then $\partial \Pi$ denotes the (possibly empty) multi-curve made by the boundary components of $\Pi$. If we define the length $\ell(\partial \Pi)$ as the sum of the lengths of the boundary components, we have a natural extension $$\begin{aligned} \mathcal T_k:\mathcal E(M)&\ \longrightarrow\ {{\mathbb R}}\\ \Pi&\ \longmapsto\ \sqrt{2k}\cdot\ell(\partial \Pi)\ -\ \int_\Pi\sigma\,.\end{aligned}$$ As in \[eq-lb\] we find that $\mathcal T_k$ is bounded from below by $-\sup|\sigma|\cdot{\operatorname{area}}(M)$. Moreover, we observe that there is a bijection $$\label{tai-inv} \begin{aligned} \mathcal E_+(M)&\ \longrightarrow\ \mathcal E_-(M)\\ \Pi&\ \longmapsto\ M\setminus\mathring{\Pi} \end{aligned} \quad\quad\quad\mbox{such that}\quad\quad \begin{aligned} \mathcal T_k(M\setminus\mathring{\Pi})\ =\ \mathcal T_k(\Pi)\ +\ \int_M\sigma\,. \end{aligned}$$ Therefore, it is enough to look for a minimizer on $\mathcal E_-(M)$. The chain of inequalities \[ine-tai\] hints at the following result. \[prp-tai\] For all $k>0$, there exists a minimizer $\Pi^k$ of $\mathcal T_k|_{\mathcal E_-(M)}$. If $\partial \Pi^k=\{\gamma^k_i\}_i$, then the $\gamma^k_i$ are periodic orbits with energy $k$. For a proof of this proposition we refer to [@Tai93] and [@CMP04]: - In the former reference, Taĭmanov uses a finite-dimensional reduction and works on the space of surfaces $\Pi\in\mathcal E(M)$ whose boundary is made by piecewise solutions of the twisted Euler-Lagrange equations with energy $k$. Such a method was also recently extended to general Tonelli Lagrangians on surfaces in [@AM16].
- In the latter reference, the authors use a weak formulation of the problem on the space of integral currents $I_2(M)\supset \mathcal E(M)$. In order to use Proposition \[prp-tai\] to prove the existence of periodic orbits with energy $k$, we have to ensure that $\partial \Pi^k\neq\emptyset$. To this purpose, we observe that $\partial\Pi^k=\emptyset$ implies $\Pi^k\in\{\emptyset,\overline M\}$, where $\overline M$ is $M$ with the opposite orientation. We easily compute $\mathcal T_k(\emptyset)=0$ and $\mathcal T_k(\overline M)=\int_M\sigma\geq0$. Therefore, for every $k>0$ we have $$\inf_{\mathcal E_-(M)}\mathcal T_k\ \leq\ 0\quad\quad\mbox{and}\quad\quad \Big(\ \inf_{\mathcal E_-(M)}\mathcal T_k\ <\ 0\quad\Longrightarrow\quad\partial \Pi^k\ \neq\ \emptyset\ \Big).$$ Since the family of functionals $\mathcal T_k$ is monotone in $k$, we are led to define $$\tau(g,\sigma)\,:=\ \inf\Big\{\,k\ \big|\ \inf_{\mathcal E_-(M)}\mathcal T_k\,=\,0\,\Big\}\,.$$ The value $\tau(g,\sigma)$ is a non-negative real number. Moreover, $$\tau(g,\sigma)\ >\ 0\ \ \quad\Longleftrightarrow\quad\ \ \sigma_{q_0}\ <\ 0\,, \mbox{ for some }q_0\in M\,.$$ If $\sigma$ is exact, then $$\tau(g,\sigma)\ =\ c_0(g,\sigma)\,:=\ \inf_{d\theta=\sigma}\sup_{q\in M}|\theta_q|\,.$$ We leave the proof of the first statement of the proposition as an exercise to the reader. The second statement follows from [@CMP04]. We can summarize our answer to the question raised at the beginning of this section with the following theorem. \[thm-tai\] Suppose that there exists $q_0\in M$ such that $\sigma_{q_0}<0$. 
Then, we can find a positive real number $\tau(g,\sigma)$, coinciding with $c_0(g,\sigma)$ when $\sigma$ is exact, such that for every $k\in(0,\tau(g,\sigma))$, there exists a non-empty set of simple periodic orbits $\{\gamma^k_i\}$ having energy $k$ and satisfying $$\sum_i\ [\gamma^k_i]\ =\ 0\,\in\, H^1(M;{{\mathbb Z}})\,.$$ Magnetic flows on surfaces II: stable energy levels {#sec:sta} =================================================== In this last section we continue the study of twisted Lagrangian flows of kinetic type on surfaces by investigating the stability properties of their energy levels. To have a better geometric intuition, we are going to pull back the twisted symplectic form to the tangent bundle. Thus, let $\flat :TM\rightarrow T^*M$ be the duality isomorphism given by $g$. We define the twisted tangent bundle as the symplectic manifold $(TM,\omega_{g,\sigma})$, where $\omega_{g,\sigma}:=d(\flat^*\lambda)-\pi^*\sigma$. We readily see that $X_{(g,\sigma)}$ is the Hamiltonian flow of $E$ with respect to the symplectic form $\omega_{g,\sigma}$. In this language, our problem is to understand when the hypersurface $\Sigma_k$ is stable in the twisted tangent bundle. We will summarize the current knowledge on the subject in the following four propositions. The first one sheds light on the relation between stability and the contact property in the generic case. \[prp-st1\] Let $k>0$. If $[\sigma]\neq0$ and $M={{\mathbb T}}^2$, $\Sigma_k$ is not of contact type. Moreover, if $X_{(g,\sigma)}|_{\Sigma_k}$ does not admit any non-trivial integral of motion, then: 1. If $[\sigma]=0$, or if $M\neq{{\mathbb T}}^2$ and $[\sigma]\neq0$, $\Sigma_k$ is stable if and only if it is of contact type. 2. If $M={{\mathbb T}}^2$ and $[\sigma]\neq0$, every stabilizing form on $\Sigma_k$ is closed and has non-vanishing integral over the fibers of $\pi$. The second proposition gives an obstruction to the contact property. \[prp-st2\] The following statements hold true. 1.
If $[\sigma]=0$, then $\Sigma_k$ is not of negative contact type. 2. If $[\sigma]\neq0$, then 1. if $M=S^2$, $\Sigma_k$ is not of negative contact type; 2. if $M$ has genus higher than $1$, there exists $c_h(g,\sigma)>0$ such that - $\Sigma_k$ is not of negative contact type, when $k>c_h(g,\sigma)$; - $\Sigma_{c_h(g,\sigma)}$ is not of contact type; - $\Sigma_k$ is not of positive contact type, when $k<c_h(g,\sigma)$; The third proposition deals with positive results on stability. \[prp-st3\] The following statements hold true. 1. If $[\sigma]=0$, $\Sigma_k$ is of contact type if $k>c_0(g,\sigma)$. If $M={{\mathbb T}}^2$, for every Riemannian metric $g$ there exists an exact form $\sigma_g$ for which $\Sigma_{c_0(g,\sigma_g)}$ is of contact type. 2. If $[\sigma]\neq0$ and $M\neq{{\mathbb T}}^2$, $\Sigma_k$ is of contact type for $k$ big enough. 3. If $\sigma$ is a symplectic form on $M$, then $\Sigma_k$ is stable for $k$ small enough. The last proposition deals with negative results on stability. \[prp-st4\] The following statements hold true. 1. If $[\sigma]=0$ and $M\neq{{\mathbb T}}^2$, $\Sigma_k$ is not of contact type, for $k<c_0(g,\sigma)$; 2. If $[\sigma]\neq0$ and there exists $q\in M$ such that $\sigma_{q}<0$, then 1. when $M\neq{{\mathbb T}}^2$, $\Sigma_k$ is not of contact type, for $k$ low enough; 2. when $M={{\mathbb T}}^2$, $\Sigma_k$ does not admit a closed stabilizing form, for $k$ low enough. 3. If $M=S^2$, there exists an energy level associated to some $\overline g$ and some everywhere positive form $\overline\sigma$, which is not of contact type. Before embarking on the proof of these propositions, we make the following observation. Let $k>0$ and set $s:=1/\sqrt{2k}$. Then, the flows of $\Phi^{(g,\sigma)}|_{\Sigma_k}$ and $\Phi^{(g,s\sigma)}|_{\Sigma_{1/2}}$ are conjugate up to a time reparametrization.
By Section \[sub:hom\] we know that the projections to $M$ of the trajectories of $\Phi^{(g,\sigma)}|_{\Sigma_k}$ and of $\Phi^{(g,s\sigma)}|_{\Sigma_{1/2}}$ both satisfy the equation $\kappa_\gamma=s\cdot f(\gamma)$. Therefore, if $$t\ \longmapsto\ \left(\gamma(t),\frac{d\gamma}{dt}(t)\right)$$ is a trajectory of the former flow and we set $\gamma_s(t')\,:=\ \gamma(st')$, then $$t'\ \longmapsto\ \left(\gamma_s(t'),\frac{d\gamma_s}{dt'}(t')\right)\ =\ \left(\gamma(st'),\,s\cdot\frac{d\gamma}{dt}(st')\right)$$ is a trajectory of the latter flow. Therefore, given $(g,\sigma)$, instead of studying the flow $\Phi^{(g,\sigma)}$ on each energy level $\Sigma_k$, we can study the $1$-parameter family of flows $\Phi^{(g,s\sigma)}$ on $SM:=\Sigma_{1/2}$ as $s$ varies in $(0,+\infty)$. The advantage of rescaling $\sigma$ is that now we can work on a fixed three-dimensional manifold: $SM$. The tangent bundle of $SM$ has a global frame $(X,V,H)$ and corresponding dual co-frame $(\alpha,\psi,\beta)$, which we now define. Let $\mathcal H\subset SM$ be the horizontal distribution given by the Levi-Civita connection of $g$. For every $(q,v)\in SM$, $X_{(q,v)}$ and $H_{(q,v)}$ are defined as the unique elements in $\mathcal H$ such that $$d_{(q,v)}\pi\big(X_{(q,v)}\big)\ =\ v\,,\quad\quad d_{(q,v)}\pi\big(H_{(q,v)}\big)\ =\ \imath \cdot v\,.$$ Analogously, $\alpha_{(q,v)}$ and $\beta_{(q,v)}$ are defined by $$\alpha_{(q,v)}(\cdot)\ =\ g_q\big(v,d_{(q,v)}\pi(\cdot)\big)\,,\quad\quad \beta_{(q,v)}(\cdot)\ =\ g_q\big(\imath\cdot v,d_{(q,v)}\pi(\cdot)\big)\,.$$ The vector $V$ is the generator of the rotations along the fibers $\varphi\mapsto(q,\cos\varphi\, v+\sin\varphi\,\imath \cdot v)$. The form $\psi$ is the connection $1$-form of the Levi-Civita connection. 
If $W\in T_{(q,v)}SM$ and $w(t)=(\gamma(t),v(t))$ is a curve such that $w(0)=(q,v)$ and $\dot w(0)=W$, then $$\psi_{(q,v)}(W)\ =\ g_q\big(\nabla_{\dot\gamma(0)}v,\imath\cdot v\big).$$ Finally, we orient $SM$ using the frame $(X,V,H)$. The proof of the following proposition giving the structural relations for the co-frame is a particular case of the identities proven in [@GK02]. Let $K$ be the Gaussian curvature of $g$. We have the relations: $$d\alpha\ =\ \psi\wedge\beta\,,\quad\quad d\psi\ =\ K\beta\wedge\alpha\ =\ -K\pi^*\mu\,,\quad\quad d\beta\ =\ \alpha\wedge\psi\,.$$ Using the frame $(X,V,H)$ we can write $$X_s\,:=\ X_{(g,s\sigma)}\ =\ X\,+\,sfV\,,\quad\quad \omega_s\,:=\ \omega_{g,s\sigma}|_{SM}\ =\ d\alpha\,-\,s\pi^*\sigma\,.$$ We also use the notation $\Phi^s$ for the flow of $X_s$ on $SM$. Stability of the homogeneous systems ------------------------------------ Let us start by describing the stability properties of the homogeneous examples introduced in Section \[sub:hom\]. ### The two-sphere In this case we have $\sigma=\mu=K\mu$. Hence, $$\omega_s\ =\ d\alpha-s\pi^*\sigma\ =\ d(\alpha+s\psi)\quad \mbox{and}\quad (\alpha+s\psi)(X_s)\ =\ (\alpha+s\psi)(X+sV)\ =\ 1\,+\,s^2\,.$$ Every energy level is of positive contact type. ### The two-torus In this case we compute $$d\psi\ =\ K\mu\ =\ 0\quad \mbox{and}\quad \psi(X_s)\ =\ \psi(X+sV)\ =\ s\,.$$ Every energy level is stable. ### The hyperbolic surface In this case we have $\sigma=\mu=-K\mu$. Hence, $$\omega_s\ =\ d\alpha-s\pi^*\sigma\ =\ d(\alpha-s\psi)\quad \mbox{and}\quad (\alpha-s\psi)(X_s)\ =\ (\alpha-s\psi)(X+sV)\ =\ 1\,-\,s^2\,.$$ Every energy level $\Sigma_k$ with $k>\frac{1}{2}$ is of positive contact type. Every energy level $\Sigma_k$ with $k<\frac{1}{2}$ is of negative contact type. As follows from Proposition \[prp-st2\], $c_h(g,\sigma)=1/2$ and $\Sigma_{1/2}$ is not stable. 
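The primitives used in the homogeneous examples can all be checked against the structural relations recorded above. As a sketch for the hyperbolic case (assuming the standard normalizations $\alpha(V)=\psi(X)=0$ and $\psi(V)=1$ for the co-frame), with $K\equiv-1$ and $\sigma=\mu$ we compute, using $d\psi=-K\pi^*\mu$,
$$d(\alpha-s\psi)\ =\ d\alpha\ +\ sK\,\pi^*\mu\ =\ d\alpha\ -\ s\,\pi^*\mu\ =\ \omega_s\,,$$
and, since $f\equiv1$,
$$(\alpha-s\psi)(X_s)\ =\ (\alpha-s\psi)(X+sV)\ =\ \alpha(X)\ -\ s^2\,\psi(V)\ =\ 1\,-\,s^2\,,$$
which recovers the dichotomy between positive contact type for $s<1$ (i.e. $k>\frac12$) and negative contact type for $s>1$ (i.e. $k<\frac12$).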
Invariant measures on $SM$ -------------------------- A fundamental ingredient in the proof of the four propositions is the notion of invariant measure for a flow. In this subsection, we recall this notion and we observe that twisted systems of purely kinetic type always possess a natural invariant measure called the *Liouville measure*. A Borel measure $\xi$ on $SM$ is $\Phi^{s}\mathsf{-invariant}$, if $\xi(\Phi^{s}_{t}(A))=\xi(A)$, for every $t\in{{\mathbb R}}$ and every Borel set $A$. This is equivalent to asking $$\int_{SM}dh(X_s)\,\xi\ =\ 0\,,\quad\quad\forall\,h\in C^\infty(SM,{{\mathbb R}})\,.$$ The $\mathsf{rotation\ vector}$ of $\xi$ is $\rho(\xi)\in H_1(SM,{{\mathbb R}})$ defined by duality on $[\tau]\in H^1(SM,{{\mathbb R}})$: $$<[\tau],\rho(\xi)>\ =\ \int_{SM}\tau(X_s)\,\xi\,,$$ where $\tau\in\Omega^1(SM)$ is any closed form representing the class $[\tau]$. Since $X_{s}$ is a section of $\ker\omega_s$ and $\omega_s$ is nowhere vanishing, we can find a unique volume form $\Omega_s$ such that $\imath_{X_{s}}\Omega_s=\omega_s$. We can write $\Omega_s=\tau_s\wedge\omega_s$, where $\tau_s$ is any $1$-form such that $\tau_s(X_{s})=1$. We easily see that $\alpha(X+sfV)=1+0$. Hence, $\Omega_s=\alpha\wedge\omega_s=\alpha\wedge d\alpha$. Notice, indeed, that $\alpha\wedge\pi^*\sigma=0$ since it is annihilated by $V$. The $\mathsf{Liouville\ measure\ \xi_{SM}}$ on $SM$ is the Borel measure defined by integration with the differential form $\alpha\wedge d\alpha$. It is an invariant measure for $\Phi^{s}$ for every $s>0$. In order to compute the rotation vector of $\xi_{SM}$, we need a lemma which tells us when $\omega_s$ is exact. The easy proof is left to the reader. \[lem-exa\] If $\sigma$ is exact, then $\pi^*\sigma$ is exact and we have an injection $$\begin{aligned} \mbox{Primitives of }\ \sigma&\ \xrightarrow{\quad\quad}\ \mbox{Primitives of }\ \omega_s\\ \zeta&\ \xmapsto{\quad\quad}\ \alpha\,-\,s\pi^*\zeta\,. 
\end{aligned}$$ If $M\neq{{\mathbb T}}^2$, then $\pi^*\sigma$ is exact and we have an injection $$\begin{aligned} \mbox{Primitives of }\ \sigma\,-\,\frac{[\sigma]}{2\pi\chi(M)}K\mu&\ \xrightarrow{\quad\quad}\ \mbox{Primitives of }\ \omega_s\\ \zeta&\ \xmapsto{\quad\quad}\ \alpha\,-\,s\pi^*\zeta\,+\,s\frac{[\sigma]}{2\pi\chi(M)}\psi\,. \end{aligned}$$ If $M={{\mathbb T}}^2$ and $\sigma$ is non-exact, then $\omega_s$ is non-exact. We can now state a proposition concerning $\rho(\xi_{SM})$. \[prp-rot\] If $[\sigma]\neq0$ and $M={{\mathbb T}}^2$, then there holds $\rho(\xi_{SM})=s[\sigma]\cdot[S_qM]$, where $[S_qM]\in H_1(SM,{{\mathbb Z}})$ is the class of a fiber of $SM\rightarrow M$ oriented counter-clockwise. Otherwise, $\rho(\xi_{SM})=0$. Let $[\tau]\in H^1(SM;{{\mathbb R}})$. We notice that $$\tau(X_{s})\,\alpha\,\wedge\, d\alpha\ =\ \imath_{X_s}\Big(\tau\,\wedge\,\alpha\,\wedge\, d\alpha\Big)\ +\ \tau\,\wedge\,\imath_{X_s}\big(\alpha\,\wedge\, d\alpha\big)\ =\ 0\ +\ \tau\,\wedge\,\omega_s\,.$$ Therefore, $$<[\tau],\rho(\xi_{SM})>\ =\ \int_{SM}\tau\,\wedge\,\omega_s\ =\ s\int_{SM}\tau\,\wedge\,\pi^*\sigma\,.$$ If $M={{\mathbb T}}^2$, then $S{{\mathbb T}}^2\simeq S^1\times {{\mathbb T}}^2$ and we can use Fubini’s theorem to integrate separately in the vertical directions and in the horizontal direction. Observe that since $\tau$ is closed, the integral over a fiber $S_q{{\mathbb T}}^2$ does not depend on $q$. Thus we find $$\int_{S{{\mathbb T}}^2}\tau\,\wedge\,\pi^*\sigma\ =\ <[\tau],[S_q{{\mathbb T}}^2]>\cdot\,[\sigma]$$ and the proposition is proven for the $2$-torus. When $M\neq{{\mathbb T}}^2$, $\pi^*\sigma$ is exact and, therefore, $\int_{SM}\tau\wedge\pi^*\sigma=0$. The proposition is proven also in this case. We now proceed to the proofs of the four propositions. Proof of Proposition \[prp-st1\] -------------------------------- If $M={{\mathbb T}}^2$ and $[\sigma]\neq0$, then $\omega_s$ is not exact by Lemma \[lem-exa\].
In particular, $SM$ cannot be of contact type. This proves the first statement of the proposition. Now let $\tau_s\in\Omega^1(SM)$ be a stabilizing form for $\omega_s$. Since $\ker(d\tau_s)\supset \ker\omega_s$, there exists a function $\rho_s:SM\rightarrow{{\mathbb R}}$ such that $d\tau_s=\rho_s\omega_s$. Taking the exterior differential in this equation, we get $0=d\rho_s\wedge\omega_s$. Plugging in the vector field $X_{s}$, we get $0=d\rho_s(X_{s})\omega_s$. Since $\omega_s$ is nowhere zero, we conclude that $d\rho_s(X_{s})=0$. Namely, $\rho_s$ is a first integral for the flow. By assumption, $\rho_s$ is equal to a constant. If $\rho_s=0$, then $\tau_s$ is closed; if $\rho_s\neq0$, then $\tau_s$ is a contact form. Suppose the first alternative holds. Since $\tau_s(X_s)\neq0$ everywhere, we have $$0\ \neq\ \int_{SM}\tau_s(X_s)\xi_{SM}\ =\ <[\tau_s],\rho(\xi_{SM})>\,.$$ By Proposition \[prp-rot\], this can only happen if $M={{\mathbb T}}^2$ and $<[\tau_s],[S_q{{\mathbb T}}^2]>\neq0$, which is what we had to prove. Proof of Proposition \[prp-st2\] -------------------------------- The proof of the second proposition is based on the fact that when $\omega_s$ is exact we can associate a number to every invariant measure with zero rotation vector. Suppose $\omega_s$ is exact and that $\xi$ is a $\Phi^s$-invariant measure with $\rho(\xi)=0$. We define the $\mathsf{action}$ of $\xi$ as the number $$\label{eq-act} \mathcal S_s(\xi)\,:=\ \int_{SM}\tau_s(X_s)\,\xi\,,$$ where $\tau_s$ is any primitive for $\omega_s$. Such a number does not depend on $\tau_s$ since $\rho(\xi)=0$. The action of invariant measures gives an obstruction to being of contact type. \[lem-ct\] Suppose $\omega_s$ is exact and that $\xi$ is a non-zero $\Phi^s$-invariant measure with $\rho(\xi)=0$. If $\mathcal S_s(\xi)\leq0$, then $SM$ cannot be of positive contact type. If $\mathcal S_s(\xi)\geq0$, then $SM$ cannot be of negative contact type.
If $SM$ is of positive contact type, there exists $\tau_s$ such that $d\tau_s=\omega_s$ and $\tau_s(X_s)>0$. Therefore, $$\mathcal S_s(\xi)\ =\ \int_{SM}\tau_s(X_s)\,\xi\ \geq\ \inf_{SM}\tau_s(X_s)\cdot\xi(SM)\ >\ 0\,.$$ For the case of negative contact type, we argue in the same way. Let us now compute the action of the Liouville measure. \[prp-act\] If $\sigma$ is exact, then $$\label{act-lio1} \mathcal S_s(\xi_{SM})\ =\ \xi_{SM}(SM)\ =\ 2\pi[\mu]\,.$$ If $M\neq{{\mathbb T}}^2$, then $$\label{act-lio2} \mathcal S_s(\xi_{SM})\ =\ \xi_{SM}(SM)\ +\ s^2\frac{[\sigma]^2}{\chi(M)}\,.$$ If $\sigma=d\zeta$, then $\alpha-s\pi^*\zeta$ is a primitive of $\omega_s$ by Lemma \[lem-exa\] and we have $$\label{eq-funex} (\alpha-s\pi^*\zeta)(X_s)_{(q,v)}\ =\ 1\ -\ s\zeta_q(v)\,,\quad\forall\,(q,v)\in SM\,.$$ Consider the *flip* $I:SM\rightarrow SM$ given by $I(q,v):=(q,-v)$. We see that $$(I^*\alpha)_{(q,v)}\ =\ \alpha_{I(q,v)}dI\ =\ g_q(-v,d\pi\, dI\cdot)\ =\ -\,\alpha_{(q,v)}\,,$$ so that $I^*\alpha=-\alpha$ and hence $I^*(\alpha\wedge d\alpha)=\alpha\wedge d\alpha$. Therefore $\xi_{SM}$ is $I$-invariant. On the other hand, the function $(q,v)\mapsto\zeta_q(v)$ is odd under $I$, that is, $\zeta_q(-v)=-\zeta_q(v)$. Therefore, $$\label{eq-ex0} \int_{SM}\zeta\,\xi_{SM}\ =\ 0$$ and, from the definition of the action, we see that the first identity is satisfied. To prove the second identity, we consider a primitive $\alpha-s\pi^*\zeta+s\frac{[\sigma]}{2\pi\chi(M)}\psi$ for $\omega_s$ as prescribed by Lemma \[lem-exa\]. We compute $$\left(\alpha\ -\ s\pi^*\zeta\ +\ s\frac{[\sigma]}{2\pi\chi(M)}\psi\right)(X_s)_{(q,v)}\ =\ 1\ -\ s\zeta_q(v)\ +\ s^2\frac{[\sigma]}{2\pi\chi(M)}f(q)\,.$$ Thus, we need to estimate the integral of $f\circ\pi$ on $SM$. Let $U_i$ be an open cover of $M$ such that $SU_i\simeq S^1\times U_i$ and let $a_i$ be a partition of unity subordinate to it.
We have $$\begin{aligned} \int_{SM}f(q)\,\alpha\wedge d\alpha\ =\ \int_{SM}f(q)\,\alpha\wedge\psi\wedge\beta\ &=\ -\int_{SM}f(q)\,\psi\wedge\pi^*\mu\\ &=\ -\sum_i\int_{SU_i}a_i(q)\,\psi\wedge\pi^*\sigma\\ &=\ -\sum_i\int_{S^1\times U_i}a_i(q)\,(-d\varphi\wedge\pi^*\sigma)\\ &=\ \sum_i\int_{U_i}a_i(q)\,\sigma\int_{S^1}d\varphi\\ &=\ 2\pi\sum_i\int_{U_i}a_i(q)\,\sigma\\ &=\ 2\pi[\sigma]\,,\end{aligned}$$ where $\varphi$ is an angular coordinate on $S_qU_i$ going in the clockwise direction (hence the presence of an additional minus sign in the third line). Putting this computation together with the vanishing of $\int_{SM}\zeta\,\xi_{SM}$, we get the desired identity. Proposition \[prp-st2\] now follows from Lemma \[lem-ct\] and Proposition \[prp-act\] after defining $$c_h(g,\sigma)\,:=\ -\,\frac{[\sigma]^2}{4\pi\chi(M)[\mu]}\,,\quad\mbox{when $M$ has genus higher than one}\,.$$ We have seen in the homogeneous example above that $c_h(g,\sigma)=c(g,\sigma)$. The relation between $c_h$ and the Mañé critical value was studied in general by G. Paternain in [@Pat09]. There the author proves that $c_h(g,\sigma)\leq c(g,\sigma)$ and that $c_h(g,\sigma)=c(g,\sigma)$ if and only if $g$ is a metric of constant curvature and $\sigma$ is a multiple of the area form. Proof of Proposition \[prp-st3\] -------------------------------- Suppose that $\sigma$ is exact and let us consider a primitive $\alpha-s\pi^*\zeta$ given by Lemma \[lem-exa\]. We have $$(\alpha\ -\ s\pi^*\zeta)(X_s)_{(q,v)}\ =\ 1\ -\ s\zeta_q(v)\ \geq\ 1\ -\ s\sup_{M}|\zeta|\,.$$ Requiring that the right-hand side is positive is equivalent to requiring that $$k\ =\ \frac{1}{2s^2}\ >\ \sup_M\frac{1}{2}|\zeta|^2\,.$$ Since this holds for every $\zeta$ which is a primitive for $\sigma$, we have that the last inequality is equivalent to $k>c_0(g,\sigma)$. Contreras, Macarini and G. Paternain also found in [@CMP04] examples of exact systems on ${{\mathbb T}}^2$ that are of contact type for $k=c_0(g,\sigma)$ (see also [@Ben14 Section 4.1.1]).
We will not discuss these examples here and we refer the reader to the cited literature for more details. Let us now deal with the non-exact case. If $M\neq {{\mathbb T}}^2$, then we consider a primitive of the form $\alpha-s\pi^*\zeta+s\frac{[\sigma]}{2\pi\chi(M)}\psi$ and we compute $$\label{quantity} \Big(\alpha\ -\ s\pi^*\zeta\ +\ s\frac{[\sigma]}{2\pi\chi(M)}\psi\Big)(X_s)_{(q,v)}\ =\ 1\ -\ s\zeta_q(v)\ +\ s^2\frac{[\sigma]}{2\pi\chi(M)}f(q)\,.$$ We can estimate from below: $$1\ -\ s\zeta_q(v)\ +\ s^2\frac{[\sigma]}{2\pi\chi(M)}f(q)\ \geq\ 1\ -\ s\sup_M|\zeta|-s^2\left|\frac{[\sigma]}{2\pi\chi(M)}\right|\cdot\sup_M|f|\,$$ and we see that this quantity is strictly positive for $s$ small enough. Suppose now that $\sigma$ is a symplectic form on $M$. We have three cases. 1. If $M=S^2$, then the quantity in question is bounded from below by $$1-s\sup_M|\zeta|+s^2\frac{[\sigma]}{4\pi}\cdot\inf_Mf\,.$$ Since $[\sigma]>0$, we have that $\inf f>0$ and we see that such quantity is strictly positive for big $s$. 2. If $M$ has genus larger than $1$, then the quantity in question is bounded from above by $$1\ +\ s\sup_M|\zeta|\ +\ s^2\frac{[\sigma]}{2\pi\chi(M)}\cdot\inf_Mf\,.$$ Since $\chi(M)<0$ and $\inf f>0$, such quantity is strictly negative for big $s$. 3. If $M={{\mathbb T}}^2$, then there exists a closed form $\tau\in\Omega^1(S{{\mathbb T}}^2)$ such that $\tau(V)=1$ (prove this statement as an exercise). Thus, we get $$\tau(X_s)\ =\ \tau(X)\ +\ sf\ \geq\ \inf_{SM}\tau(X)\ +\ s\inf_M f\,$$ and such quantity is positive provided $\inf f>0$ and $s$ is big enough. Proof of Proposition \[prp-st4\] -------------------------------- If $\sigma$ is exact and $k<c_0(g,\sigma)$, we can use Theorem \[thm-tai\] to find an embedded surface $\Pi\subset M$ with non-empty boundary $\partial \Pi=\{\gamma_i\}$ such that $\mathcal T_k(\Pi)<0$ and the $\gamma_i$’s are periodic orbits of $\Phi^s$ (parametrized by arc-length).
Let $(\gamma_i,\dot\gamma_i)$ be the corresponding curve on $SM$ and let $\xi_i$ be the associated invariant measure. Define $\xi_{\partial \Pi}:=\sum_i\xi_i$. What is its rotation vector? Call $\pi_*:H_1(SM;{{\mathbb R}})\rightarrow H_1(M;{{\mathbb R}})$ the map induced by the projection $\pi$ in homology and observe that $$\pi_*(\rho(\xi_{\partial \Pi}))\ =\ \sum_i\pi_*(\rho(\xi_i))\ =\ \sum_i\,[\gamma_i]\ =\ [\partial \Pi]\ =\ 0\,.$$ The map $\pi_*$ is an isomorphism if $M\neq{{\mathbb T}}^2$. Thus, we conclude that $\rho(\xi_{\partial \Pi})=0$, if $M\neq{{\mathbb T}}^2$. Let us compute the action in this case. As before, we use a primitive $\alpha-s\pi^*\zeta$: $$\label{act-tai} \begin{aligned} \mathcal S_s(\xi_{\partial \Pi}) = \sum_i \int_{SM}\!\!(1-s\zeta_q(v))\xi_i = \sum_i\int_0^{\ell(\gamma_i)}\!\!\!\!\big(1-s\zeta_{\gamma_i}(\dot\gamma_i)\big){{\mathrm{d}}}t &= \sum_i\Big(\ell(\gamma_i)-s\int_0^{\ell(\gamma_i)}\!\!\!\!\gamma_i^*\zeta\Big)\\ &= \ell(\partial \Pi)-s\int_{\Pi}\sigma\\ &= s\mathcal T_k(\Pi)\,. \end{aligned}$$ By hypothesis the last quantity is negative and Lemma \[lem-ct\] tells us that $\Sigma_k$ cannot be of positive contact type. Since, by Proposition \[prp-st2\], $\Sigma_k$ cannot be of negative contact type either, point *(1)* of the proposition is proved. We now move to prove point *(2a)* with the aid of a little exercise. We prove a generalization of the computation above, valid when $M\neq{{\mathbb T}}^2$. Let $\Pi$ be an embedded surface such that $\partial \Pi$ is a union of periodic orbits and let $\xi_{\partial \Pi}$ be the invariant measure constructed as before. Then, $$\label{eq-gen} \frac{\mathcal S_s(\xi_{\partial \Pi})}{s}\ =\ \mathcal T_k(\Pi)\ +\ \frac{\mathfrak o(\Pi)\chi(\Pi)[\sigma]}{\chi(M)}\,,$$ where $\mathfrak o(\Pi)\in\{+1,-1\}$ records the orientation of $\Pi$.
To prove this identity, one recalls that $\kappa_{\gamma_i}=sf(\gamma_i)$ and then uses the Gauss-Bonnet theorem (taking into account orientations) to express the integral of the geodesic curvature along $\partial \Pi$. What happens if we consider $M\setminus\Pi$? Do the two expressions for $\mathcal S_s(\xi_{\partial \Pi})$ agree? Remember the relation between the Euler characteristics of $\Pi$ and $M\setminus\Pi$. The problem with this formula is that Theorem \[thm-tai\] does not give any information on the Euler characteristic of $\Pi$. To circumvent this problem we need the following result by Ginzburg [@Gin87] (see also [@AB15 Chapter 7]). \[prp-gin\] If $\sup f>\varepsilon$ for some $\varepsilon>0$, there exists a constant $C>0$ such that for every small enough $k$ we can find a simple periodic orbit $\gamma^k_+$ supported on $\{f>\varepsilon\}$ and such that $\ell(\gamma^k_+)\leq\sqrt{2k}C$. If $\inf f<-\varepsilon$, for some $\varepsilon>0$, there exists $C>0$ such that for every small enough $k$, there exists a simple periodic orbit $\gamma^k_-$ supported on $\{f<-\varepsilon\}$ and such that $\ell(\gamma^k_-)\leq\sqrt{2k} C$. If $f$ is negative at some point, by Proposition \[prp-gin\], there exists $\gamma^k_-$ with the properties listed above, for $k$ small. In particular, $\gamma^k_-$ bounds a small disc $D^k_-$. Since the geodesic curvature of $\gamma^k_-$ is very negative, such disc lies in $\mathcal E_-(M)$. When $M\neq{{\mathbb T}}^2$, we use the generalized formula and find $$\frac{\mathcal S_s(\xi_{\partial D^k_-})}{s}\ =\ \mathcal T_k(D^k_-)\ -\ \frac{2}{\chi(M)}[\sigma]\,.$$ By the estimate on the length of $\gamma^k_-$ we get that $|\mathcal T_k(D^k_-)|\leq Ck^2$. Therefore, $\mathcal S_s(\xi_{\partial D^k_-})$ has the opposite sign of $\chi(M)$ for $k$ small enough. Combining Lemma \[lem-ct\] and Proposition \[prp-st2\], point *(2a)* is proven. Let us deal now with the case of the $2$-torus. Since $[\sigma]>0$, by Proposition \[prp-gin\] there also exists $\gamma^k_+$ bounding a disc $D_+^k$. Let $\Pi^k=D^k_-\cup D^k_+$.
We claim that the measure $\xi_{\partial \Pi^k}$ has zero rotation vector. Prove the claim by showing that $(\gamma^k_+,\dot\gamma^k_+)$ is freely homotopic in $S{{\mathbb T}}^2$ to $[S_q{{\mathbb T}}^2]$, namely the class of a fiber with orientation given by $V$. Analogously, prove that $(\gamma^k_-,\dot\gamma^k_-)$ is freely homotopic to a fiber with the opposite orientation. If $\tau_s$ is a closed stabilizing form, we have that the function $\tau_s(X_s)$ is nowhere zero. Therefore, $$0\ \neq\ \int_{S{{\mathbb T}}^2}\tau_s(X_s)\,\xi_{\partial \Pi^k}\ =\ <[\tau_s],\rho(\xi_{\partial\Pi^k})>\ =\ 0\,,$$ which is a contradiction. We omit the proof of point *(3)*, for which we refer the reader to [@Ben14a]. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to express our gratitude to Ezequiel Maderna and Ludovic Rifford for organizing the research school and for the friendly atmosphere they created while we stayed in Uruguay. We also sincerely thank Marco Mazzucchelli and Alfonso Sorrentino for many engaging discussions during our time at the school. [^1]: Here a choice of an arbitrary base point $q_0\in M$ is to be understood: $\pi_{d+1}(M):=\pi_{d+1}(M,q_0)$ and $\pi_1(M):=\pi_1(M,q_0)$
--- abstract: 'Supervised learning models are typically trained on a single dataset and the performance of these models relies heavily on the size of the dataset, i.e., the amount of data available with the ground truth. Learning algorithms try to generalize solely based on the data they are presented with during training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in the Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, *without inheriting anything* from the source learning model. Representations of the instances of *source* & *target* datasets are learned, retrieval of relevant source instances is performed using a soft-attention mechanism and *locality sensitive hashing*, and the retrieved instances are then augmented into the model during training on the target dataset. Our approach simultaneously exploits the local *instance level information* as well as the macro statistical viewpoint of the dataset. Using this approach, we show significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross-dataset learning procedure, we show that one can achieve competitive/better performance than learning from a single dataset.' author: - Somnath Basu Roy Chowdhury - Annervaz K M - Ambedkar Dukkipati bibliography: - 'CDL-arxiv.bib' title: 'Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing' --- Introduction & Motivation ========================= A fundamental issue with supervised learning techniques (like classification) is the requirement of an enormous amount of labeled data, which in some scenarios may be expensive to gather or may not be available.
Every supervised task requires a separate labeled dataset, and training state-of-the-art deep learning models is computationally expensive for large datasets. In this paper, we propose a deep transfer learning method that can enhance the performance of learning models by incorporating information from a different dataset, encoded while training for a different task in a similar domain. Approaches like transfer learning and domain adaptation have been studied extensively to improve the adaptation of learning models across different tasks or datasets. In transfer learning, certain portions of the learning model are re-trained for fine-tuning weights in order to fit a subset of the original learning task. Transfer learning suffers heavily from domain inconsistency between tasks and may even have a negative effect [@rosenstein2005transfer] on performance. Domain adaptation techniques aim to predict unlabeled data given a pool of labeled data from a similar domain. In domain adaptation, the aim is to have better generalization, as source and target instances are assumed to come from different probability distributions even when the underlying task is the same. We present our approach in an *inductive transfer learning* [@pan2010survey] framework: given a labeled *source* (domain $\mathcal{D}_S$ and task $\mathcal{T}_S$) and *target* (domain $\mathcal{D}_T$ and task $\mathcal{T}_T$) dataset, the aim is to boost the performance of the target predictive function $f_T(\cdot)$ using available knowledge in $\mathcal{D}_S$ and $\mathcal{T}_S$, given $\mathcal{T}_S \neq \mathcal{T}_T$. We retrieve instances from $\mathcal{D}_S$ based on a similarity criterion with instances from $\mathcal{D}_T$, and use these instances while training to learn the target predictive function $f_T(\cdot)$. We utilize the instance-level information in the source dataset, and also make the newly learnt target instance representation similar to the retrieved source instances.
This allows the learning algorithm to improve generalization across the source and target datasets. We use *instance-based learning* that actively looks for similar instances in the source dataset given a target instance. The intuition behind retrieving similar instances comes from an instance-based learning perspective, where simplification of the class distribution takes place within the locality of a test instance. As a result, modeling of similar instances becomes easier [@aggarwal2014instance]. Similar instances carry the maximum amount of information necessary to classify an unseen instance, as exploited by techniques like $k$-nearest neighbours. We derived inspiration for this method from the working of the human brain, where new memory representations are consolidated slowly over time for efficient retrieval in the future. According to [@mcgaugh2000memory], newly learnt memory representations remain in a fragile state and are affected as further learning takes place. In our procedure, we make use of encodings of instances precipitated while training for a different task using a different model. This being used for a totally different task, and adapted as needed, is in alignment with *memory consolidation* in the human brain. An attractive feature of the proposed method is that the search mechanism allows us to use more than one source dataset during training to achieve inductive transfer learning. Our approach differs from standard instance-based learning in two major aspects. First, the instances retrieved are not necessarily from the same dataset, but can be from various secondary datasets. Secondly, our model simultaneously makes use of local instance-level information as well as the macro-statistical viewpoint of the dataset, whereas typical lazy instance-based learners like $k$-nearest neighbour search make use of only the local instance-level information.
In order to ensure that the learnt latent representations can be utilized by another task, we try to make the representations similar. The need for this arises because similar instances in two different domains should have similar representations. **Motivating Example**. BBC[^1] and SkySports[^2], two popular news outlets, are used to illustrate the example. BBC reports news about all domains of daily life; SkySports, on the other hand, focuses only on sports news. If BBC decides to restructure its sports section depending on the type of sport, a supervised classifier is needed to achieve this goal. Although BBC has a significant number of sports news articles, it lacks a significant amount of *labeled* sports news articles with which to build a reliable classifier. Instance-based learning techniques will not perform well in such a situation. The ability of the proposed method to give competitive performance with limited training data, by making use of labeled training data from an existing dataset, helps in this scenario: labeled data from SkySports can be incorporated to achieve the goal of classifying news articles. Similarly, this approach can be extended to gather instances from multiple news channels other than SkySports to enhance the performance of such a classifier, while labeling fewer samples from BBC. We develop our instance-retrieval-based transfer learning technique, which is capable of extracting information from multiple datasets simultaneously, in order to tackle the problem of limited labeled data or unbalanced labeled datasets. We also enforce constraints to ensure the learning model learns representations similar to the external source domains, thereby aiding the classification model. To the best of our knowledge, this is the first work which unifies instance-based learning in a transfer learning setting. The main contributions of this work are as follows: 1.
We propose an augmented neural network model for combining instance-based and model-based learning. 2. We use Locality Sensitive Hashing for effective retrieval of similar instances in sub-linear time and fuse it to the learning model. 3. We hypothesize, and illustrate with detailed experimental results, that the performance of learning models can be improved by infusing instance-level information from within the dataset and across datasets. In both these experiments we show an improvement of 5+% over the baseline. 4. The proposed approach is shown to be useful for training on very lean datasets, by leveraging support from large datasets. Background ========== For instance transfer to take place in a deep learning framework, natural language sentences are converted into a vector representation in a latent space. Long Short-Term Memory (LSTM) networks with randomly initialized word embeddings act as our baseline model. Once the sentences are encoded into their numerical representations, we apply similarity search across source dataset instances using Locality Sensitive Hashing (LSH). In this section, we briefly summarize LSH and transfer learning to clarify the setup of our work, in an inductive transfer learning setting. ![image](TransferLearning.pdf){width="\textwidth"} Locality Sensitive Hashing (LSH) -------------------------------- Locality Sensitive Hashing [@gao2014dsh; @gionis1999similarity] is an algorithm which performs approximate nearest neighbor similarity search for high-dimensional data in sub-linear time. The main intuition behind this algorithm is to form an LSH index which maps “similar” points to the same bucket with high probability. Approximate nearest neighbors of a query are retrieved by hashing it to a bucket and returning the other points from the corresponding bucket. The locality-sensitive hash family $\mathcal{H}$ has to satisfy certain constraints mentioned in [@indyk1998approximate] for nearest neighbor retrieval.
The LSH index maps each point $p$ into a bucket in a hash table with a label $g(p) = (h_1(p), h_2(p), \ldots, h_k(p))$, where $h_1, h_2, \ldots, h_k$ are chosen independently with replacement from $\mathcal{H}$. We generate $l$ different hash functions of length $k$ given by $G_j(p) = (h_{1j}(p), h_{2j}(p), \cdots,$ $ h_{kj}(p))$, where $j \in \{1, 2, \ldots, l\}$ denotes the index of the hash table. Given a collection of data points $\mathcal{C}$, we hash them into $l$ hash tables by concatenating $k$ randomly sampled hash functions from $\mathcal{H}$ for each hash table. While returning the nearest neighbors of a query $Q$, it is mapped into a bucket in each of the $l$ hash tables. The union of all points in the buckets $G_j(Q), j \in \{1, 2, \ldots, l\}$ is returned. Therefore, not all points in the collection $\mathcal{C}$ are scanned and the query is executed in sub-linear time. The storage overhead for LSH is sub-quadratic in $n$, the number of points in the collection $\mathcal{C}$. LSH Forests [@bawa2005lsh] are an improvement over the LSH index which relax the constraints on the hash family $\mathcal{H}$ with better practical performance guarantees. An LSH Forest utilizes $l$ prefix trees (LSH trees) instead of hash tables, each constructed from independently drawn hash functions from $\mathcal{H}$. The hash function of each prefix tree is of variable length ($k$) with an upper bound $k_m$. The length of the hash label of a point is increased whenever a collision occurs, forming leaf nodes from the parent node in the LSH tree. For an $m$-nearest-neighbour query of a point $p$, the $l$ prefix trees are traversed in a top-down manner to find the leaf node with highest similarity to point $p$. From the leaf node, we traverse in a bottom-up fashion to collect $M$ points from the forest, where $M = cl$, $c$ being a small constant.
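As an illustration of the bucketing idea (a toy random-hyperplane LSH index, not the LSH Forest variant; the class name, parameters, and the cosine re-ranking step are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

class HyperplaneLSH:
    """Minimal LSH index: each of l tables hashes a vector to a k-bit
    signature given by the signs of k random projections, so nearby
    vectors collide with high probability and a query scans only its
    own buckets instead of the whole collection."""

    def __init__(self, dim, k=8, l=4):
        self.planes = [rng.standard_normal((k, dim)) for _ in range(l)]
        self.tables = [dict() for _ in range(l)]
        self.points = np.empty((0, dim))

    def _sig(self, planes, x):
        # k-bit signature: sign pattern of the random projections
        return tuple((planes @ x > 0).astype(int))

    def index(self, points):
        self.points = np.asarray(points, dtype=float)
        for i, x in enumerate(self.points):
            for planes, table in zip(self.planes, self.tables):
                table.setdefault(self._sig(planes, x), []).append(i)

    def query(self, q, m=3):
        q = np.asarray(q, dtype=float)
        cand = set()
        for planes, table in zip(self.planes, self.tables):
            cand.update(table.get(self._sig(planes, q), []))
        # re-rank the union of candidate buckets by cosine similarity
        def cos(i):
            x = self.points[i]
            return q @ x / (np.linalg.norm(q) * np.linalg.norm(x) + 1e-12)
        return sorted(cand, key=cos, reverse=True)[:m]
```

Since an indexed point shares all its signatures with an identical query, querying with a stored vector always finds it in every table; approximate behaviour only appears for merely *similar* vectors.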
It has been shown in [@bawa2005lsh] that, for practical cases, LSH Forests execute each query in constant time with storage cost linear in $n$, the number of points in the collection $\mathcal{C}$. Transfer Learning ----------------- Traditional machine learning algorithms try to learn a statistical model which is capable of predicting unseen data points, given that it has been trained on labeled or unlabeled training samples. In order to reduce the dependency on data, the need to reuse knowledge across tasks arises. *Transfer learning* allows such knowledge transfer to take place even if the domains, tasks and distributions of the datasets are different. Transfer learning can be applied in various problem frameworks, depending on the nature of the source and target domains. Based on these variations, it can be broadly classified into three categories: (a) *inductive transfer learning*, (b) *transductive transfer learning* and (c) *unsupervised transfer learning*. Figure \[fig:TL\] shows the various problem settings and their corresponding transfer learning setups. We discuss the fundamental differences in the operation of these methods here. **Inductive transfer learning**. In this setup, labeled data is available in the target domain to *induce* the prediction function in the target domain $\mathcal{D}_T$. The target and source tasks are different ($\mathcal{T}_S \neq \mathcal{T}_T$); however, they may or may not share a common domain. Inductive transfer learning can be further classified into two sub-categories where (a) labeled source instances are available and where (b) ground-truth for source instances is absent (*self-taught learning* [@raina2007self]). **Transductive transfer learning**. In this setting the source and target tasks are the same ($\mathcal{T}_S = \mathcal{T}_T$), while their domains are different ($\mathcal{D}_S \neq \mathcal{D}_T$).
This technique is also sub-divided into two categories where (a) the learning algorithm considers the source and target domains to be different, with separate feature spaces, and where (b) the feature space is shared in an attempt to reduce domain discrepancy; the latter is also known as *domain adaptation* [@daume2006domain]. **Unsupervised transfer learning**. In this framework, the source and target tasks are related but different ($\mathcal{T}_S \neq \mathcal{T}_T$). Both source and target domains have unlabeled instances; this technique is used in unsupervised task settings like dimensionality reduction [@wang2008transferred], cluster approximation [@dai2008self], etc. In this paper, our contribution is presented in the *inductive transfer learning* framework. Knowledge transfer in this setup takes place in four ways: (a) instance-transfer, (b) feature-representation-transfer, (c) parameter-transfer and (d) relational-knowledge-transfer. Parameter-transfer and relational-knowledge-transfer are studied exhaustively in the inductive transfer literature. In our proposed approach we infuse instance-level feature representation transfer across the source and target domains, in order to enhance the learning process. ![image](Instance-Transfer.pdf){width="\textwidth"} Proposed Model ============== Given the data $x$ with the ground truth $y$, supervised learning models aim at finding the parameters $\Theta$ that maximize the log-likelihood as $$\Theta = \mathop{\mathrm{arg max}}\limits_{\Theta} \log P(y| \mathrm{x}, \Theta).$$ We propose to augment the learning by infusing latent representations $\mathrm{z}_s$ of similar instances from a source dataset: given a data sample $\mathrm{x}_t$ (target dataset instance), a latent vector $\mathrm{z}_s$ is retrieved from the source dataset.
Thus, our modified objective function can be expressed as $$\mathop{\mathrm{max}}\limits_{\Theta} P(y| \mathrm{x}_t, \mathrm{z}_s, \Theta).$$ To enforce latent representations of the instances to be similar, for better generalization across the tasks, we add a suitable penalty to the objective. The modified objective then becomes $$\Theta = \mathop{\mathrm{arg max}}\limits_{\Theta} \log{P}(y| \mathrm{x}_t, \mathrm{z}_s, \Theta) - \lambda\mathcal{L}(\mathrm{z}_s, \mathrm{z}_t),$$ where $\mathcal{L}$ is the penalty function and $\lambda$ is a hyperparameter. The subsequent sections focus on the methods to retrieve the instance latent vector $\mathrm{z}_s$ using the data sample $\mathrm{x}_t$. It is important to note that we do not assume any structural form for $P$. Hence the proposed method is applicable to augment any supervised learning setting with any form for $P$. In the experiments we have used a softmax [@bishop2006pattern] over the bi-LSTM [@greff2015lstm] encodings of the input as the form for $P$. The schematic representation of the model is shown in Figure \[fig:approach\]. In the following sections, we discuss in detail the working of the individual modules in Figure \[fig:approach\] and the formulation of the penalty function $\mathcal{L}$. Sentence Encoder {#encoder} ---------------- The purpose of this module is to create a vector in a latent space by encoding the semantic context of a sentence from the input sequence of words. The context vector $c$ is obtained from an input sentence, which is a sequence of word vectors $\mathbf{x} = (x_1, x_2, \ldots, x_T)$, using a bi-LSTM (Sentence Encoder shown in Figure \[fig:approach\]) as $$h_t = f(x_t, h_{t-1}),$$ where $h_t \in \mathbb{R}^n$ is the hidden state of the bi-LSTM at time $t$ and $n$ is the embedding size. We combine the states at multiple time steps using a linear function $g$.
We have $$o = g(\{h_1, h_2, \ldots, h_T\})\:\: \mbox{and} \:\:\:c = \mathrm{ReLU}(o^TW),$$ where $W \in \mathbb{R}^{n \times m}$ and $m$ is a hyperparameter representing the dimension of the context vector. $g$ in our experiments is set as $$g(\{h_1, h_2, \ldots, h_T\}) = \frac{1}{T}{\sum_{t=1}^{T}h_t}.$$ The bi-LSTM module responsible for generating the context vector $c$ is pre-trained on the target classification task. A separate bi-LSTM module (the sentence encoder for the source dataset) is trained on the source classification task to obtain the instance embeddings of the source dataset. In our experiments we used similar modules for creating the instance embeddings of the source and target datasets; this is not a constraint of the method, and different modules can be used here. Instance Retrieval {#retrieval} ------------------ Using the obtained context vector $c_t$ ($c$ in Section \[encoder\]) corresponding to a target instance as a query, the $k$-nearest neighbours $(z_1^s, z_2^s, \ldots, z_k^s)$ are searched from the source dataset using LSH. The search mechanism using LSH takes constant time in practical scenarios [@bawa2005lsh] and therefore does not affect the training duration by a large margin. The retrieved source dataset instance embeddings receive attention weights $\alpha_i^z$ via a similarity-based soft-attention mechanism given as $$\alpha_{i}^z = \frac{\exp(c^T_tz_i^s)}{\sum\limits_{j=1}^{k} \exp(c^T_tz_j^s)}, $$ where $c \in \mathbb{R}^{m}$ and $z_i^s, z_j^s \in \mathbb{R}^{m}$. The fused instance embedding vector $z_s$ formed after the soft-attention mechanism is given by $$z_s = \sum_{i=1}^{k} \alpha_{i}^z z_i^s,$$ where $z_s \in \mathbb{R}^{m}$.
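A minimal numpy sketch of this retrieval-fusion step (the function name and array shapes are illustrative assumptions; the attention weights follow the softmax-over-dot-products formula written above):

```python
import numpy as np

def fuse_retrieved(c_t, z_src):
    """Soft-attention fusion of k retrieved source embeddings.

    c_t:   (m,)   target context vector (the LSH query)
    z_src: (k, m) retrieved source instance embeddings z_i^s
    Returns z_s = sum_i alpha_i * z_i^s with alpha = softmax(c_t^T z_i^s)."""
    scores = z_src @ c_t        # c_t^T z_i^s for each retrieved instance
    scores -= scores.max()      # stabilise the exponentials
    alpha = np.exp(scores)
    alpha /= alpha.sum()        # attention weights sum to one
    return alpha @ z_src        # convex combination of the neighbours
```

Because the weights form a convex combination, the fused vector $z_s$ always lies in the convex hull of the retrieved embeddings, and a neighbour whose dot product with the query dominates receives almost all of the attention mass.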
The retrieved instance embedding is concatenated with the context vector $c_t$ from the classification module as $$\mathrm{s} = [c_t, z_s]\:\: \mbox{and}\:\: \mathbf{y} = \mathrm{softmax}(\mathrm{s}^TW^{(1)}),$$ where $W^{(1)} \in \mathbb{R}^{2m \times u}$ and $\mathbf{y}$ is the output of the final target classification task. This model is then trained jointly with the initial parameters from the pre-trained classification module. The pre-training of the classification module is necessary because if we start from a randomly initialized context vector $c_t$, the LSH Forest retrieves arbitrary vectors and the model as a whole fails to converge. As the gradient only propagates through the attention values and the penalty function, it is impossible to simultaneously rectify the query and the search results of the hashing mechanism. It is important to note that the proposed model adds only a limited number of parameters over the baseline model. The extra trainable weight matrix in the model is $W^{(1)} \in \mathbb{R}^{2m \times u}$, adding only $2m \times u$ parameters, where $m$ is the size of the context vector $c$ and $u$ is the number of classes. Instance Clustering {#clustering} ------------------- While training our model, instances are retrieved in an online manner using LSH. In the case of large source datasets, where the number of instances is in the range of millions, LSH becomes slow and training may take an impractical amount of time. In order to overcome this problem, the source instances are clustered, and the centroids of the clusters formed are considered as our search entities. Fast $k$-means clustering [@shindler2011fast] is used in the clustering process, as the number of instances and clusters is quite large in this setup. The number of clusters is set to an upper limit of 10000, as LSH search performance remains fast over a search space of this size. Figure \[cluster\] shows the t-SNE [@maaten2008visualizing] visualization for the BBC dataset.
Figure \[cluster\] (a) shows the latent vector space of the entire dataset with the cluster centers marked in red; Figure \[cluster\] (b) shows the cluster centers forming a sparse representation of the latent vector embeddings, which are used in the experiment for classification.

Penalty Function
----------------

In instance-based learning, a test instance is assigned the label of the majority of its nearest-neighbour instances. This follows from the fact that similar instances belong to the same class distribution. Following the retrieval of latent vector embeddings from the source dataset, the target latent embedding is constrained to be similar to the retrieved source instances. To enforce this, we introduce an additional penalty along with the loss function (shown in Figure \[fig:approach\]). The modified objective function is given as $$\min_{\theta} L(\mathbf{y}, y_t) + \lambda||z_s - z_t||_2^2\enspace,$$ where $\mathbf{y}$ and $z_s$ are the output of the model and the retrieved latent embedding respectively (as in Section \[retrieval\]), $y_t$ is the label, $\lambda$ is a scaling factor and $z_t$ is the latent vector embedding of the target instance. $L(\cdot)$ in the above equation denotes the loss function used to train the model (depicted as **L($\cdot$)** in Figure \[fig:approach\]) and $\theta$ denotes the model parameters. The additional penalty term encourages the latent vectors to be similar across multiple datasets, which aids performance in the subsequent stages.

Experiments {#exp}
===========

The experiments are designed to compare the performance of the baseline model with that of the external-dataset-augmented model. Our experiments show performance improvements across several datasets from incorporating relevant instance information from a source dataset in varying setups.
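The penalty-augmented objective of the preceding section can be illustrated with a minimal sketch (the cross-entropy value and the embeddings below are toy numbers, not outputs of our model):

```python
import numpy as np

def instance_penalty_loss(ce_loss, z_s, z_t, lam=1e-4):
    """Total objective: L(y, y_t) + lambda * ||z_s - z_t||_2^2.

    ce_loss : scalar classification loss L(y, y_t)
    z_s     : fused retrieved source embedding
    z_t     : latent embedding of the target instance
    lam     : scaling factor lambda for the penalty term
    """
    return ce_loss + lam * np.sum((z_s - z_t) ** 2)

ce = 0.7                      # toy cross-entropy value
z_s = np.array([1.0, 0.0])    # toy retrieved embedding
z_t = np.array([0.0, 1.0])    # toy target embedding
total = instance_penalty_loss(ce, z_s, z_t, lam=1e-4)
```

With $\lambda = 10^{-4}$ (the value used in our experiments) the penalty is a gentle regularizer: it nudges $z_t$ towards the retrieved neighbourhood without dominating the classification loss.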
Our experiments also illustrate that the proposed model continues to perform better even when the size of the training set is reduced, thereby reducing the dependence on labeled data. We also demonstrate the efficacy of our model through latent vector visualizations. **Baseline**. A simple *bi-LSTM (target-only)* model is trained without consideration of source-domain instances (no source-instance retrieval branch included in the network); this is used as the baseline. The *Instance-infused bi-LSTM* model is trained on the target domain with class labels revealed. This model consolidates the target representations in light of knowledge from the source dataset, assuming that the source data is relevant for the downstream task at hand.

Datasets
--------

For our experiments, we have chosen three popular, publicly available news classification datasets. The datasets share common domain information; their details are given below.

1. **20 Newsgroups (News20)**[^3]: A collection of newsgroup articles in English [@Lichman:2013]. The dataset is partitioned almost evenly across 20 different classes: *comp.graphics*, *comp.os.ms-windows.misc*, *comp.sys.ibm.pc.hardware*, *comp.sys.mac.hardware*, *comp.windows.x*, *rec.autos*, *rec.motorcycles*, *rec.sport.baseball*, *rec.sport.hockey*, *sci.crypt*, *sci.electronics*, *sci.med*, *sci.space*, *misc.forsale*, *talk.politics.misc*, *talk.politics.guns*, *talk.politics.mideast*, *talk.religion.misc*, *alt.atheism* and *soc.religion.christian*.

2. **BBC**[^4]: News articles from BBC (2004-2005) in English [@greene06icml], classified into 5 classes: *business*, *entertainment*, *politics*, *sport* and *tech*.

3. **BBC Sports**^\[bbc\]^: Sports news articles from BBC News [@greene06icml]. The dataset is divided into 5 major classes: *athletics*, *cricket*, *football*, *rugby* and *tennis*.
The datasets are chosen such that all of them share common domain knowledge and have a small number of training examples, so that the improvement observed using instance infusion is significant. The statistics of the three real-world datasets are given in Table \[dataset\].

| Dataset | Train Size | Test Size | # Classes |
|------------|-------|------|----|
| News20     | 18000 | 2000 | 20 |
| BBC        | 2000  | 225  | 5  |
| BBC Sports | 660   | 77   | 5  |

: Dataset Specifications[]{data-label="dataset"}

The mentioned datasets do not have a dedicated test set, so the evaluations were performed using a $k$-*fold cross-validation* scheme. All performance scores reported in this paper are means over all folds.
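The $k$-fold evaluation protocol can be sketched as follows (an illustrative index-splitting helper in plain Python; `kfold_indices` is our sketch, not our evaluation code):

```python
def kfold_indices(n, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    n : number of examples in the dataset
    k : number of folds; each example appears in exactly one test fold
    """
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        # the last fold absorbs any remainder
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        test_set = set(test)
        train = [j for j in idx if j not in test_set]
        yield train, test

# toy example: 10 examples, 5 folds
splits = list(kfold_indices(10, k=5))
```

Mean performance over the folds is then reported, as done for all scores in this paper.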
| Hyper-parameter | News20 | BBC | BBC-Sports |
|---|---|---|---|
| Batch size | 256 | 32 | 16 |
| Learning rate | 0.01 | 0.01 | 0.01 |
| Word vector dim | 300 | 300 | 300 |
| Latent vector dim ($m$) | 50 | 50 | 50 |
| # Nearest neighbours ($k$) | 5 | 5 | 5 |
| Scaling factor ($\lambda$) | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ |
| # Epochs per fold | 30 | 20 | 20 |

: Hyper-parameter settings[]{data-label="tab:hyperparameters"}

| Model | News20 Accuracy | News20 F1 | BBC Accuracy | BBC F1 | BBC Sports Accuracy | BBC Sports F1 |
|---|---|---|---|---|---|---|
| Bi-LSTM (Target Only) | 65.17 | 0.6328 | 91.33 | 0.9122 | 84.22 | 0.8395 |
| Instance-Infused Bi-LSTM | 76.44 | 0.7586 | 95.35 | 0.9531 | 88.78 | 0.8855 |
| Instance-Infused Bi-LSTM (with penalty function) | **78.29** | **0.7773** | **96.09** | **0.9619** | **91.56** | **0.9100** |

: Performance comparison of the proposed model with the baseline on the three target datasets[]{data-label="tab:result"}

Setup
-----

All experiments were carried out on a Dell Precision Tower 7910 server with a Quadro M5000 GPU (8 GB memory). The models were trained using the Adam optimizer [@kingma2014adam] in a stochastic gradient descent [@bottou2010large] fashion, and were implemented in PyTorch [@tensorflow2015-whitepaper]. The word embeddings were randomly initialized and trained along with the model. Evaluation used a 10-fold cross-validation scheme. The learning rate is decayed over the training epochs: after every 10 epochs it is decreased to 0.3 times its previous value. The relevant hyper-parameters are listed in Table \[tab:hyperparameters\].

Results {#result}
-------

Table \[tab:result\] shows the detailed results of our approach for all the datasets.
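The step-decay learning-rate schedule described in the setup above can be sketched as a small helper (`decayed_lr` is our illustration, not the training code):

```python
def decayed_lr(initial_lr, epoch, factor=0.3, step=10):
    """Learning rate after multiplicative decay every `step` epochs.

    Matches the schedule in the text: the rate is multiplied by
    `factor` (0.3) once every `step` (10) epochs.
    """
    return initial_lr * factor ** (epoch // step)
```

For example, starting from 0.01, the rate becomes 0.003 at epoch 10 and 0.0009 at epoch 20.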
The source and target datasets are chosen such that the source dataset can provide relevant information. 20Newsgroups contains news articles from all categories, so a good choice of source dataset is BBC, which also encompasses articles from similar broad categories; for the same reason, BBC has 20Newsgroups as its source dataset. BBC Sports focuses on sports articles; BBC is chosen as its source dataset, as the articles share a common domain (both come from the same news outlet). For the proper functioning of the model, the final layer of the instance-infused model is replaced, while the rest of the network is inherited from the pre-trained *target only model*. We show improvements over the baseline by a large margin for all datasets (Table \[tab:result\]). For 20Newsgroups the improvement over the baseline model is 12%; the BBC and BBC Sports datasets show improvements of around 5%. As mentioned earlier, our approach is independent of the sentence encoder being used: any other encoder can be used in place of the bi-LSTM. As the proposed approach is independent of the source encoding procedure and the source instance embeddings are kept constant during training, we can incorporate source instances from multiple datasets simultaneously. In the subsequent experiments, we use varying setups to demonstrate the robustness and efficacy of the proposed model.
| Dataset | Accuracy | F1-Score | Source Dataset |
|---|---|---|---|
| News20 | 77.51 | 0.7707 | News20 |
| BBC | 96.17 | 0.9606 | BBC |
| BBC Sports | 90.63 | 0.8931 | BBC Sports |

: Test Accuracy for proposed model using instances from the same target dataset[]{data-label="samesource"}

**Instance Infusion from Same Dataset**. In this section, we study the results of using the pre-trained target dataset as the source for instance retrieval. This setting is the same as the conventional instance-based learning setup; however, our approach not only uses the instance-level information but also leverages the macro statistics of the target dataset. As our proposed model is independent of the source dataset training scheme, we use the pre-trained target instances for source retrieval. The intuition behind this experimental setup is that instances from the same dataset are also useful in modeling other instances, especially when a class imbalance exists in the target dataset.
In this experimental setup, the *nearest neighbour retrieved is ignored*, as it would be the same as the instance being modeled during training. The performance on the three news classification datasets is shown in Table \[samesource\]. **Target Dataset Reduction with Single Source**. In this section, we discuss a set of experiments performed to support our hypothesis that the proposed model can reduce the number of labeled instances a dataset requires. In these experiments, we show that the cross-dataset augmented models perform significantly better than the baseline models when varying fractions of the training data are used. Figure \[reduction\] shows the performance of *instance-infused bi-LSTM* and *bi-LSTM (target-only)* for the 20Newsgroups, BBC and BBC Sports datasets. In these experiments, 20Newsgroups had BBC, BBC had 20Newsgroups, and BBC Sports had BBC as the source dataset. As shown in the plot, 0.3, 0.5, 0.7, 0.9 and 1.0 fractions of the dataset are used for performance analysis. For all dataset fractions, the proposed model beats the baseline by a significant margin. The dashed line in the plots indicates the baseline model performance with 100% target dataset support. The performance of instance-infused bi-LSTM with 70% of the dataset is better than that of the baseline model trained on the entire dataset. This observation shows that our approach reduces the dependency on training examples by at least 30% across all datasets in our experiments.
| Dataset | Single Source Accuracy | Single Source F1 | Multiple Source Accuracy | Multiple Source F1 |
|---|---|---|---|---|
| News20 | 61.72 | 0.6133 | 67.32 | 0.6650 |
| BBC | 91.01 | 0.9108 | 91.41 | 0.9120 |
| BBC Sports | 81.72 | 0.7990 | 82.81 | 0.8027 |

: Test Accuracy for proposed model using instances from multiple source datasets with 50% target dataset[]{data-label="multiplesource1"}

**Target Dataset Reduction with Multiple Source**. In this section, we design an experimental setup in which only half of the target dataset is utilized, and study the influence of infusing multiple source datasets. Table \[multiplesource1\] compares the results when single and multiple source datasets are used at the 50% dataset fraction. The results improve as more source datasets are used in the infusion process. This can be leveraged to improve performance on very lean datasets by deploying large datasets as sources. For the single-source setup, the source datasets are those mentioned in Section \[result\]. In the multiple-source setup, for a given target dataset the other two datasets are used as sources.
**Visualization**. In this section we show visualizations of the latent-space embeddings obtained using the *bi-LSTM (target only)* model and with *instance infusion*. Figure \[visualization\] shows the t-SNE [@maaten2008visualizing] visualizations for target dataset BBC with source dataset 20Newsgroups, and for target dataset BBC Sports with source dataset BBC. Figures \[visualization\] (a) and (b) correspond to visualizations with BBC as the target dataset, and Figures \[visualization\] (c) and (d) to visualizations with BBC Sports as the target dataset. For Figures \[visualization\] (a) and (b), the source dataset embeddings of News20 are sparsified using *instance clustering* (described in Section \[clustering\], number of clusters = 2000) for better visualization. In the figure, embeddings marked in blue denote the source vector space, those in red denote *bi-LSTM (target only)* embeddings, and those in green correspond to embeddings from the *instance-infused* model. Figure \[visualization\] shows that the embedding distribution changes drastically under our model, which in turn improves performance. It is visible from the figure that the latent vectors arrange themselves so that the discrepancy between the source and target distributions is reduced. We show that instance infusion in fact accelerates the learning procedure by analyzing how the latent-space representation changes with varying training-data fractions of the target dataset. In Figure \[visualization3\], the latent vector embeddings of the BBC Sports dataset with News20 support are shown for fractions 0.3 in (a) & (b), 0.5 in (c) & (d), and 0.7 in (e) & (f) of the target training dataset (BBC Sports). Figure \[visualization3\] (f) shows the embedding representation with 70% of the data, for which the best performance (among the six visualizations) is observed.
It is evident from the figure that even with 30% and 50% of the data, *instance infusion* makes the embedding distribution similar to Figure \[visualization3\] (f), as seen in Figures \[visualization3\] (b) and (d), while the *bi-LSTM (target-only)* instance representations in Figures \[visualization3\] (a) and (c) are quite different. This illustrates that with instance infusion the latent space evolves faster towards the better-performing shape than without it.

| Model | News20 Accuracy | News20 F1 | BBC Accuracy | BBC F1 | BBC Sports Accuracy | BBC Sports F1 |
|---|---|---|---|---|---|---|
| kNN-ngrams | 35.25 | 0.3566 | 74.61 | 0.7376 | 94.59 | 0.9487 |
| Multinomial NB-bigram | **79.21** | **0.7841** | 95.96 | 0.9575 | 95.95 | 0.9560 |
| SVM-bigram | 75.04 | 0.7474 | 94.83 | 0.9456 | 93.92 | 0.9393 |
| SVM-ngrams | 78.60 | 0.7789 | 95.06 | 0.9484 | 95.95 | 0.9594 |
| Random Forests-bigram | 69.01 | 0.6906 | 87.19 | 0.8652 | 85.81 | 0.8604 |
| Random Forests-ngrams | 78.36 | 0.7697 | 94.83 | 0.9478 | 94.59 | 0.9487 |
| Random Forests-tf-idf | 78.60 | 0.7709 | 95.51 | 0.9547 | **96.62** | **0.9660** |
| Bi-LSTM | 65.17 | 0.6328 | 91.33 | 0.9122 | 84.22 | 0.8395 |
| Instance-Infused Bi-LSTM | 78.29 | 0.7773 | **96.09** | **0.9619** | 91.56 | 0.9100 |

: Comparison with conventional learning techniques[]{data-label="tab:comparison"}

**Comparative Study**. Table \[tab:comparison\] gives the experimental results for our proposed approach, the baselines, and other conventional learning techniques on the 20 Newsgroups, BBC and BBC Sports datasets. The literature involving these datasets mainly focuses on non-deep-learning approaches, so we compare our results with some popular conventional learning techniques. The experiments involving conventional learning were performed using the *scikit-learn* [@scikit-learn] library in Python[^5]. For the kNN-ngrams experiments, the number of nearest neighbours $k$ was set to 5.
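To illustrate the flavour of the conventional baselines, a multinomial naive Bayes classifier over unigram-plus-bigram counts can be written in a few dozen self-contained lines (a toy reimplementation on a toy corpus; our actual experiments used *scikit-learn*):

```python
from collections import Counter
import math

def unigrams_bigrams(text):
    # simple whitespace tokenizer producing unigram + bigram features
    toks = text.lower().split()
    return toks + [f"{a}_{b}" for a, b in zip(toks, toks[1:])]

class ToyMultinomialNB:
    """Minimal multinomial naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for d, y in zip(docs, labels):
            self.counts[y].update(unigrams_bigrams(d))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, doc):
        feats = unigrams_bigrams(doc)
        V = len(self.vocab)
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = self.prior[c] + sum(
                math.log((self.counts[c][f] + 1) / (total + V))
                for f in feats)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

docs = ["the match ended in a draw",
        "stocks rallied on strong earnings",
        "the striker scored twice",
        "markets fell after the report"]
labels = ["sport", "business", "sport", "business"]
clf = ToyMultinomialNB().fit(docs, labels)
pred = clf.predict("the striker scored in the match")
```

Such count-based models remain strong baselines on small news corpora, which is precisely why instance infusion is needed for the neural model to be competitive.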
In Table \[tab:comparison\], the models studied are Multinomial Naive Bayes [@kibriya2004multinomial], the $k$-nearest neighbour classifier [@cunningham2007k], the Support Vector Machine (SVM) [@suykens1999least] and the Random Forests classifier [@breiman2001random]. The input features were n-grams [@brown1992class], bi-grams, or term frequency-inverse document frequency (tf-idf). For the mentioned datasets, conventional models outperform our baseline *Bi-LSTM* model; however, with *instance infusion* the deep-learning model achieves competitive performance across all datasets. With instance infusion, the simple bi-LSTM approaches the classical models in performance on the News20 and BBC Sports datasets, while on the BBC dataset the proposed instance-infused bi-LSTM beats all the mentioned models.

Related Work
------------

The goal of this work is to efficiently utilize knowledge extant in a secondary dataset of a similar domain, a goal closely linked to transfer learning and domain adaptation. Domain adaptation and transfer learning [@pan2010survey] aim to utilize the domain knowledge of one task for another and to learn task-independent representations; this also reduces the dependency of learning algorithms on labeled data. The challenge in these tasks lies in learning a representation that reduces the discrepancy in probability distributions across domains. There has been an array of work in the field of domain adaptation; we mention a few relevant works here. One popular work in this field is Domain Adaptive Networks (DAN) [@ghifary2014domain], which penalizes the learning algorithm with the Maximum Mean Discrepancy (MMD) metric, a measure of the distance between the source and target distributions. [@glorot2011domain] uses a two-step approach based on a stacked autoencoder architecture to reduce the discrepancy between the source and target domains.
[@long2016unsupervised] uses residual networks for unsupervised domain-knowledge transfer. With the advent of deep networks, it is now easier to learn latent representations that are accessible across various domains. [@donahue2014decaf] studies this ability of deep networks to learn latent representations and how they vary across domain-specific tasks. [@wanggleaning] uses an active learning method for querying the most informative instances from multiple domains. [@collobert2011natural] uses deep learning for multi-task learning across a variety of natural language processing tasks. The setup of our framework differs from conventional domain-adaptation methods. We are given a source domain $\mathcal{D}_s = \{(x^s_i, y^s_i)\}^{n_s}_{i=1}$ of $n_s$ labeled instances and a target domain $\mathcal{D}_t = \{(x^t_j, y^t_j)\}^{n_t}_{j=1}$ of $n_t$ labeled instances. We aim to utilize the model pre-trained on $\mathcal{D}_s$ to access its latent vectors $\mathcal{Z}_s = \{z^s_i\}^{n_s}_{i=1}$. In our method, instances from the target dataset query similar latent vector instances from the source dataset, giving an instance-retrieval-based transfer learning policy.

Conclusion & Future Work {#future}
========================

In this work we posit that infusing instance-level local information along with the macro statistics of a dataset can significantly improve the performance of learning algorithms. Through extensive experimentation, we have shown that our approach can improve the performance of learning models significantly. The improvement in performance shows that this approach has potential and is particularly useful when the dataset is lean. Although instance-based learning is well studied in the machine learning literature, it has rarely been used in a deep-learning setup for knowledge transfer. Approaches that exploit instance-level local information together with the macro statistics of a dataset are an exciting topic to embark on.
One thread of future work is to enhance the instance-retrieval mechanism to reduce latency during training. In this work, we have shown through extensive experiments that our method reduces dependency on labeled data; it may be extended to analyze performance in a purely unsupervised setup. Improved feature transformation techniques can be added alongside the search module to enhance query formulation. We also assumed that the datasets share a common domain; future work should formulate means to tackle domain discrepancy, so that instances from a wider range of datasets can be incorporated.

[^1]: http://www.bbc.com/

[^2]: http://www.skysports.com/

[^3]: http://qwone.com/~jason/20Newsgroups/

[^4]: \[bbc\]http://mlg.ucd.ie/datasets/bbc.html

[^5]: https://www.python.org/
--- author: - 'Zong-Gang Mou,' - 'Paul M. Saffin,' - Anders Tranberg title: 'Simulations of Cold Electroweak Baryogenesis: Dependence on the source of CP-violation' --- Introduction {#sec:Intro} ============ Cold Electroweak Baryogenesis attempts to explain the observed baryon asymmetry in the Universe by postulating that the process of electroweak symmetry breaking was a cold spinodal transition [@Krauss:1999ng; @GarciaBellido:1999sv; @Copeland:2001qw; @Tranberg:2003gi]. This is possible if the Higgs field $\phi$ is coupled to another field, whose dynamics triggers symmetry breaking only after the Universe has cooled below the electroweak scale [@Copeland:2001qw; @vanTent:2004rc; @Enqvist:2010fd; @Konstandin:2011ds]. In such a cold transition, a baryon asymmetry is created in the presence of CP-violation, as the out-of-equilibrium conditions required for successful baryogenesis are provided by the exponentially growing IR modes of the spinodal (Higgs) field. C and P violation follow from the electroweak sector of the Standard Model. As for traditional (hot) electroweak baryogenesis, the CP-violation arising from the Standard Model CKM matrix is insufficient [@Gavela:1994dt; @Gavela:1994ds; @Brauner:2011vb]. Sources of CP-violation beyond the Standard Model must therefore be part of the scenario. In a series of recent papers [@Mou:2017atl; @Mou:2017zwe; @Mou:2017xbo], using classical lattice field theory simulations we have studied the effect of relaxing a sequence of assumptions of the original work [@GarciaBellido:2003wd; @Tranberg:2003gi; @Tranberg:2006ip; @Tranberg:2006dg]. This includes the dependence on the speed of the spinodal transition [@Mou:2017xbo], the impact of U(1) hypercharge gauge fields on the asymmetry [@Mou:2017zwe], and the effect of replacing a “by-hand" mass-flip of the Higgs field by a portal coupling to a new dynamical field $\sigma$ [@Mou:2017atl]. 
In the present work, we relax one final assumption, namely the introduction of CP violation through one specific dimension-6 term $$\begin{aligned} \label{eq:CP2p} S_{\rm 2,\phi}=\frac{3\delta_{2,\phi}g^2}{16\pi^2m_{\rm W}^2}\int dt\,d^3x \,\phi^\dagger\phi \textrm{Tr}\,W^{\mu\nu}\tilde{W}_{\mu\nu},\end{aligned}$$ with $W^{\mu\nu}$ the field strength tensor of the SU(2) gauge field and $\tilde{W}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}W^{\rho\sigma}$. The dimensionless constant $\delta_{2,\phi}$ is a measure of the magnitude of CP-violation, and could in principle be derived from matching this effective term to some underlying theory. $\phi^\dagger\phi$ is manifestly C and P even, and $W\tilde{W}$ is C even, but P odd. The common feature of all electroweak baryogenesis scenarios is that the baryon asymmetry arises from generating a non-zero value of Chern-Simons number $$\begin{aligned} \label{eq:ncs2} N_{\rm cs,SU(2)}(t)-N_{\rm cs,SU(2)}(0) = \frac{g^2}{16\pi^2}\int_0^t dt\, d^3x\, \textrm{Tr}\, W^{\mu\nu}\tilde{W}_{\mu\nu},\end{aligned}$$ since baryon number then changes according to the chiral anomaly $$\begin{aligned} \label{eq:anomaly} 3[N_{\rm cs,SU(2)}(t)-N_{\rm cs, SU(2)}(0)]=B(t)-B(0).\end{aligned}$$ It is clear that the term (\[eq:CP2p\]) has a very special standing, in that by partial integration and assuming that $\phi$ is approximately constant in space, one gets $$\begin{aligned} \label{eq:bias} S_{\rm 2,\phi}\simeq -\frac{3\delta_{2,\phi}}{m_{\rm w}^2} \int dt\, \partial_0(\phi^\dagger\phi)N_{\rm cs,SU(2)}.\end{aligned}$$ As soon as $\phi$ changes in time, an effective bias is introduced precisely for the Chern-Simons number which then generates a baryon asymmetry. In a more generic model, one would expect CP-violation to be present in the system, but not as an explicit bias in this way. 
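Spelled out, the partial-integration step behind Eq. (\[eq:bias\]) reads as follows (using the definition (\[eq:ncs2\]) for spatially constant $\phi^\dagger\phi$):

```latex
% From Eq. (eq:ncs2),
%   \dot N_{\rm cs,SU(2)}(t)
%     = \frac{g^2}{16\pi^2}\int d^3x\,\textrm{Tr}\,W^{\mu\nu}\tilde W_{\mu\nu},
% so for \phi^\dagger\phi approximately constant in space:
S_{\rm 2,\phi}
  = \frac{3\delta_{2,\phi}}{m_{\rm W}^2}\int dt\,
    (\phi^\dagger\phi)\,\partial_0 N_{\rm cs,SU(2)}
  = -\frac{3\delta_{2,\phi}}{m_{\rm W}^2}\int dt\,
    \partial_0(\phi^\dagger\phi)\,N_{\rm cs,SU(2)}
    \;+\;\text{boundary term}.
```

Dropping the boundary term reproduces Eq. (\[eq:bias\]): any time variation of $\phi^\dagger\phi$ acts as a chemical-potential-like bias for $N_{\rm cs,SU(2)}$.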
More likely, during the transition CP-violation forces the complete set of fields to favour CP-violating configurations, and in such a background, Chern-Simons number is effectively biased to a non-zero expectation value. Modelling the Standard Model through an effective bosonic theory including only the Higgs field $\phi$ and SU(2) gauge field $W_\mu$, Eq. (\[eq:CP2p\]) is the natural lowest order CP-violating term (although not the only one, see [@Brauner:2011vb]). But including also U(1) gauge fields and a symmetry-triggering scalar $\sigma$, as necessary for achieving a cold tachyonic transition (see below), other possibilities arise, including $$\begin{aligned}
\label{eq:all3}
S_{\rm 2, \sigma}&=&\frac{3\delta_{2,\sigma}g^2}{16\pi^2 m_{\rm W}^2}\int dt\, d^3x\, \xi^2\sigma^2 \, \textrm{Tr}\,W^{\mu\nu}\tilde{W}_{\mu\nu},\\
S_{\rm 1,\phi}&=&\frac{3\delta_{1,\phi}(g')^2}{32\pi^2 m_{\rm W}^2}\int dt\, d^3x\, \phi^\dagger\phi \, B^{\mu\nu}\tilde{B}_{\mu\nu},\\
S_{\rm 1,\sigma}&=&\frac{3\delta_{1,\sigma}(g')^2}{32\pi^2 m_{\rm W}^2}\int dt\, d^3x\,\xi^2\sigma^2 \, B^{\mu\nu}\tilde{B}_{\mu\nu},\end{aligned}$$ with $B_{\mu\nu}$ the U(1) (hypercharge) gauge field strength. New parameters $\delta_{2,\sigma}$, $\delta_{1,\phi}$, $\delta_{1,\sigma}$, are introduced representing the magnitude of CP-violation. $\xi$ is a dimensionless portal coupling to be defined below. Whereas the first of these terms again biases $N_{\rm cs, SU(2)}$ (a [*primary*]{} bias, in our terminology), the next two bias another CP-odd observable (the U(1)-Chern-Simons number) $$\begin{aligned}
\label{eq:ncs1}
N_{\rm cs,U(1)}(t)-N_{\rm cs,U(1)}(0) = \frac{(g')^2}{32\pi^2}\int_0^t dt\, d^3x\, B^{\mu\nu}\tilde{B}_{\mu\nu},\end{aligned}$$ which then through the field dynamics potentially biases $N_{\rm cs,SU(2)}$ (a [*secondary*]{} bias). Establishing whether, and under what conditions, such a secondary bias is able to generate sufficient baryon asymmetry is the purpose of this work.
Clearly, secondary bias is the most generic source of CP-violation and, if successful, opens up new paths of model building for this baryogenesis scenario. A combination of the two was considered in [@Tranberg:2012jp; @Tranberg:2012qu; @Mou:2015aia] for the 2-Higgs doublet model where, instead of (\[eq:CP2p\]), the authors considered $$\begin{aligned}
S_{\rm 2hdm}= \frac{3\delta_{\rm 2hdm}g^2}{16\pi^2 m_{\rm W}^2}\int dt\,d^3x\,(\phi^\dagger_1\phi_2-\phi^\dagger_2\phi_1)\textrm{Tr }W^{\mu\nu}\tilde{W}_{\mu\nu}.\end{aligned}$$ This works as a primary bias, breaks both C and P, but conserves CP. In addition, it is then necessary to include C-violation in the 2-Higgs potential, effectively to bias the combination $\phi^\dagger_1\phi_2-\phi^\dagger_2\phi_1$ to be nonzero. This was seen to generate a large enough baryon asymmetry to match observations [@Tranberg:2012jp; @Tranberg:2012qu]. In the following section \[sec:model\], we present our model: the bosonic part of the electroweak sector of the Standard Model, coupled to a singlet scalar. We further discuss the four different CP-violating terms that we will consider, and present some discussion about CP-odd observables and how they are related. In section \[sec:cewbag\] we give a brief overview of Cold Electroweak Baryogenesis and show a few examples of the behaviour of the observables. In section \[sec:results\] we then compare the asymmetries resulting from each of the four CP-violating terms and when some of them are combined. We also comment on the effect of a constant (in time and space) bias of $N_{\rm cs, SU(2)}$, and lattice discretization effects. We conclude in section \[sec:conclusion\].

Model {#sec:model}
=====

Building on the work of [@Mou:2017atl], we consider the bosonic part of the Standard Model electroweak sector, extended by a singlet scalar $\sigma$ coupled to the Higgs field $\phi$.
The action reads: $$\begin{aligned} \label{eq:S_EW} S= \int dt\, d^3x\Bigg[ &-\frac{1}{2}\textrm{Tr}\,W^{\mu\nu}W_{\mu\nu} -\frac{1}{4} B^{\mu\nu}B_{\mu\nu} - (D_\mu\phi)^\dagger D^\mu\phi +\mu^2\phi^\dagger\phi-\lambda\left(\phi^\dagger\phi\right)^2-\frac{\mu^4}{4\lambda} \nonumber \\ &-\frac{1}{2}\partial_\mu\sigma\partial^\mu\sigma-\frac{m^2}{2}\sigma^2 - \frac{1}{2}\xi^2 \sigma^2 \phi^\dagger\phi \Bigg]+ S_{\rm CP},\end{aligned}$$ where for the SU(2) gauge field, we have $W_{\mu\nu} = \partial_\mu W_\nu - \partial_\nu W_\mu -ig[W_\mu, W_\nu]$, $W_\mu= W_\mu^a\sigma^a/2$ with $\sigma^a$ the Pauli matrices, and similarly for the U(1) hypercharge field $B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu$. The covariant derivative is given by $$\begin{aligned} D_\mu\phi = \left(\partial_\mu -i Yg' B_\mu-i gW_\mu\right)\phi,\end{aligned}$$ with $Y=-1/2$ for the Higgs field. We have explicitly put in the Higgs vacuum expectation value $v=246$ GeV, the Higgs self-coupling $\lambda = \mu^2/v^2=m_H^2/(2v^2)\simeq 0.13$, and the gauge couplings $g=0.65$ and $g'=0.35$. This corresponds to $m_H=125$ GeV, $m_W=80$ GeV, and $m_Z=91$ GeV. In addition, we have the free parameters of the $\sigma$-$\phi$ potential, $m^2$ and $\xi$. We have chosen a very simple potential form, ignoring cubic and quartic $\sigma$ self-interactions and the cubic portal coupling. This is just for simplification and to match [@Mou:2017atl]. Engineering the $\sigma$-potential to have more features (non-zero expectation values in the vacuum, away from the vacuum) may have implications for the baryon asymmetry. We will stick to the quadratic form indicated in (\[eq:S\_EW\]). 
In the language of [@Mou:2017atl], we will consider a fast ($m_H/m=4$ and $\xi=2.04$) and slow ($m_H/m=32$ and $\xi=0.254$) quench at $n=8$, where $n$ indicates the total energy in the system through $$\begin{aligned}
V_{\rm tot}= V_0 \left(1+\frac{1}{n^2}\right)=\frac{\mu^4}{4\lambda} \left(1+\frac{1}{n^2}\right).\end{aligned}$$ For $n=8$, the energy initially stored in the non-zero $\sigma$ field is therefore negligible (about 1%) compared to $V_0$, the potential energy density from the Higgs potential itself at $\phi=0$, $\sigma=0$. For more details of this point, we refer the reader to [@Mou:2017atl]. As advertised in the introduction, we will consider four different effective bosonic dimension-6 terms playing the role of $S_{\rm CP}$. In previous work, we found that a baryon asymmetry consistent with observations corresponds to $\delta_{2,\phi}\simeq 10^{-5}$, with some dependence on the speed of the symmetry breaking quench [@Mou:2017atl]. The full Standard Model includes all the fermions as well, with CP-violation encoded in the CKM-matrix. It is tempting to expect that when integrating these out, CP-violation would be recovered as terms of the form (\[eq:CP2p\]), (\[eq:all3\]). This is true in terms of the field content, but the structure of the effective terms is rather more complex [@Brauner:2011vb]. Also, the magnitude of the coefficients $\delta_{i,j}$ is much too small to be responsible for baryogenesis, unless the effective temperature during the transition is less than 1 GeV [@Brauner:2011vb], which does not seem to be the case [@Mou:2013kca]. So for our purposes, although we do expect that such effective terms arise from integrating out some heavier degrees of freedom, they are just generic representatives of CP-violation providing primary and secondary bias.

Observables {#sec:obs}
-----------

As we have no fermions explicitly in the system, we rely on the chiral anomaly relation (\[eq:anomaly\]) to infer the baryon asymmetry.
But in fact, in the presence of U(1) gauge fields in addition to the SU(2) gauge fields, the full chiral anomaly is the sum of two contributions $$\begin{aligned} \label{eq:anomaly2} B(t)-B(0) = 3\left[N_{\rm cs,SU(2)}(t)-N_{\rm cs,SU(2)}(0)\right]-3\left[N_{\rm cs, U(1)}(t)-N_{\rm cs, U(1)}(0)\right].\end{aligned}$$ Usually, this complication is ignored, as one is interested in permanent changes of the Chern-Simons number. For the SU(2) gauge theory, the vacuum structure consists of a series of gauge-equivalent vacua with integer Chern-Simons number. Hence, going from one minimum to the next produces net baryon number, and this asymmetry can remain at late times and low temperatures. The vacuum structure for the U(1) gauge field is trivial, with a single vacuum at $N_{\rm cs,U(1)}=0$. This means that although during the process, U(1) Chern-Simons number may be biased to one side, ultimately it will relax back to zero, restoring the simple form (\[eq:anomaly\]). As a further proxy for the baryon asymmetry, we note that the Higgs field winding number $$\begin{aligned} \label{eq:nw} N_{\rm w}=\frac{1}{24\pi^2}\int d^3 x \epsilon^{ijk}\textrm{Tr}[(U^\dagger\partial_i U )(U^\dagger\partial_jU )(U^\dagger\partial_k U)],\end{aligned}$$ with $U(x)=(i\tau_2\phi^*,\phi)/\phi^\dagger\phi$, in a "pure-gauge" vacuum obeys $$\begin{aligned} N_{\rm w}= N_{\rm cs,SU(2)}.\end{aligned}$$ This follows from the minimization of the covariant derivative, when $B_{\mu}=0$. But more generally, we have the relation $$\begin{aligned} \label{eq:puregauge} N_{\rm w}\simeq N_{\rm cs,SU(2)}-N_{\rm cs,U(1)},\end{aligned}$$ a relation we will confirm numerically below.
Because $N_{\rm w}$ is an integer (up to lattice artefacts) and therefore a much less noisy numerical observable, we will make the identification at late times $$\begin{aligned} B(t)-B(0) = 3[N_{\rm w}(t)-N_{\rm w}(0)].\end{aligned}$$ In our simulations we will average the observables over an initially CP-symmetric ensemble of field realisations, initialised to reproduce the correlation functions of the quantum vacuum [@GarciaBellido:2002aj; @Smit:2002yg]. The dynamics themselves follow the classical equations of motion, as derived from the full lagrangian. The detailed numerical lattice implementation may be found elsewhere [@Tranberg:2003gi]. To track the progress of the transition, we will often plot the average Higgs field $$\begin{aligned} \langle\phi^2\rangle = \frac{1}{V}\int d^3x\, \phi^\dagger\phi(x),\end{aligned}$$ and $\sigma$ field $$\begin{aligned} \langle\sigma\rangle = \frac{1}{V}\int d^3x\, \sigma(x),\end{aligned}$$ also averaged over the ensemble. Cold Electroweak Baryogenesis {#sec:cewbag} ============================= Detailed expositions of many aspects of the Cold Electroweak Baryogenesis scenario are available in the literature [@Tranberg:2003gi; @vanderMeulen:2005sp]. In brief, the non-Standard Model degree of freedom $\sigma$ is assumed to start out at a value $\sigma_i>\sigma_c=\mu/\xi$, and to roll down its potential to $\sigma=0$. In doing so, the mass parameter of the Higgs field changes sign, with $$\begin{aligned} \mu^2_{\rm eff}(t) = \xi^2\sigma^2(t)-\mu^2.\end{aligned}$$ We will take $\sigma_i=\sqrt{2}\sigma_c$, in such a way that $\mu^2_{\rm eff}$ goes from $+\mu^2$ initially to $-\mu^2$ asymptotically at late times. Although the exact trajectory by which this happens will depend on the parameters of the model, ultimately this will result in electroweak symmetry breaking.
While $\mu_{\rm eff}^2(t)<0$, momentum modes of the Higgs field with $k^2+\mu^2_{\rm eff}(t)<0$ grow exponentially, a process known as spinodal transition or tachyonic preheating. The result is that the energy in the initial Higgs potential is transferred to particles in the IR ($k<\mu$) of the spectrum. The instability itself, but also the subsequent redistribution of energy into the UV, are strongly out-of-equilibrium processes, suitable for generating a baryon asymmetry. The speed of the transition may be expressed as $$\begin{aligned} u= -\frac{1}{2\mu^3}\frac{d\mu^2_{\rm eff}}{dt}\Big|_{\mu^2_{\rm eff}=0}\equiv \frac{1}{\mu\tau_q},\end{aligned}$$ with $\tau_q$ a characteristic quench time. For the exact same model considered here, we found in [@Mou:2017xbo] the relation $\tau_q\simeq 1.3\, m^{-1}$, and so from now on, we will express the quench time in terms of the dimensionless ratio $m_H/m\simeq 0.8\,m_H\tau_q\simeq 1.1/u$. The maximum asymmetry occurs for quench times $m_H/m\simeq 30$, whereas very fast quenches with $m_H/m\simeq 0$, most favoured by model-building, give an asymmetry of the opposite sign and a factor of 3-4 smaller in magnitude [@Mou:2017atl; @Mou:2017xbo]. A more detailed analysis of the field configurations arising in such a transition shows that an asymmetry is generated first as the Chern-Simons number is biased to one side by CP-violation, and that subsequently the Higgs winding number changes to accommodate this, and that this happens most readily when there are many points in space with $\phi^\dagger\phi(x)\simeq 0$ [@vanderMeulen:2005sp]. ![The Higgs and $\sigma$ fields and the CP-odd observables in a typical simulation, averaged over an ensemble of 50 CP-conjugate pairs.[]{data-label="fig:cewbagex"}](u1_long_early.pdf){width="12cm"} In Fig. \[fig:cewbagex\], we show the basic observables during the transition, averaged over the ensemble of initial conditions.
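The three equivalent parametrisations of the quench speed used above ($u$, $\tau_q$ and $m_H/m$) can be converted into one another. A minimal sketch, using only the relations quoted in the text ($m_H=\sqrt{2}\mu$ and $\tau_q\simeq 1.3\,m^{-1}$); the function names are ours:

```python
import math

SQRT2 = math.sqrt(2.0)   # m_H = sqrt(2)*mu for this Higgs potential
TAU_Q_COEFF = 1.3        # empirical relation tau_q ~ 1.3/m quoted in the text

def mh_over_m_from_u(u):
    """Ratio m_H/m from the dimensionless quench speed u = 1/(mu*tau_q)."""
    # m_H/m = m_H*tau_q/1.3 = sqrt(2)/(1.3*u), the "1.1/u" quoted in the text
    return SQRT2 / (TAU_Q_COEFF * u)

def u_from_mh_over_m(ratio):
    """Inverse map: e.g. the slow quench m_H/m = 32 corresponds to u ~ 0.034."""
    return SQRT2 / (TAU_Q_COEFF * ratio)
```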
The quench time is chosen to be $m_H/m=32$, and so until $m_Ht\simeq 25$, the Higgs field is stable at $\phi^2=0$. Then as the effective mass parameter $\mu^2_{\rm eff}$ becomes negative, the Higgs field grows from zero to near the vacuum expectation value $\phi^2/v^2=1/2$, after which it oscillates with a decreasing amplitude. Meanwhile, the SU(2) Chern-Simons number (\[eq:ncs2\]), Higgs winding number (\[eq:nw\]) and U(1) Chern-Simons number (\[eq:ncs1\]) deviate from zero average in a complicated way under the influence of CP-violation (here, (\[eq:CP2p\])). The Chern-Simons number moves first, but for $N_{\rm w}$, most of the motion happens near $m_Ht=40$ and $55$, when the Higgs field is at a minimum in its oscillation. This is when many local zeros of the Higgs field are present. By time $m_Ht=90$, the Higgs field has settled, and the Higgs winding number is completely frozen in. In principle, equilibrium Sphaleron processes could trigger a change in winding and Chern-Simons number, but at an effective temperature far below the critical temperature of the electroweak phase transition (about $40$ GeV compared to $T_c=160$ GeV [@DOnofrio:2014rug]) this is completely negligible. ![The Higgs (left) and singlet (right) fields early in the transition for a range of transition speeds. []{data-label="fig:zeros"}](phisq_n8.pdf "fig:"){width="7cm"} ![The Higgs (left) and singlet (right) fields early in the transition for a range of transition speeds. []{data-label="fig:zeros"}](sigmasq_n8.pdf "fig:"){width="7cm"} It is a generic feature that the largest asymmetry is created for parameter values giving the largest number of Higgs zeros. In Fig. \[fig:zeros\], we show the average Higgs field squared (left) and the singlet field (right) for a number of transition speeds.
We see that the Higgs field increases as the transition is triggered, but then oscillates back to a minimum. The value of this minimum decreases with increasing quench time up to $m_H/m=32$, after which it increases again. ![The CP-odd observables in a typical simulation, with $N_{\rm cs,SU(2)}$ and $N_{\rm cs,U(1)}$ separately (left) and added up (right).[]{data-label="fig:sumNcs"}](u1_long_1.pdf "fig:"){width="7cm"} ![The CP-odd observables in a typical simulation, with $N_{\rm cs,SU(2)}$ and $N_{\rm cs,U(1)}$ separately (left) and added up (right).[]{data-label="fig:sumNcs"}](u1_long_2.pdf "fig:"){width="7cm"} Returning to Fig. \[fig:cewbagex\], we find that the Chern-Simons numbers individually do not seem to match the winding number very well, as would be expected for a pure-gauge field configuration. In Fig. \[fig:sumNcs\], we show the same observables in the same simulation, but for a much longer time. In the left-hand plot, we see the two Chern-Simons numbers separately, whereas in the right-hand plot, we have added them up as in (\[eq:anomaly2\]). We see that the relation (\[eq:puregauge\]) applies. We have checked that for very long times, $N_{\rm cs, U(1)}$ indeed goes to zero, so that $N_{\rm w}=N_{\rm cs,SU(2)}$ is restored as a simple proxy for the baryon asymmetry. In what follows, we will use the value of $N_{\rm w}$ at the end of the simulation as our primary observable. Comparing sources of CP-violation {#sec:results} ================================= The numerical procedure is then, for each of the four CP-violating terms, to vary the coefficients $\delta_{i,j}$ for the two different quench speeds $m_H/m=4$ (fast) and $m_H/m=32$ (slow), but otherwise keeping parameters fixed. The lattice size $64^3$ and lattice spacing $am_H=0.375$ are kept fixed unless explicitly stated otherwise. The ensemble members are randomly generated, and we use different random seeds for different simulations. The ensembles each consist of 400 CP-conjugate pairs.
For each pair of CP-conjugate configurations, we record whether the final values of $N_{\rm w}$ cancel to zero (one is minus the other). If not, we say that the pair has performed a "flip". Flipped pairs usually add up to $\pm 1$, but instances of $\pm 2$ and $\pm 3$ were observed. Statistics and errors are based on the frequency of flips. SU(2)-type CP-violation ----------------------- ![The asymmetry for the type of primary CP-violation involving SU(2) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:su2type"}](nw_su2_h_4.pdf "fig:"){width="7cm"} ![The asymmetry for the type of primary CP-violation involving SU(2) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:su2type"}](nw_su2_s_4.pdf "fig:"){width="7cm"} ![The asymmetry for the type of primary CP-violation involving SU(2) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:su2type"}](nw_su2_h_32.pdf "fig:"){width="7cm"} ![The asymmetry for the type of primary CP-violation involving SU(2) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:su2type"}](nw_su2_s_32.pdf "fig:"){width="7cm"} In Fig. \[fig:su2type\] we show the final asymmetry in $N_{\rm w}$ for the two CP-violating terms involving the SU(2) gauge fields. In our terminology, they both represent a primary bias of Chern-Simons number. We show four separate cases, corresponding to fast (top) and slow (bottom) transitions, when the SU(2) field is coupled to the Higgs field (left) and when it is coupled to the $\sigma$ field (right). Concentrating first on the SU(2)-Higgs case, we notice that the asymmetry is positive for slow quenches, and negative for fast quenches.
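The flip-counting procedure described above can be sketched in a few lines. This is an illustrative reconstruction with our own naming, not the analysis code actually used:

```python
import math

def flip_statistics(pair_sums):
    """Ensemble-averaged N_w and its error from CP-conjugate pair sums.

    `pair_sums` holds the final N_w(config) + N_w(CP conjugate) for each pair,
    so exact CP cancellation gives 0 and a "flip" gives a non-zero integer.
    """
    n_pairs = len(pair_sums)
    mean_nw = sum(pair_sums) / (2 * n_pairs)       # average over all 2*n_pairs configs
    flips = sum(1 for s in pair_sums if s != 0)    # number of flipped pairs
    # standard error on the mean, from the scatter of the pair sums
    pair_mean = sum(pair_sums) / n_pairs
    var = sum((s - pair_mean) ** 2 for s in pair_sums) / max(n_pairs - 1, 1)
    err_nw = 0.5 * math.sqrt(var / n_pairs)
    return mean_nw, err_nw, flips
```

For an ensemble of 400 pairs with four flips summing to $+2$, this gives $\langle N_{\rm w}\rangle = 0.0025$.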
For both quench times, the dependence on $\delta_{2,\phi}$ is linear, but with a much larger magnitude for the slow quench. We can fit the dependence with a one-parameter form to find $$\begin{aligned} \langle N_{\rm w}(t)-N_{\rm w}(0) \rangle&= -(3.5\pm 0.7)\times10^{-3}\delta_{2,\phi}, \quad& \big(m_H/m=4,\,\textrm{SU(2)}-\phi\big)\\ &=(48\pm 2)\times10^{-3}\delta_{2,\phi}, \quad& \big(m_H/m=32,\,\textrm{SU(2)}-\phi\big).\end{aligned}$$ When replacing the Higgs field by the $\sigma$ field, we anticipate that the prefactor of $W\tilde{W}$ ($\sigma$) is no longer (as) strongly correlated with the availability of Higgs zeros (in $\phi$). But also, because $\sigma^2$ runs from finite positive to zero (so decreases in time), we expect the bias and hence the asymmetry to have the opposite overall sign. We indeed see this, and also that for a slow transition the asymmetry is reduced by a factor of about six compared to the Higgs-SU(2) term (for values of $\delta_{2,\sigma}$ similar to the $\delta_{2,\phi}$ above). This is sensible, since the slow quench is specifically tuned to a maximum of Higgs zeros, rather than for instance to where the CP-violating term is maximal. We see that for a fast transition, which does not optimize the availability of Higgs zeros, we get an asymmetry of roughly the same magnitude, whether through Higgs-SU(2) or $\sigma$-SU(2). We may again fit with a linear relation, to find $$\begin{aligned} \label{eq:fastmax} \langle N_{\rm w}(t)-N_{\rm w}(0) \rangle&= (10\pm 1)\times10^{-3}\delta_{2,\sigma}, \quad &\big(m_H/m=4,\,\textrm{SU(2)}-\sigma\big)\\ &=-(6.9\pm0.7)\times10^{-3}\delta_{2,\sigma}, \quad &\big(m_H/m=32,\,\textrm{SU(2)}-\sigma\big).\end{aligned}$$ A rescaling of $\xi$ or $\sigma$ naively corresponds to changing $\delta_{2,\sigma}$, and so a priori, it is unclear why the asymmetries should match in magnitude for the same values of $\delta_{2,\sigma}$.
But since $\xi\sigma_i=\mu=\sqrt{\lambda}v$ it is perhaps not so surprising that the order of magnitude is the same. What is remarkable is that the change in sign between fast and slow quenches remains. This really seems to be a generic feature of the process, distinguishing between fast and slow transition regimes. Generalizing to a much broader class of $\sigma$ potentials, it is possible to engineer the $\sigma$ to increase from zero to a non-zero vev, to move from one vev to another, or to/from very large or small amplitudes. In each case, one will get a different asymmetry, which then again corresponds to a different value of $\delta_{2,\sigma}$ and possibly a flipped sign, depending on whether the $\sigma$ increases or decreases in magnitude. U(1)-type CP-violation ---------------------- ![The asymmetry for the type of CP-violation involving U(1) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:u1type"}](nw_u1_h_4.pdf "fig:"){width="7cm"} ![The asymmetry for the type of CP-violation involving U(1) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:u1type"}](nw_u1_s_4.pdf "fig:"){width="7cm"} ![The asymmetry for the type of CP-violation involving U(1) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:u1type"}](nw_u1_h_32.pdf "fig:"){width="7cm"} ![The asymmetry for the type of CP-violation involving U(1) gauge fields. Coupled to the Higgs field (left) and the singlet field (right). For fast (top) and slow (bottom) transitions.[]{data-label="fig:u1type"}](nw_u1_s_32.pdf "fig:"){width="7cm"} In Fig. \[fig:u1type\] we show a similar set of results, in the case where the gauge field in the CP-violating term is U(1) hypercharge.
Now we have a situation where, while the transition occurs, a U(1) gauge field is generated with non-zero Chern-Simons number, which then relaxes back to zero once the transition is over and thermalization completes. But while this Chern-Simons number is non-zero, the SU(2) gauge field and the Higgs field evolve in a (C)P-breaking background, leading to flips and a net asymmetry. This asymmetry could in principle also relax back to zero, but the vacuum structure with high potential barriers in the low-temperature phase leads to exponential suppression of Sphaleron transitions, so that once equilibrium is re-established the relaxation process takes longer than the age of the Universe. As in Fig. \[fig:su2type\], we show in the two left-hand panels the case where the bias is due to a coupling to the Higgs field, and in the right-hand panels the case where we couple to the $\sigma$ field. The top panels are for a fast quench, $m_H/m=4$, and the bottom panels for a slow quench, $m_H/m=32$. For each panel, we show the dependence on the strength of CP-violation. We first note that the overall asymmetry of the U(1)-Higgs system has the opposite sign to the SU(2)-Higgs system for positive $\delta_{i,j}$ (with our sign conventions, (\[eq:CP2p\]), (\[eq:all3\])). And the U(1)-$\sigma$ system has the opposite sign to the SU(2)-$\sigma$ system. Also, for the same values of $\delta_{i,j}$, the asymmetry in the U(1)-type systems is about an order of magnitude smaller than for the equivalent SU(2)-type terms of Fig. \[fig:su2type\]. This is a question of normalization of the variables and prefactors of the CP-violating operator, but also indicates that the values of $B\tilde{B}$ are numerically smaller. For the fast quenches, both couplings to Higgs and $\sigma$ produce no statistically significant asymmetry.
This may indicate that the asymmetry is in general very small for fast quenches, but most likely it is because $m_H/m=4$ happens to be where the dependence of the asymmetry on quench-time goes through zero on its way from positive to negative. The detailed quench speed dependence for the SU(2)-Higgs system was explored in [@Mou:2017atl]. For technical reasons to do with the lattice size, we are not able to reliably simulate even faster quenches (see again [@Mou:2017xbo]). For slow quenches, we again find a clear asymmetry for both Higgs and $\sigma$-coupling, with a roughly linear dependence on the strength of CP-violation. Just as for the SU(2)-type terms, the coupling to the Higgs field produces the largest asymmetry by a factor of 4-5. In terms of linear fits we find for the Higgs-U(1) term $$\begin{aligned} \langle N_{\rm w}(t)-N_{\rm w}(0)\rangle &= -(0.7\pm 1)\times 10^{-4}\delta_{1,\phi}, \quad& \big(m_H/m=4,\,\textrm{U(1)}-\phi\big)\\ &=-(37 \pm 2)\times 10^{-4}\delta_{1,\phi}, \quad &\big(m_H/m=32,\,\textrm{U(1)}-\phi\big).\end{aligned}$$ and for the $\sigma$-U(1) $$\begin{aligned} \langle N_{\rm w}(t)-N_{\rm w}(0)\rangle &= (0.7 \pm0.5)\times10^{-4}\delta_{1,\sigma},\quad &\big(m_H/m=4,\,\textrm{U(1)}-\sigma\big)\\ &=(4 \pm 1)\times10^{-4}\delta_{1,\sigma}, \quad&\big(m_H/m=32,\,\textrm{U(1)}-\sigma\big).\end{aligned}$$ Adding up biases {#sec:adding} ---------------- ![The asymmetry from combining two CP-violating terms. Left: When only one source is on, and when two are on at the same time. Right: Comparing the sum of the two single-source asymmetries to the double-source asymmetry. []{data-label="fig:combine"}](separate_nw.pdf "fig:"){width="7cm"} ![The asymmetry from combining two CP-violating terms. Left: When only one source is on, and when two are on at the same time. Right: Comparing the sum of the two single-source asymmetries to the double-source asymmetry. 
[]{data-label="fig:combine"}](both_nw.pdf "fig:"){width="7cm"} Having computed the asymmetry from each of the four types of CP-violation, it is natural to ask what happens when two or more terms are active at the same time. This may of course be done in any number of different combinations, with different values of the four $\delta_{i,j}$. We will show one particular case here, namely $$\begin{aligned} S_{\rm 2+1,\phi} = \frac{3\delta_{2+1,\phi}}{m_{\rm w}^2} \phi^\dagger\phi\left( \frac{g^2}{16\pi^2}\textrm{Tr } W^{\mu\nu}\tilde{W}_{\mu\nu}-\frac{(g')^2}{32\pi^2}B^{\mu\nu}\tilde{B}_{\mu\nu} \right),\end{aligned}$$ so that $\delta_{2,\phi}=-\delta_{1,\phi}=\delta_{2+1,\phi}=6.8$. By a similar argument to the one that led to (\[eq:bias\]), we hence effectively bias the combination $N_{\rm cs,SU(2)}-N_{\rm cs,U(1)}$, which again through the anomaly equation is equal to the baryon number. We realise that this is a very special choice, but it is just meant as one example of combining CP-violating terms. Since we have seen that in general, $\delta_{1,j}$ must be about an order of magnitude larger than $\delta_{2,j}$ to create the same size asymmetry, we expect the contribution from the SU(2) term to dominate. In Fig. \[fig:combine\], we show the time-dependence of the Higgs winding number for three simulations, all at $m_H/m=32$. One run has only the Higgs-SU(2) term turned on (black line), another has only the Higgs-U(1) term turned on (blue line), and the third has both turned on simultaneously (red line). The bands around each curve correspond to one standard deviation on the average. In the left-hand plot, we show the three individual asymmetries, which grow and settle, with the U(1)-only asymmetry clearly the smallest, and the SU(2)-only asymmetry and SU(2)+U(1) asymmetry consistent within errors. In the right-hand plot we compare the asymmetry from the combined run to the sum of the other two runs, according to $N_{\rm cs, SU(2)}-N_{\rm cs,U(1)}$.
We see that the two agree within error bars. It seems that, at least in this linear regime of the individual terms, when combining multiple sources of CP-violation one may simply add up their individual contributions. No significant enhancements or suppressions arise. Note, however, that we chose a combination of terms precisely biasing the observable we were interested in. Whether, for more generic combinations, competing sources create more complicated non-linear effects remains to be seen. Also, because the U(1) asymmetry is of the same order of magnitude as the statistical errors, we do not have the accuracy to make very strong statements on this point. Constant bias of SU(2) Chern-Simons number {#sec:chemical} ------------------------------------------ ![The asymmetry from a constant bias for (lattice) $N_{\rm cs}$, for different lattice spacings with the same physical volume.[]{data-label="fig:adep"}](con_nw.pdf "fig:"){width="7cm"} ![The asymmetry from a constant bias for (lattice) $N_{\rm cs}$, for different lattice spacings with the same physical volume.[]{data-label="fig:adep"}](square_fit.pdf "fig:"){width="7cm"} Since $W\tilde{W}$ is already responsible for breaking CP (through breaking P) in the simulations, one may imagine simply replacing the Higgs field by a constant, to get $$\begin{aligned} S_{2}=\frac{3\delta_2g^2}{16\pi^2m_{\rm w}^2}\frac{v^2}{2}\textrm{Tr}\, W^{\mu\nu}\tilde{W}_{\mu\nu}=\frac{6\delta_2}{16\pi^2}\textrm{Tr}\, W^{\mu\nu}\tilde{W}_{\mu\nu}.\end{aligned}$$ For a classical simulation, this should however not provide any asymmetry, since $W\tilde{W}$ is a total derivative, and so drops out of the equations of motion[^1]. However, the lattice implementation is not a total derivative at finite lattice spacing.
Writing out the plaquette, $$\begin{aligned} U_{x,\mu\nu}=U_{x,\mu}U_{x+\mu,\nu}U_{x+\nu,\mu}^\dagger U_{x,\nu}^\dagger= e^{-i a_\mu a_\nu F_{\mu\nu}^a\frac{\sigma^a}{2}+\mathcal{O}(a^4)},\end{aligned}$$ this gives us, for small lattice spacing $$\begin{aligned} \textrm{Tr}\, W^{\mu\nu}\tilde{W}_{\mu\nu}\simeq \frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\textrm{Tr}\, U_{x,\mu\nu}U_{x,\rho\sigma}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\textrm{Tr}\bigg[(1-ia_\mu a_\nu F_{\mu\nu}^a\frac{\sigma^a}{2}-\frac{a_\mu^2a_\nu^2}{2}F_{\mu\nu}^a\frac{\sigma^a}{2}F_{\mu\nu}^b\frac{\sigma^b}{2}+\cdots)\nonumber\\ \times (1-ia_\rho a_\sigma F_{\rho\sigma}^a\frac{\sigma^a}{2}-\frac{a_\rho^2a_\sigma^2}{2}F_{\rho\sigma}^a\frac{\sigma^a}{2}F_{\rho\sigma}^b\frac{\sigma^b}{2}+\cdots)\bigg],\nonumber\\\end{aligned}$$ and because of the antisymmetrization and the trace, what survives is $$\begin{aligned} \frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\textrm{Tr} \,U_{x,\mu\nu}U_{x,\rho\sigma}=-\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}\frac{a_\mu a_\nu a_\rho a_\sigma}{2} F^b_{\mu\nu}F_{\rho\sigma}^b+\mathcal{O}(a^6).\end{aligned}$$ We find that to reduce lattice artefacts, it is necessary to symmetrize the plaquette as $$\begin{aligned} \bar{U}_{x,\mu\nu}=\frac{1}{4}\bigg(U_{x,\mu\nu}+U_{x,-\nu \mu}+U_{x,\nu -\mu}+U_{x,-\mu-\nu} \bigg).\end{aligned}$$ In any case, the lattice term is not a total derivative, but has corrections whose relative error is expected to scale as $\mathcal{O}(a^2)$. We may therefore expect CP-violating effects from this term, going to zero quadratically with the lattice spacing. In Fig. \[fig:adep\] we compare simulations at equal physical volume, but lattice spacings of $am_H=0.375, 0.5, 0.75$. We use a quench time of $m_H\tau_q=32$ and $\delta_{2}=6.8$. We show the time histories of the Higgs winding number (left) and a fit to a purely quadratic dependence on lattice spacing (right). The fit is very convincing, confirming that the lattice artefacts contribute as expected.
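The quadratic extrapolation in the lattice spacing amounts to a one-parameter least-squares fit. A minimal sketch with hypothetical data chosen to obey exact $a^2$ scaling (the real data are those of Fig. \[fig:adep\]):

```python
def fit_a_squared(spacings, artefacts):
    """One-parameter least-squares fit artefact(a) = c * a^2 through the origin."""
    num = sum(a**2 * n for a, n in zip(spacings, artefacts))
    den = sum(a**4 for a in spacings)
    return num / den

# Hypothetical artefact values with exact a^2 scaling, c = -0.28 (illustrative only;
# at a*m_H = 0.375 this gives ~ -0.04, the size quoted in the text).
spacings = [0.375, 0.5, 0.75]
data = [-0.28 * a**2 for a in spacings]
c = fit_a_squared(spacings, data)   # recovers c = -0.28
```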
Also, the magnitude of the artefact contribution, although non-negligible, is subdominant relative to the total asymmetry once the dynamical Higgs field is reinstated. We note that all the above simulations were done at $am_H=0.375$, where the artefact contribution is $\simeq -0.04$. As an estimate, this can be compared to the result for $S_{2,\phi}$ at the same $\delta_{2,\phi}=6.8$ of $0.33$, a systematic error of about 15%. But it does teach us that using a larger lattice spacing could introduce systematic errors larger than the physical signal. Conclusion {#sec:conclusion} ========== In a series of papers [@Mou:2017atl; @Mou:2017zwe; @Mou:2017xbo], we have gradually relaxed simplifying assumptions on the dynamics and field content of simulations of Cold Electroweak Baryogenesis. The results show that the main findings of the original work [@Tranberg:2003gi; @Tranberg:2006ip; @Tranberg:2006dg] are correct: a baryon asymmetry is produced in a tachyonic electroweak transition, as soon as CP-violation is present (primary or secondary). This asymmetry can be consistent with observations for reasonable values of the phenomenological dimensionless CP-violating parameters $\delta_{i,j}\simeq 10^{-5}$. The overall sign depends on the speed of the quench, so that fast quenches, "quench times" $m_H/m<4$, produce one sign (negative, in our conventions, for SU(2)-Higgs), and slower quenches produce the opposite sign. For very slow quenches $m_H/m > 60$, the asymmetry becomes very small. The replacement SU(2)$\rightarrow$U(1) flips the overall sign, and so does $\phi\rightarrow\sigma$. The quantity of interest for observations is the baryon-to-photon ratio, and for the parameters used here, it is given by [@Mou:2017atl] $$\begin{aligned} \eta = \frac{n_{B}}{n_\gamma}= 8.55\times 10^{-4} \langle N_{\rm w}\rangle,\end{aligned}$$ where $\langle N_{\rm w}\rangle$ refers to the specific simulations and lattice parameters described above.
A sensible estimate is then to consider a fast quench for the SU(2)-Higgs term (\[eq:fastmax\]), for which we find $$\begin{aligned} \eta = -9\times 10^{-6}\delta_{2,\phi}, \end{aligned}$$ and since the observed asymmetry is approximately $\eta=6\times 10^{-10}$, we require $\delta_{2,\phi}\simeq 7\times 10^{-5}$, or 5 times smaller if we allow ourselves to tune to the optimal quench speed $m_H/m=32$. This information can now be fed back to model building, where the largest caveat is how to engineer a cold symmetry breaking transition in the first place, while still triggering a fast enough quench. A few models exist on the market, in which the $\sigma$ field may be identified with the inflaton [@vanTent:2004rc] or not [@Enqvist:2010fd], with the associated constraints from observations. There is also a more exotic scenario, in which the triggering is due not to a $\sigma$ field but to a supercooled phase transition [@Konstandin:2011ds; @vonHarling:2017yew]. Much more work in this direction is required. The second caveat is the origin of the CP-violating terms. The Standard Model does not provide large enough CP-violation [@Brauner:2011vb], but the Two-Higgs Doublet Model (2HDM) might. If the Standard Model (or 2HDM or Standard Model+singlet) were a low-energy effective theory of something else, additional sources of CP-violation could be present from integrating out heavy degrees of freedom. This problem is not distinct from the lack of sufficient CP-violation in traditional (hot) Electroweak Baryogenesis. However, in the hot regime around a finite-temperature electroweak phase transition, the temperature is around 160 GeV [@DOnofrio:2014rug], which suppresses effective CP-violation. In the cold regime, we instead experience temperatures ranging from near zero (at the beginning) up to 30-40 GeV after the transition.
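The arithmetic of the estimate above can be checked directly; all input numbers are those quoted in the text:

```python
# Required CP-violation strength from |eta| = 9e-6 * delta_{2,phi} (fast quench)
# and the observed baryon-to-photon ratio eta ~ 6e-10.
ETA_OBS = 6e-10
ETA_PER_DELTA = 9e-6                    # |eta| per unit delta_{2,phi}

delta_fast = ETA_OBS / ETA_PER_DELTA    # ~ 7e-5, as stated in the text
delta_slow = delta_fast / 5             # tuning to m_H/m = 32 gains a factor ~5
```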
Ultimately, the true effective CP-violation will arise from integrating out heavy degrees of freedom in an out-of-equilibrium environment, a computation that is hard to do analytically. In time, one would want to perform fully 3-family simulations of the whole SM + extensions with fermions, on large lattices with high statistics. Although a proof of principle exists [@Saffin:2011kn], the numerical effort is vast. For the moment, the highest priority seems to be to extend the set of viable and not too fine-tuned super-cooling and triggering mechanisms and scenarios, embedded in experimentally testable particle physics models. Since a fast triggering of Higgs symmetry breaking requires a sizeable coupling to some fundamental or composite BSM degree of freedom, constraints from zero-temperature Higgs collider physics will be important. Standard portal couplings to what could be a Dark Sector could in turn connect baryogenesis to Darkmattergenesis, which could itself be based on a tachyonic transition or a more traditional first order phase transition. Getting all the numbers to match up (asymmetry, Dark Matter density, expansion of the Universe, evading direct detection, inflation) will likely require creativity in model building.

[**Acknowledgements:**]{} PMS is supported by STFC grant ST/L000393/1. AT and ZGM are supported by a UiS-ToppForsk grant. The numerical work was performed on the Abel supercomputing cluster of the Norwegian computing network Notur.

[\*]{}
L. M. Krauss and M. Trodden, Phys. Rev. Lett. [**83**]{} (1999) 1502 doi:10.1103/PhysRevLett.83.1502 \[hep-ph/9902420\].
J. Garcia-Bellido, D. Y. Grigoriev, A. Kusenko and M. E. Shaposhnikov, Phys. Rev. D [**60**]{} (1999) 123504 doi:10.1103/PhysRevD.60.123504 \[hep-ph/9902449\].
E. J. Copeland, D. Lyth, A. Rajantie and M. Trodden, Phys. Rev. D [**64**]{} (2001) 043506 doi:10.1103/PhysRevD.64.043506 \[hep-ph/0103231\].
A. Tranberg and J. Smit, JHEP [**0311**]{} (2003) 016 doi:10.1088/1126-6708/2003/11/016 \[hep-ph/0310342\].
B. J. W. van Tent, J. Smit and A. Tranberg, JCAP [**0407**]{} (2004) 003 doi:10.1088/1475-7516/2004/07/003 \[hep-ph/0404128\].
K. Enqvist, P. Stephens, O. Taanila and A. Tranberg, JCAP [**1009**]{} (2010) 019 doi:10.1088/1475-7516/2010/09/019 \[arXiv:1005.0752 \[astro-ph.CO\]\].
T. Konstandin and G. Servant, JCAP [**1107**]{} (2011) 024 doi:10.1088/1475-7516/2011/07/024 \[arXiv:1104.4793 \[hep-ph\]\].
M. B. Gavela, P. Hernandez, J. Orloff, O. Pene and C. Quimbay, Nucl. Phys. B [**430**]{} (1994) 382 doi:10.1016/0550-3213(94)00410-2 \[hep-ph/9406289\].
M. B. Gavela, M. Lozano, J. Orloff and O. Pene, Nucl. Phys. B [**430**]{} (1994) 345 doi:10.1016/0550-3213(94)00409-9 \[hep-ph/9406288\].
T. Brauner, O. Taanila, A. Tranberg and A. Vuorinen, Phys. Rev. Lett. [**108**]{} (2012) 041601 doi:10.1103/PhysRevLett.108.041601 \[arXiv:1110.6818 \[hep-ph\]\].
Z. G. Mou, P. M. Saffin and A. Tranberg, JHEP [**1707**]{} (2017) 010 doi:10.1007/JHEP07(2017)010 \[arXiv:1703.01781 \[hep-ph\]\].
Z. G. Mou, P. M. Saffin and A. Tranberg, JHEP [**1706**]{} (2017) 075 doi:10.1007/JHEP06(2017)075 \[arXiv:1704.08888 \[hep-ph\]\].
Z. G. Mou, P. M. Saffin and A. Tranberg, JHEP [**1801**]{} (2018) 103 doi:10.1007/JHEP01(2018)103 \[arXiv:1711.04524 \[hep-ph\]\].
J. Garcia-Bellido, M. Garcia-Perez and A. Gonzalez-Arroyo, Phys. Rev. D [**69**]{} (2004) 023504 doi:10.1103/PhysRevD.69.023504 \[hep-ph/0304285\].
A. Tranberg and J. Smit, JHEP [**0608**]{} (2006) 012 doi:10.1088/1126-6708/2006/08/012 \[hep-ph/0604263\].
A. Tranberg, J. Smit and M. Hindmarsh, JHEP [**0701**]{} (2007) 034 doi:10.1088/1126-6708/2007/01/034 \[hep-ph/0610096\].
A. Tranberg and B. Wu, JHEP [**1207**]{} (2012) 087 doi:10.1007/JHEP07(2012)087 \[arXiv:1203.5012 \[hep-ph\]\].
A. Tranberg and B. Wu, JHEP [**1301**]{} (2013) 046 doi:10.1007/JHEP01(2013)046 \[arXiv:1210.1779 \[hep-ph\]\].
Z. G. Mou, P. M. Saffin and A. Tranberg, JHEP [**1506**]{} (2015) 163 doi:10.1007/JHEP06(2015)163 \[arXiv:1505.02692 \[hep-ph\]\].
Z. G. Mou, P. M. Saffin and A. Tranberg, JHEP [**1311**]{} (2013) 097 doi:10.1007/JHEP11(2013)097 \[arXiv:1307.7924 \[hep-ph\]\].
J. Garcia-Bellido, M. Garcia Perez and A. Gonzalez-Arroyo, Phys. Rev. D [**67**]{} (2003) 103501 doi:10.1103/PhysRevD.67.103501 \[hep-ph/0208228\].
J. Smit and A. Tranberg, JHEP [**0212**]{} (2002) 020 doi:10.1088/1126-6708/2002/12/020 \[hep-ph/0211243\].
M. van der Meulen, D. Sexty, J. Smit and A. Tranberg, JHEP [**0602**]{} (2006) 029 doi:10.1088/1126-6708/2006/02/029 \[hep-ph/0511080\].
M. D’Onofrio, K. Rummukainen and A. Tranberg, Phys. Rev. Lett. [**113**]{} (2014) no.14, 141602 doi:10.1103/PhysRevLett.113.141602 \[arXiv:1404.3565 \[hep-ph\]\].
B. von Harling and G. Servant, JHEP [**1801**]{} (2018) 159 doi:10.1007/JHEP01(2018)159 \[arXiv:1711.11554 \[hep-ph\]\].
P. M. Saffin and A. Tranberg, JHEP [**1202**]{} (2012) 102 doi:10.1007/JHEP02(2012)102 \[arXiv:1111.7136 \[hep-ph\]\].

[^1]: At the quantum level, the story is different.
--- author: - 'A. Gorsky,' - 'A. Milekhin,' - 'and S. Nechaev' title: 'Douglas–Kazakov on the road to superfluidity: from random walks to black holes' --- Introduction {#sect:intro} ============ The third-order phase transition, now known as the Douglas-Kazakov (DK) transition, was first studied comprehensively in [@dk] in the large-$N$ two-dimensional Yang-Mills (YM) theory on a sphere. It was understood that this transition is a reincarnation of the Gross-Witten-Wadia phase transition [@gw; @wadia] in the lattice version of the gauge theories. In the matrix model framework, the DK transition results in a change of the eigenvalue density support: below the transition point, the density enjoys a one-cut solution, while above the transition it has a two-cut solution corresponding to an elliptic curve. Near the transition region, the eigenvalue density is governed by the universal Tracy-Widom distribution. In terms of the 2D YM theory on the sphere $S^2$, the phase transition occurs at some critical value of the radius of the sphere, $R_c$. It has been found that the YM partition function is dominated by the zero-instanton-charge sector in the weak-coupling regime (below the transition point), while at strong coupling (above the transition) instantons dominate [@grm]. A similar phase transition was also found for the Wilson loop in the 2D YM theory [@olesen]. The DK-like phase transition also occurs in deformed topological theories, where the behavior is more generic. For example, similar phase transitions have been found in 2D YM theory on the cylinder and disc [@grm]. In these cases the phase transition points strongly depend on the boundary holonomies. Quite recently, generalizations of the 2D YM to the 2D $q$-YM [@jafferis; @marino05; @qym] and to the 2D $(q,t)$-YM [@szabo13; @aganagic12] were developed, and a very rich structure of DK-like phase transitions has been found. 
In these cases, the transition points again separate different phases: the “perturbative” and the “strong coupling” instanton-dominated phases. However, whereas the ordinary DK transition has a single transition point, in the deformed case one sees multiple transitions, driven by different types of instantons. Throughout the text we exploit the relations among three physical problems: i) directed vicious random walks (VW) in (1+1)D with different initial and boundary conditions [@fms; @fms2]; ii) deformed large-$N$ topological field theories in 2D and 3D space-time [@deharo1; @deharo2; @deharo3]; and iii) the entropy of extremal (i.e. zero-temperature) charged black holes (BH) in 4D at large magnetic charge [@vafa; @osv; @ooguri]. Motivated by the DK phase transition in 2D YM, we address in this paper a few physical questions: - The DK transition occurs in the collective constrained thermal Langevin dynamics of Dyson particles on a circle. Is the meaning of the DK phase transition (if any) the same for stochastic quantization as for the thermal dynamics? - How can we understand the physics of the DK transition from different viewpoints: as an “orthogonality catastrophe” and as a hydrodynamic description of the “closure of the gap”? - What is the physical meaning of the DK transition for the black hole? What happens with a magnetically charged extremal BH at the transition point, and what is the physical interpretation of stochastic dynamics for an extremal magnetic BH? - Is it possible to form a kind of superfluid component at strong coupling above the DK transition in the hydrodynamical description? These questions are considered in this paper, for which we provide some answers and conjectures, paying special attention to the physical interpretation of the phenomena. It should be emphasized that we discuss two types of stochastic processes throughout the text. The first one concerns the standard *thermal Brownian dynamics* with the diffusion coefficient proportional to the temperature. 
This stochastic process is described by the Langevin equation, for which the probability distribution satisfies the standard Fokker-Planck (FP) equation in the Euclidean time, which, in turn, can be transformed into the Schrödinger equation for a many-body quantum system of interacting point-like particles. The most popular relevant stochastic processes are: i) the Schur process (vicious walkers or free fermions); ii) the Jack process, where the degrees of freedom interact as Calogero particles; and iii) the Macdonald process, in which the degrees of freedom interact as particles in the Ruijsenaars-Schneider system (see [@borodin] for a review). Random processes play another important role in *stochastic quantization*, which holds for any system with a finite number of degrees of freedom, or for a field theory [@parisi]. In stochastic quantization the quantum correlators are evaluated via the solution to classical stochastic equations in an additional “stochastic” time. The quantum correlators describe equilibrium states of the stochastic system at infinite stochastic time. The parameters of the Fokker-Planck equation corresponding to stochastic quantization have different meanings: the diffusion coefficient is now identified with the Planck constant, and the potential in the thermal FP equation gets substituted by the action. The Fokker-Planck equation itself is related to the Schwinger-Dyson equation in the quantum field theory. For the gauge theory it is a kind of loop equation collecting the whole set of Virasoro constraints [@marchezini]. The reason to consider the stochastic quantization (SQ) dynamics along with the thermal stochastic dynamics is partially motivated by the rich supersymmetric structure of SQ. One example is the famous Parisi-Sourlas SUSY dimensional reduction [@parisisur]; another involves the Nicolai map, which transforms the Langevin equation into the white noise [@nicolai]. 
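The map from the thermal FP equation to a Euclidean Schrödinger problem, invoked above, can be sketched in the one-particle case (a standard transformation; the conventions for the drift $-k\,\partial_x W$ and the noise strength $D$ are our assumptions, chosen to match the formulas of the Preliminaries below):

```latex
% Substitute P(x,t) = e^{-kW(x)/(2D)}\,\psi(x,t) into the Fokker-Planck equation
%   \partial_t P = D\,\partial_x^2 P + k\,\partial_x\!\big(W'(x)\,P\big):
\[
-\partial_t \psi = H\,\psi,
\qquad
H = -\,D\,\partial_x^2 + \frac{k^2\,[W'(x)]^2}{4D} - \frac{k\,W''(x)}{2}.
\]
% H is a Euclidean Schrodinger operator of supersymmetric form; its ground
% state \psi_0 \propto e^{-kW/(2D)} reproduces the Gibbs measure
% P_{st} \sim \psi_0^2 \sim e^{-kW/D}.
```

For many particles the same similarity transformation turns the Dyson-type FP operator into a many-body Hamiltonian, which is the origin of the free-fermion and Calogero descriptions listed above.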
The index of the Nicolai map coincides with the Witten index in the corresponding SUSY quantum mechanics [@cecotti]. The mechanism of the SUSY breaking is only partially elaborated, and it was conjectured a long time ago that the states with broken SUSY correspond to non-equilibrium steady states [@feigel]. The hidden SUSY structure in stochastic equations motivates some studies where the relation with topological theories was elaborated. In particular, the unifying role of the FP equations in the context of topological strings has been conjectured in [@dijkgraaf], and the twisted index for the Nicolai map, involving an analogue of the $R$-twisting in the time direction, has been suggested in [@vafacec]. We conjecture the existence of a DK transition relevant to stochastic quantization, in the multiparticle radial stochastic Löwner equation (SLE) (for a review of SLE see [@cardyrev]). For real-time stochastic Brownian dynamics, the DK transition is identified as follows. Consider $N$ ($N\gg 1$) vicious (1+1)D Brownian walkers (or free fermions) on a circle, and compute the probability to reach some prescribed final state at time $T$ starting from some fixed initial out-of-equilibrium configuration. This probability obeys the multiparticle diffusion equation, similar to the Schrödinger equation in the Euclidean time. Replacing the Laplacian operator by the self-adjoint one, we can equivalently describe the system of fermionic particles by the Fokker-Planck (Dyson) equation, which, in turn, can be derived from the Langevin equation with random forcing. The properly normalized partition function of the large-$N$ 2D YM theory on the sphere turns out to coincide with the reunion probability of $N$ “Brownian fermions” (Schur process) on the circle upon the proper identification of parameters [@deharo1; @fms]. 
The reunion probability undergoes a kind of 3rd-order phase transition at some critical point and has a lot in common with the “orthogonality catastrophe” for the many-body fermionic system [@abanov2]. The DK critical behavior can be traced not only for the partition function but also for other important variables. The hydrodynamic viewpoint on the real-time stochastic dynamics of $N$ (1+1)D vicious walkers deals with the challenging problem of computing the multi-point correlation functions $\rho_N(x_1,x_2,\dots,x_k,t)$ for $1\le k\le N$. A particularly interesting object is the resolvent in the YM theory, defining the one-particle correlation function (the density). It obeys the complex-valued Burgers-Hopf equation [@grm] with particular boundary conditions. It also obeys the algebraic equation following from the loop equation. We emphasize in this paper that the hydrodynamic point of view becomes very useful in at least two aspects. On one hand, it enables us to consider a system of Brownian fermions with a large number of particles. This problem itself has a lot in common with the overlapping of multifermionic wave functions studied in [@abanov1; @abanov2]. On the other hand, even the one-point correlation function, $\rho_N(x,t)$, which satisfies the Burgers-Hopf equation, has some far-reaching connections: the one-point correlation function in the large-$N$ vicious-walk ensemble shares limiting properties with the statistics of a single $(1+1)$D random walk with fixed area below the curve. The latter problem is known to be connected with the generating function of algebraic invariants (Jones, HOMFLY) of some specific series of torus knots [@bgn]. An important question concerns the hydrodynamic identification and interpretation of the DK transition. A typical example of critical behavior can be interpreted as the overturn of the solution at finite time due to non-linearity (as in the Burgers equation at small viscosity). 
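For orientation, the complex Burgers-Hopf equation mentioned above can be written as follows (standard hydrodynamic conventions assumed; normalizations vary between references):

```latex
% Package the density \rho_N(x,t) and the velocity field v(x,t) of the
% eigenvalue fluid into one complex field:
\[
f(x,t) = v(x,t) + i\,\pi\,\rho_N(x,t),
\qquad
\partial_t f + f\,\partial_x f = 0 .
\]
% Imaginary part: the continuity equation
%   \partial_t\rho_N + \partial_x(\rho_N v) = 0;
% real part: the Euler equation, with the pressure generated by the
% logarithmic repulsion of the eigenvalues.
```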
However, the pattern of the criticality in the DK case was argued in [@grm; @nowak1] to be different. It has been shown in [@grm; @nowak1] that the DK transition takes place when the fluid density has no gaps on the circle. Since the initial state of the fluid corresponds to a local bump, this means that the transition takes place when the two fronts, moving in opposite directions on the circle, collide at some finite time. At this point one deals with quantum hydrodynamics, and the solution upon collision corresponds to the strong-coupling phase, dominated by instantons in the YM case and by windings in the RW case. The holographic approach, ultimately related to the stochastic quantization, adds a new facet to the problem of the DK transition. It was argued that the stochastic time can be identified with the radial holographic coordinate [@periwal; @polyakov], and the Fokker-Planck equation is tightly related to the holographic RG flows, as formulated in [@RG]. The Brownian motion in the theory with a boundary corresponds to the stochastic motion of the end of the fundamental string [@hubeny1; @teaney] extended along the radial coordinate. It turns out that such Brownian dynamics is not thermal. Namely, if we consider the string in the bulk background of the BH, the Hawking temperature does not matter for the stochastic motion. The stochastic dynamics is dominated by the local temperature on the string worldsheet, and the horizon is formed at a *finite* radial coordinate. The stochastic dynamics can emerge for the extremal BH with vanishing temperature as well. A general discussion of the relation between the boundary and worldsheet temperatures can be found in [@ooguritem]. A similar holographic viewpoint can be applied to the multiple SLE in the radial direction, which was related to the FP equation at the boundary for walkers interacting as in the trigonometric Calogero model [@cardy]. 
Note that in the framework of the stochastic quantization, the DK transition is a somewhat subtle issue, since we are searching for the critical behavior of the many-body system at finite stochastic time, i.e. at a finite value of the RG scale. From the standpoint of the DK transition, the RW-YM correspondence is generalized in our paper in the following ways: - We demonstrate that the DK phase transition in 2D YM on the cylinder and disc with fixed holonomies amounts to a phase transition in the stochastic process with prescribed out-of-equilibrium configurations of extremities (initial and final points); - We take into account the $\theta$-term in 2D YM, fixing the total instanton number, and discuss the RW counterpart of the corresponding theory in terms of vicious walks with a fixed chemical potential for winding on a circle; - We add a Wilson line in a particular representation to the 2D YM on the cylinder, which yields the trigonometric Calogero model [@gn1; @gn2]. From the RW point of view, it involves a modified interaction between vicious walkers and corresponds to the so-called “Jack stochastic process”. Taking into account the relation between the trigonometric Calogero model and the stochastic Löwner evolution (SLE), we discuss the 3rd-order DK transition in terms of SLE. The problems related to the DK phase transition can be reformulated in terms of string theory, which uncovers the connection with black holes. It is known that the 2D YM theory on the torus is realized as the worldvolume theory on $N$ $D4$-branes wrapping the $O(-p)\rightarrow T^2$ subspace inside the $O(-p)\times O(p)\rightarrow T^2$ Calabi-Yau manifold [@vafa]. On the other hand, such $N$ $D4$-branes yield the description of a black hole in 4D with magnetic charge $N$. 
Explicitly, the relation between the partition functions of the 4D black hole, $Z^{4d}_{BH}$, the 2D YM, $Z^{2d}_{YM}$, and the topological string, $Z_{top}$, roughly states [@osv] $$Z^{4D}_{BH}=Z^{2D}_{YM}= |Z_{top}|^2$$ The correspondence between the 4D charged black hole and the 2D YM was generalized in [@ooguri] to an arbitrary genus $g$ of the base in the CY; however, it turned out that the $q$-deformed 2D YM has to be considered for the $g \neq 1$ case, and an additional integration over the holonomies is involved. A similar relation between the partition function of the $(q,t)$ 2D YM and the refined black hole indices has been elaborated in [@aganagic12]. The BH is extremal and to some extent can be considered as a particle with huge degeneracy, corresponding to the multiplicity of the $D2-D0$ branes on the $N$ $D4$-branes forming the extremal charged black hole. Using the map between the $q$-deformed YM in 2D and the black hole partition function [@vafa; @ooguri], we identify the values of the black hole electric chemical potentials corresponding to the DK phase transition. Since we consider the limit of a BH with large magnetic charge, phenomena and configurations specific to this limit are of special interest. It was argued in [@bolognesi] that a configuration with large magnetic charge tends to form a monopole spherical shell, which at some critical value of the parameters gets transformed into a magnetically charged BH. This is the magnetic counterpart of the phenomena known for the electrically charged BH, which undergoes the superconducting phase transition, and for the BH with a fermionic environment, where the electronic star gets formed at the point of a 3rd-order phase transition (see [@hartnoll] for a review). We shall conjecture that the DK phase transition on the BH side corresponds to the “monopole wall-magnetically charged BH” transition discussed in [@bolognesi]. 
In fact, the relation between the DK transition and BH physics has been discussed in [@alvarez; @wadia07] from a different angle. In these papers the large-$N$ 4D gauge theory on a manifold involving $S^3$ is considered holographically. There is a compact direction treated as the thermal circle, and the gravity background involves the BH. There is a Hawking-Page transition in this setup; moreover, it is a version of the small BH-string transition suggested long ago in [@susskind; @polchinski]. The DK transition has been interpreted as a small BH-string transition in [@alvarez]; however, the parameters of the gauge theory were not interpreted as the BH charges and chemical potentials. Our picture will differ from this scenario. To complete the connections, we have to explain the role of the stochastic dynamics in the framework of the extremal magnetic BH. One line of reasoning involves the behavior of long strings near the BH horizon. It was conjectured in [@susskind; @polchinski] that the degeneracy of the states of the long string could probably explain the black hole entropy and that, to some extent, the BH itself can be treated as the long string wrapping the stretched horizon. This logic has been extended further, and it was argued that the long string near the horizon behaves as a random walker [@zakharov]. This approach is based on the thermal scalar picture [@kru]. Another viewpoint deals with the counting of the degeneracies of the BPS states with fixed quantum numbers in terms of properly weighted random walks in the region determined by these quantum numbers. This stochastic approach to the BPS counting worked successfully in the evaluation of particular observables in 5D SUSY YM QCD [@bgn; @gm; @gmn]. We shall discuss some aspects of the BH-RW correspondence in the paper. 
Borrowing some results from the superfluidity theory [@prok], we conjecture that the DK transition can be interpreted as the emergence of a superfluid component in the strong-coupling phase. To this aim it is particularly appropriate to regard superfluidity as the specific response to an external abelian graviphoton field. Such an interpretation fits well with the consideration of the black hole, where the analogue of the superfluid component is saturated by the $D2$-brane “topological susceptibility” yielding the electrically charged component of the black hole at the horizon. Our conjecture has some similarity with another study [@nowak2]. It was suggested in [@nowak2] that the closing of the gap at the DK point has something to do with the formation of a kind of chiral condensate, which is determined, via the Casher-Banks relation, by the spectrum of the Dirac operator at the origin. This condensate breaks the global chiral symmetry. The paper is organized as follows. In Section 2 we recall the key points concerning the DK phase transition in the 2D YM theory and its generalizations. We explain how the RW-YM correspondence yields new critical behavior on the RW side for different out-of-equilibrium states. In Section 3 we generalize the duality to interacting random walkers and comment on the relation to the SLE process. In Section 4 we consider the duality between two integrable systems and formulate how the DK transition is mapped under the duality transform. Section 5 is devoted to the hydrodynamical aspects of the DK transition. Section 6 concerns the $q$ YM-BH correspondence from the DK phase transition viewpoint. In Section 7 we conjecture that the DK phase transition corresponds to the appearance of a kind of superfluid component on the strong-coupling side. This phase corresponds to the “condensate of instantons”, “condensate of ’t Hooft loops”, or “condensate of $D2$-branes”, depending on the “observer”. 
The results and open questions are summarized in the Discussion. Schematically, the links between different subjects discussed throughout the work are shown by arrows in the diagram depicted in [Fig.\[fig:00\]]{}. ![Flowchart of tentative connections between different parts of the work. The numbers designate corresponding Sections.[]{data-label="fig:00"}](dk_f00_num){width="12cm"} Douglas–Kazakov phase transitions in the 2D Yang-Mills theory {#sect:dk} ============================================================= 2D Yang-Mills on a sphere ------------------------- Let us briefly review the original setting of the Douglas–Kazakov (DK) phase transition developed in [@dk] for the Yang-Mills (YM) theory on a sphere $S^2$. The YM partition function on a two-dimensional surface $\Sigma_g$ of genus $g$ reads, in the first-order formulation, $$Z_g(g_{YM},A) = \int \mathcal{D}A\,\mathcal{D}\Phi\; \exp\left\{\int_{\Sigma_g} d\mu\; \mathrm{Tr}\left(i\Phi F + g_{YM}^2\, \Phi^2\right)\right\}. \[eq:1\]$$ where $A$ is the vector potential of the gauge field, $F$ is its field strength, and $g_{YM}$ is the coupling constant. The partition function (\[eq:1\]) for the $U(N)$ gauge group can be rewritten as a sum over representations: $$Z_g(g_{YM},A)=\sum_R \left(\dim R\right)^{2-2g}e^{-\frac{A}{2N}C_2(R)}, \[eq:part\_rep\]$$ where the representations $R$ are labelled by Young tableaux, i.e. by sets of $N$ ordered integers $\left\{n_1\ge n_2\ge \dots \ge n_N\right\}$. The functions $C_2(R)$ and $\dim R$ are, correspondingly, the quadratic Casimir (Laplace operator) and the dimension of the representation $R$: $$C_2(R)=\sum_{i=1}^N n_i\left(n_i-2i+N+1 \right), \qquad \dim R=\prod_{i>j}\left( 1- \frac{n_i-n_j}{i-j}\right). \[eq:rep\_def\]$$ In the following we are mostly interested in the $g=0$ case, corresponding to the $S^2$ topology. In the large-$N$ limit the system experiences a third-order phase transition. In terms of the continuous variable $h(x)$, describing the shape of the Young tableau, $$h(x)=-\frac{n(x)}{N}+x-\frac{1}{2}, \qquad x=\frac{i}{N} \[eq:cont\]$$ the effective YM action for $N\gg 1$ can be written as: $$S_0[h(x)]=-\int_0^1\!\!\int_0^1 dx\, dx'\, \ln|h(x)-h(x')|+\frac{A}{2}\int_0^1 dx\, h^2(x)-\frac{3}{4}. \[eq:action\_cont\]$$ Since we started from a Young tableau, we have, by definition, a constraint on its shape $h(x)$, meaning that the row lengths $n_i$ should not increase with $i$. In the limiting case this requirement gets mapped onto the condition $$\rho(h)\le 1 \[eq:young\]$$ where $\rho(h)=\frac{\partial x(h)}{\partial h}$. The phase transition happens when the classical solution of (\[eq:action\_cont\]) ceases to satisfy the constraint $\rho(h)\le 1$. The corresponding saddle-point equation and its one-cut solution are: $$\frac{A}{2}\,h=\mathrm{P}\!\!\int \frac{\rho(h')\,dh'}{h-h'}, \qquad \rho(h)=\frac{A}{2\pi}\sqrt{\frac{4}{A}-h^2}. \[eq:saddle\_point\]$$ Since $\rho(0)=\frac{\sqrt{A}}{\pi}$, the DK phase transition occurs at $$A\ge A_{cr}=\pi^2. \[eq:crit\]$$ ![Left: density profile before the phase transition. Right: after the phase transition[]{data-label="fermi"}](fermi){width="15cm"} Beyond $A_{cr}$, the density $\rho(h)$ of eigenvalues develops a plateau around $h=0$ (Figure \[fermi\]). Later, in Section \[sect:two-sphere\], we will discuss the physical meaning of this phenomenon. For now, note that the original sum has a symmetry $n_i-i \leftrightarrow n_j-j$. So we can get rid of the requirement $\rho \le 1$: we divide by $N!$ and sum over all possible $n_i$. But we still expect the phase transition to occur. How is this possible? The actual origin of the phase transition is the $\log$ term in eq. (\[eq:action\_cont\]): when a would-be saddle point contains two coincident values of $h(x)$, the integrand becomes infinite, so it is not a real saddle point. The DK phase transition can be easily generalized to the case of a generic $\beta$-ensemble. 
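A minimal numerical sketch (ours, not from the paper) makes the finite-$N$ origin of these formulas concrete: truncate the sum (\[eq:part\_rep\]) over $U(N)$ representations and evaluate it directly. The $e^{-\frac{A}{2N}C_2(R)}$ normalization of the weight is an assumption chosen to match the continuum formulas above.

```python
# Truncated heat-kernel sum Z(A) = sum_R (dim R)^2 exp(-A C2(R)/(2N)) over
# U(N) representations {n_1 >= ... >= n_N}, cut off at |n_i| <= K.
# Conventions (the A/(2N) weight) are an illustrative assumption.
from itertools import product
from math import exp

def z_sphere(N, A, K=8):
    total = 0.0
    for n in product(range(-K, K + 1), repeat=N):
        if any(n[i] < n[i + 1] for i in range(N - 1)):
            continue  # enforce n_1 >= n_2 >= ... >= n_N
        dim = 1.0
        for i in range(N):
            for j in range(i):  # product over i > j (1-indexed offsets cancel)
                dim *= 1.0 - (n[i] - n[j]) / (i - j)
        c2 = sum(ni * (ni - 2 * (i + 1) + N + 1) for i, ni in enumerate(n))
        total += dim ** 2 * exp(-A * c2 / (2.0 * N))
    return total
```

For $N=1$ the sum collapses to a theta-function $\sum_n e^{-A n^2/2}$, which provides a quick sanity check; the transition itself, of course, only emerges at $N\to\infty$.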
Let us consider the partition function $$Z_\beta(g_{YM},A )= \sum_{\{n_i\}} \Delta(\boldsymbol{n})^{2\beta}\,\exp\left(-\frac{A}{2N}\sum_i n_i(n_i - 2 i + N+1)\right)$$ where $\Delta(\boldsymbol{n})$ is the Vandermonde determinant, which we often encounter throughout the paper, $$\Delta(\boldsymbol{n})=\prod_{i<j}(n_i-i-n_j+j)$$ Since we are interested in the large-$N$ limit, it is convenient to introduce again the continuum variable $h(x)$, now rescaled by $\beta$: $$h(x)=-\frac{\beta\, n(x)}{N}+\beta x-\frac{\beta}{2}, \qquad x=\frac{i}{N} \[eq:cont\_beta\]$$ Thus, instead of (\[eq:young\]), we have a modified condition: $$\rho(h)\le\frac{1}{\beta} \[eq:young2\]$$ and the third-order phase transition occurs at $$A\ge A_{cr}=\beta\,\pi^2.$$ This transition can be thought of as a kind of phase transition for the norm of the Laughlin state describing the fractional quantum Hall effect. $q$-deformed 2D Yang-Mills on a sphere -------------------------------------- One can generalize the 2D YM theory to the 2D $q$-YM, involving the additional parameter $q=e^{-g_s}$, where $g_s$ is the string coupling constant. The 2D $q$-Yang-Mills theory on a sphere can be mapped onto the non-deformed 2D Yang-Mills on a cylinder with specific boundary holonomies. The partition function of the $q$-deformed theory can be written as a sum over representations of the gauge group, as in the non-deformed case. The essential difference is that the dimension of the representation must be replaced by the quantum dimension: $$Z^{q}_g(g_s,A)= \sum_{R} \left(\dim_q R\right)^{2-2g}\, q^{\frac{p}{2}C_2(R)}\,e^{i\theta C_1(R)}, \[eq:q\_partition\]$$ where $C_1(R)$ is the first Casimir of the representation, $p$ is the integer labelling the deformation, $\theta$ is the theta-angle, and the quantum dimension is given in terms of the $q$-numbers: $$\dim_q R=\prod_{i>j}\frac{[n_i-i-n_j+j]_q}{[j-i]_q}=\prod_{i>j}\frac{\sinh\frac{g_s}{2}(n_i-i-n_j+j)}{\sinh\frac{g_s}{2}(j-i)} \[eq:dim\_q\]$$ The partition function of the $q$-Yang-Mills theory on a sphere coincides with the partition function of the non-deformed theory on a cylinder. 
Indeed, defining the non-trivial holonomies $U_1$ and $U_2$ around the boundaries of the cylinder, the partition function can be written as: $$Z^{cyl}(g_{YM}^2, A|U_1, U_2)=\sum_R \chi_R(U_1)\,\chi_R(U_2^{\dagger})\,e^{-\frac{A}{2N} C_2(R)+i\theta C_1(R)}, \[eq:partition\_cylinder\]$$ where $\chi_R$ are the characters in the representation $R=\left\{ n_1, \dots, n_N \right\}$, given by the Weyl formula: $$\chi_R(U)=\frac{\det_{jk} e^{i\theta_j (n_k+N-k)}}{\det_{jk} e^{i\theta_j (N-k)}}, \[eq:character\]$$ and $0\le\theta_i\le 2\pi$ are the eigenvalues of the unitary matrix $U$. Recall that for the quantum dimension there exists the following expression: $$\dim_q R=\chi_R(q^{-\rho}), \qquad \rho_i = \frac{N+1}{2}-i$$ Thus, the partition function of $q$-YM (\[eq:q\_partition\]) coincides with the partition function on a cylinder (\[eq:partition\_cylinder\]) upon the identification [@deharo1; @deharo2; @deharo3]: $$\theta_k=ikg_s, \qquad g_s^{-1}=g_{YM}^2. \[eq:q\_cyl\]$$ Besides, we have to set the particles equidistantly in the complex plane, since the boundary holonomy is $q^{-\rho}$. The phase transition for the $q$-YM has a richer structure than in the non-deformed case and can be considered as a reduction of the 2D $(q,t)$-Yang-Mills, which we are going to describe in the next section. $(q,t)$-deformed 2D Yang-Mills on a sphere {#sec:qt} ------------------------------------------ The $(q,t)$-deformed 2D YM theory on a Riemann surface has been formulated in [@aganagic12; @szabo13]. 
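The $q\to 1$ limit of the quantum dimension can be checked numerically; the sketch below (ours) assumes the $\sinh$ convention of (\[eq:dim\_q\]) for the $q$-numbers.

```python
# Sketch (assumed sinh convention for q-numbers): as g_s -> 0 the quantum
# dimension dim_q R = prod_{i>j} [n_i-i-n_j+j]_q / [j-i]_q approaches the
# classical dimension dim R of the U(N) representation R = {n_1 >= ... >= n_N}.
from math import sinh

def qnum(x, gs):
    """q-number [x]_q = sinh(g_s x / 2) / sinh(g_s / 2)."""
    return sinh(gs * x / 2.0) / sinh(gs / 2.0)

def dim_q(n, gs):
    N, d = len(n), 1.0
    for i in range(N):
        for j in range(i):  # i > j; 1-indexing offsets cancel in differences
            d *= qnum((n[i] - (i + 1)) - (n[j] - (j + 1)), gs) / qnum(j - i, gs)
    return d

def dim_classical(n):
    N, d = len(n), 1.0
    for i in range(N):
        for j in range(i):
            d *= ((n[j] - n[i]) + (i - j)) / (i - j)
    return d
```

For the fundamental representation of $U(2)$, for instance, $\dim_q = [2]_q = 2\cosh(g_s/2)$, smoothly interpolating to $\dim = 2$ at $g_s\to 0$.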
The partition function of the model is a natural extension of the Schur polynomials to the elliptic Macdonald polynomials: $$Z_{q,t}(\Sigma_g)=\sum_R \left( \dim_{q,t} R \right)^{2-2g}\, q^{p\,\|R\|^2/2}\; t^{p\,(\rho,R)} \[zqt\]$$ In (\[zqt\]) we have introduced two equivariant parameters, $q=e^{-\epsilon_1},\, t=e^{\epsilon_2}$, while $(\rho,R)=\frac{1}{2}\sum_i n_i(N+1-2i)$, and $\dim_{q,t} R$ is the $(q,t)$-deformed dimension of the representation $R$, $$\dim_{q,t} R=(q/t)^{|R|/2}\, M_R(t^{\rho};q,t)=q^{\|R\|^2/4}\; t^{-\|R^T\|^2/4}\; g_R^{-1/2} \[eq:dimqt\]$$ where $g_R$ is a normalization factor for the Macdonald polynomials $M_R$: $$g_R=\langle M_R, M_R\rangle_{q,t}$$ Setting $\epsilon_1+\epsilon_2=0$, we reduce the $(q,t)$-deformation to the $q$-deformation considered above. The partition function for a cylinder with holonomies $U_1, U_2$ reads: $$Z^{cyl}_{q,t} = \sum_R M_R(U_1;q,t)\, M_R(U^{\dagger}_2;q,t)\; q^{p\,\|R\|^2/2}\; t^{p\,(\rho,R)} \[eq:qt\_cyl\]$$ The normalization factor $g_R$ essentially comes from the requirement that upon gluing two cylinders we obtain a cylinder again, $$\int dU\, M_R(U;q,t)\, M_T(U^{\dagger};q,t) = \delta_{T,R}\; g_R$$ Now let us consider the large-$N$ limit. Sending $N$ to infinity, we also take the limits $\epsilon_1\to 0$ and $\epsilon_2\to 0$, keeping $\tau_{1,2}=N \epsilon_{1,2}$ constant. The DK phase transition for the sphere geometry survives the second deformation, and the equation for the critical area reads [@szabo13]: $$A_{crit}=\tau_2\, p=-p^2 \pi^2 \[eq:q-t\]$$ This equation holds for $p>2$. It was argued in [@aganagic12] that the partition function of the $(q,t)$-deformed 2D YM on the sphere coincides with the refined partition function of the extremally charged BH. This is a first signature of the emergence of black holes in the DK context, which will be extended throughout our paper. In what follows we consider the particular limit of Jack polynomials: sending $\epsilon_1\to 0$ and $\epsilon_2 \to 0$ we keep both $\alpha=-\epsilon_2/\epsilon_1$ and $p\, \epsilon_1 = g_C^2$ finite. In this limit the Macdonald polynomials become Jack polynomials, $J_R^{\alpha}$, and Eq. 
(\[eq:qt\_cyl\]) gets transformed into the following expression: $$Z^{\alpha}_{cyl}=\sum_R J^{\alpha}_R(U_1)\, J^{\alpha}_R(U^{\dagger}_2)\, \exp\left( -\frac{g_C^2}{2} \sum_i n_i(n_i-2 i + N +1) \right) \[eq:al\_cyl\]$$ For the partition function on a sphere we have: $$Z^{\alpha}_{S^2}=\sum_{\{n_i\}} \prod_{i<j}\prod_{b=0}^{\alpha-1} \left((n_i-i-n_j+j)^2-b^2\right) \exp\left( -\frac{g_C^2}{2} \sum_i n_i(n_i-2 i + N +1) \right) \[eq:al\_sp\]$$ The measure in (\[eq:al\_sp\]) looks quite complicated. Fortunately, in the large-$N$ limit all $n_i$ are of order $N$, while $\alpha$ stays of order 1. Therefore, in the leading approximation we can drop the term $b^2$ and end up with the familiar $\beta$-ensemble with $\alpha=\beta$. 2D Yang-Mills on a cylinder and a disc -------------------------------------- It is known [@fms; @fms2] (see also the next Section for a brief overview) that the partition function of the 2D Yang-Mills theory on a sphere with the $U(N)$ gauge group coincides with the partition function of a bunch of $N$ directed one-dimensional random walks, of length $t$ each, on a lattice (strip) periodic along the $x$-coordinate, of width $L$, where the parameters $t$ and $L$ are absorbed into the YM coupling constant, $g_{YM}$. Changing the gauge group $U(N)$ to $Sp(2N)$, or to $SO(2N)$, one can mimic partition functions of vicious random walks with Dirichlet ($Sp(2N)$) or Neumann ($SO(2N)$) conditions at the strip boundaries. However, little is known about the dependence of the DK transition point on the distribution of the extremities of the vicious walkers. In this subsection we consider vicious walks on a periodic lattice in two cases: (i) when both extremities of the vicious walks are distributed equidistantly within some interval $\eps$ along the $x$-axis, and (ii) when the initial points are distributed equidistantly within the interval $\eps$, and the final points are free. In the framework of the field-theoretic description, case (i) corresponds to the 2D $U(N)$ YM theory on a *cylinder*, and case (ii) to the 2D $U(N)$ YM theory on a *disc*. 
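The free-fermion (determinantal) nature of vicious walkers can be illustrated in a toy discrete setting. The sketch below (ours, not the paper's conventions) counts configurations of $N$ non-intersecting directed walkers on $\mathbb{Z}$ via the Lindström–Gessel–Viennot determinant of single-walker propagators:

```python
# LGV lemma sketch: the number of configurations of N non-intersecting
# ("vicious") directed walkers, each taking t steps of +-1 from start a_i
# to end b_i, equals det[ paths(a_i -> b_j) ].
from math import comb

def paths(a, b, t):
    """Single directed walker: number of t-step +-1 paths from a to b."""
    k, r = divmod(b - a + t, 2)   # k = number of +1 steps
    return comb(t, k) if r == 0 and 0 <= k <= t else 0

def det(m):
    # Laplace expansion along the first row; fine for small N
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def vicious(starts, ends, t):
    return det([[paths(a, b, t) for b in ends] for a in starts])
```

For example, two walkers started at $0$ and $2$ and returned there after two steps admit $\det\begin{pmatrix}2&1\\1&2\end{pmatrix}=3$ configurations: the naive $2\times 2=4$ minus the single colliding pair.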
We also show that case (ii) at $\eps\to 0$ reduces to the 2D $U(N)$ YM on a *sphere*. Schematically these cases are depicted in [Fig.\[fig:dk\_f01\]]{}. ![The 2D YM on a cylinder (a) is reduced to the 2D YM on a disc (b) when the left extremities are kept on finite support ($\eps_2\to 0)$, and the right ones are set to zero, and to the 2D YM on a sphere (c) when $\eps_1\to 0$ and $\eps_2\to 0$.[]{data-label="fig:dk_f01"}](dk_f01){width="16cm"} If the distribution of the eigenvalues $\sigma(\theta)$ is uniform on an interval of length $c$, namely, $$\sigma(\theta)=\begin{cases} \dfrac{1}{c}, & |\theta|\le \dfrac{c}{2},\\[4pt] 0, & \text{otherwise}, \end{cases} \[eq:distrib\]$$ the phase transition occurs at \^2 =. \[eq:cyl\_crit\] Taking $c$ as c=, \[eq:c\_cyl\] in the large-$N$ limit of (\[eq:q\_cyl\]) we arrive at a condition for the phase transition in $q$-YM [@ArsiwallaBoels]: e\^=1+\^2. \[eq:dk\_qym\] If one boundary shrinks to a point, we arrive at the 2D YM on a disc, corresponding to one common extremity (i.e. initial or final point) of the bunch of walkers. The DK phase transition occurs in this case at the critical area defined by the condition = – see [@gm] for details. DK for free fermions, directed vicious walks, Calogero and SLE models ===================================================================== Preliminaries: Two faces of Fokker-Planck equations --------------------------------------------------- Let us briefly recall general aspects of stochastic dynamics. For simplicity, we begin with single-particle stochastic dissipative dynamics, described by the (1+1)-dimensional Langevin equation for $x(t)$: $$\frac{dx(t)}{dt} = -k\,\frac{\partial W(x)}{\partial x} + \xi(t) \[eq:lan1\]$$ where $\xi(t)$ is the white noise: $$\langle\xi(t)\rangle = 0; \qquad \langle\xi(t)\,\xi(t')\rangle = 2D\,\delta(t-t') \[eq:lan2\]$$ In (\[eq:lan1\])–(\[eq:lan2\]), $D=kT$ and $W(x)$ are the diffusion coefficient and the energy of the system. 
The Langevin equation yields the Fokker-Planck (FP) equation for the probability density, $P(x,t)$: = D + k ( P(x,t)) \[eq:fokker1\] whose stationary solution is the Gibbs distribution \_[t]{}P(x,t)P\_[st]{}(x)\~e\^[-]{} \[eq:gibbs1\] In this version of the FP equation, the white noise has a purely thermal nature. The FP equation can also be written in the momentum and phase spaces. The Fokker-Planck equation plays a different role in the stochastic quantization of a generic dynamical system [@parisi]. Let us introduce an additional time-like variable, the “stochastic time”, $s$, and consider the Langevin equation for the particle dynamics with respect to $s$. The phase space of the system acquires one extra dimension, becoming (1+1+1)–dimensional, as is schematically shown in the [Fig.\[fig:stoch\_time\]]{}. ![Schematic phase space of one-particle stochastic dynamics, $x(t,s)$, in time, $t$, and in stochastic time, $s$.[]{data-label="fig:stoch_time"}](stoch_time){width="10cm"} Since the phase space of our system is lifted by one extra dimension, we should replace the energy, $W(x)$, by the action, $S(x(t))$, and the diffusion coefficient, $D$, by the Planck constant, $\hbar$, which plays the role of the “temperature” for a quantum system. The corresponding FP equation now reads = + ( P(x(t),s) ) \[eq:fokker2\] Eq.(\[eq:fokker2\]) has the conventional path integral quantum measure, defined by the stationary solution of (\[eq:fokker2\]) \_[s]{}P(x(t),s)P\_[st]{}(x(t))= e\^[-]{} \[eq:stat\] The stochastic quantization approach allows one to derive the quantum correlators in quantum mechanics, or in quantum field theory, directly from the solution of the FP equation in the limit $s\to\infty$. Figuratively speaking, we could think of the classical statistical mechanics of strings at finite temperature as the stochastic evolution of a quantum particle, where the energy of a string is replaced by the action of a particle.
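The thermal side of this dynamics is easy to check numerically: an Euler-Maruyama discretization of the Langevin equation (\[eq:lan1\]) with a quadratic energy $W(x)=x^2/2$ relaxes to the Gibbs distribution (\[eq:gibbs1\]). A minimal sketch (our illustration only; we adopt the common normalization $\langle\xi(t)\xi(t')\rangle=2D\,\delta(t-t')$, so the noise increment is $\sqrt{2D\,dt}$, and the stationary variance of the quadratic well is $D/k$):

```python
import numpy as np

def langevin_variance(k=1.0, D=1.0, dt=0.01, n_steps=200_000, seed=0):
    """Simulate dx = -k*W'(x) dt + sqrt(2*D*dt)*N(0,1) for W(x) = x^2/2
    and return the empirical variance of the trajectory."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * D * dt)
    x = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x += -k * x * dt + sigma * rng.standard_normal()
        samples[i] = x
    # discard the transient before measuring the stationary variance
    return samples[n_steps // 10:].var()
```

For $k=D=1$ the empirical variance settles near the Gibbs value $D/k=1$, confirming that the noise of thermal nature drives the system to the Boltzmann weight rather than to any quantum measure.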
In this interpretation, the white noise has a quantum nature, and the stochastic time is considered as the Schwinger proper time. Keeping in mind the forthcoming holographic considerations related to the thermodynamics of the black hole, we would like to have a more transparent view of the stochastic time. The simplest one-loop correlators in the field theory can be written as the product of bulk-boundary propagators in the Anti-de-Sitter (AdS)-like bulk theory with an additional integral over the radial coordinate of the bulk point. This representation is established upon the identification of the Schwinger proper time with the radial coordinate in the AdS bulk [@gopa]. The same identification holds if we consider the effective Lagrangian of an external field in the boundary theory [@gorly]. In that case, the external field provides the effective cut-off in the radial coordinate. This can be seen, for instance, in the representation of the scalar propagator, $G_{sc}$, in the external self-dual field $F$, for which $G_{sc}= \int^{\infty}_{eF} ds e^{-x^2 s}$ [@gorly]. The identification of the stochastic time with the radial coordinate in the bulk has been suggested in [@periwal]. The viewpoint on the stochastic time as the “radial coordinate” towards the holographic horizon matches perfectly the holographic picture of Brownian motion [@teaney; @hubeny1], where the Brownian particle is considered as the end of a string extended along the radial AdS coordinate in the background of the black hole. The string fluctuates due to the emission and the growth of the string inside the bulk occurs according to the Langevin dynamics in stochastic time. Let us emphasize that the random motion along the stochastic time $s$ (i.e. along a radial coordinate) *is not* of thermal nature: for instance, it has nothing to do with the Hawking temperature of the BH. What matters is the emergence of the string worldsheet horizon at a finite value of the radial coordinate.
Thus, the FP equation in the stochastic time (\[eq:fokker2\]) can be regarded as the equation describing the holographic renormalization-group (RG) flows in the boundary theory. In the next Section we shall argue that the approach developed in [@cardy] for the radial Schramm-Löwner equation (SLE) fits perfectly with this picture. The above allows us to conjecture that the DK phase transition in multi-particle stochastic dynamics corresponds to two different physical processes, “thermal Brownian dynamics” and “stochastic quantum diffusion”, though formally they are very similar. 1. In the thermal Brownian motion along a “physical time”, $t$, the critical behavior in the reunion probability occurs at a finite time $t_{cr}$, being a consequence of the “orthogonality catastrophe” for the chosen out-of-equilibrium boundary conditions of the random walks. 2. On the contrary, in the quantum diffusion, the system evolves in the stochastic time, $s$, which has the sense of a holographic radial coordinate attributed to the ends of growing interacting strings in the AdS-like system. Strings grow from a common point at the boundary and join altogether at some point in the bulk at the stochastic time $s_{cr}$. The physical time is integrated over, since the action enters into the FP equation. The critical stochastic time, $s_{cr}$, at which the DK transition occurs, implies that the RG flows behave differently above and below $s_{cr}$. We end this brief summary of different viewpoints on stochastic dynamics with two open questions. Taking into account that the Planck constant plays the role of the temperature in the stochastic quantization scheme, we can consider the conventional temperature as the inverse period of the Euclidean time. One could ask whether the Planck constant can be thought of as a kind of period in the holographic coordinate.
One could also consider the quantization of the system at finite temperature, wondering about the case when both physical and stochastic times are involved in the FP equation, and ask how the DK transition looks in that case. Free fermions and vicious walks in 1+1 {#sect:two-sphere} -------------------------------------- The computation of the 2D YM partition function on the sphere with the $U(N)$ gauge group can be reformulated in terms of the computation of the reunion probability of $N$ vicious walkers [@fms; @fms2; @deharo1; @deharo2; @deharo3]. Consider the Brownian dynamics of $N$ repulsive points (or particles with fermion statistics) on a periodic one-dimensional lattice with the period $L$ (i.e. on a circle of perimeter $L$), starting from some initial distribution, $\mathbf{x}=\{x_1,..., x_N\}$, and after time $t$ arriving at some final distribution, $\mathbf{y}=\{y_1,...,y_N\}$. The reunion probability, $P(t,\mathbf{x},\mathbf{y})$, can be constructed from an eigenfunction, $\Psi_N(\mathbf{p}|\mathbf{x})$, written as a Slater determinant (known equivalently via the Fisher, Karlin-McGregor, or Lindström-Gessel-Viennot formulae) built on the wave functions of individual free particles, $\psi(p_j|x_k)$: $$\Psi_N(\mathbf{p}|\mathbf{x})=\det_{1\le j,k\le N}\psi(p_j|x_k), \qquad \psi(p_j|x_k)=\exp(i p_j x_k) \label{eq:slater}$$ Here $\mathbf{x}=\left\{x_1,\dots,x_N \right\}$ are the coordinates of the particles and $\mathbf{p}=(p_1,...,p_N)$ are the corresponding momenta. Consider a configuration in which all particles start from a single point and after time $t$ return to the same point on the circle. Using (\[eq:slater\]), the probability of such a process reads $$P_N(t,\mathbf{x},\mathbf{y}=\mathbf{x})\big|_{\mathbf{x}\to 0}=\langle\Psi_N|e^{-t H_0}|\Psi_N\rangle\big|_{\mathbf{x}\to 0}=\sum_{\mathbf{p}}\Psi_N(\mathbf{p}|\mathbf{x})\, \Psi_N^*(\mathbf{p}|\mathbf{x})\, e^{-tE(\mathbf{p})}\Big|_{\mathbf{x}\to 0}, \label{eq:reunion1}$$ where $$E(\mathbf{p})=\sum_{j=1}^N E(p_j), \qquad E(p_j)= D\left(\frac{2\pi}{L}\right)^2 p_j^2, \qquad p_j=0,\pm 1,\pm 2,... \label{eq:E}$$
and each $E(p_j)$ is the eigenvalue of the Schrödinger operator $$H_0=-D \sum_{k=1}^N \frac{\partial^2}{\partial x_k^2}$$ for the non-stationary wave function, $\psi(t,x)$, on a circle with circumference $L$, $$\left\{\begin{array}{l} \disp \frac{\partial \psi(t,x)}{\partial t} = D\frac{\partial^2 \psi(t,x)}{\partial x^2} \medskip \\ \disp \psi(t,x)=\psi(t,x+L) \end{array} \right.$$ As we have mentioned before, we rely on the fact that the free Schrödinger equation in imaginary time coincides with the Fokker-Planck equation, where the diffusion coefficient $D$ is related to the mass by $D=1/2m$. Taking in (\[eq:reunion1\]) the limit $\mathbf{x}\to 0$, we arrive at $$P_N( t,L| 0)=C\,\delta^{N(N-1)/2} \sum_{\mathbf{p}\in\mathbb{Z}^N}\prod_{i<j}^N(p_i-p_j)^2\, e^{-tD\left(\frac{2\pi}{L}\right)^2\sum_{i=1}^N p_i^2}, \label{eq:reunion2}$$ where $\delta=x_i-x_{i+1} \rightarrow 0$ and $C$ is a normalization factor. For $N\to\infty$ we keep the mean spacing of the particles, $\frac{L}{N}$, finite, $O(N^0)$, and the combination $tD$ scales as $N$, since we want the total mass $m N$ to be finite. Comparing the reunion probability (\[eq:reunion2\]) with (\[eq:part\_rep\]), we see that, upon the identification = \[eq:ident\] the reunion probability coincides with the partition function (up to a normalization factor) of the Yang-Mills theory on a sphere with the $U(N)$ gauge group. Therefore, the phase transition for free fermions on a circle occurs at: = 1 \[eq:transit\] The mapping of the 2D YM theory onto vicious walkers on the cylinder goes as follows. The probability for $N$ vicious walkers with the boundary conditions $(x_1,\dots x_N)$ and $(y_1,\dots y_N)$ corresponds to the transition amplitude in the 2D YM on the cylinder. The initial distribution of the walkers corresponds to the holonomy of the gauge field at one boundary, and the final distribution – to the holonomy at the other boundary of the cylinder.
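The determinantal structure behind (\[eq:slater\]) can be checked directly in a discrete toy setting: by the Karlin-McGregor (Lindström-Gessel-Viennot) formula, the number of families of non-crossing $\pm 1$ walks equals a determinant built from single-walker path counts. A minimal sketch (the function names are ours, not from any library):

```python
import numpy as np
from math import comb
from itertools import product

def paths(a, b, t):
    """Number of +-1 walks on Z from a to b in t steps."""
    if (b - a + t) % 2 or abs(b - a) > t:
        return 0
    return comb(t, (b - a + t) // 2)

def km_det(starts, ends, t):
    """Karlin-McGregor determinant: counts non-crossing walk families."""
    n = len(starts)
    m = np.array([[paths(starts[i], ends[j], t) for j in range(n)]
                  for i in range(n)], dtype=float)
    return round(np.linalg.det(m))

def brute_force(starts, ends, t):
    """Direct enumeration for two walkers kept strictly ordered."""
    count = 0
    for s1 in product([-1, 1], repeat=t):
        for s2 in product([-1, 1], repeat=t):
            x1, x2 = starts
            ok = True
            for a, b in zip(s1, s2):
                x1 += a
                x2 += b
                if x1 >= x2:      # walkers collide or cross
                    ok = False
                    break
            if ok and (x1, x2) == tuple(ends):
                count += 1
    return count
```

For two walkers starting at $(0,2)$ and returning to $(0,2)$ after two steps the determinant gives $2\cdot 2-1\cdot 1=3$, matching the direct count; the same agreement holds for longer walks.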
This can be easily seen if we rewrite (\[eq:slater\]) in terms of Schur functions: $$\Psi( \mathbf{p} | \boldsymbol \th )= \chi_R(e^{i \th_1},\dots,e^{i \th_N})\, \Delta(e^{i \boldsymbol \th})$$ where we have introduced an auxiliary Young diagram, $R: n_i=p_i+i$. We take the energy to be proportional to the quadratic Casimir of $R$: $$E(\mathbf{p})=\frac{1}{2m}\left( \frac{2\pi}{L} \right)^2 \sum_{i=1}^N n_i(n_i-2i+N+1)=\frac{1}{2m}\left( \frac{2\pi}{L} \right)^2 \sum_{i=1}^N \left( p_i+ \frac{N+1}{2} \right)^2 + {\rm const}$$ It actually means that the physical momenta are $p_i+\frac{N+1}{2}$. Therefore, we can rewrite the transition amplitude as: $$\begin{gathered} \la \boldsymbol \phi \mid e^{-t H_L} \mid \boldsymbol \theta \ra = C \Delta(e^{-i \boldsymbol \phi}) \Delta(e^{i \boldsymbol \th}) \sum_R \chi_R(e^{-i \phi_1},\dots,e^{-i \phi_N}) \chi_R(e^{i \th_1},\dots,e^{i \th_N}) \\ \times \exp\left(-\cfrac{2 \pi^2 t}{m L^2} \sum_{i=1}^N p_i^2\right) \label{eq:cyl}\end{gathered}$$ This expression exactly coincides with the 2D YM partition function on a cylinder – compare (\[eq:cyl\]) and (\[eq:partition\_cylinder\]). Thus, the DK phase transition happens at some critical value of the distance between the initial positions of the walkers. The critical Young diagram, which is given by (\[eq:cont\]), corresponds to the momentum eigenstate that gives the dominant contribution to the transition amplitude. As we have noted above, the eigenvalue density, $\rho(h)$, develops a plateau around $h=0$. In terms of the momenta rescaled by $N$, the “height function”, $h(x)$, reads: h(p) =-x-n=--p We see that the particles condense around zero physical momentum. Moreover, the plateau occurs in the fermion density. Indeed, in the large-$N$ limit, it is convenient to pass to the second-quantized picture. In this picture we are dealing with a state $|\psi \rangle = |p_1,\dots,p_N \rangle$, which in the continuum limit satisfies $$a_p^{\dagger} a_p |\psi\rangle = \rho(p) |\psi\rangle$$ Let us also note that this DK transition transforms at $g_s\rightarrow 0$ into the transition discussed at length in [@fms].
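The condensation of momenta can be illustrated numerically. The summand of the discrete ensemble is $\prod_{i<j}(p_i-p_j)^2\,e^{-a\sum_i p_i^2}$ over distinct integers $p_i$, and at strong damping the dominant configuration is the densest fermionic packing, a block of consecutive integers around zero, i.e. a plateau of unit density; at weak damping the momenta spread out. This is a schematic brute-force sketch with an ad hoc damping parameter $a$, not the paper's computation:

```python
import math
from itertools import combinations

def log_weight(ps, a):
    """log of prod_{i<j} (p_i - p_j)^2 * exp(-a * sum_i p_i^2)."""
    lw = -a * sum(p * p for p in ps)
    for i in range(len(ps)):
        for j in range(i + 1, len(ps)):
            lw += 2.0 * math.log(abs(ps[i] - ps[j]))
    return lw

def dominant(n, a, pmax=6):
    """Brute-force maximizer over n distinct momenta in [-pmax, pmax]."""
    return max(combinations(range(-pmax, pmax + 1), n),
               key=lambda ps: log_weight(ps, a))
```

For $N=4$ and $a=2$ the maximizer is a block of consecutive integers straddling zero (saturated fermion density, the analogue of the plateau), while for $a=0.05$ the dominant momenta spread far beyond a consecutive block.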
Let us speculate more about the meaning of this phase transition from the point of view of free fermions. Here we start from an out-of-equilibrium state where all the particles sit almost at one point, and after a finite time we end up with the DK phase transition. Obviously, the free fermion partition function does not exhibit any phase transitions; this behavior is therefore specific to the out-of-equilibrium state we have chosen. Since the particles are located near one point, we are dealing with a huge density. We know that high matter densities eventually form a black hole. In fact, this is exactly what we observe here. Of course, we do not have any gravitational interactions here, but still the two phenomena are very similar, as we now argue. Indeed, apart from just placing the particles at one point, we also send the number of particles $N$ to infinity. As it turns out, even that is not enough: after a finite time the density of states becomes truly infinite, since two fermion momenta collide (see the discussion after eq. (\[eq:crit\])). It is at this point that the “black hole” is formed. From this picture we expect that after the phase transition we must have a non-zero Hawking temperature if the BH is nonextremal. And indeed, the fermion density (Figure \[fermi\]) strikingly resembles the fermion density function at finite temperature, the plateau playing the role of the Fermi surface. Of course, the profile in Figure \[fermi\] is not given by the Fermi-Dirac distribution. Even after the phase transition the quantum system is still in a pure state, so we cannot expect a truly thermal distribution, and only a “horizon” can be expected. Interacting random walkers: Calogero model and SLE stochastic process --------------------------------------------------------------------- ### Calogero model and Jack process So far we have considered the free fermions, corresponding to $\beta=1$ in terms of $\beta$-ensembles.
The DK critical behavior happens in matrix elements $W(\boldsymbol \phi|\boldsymbol \theta)=\la \boldsymbol \phi \mid e^{-t H_L} \mid \boldsymbol \theta \ra$ of the evolution operator $H_L$ between some initial and final $N$-fermion states on a circle. For generic $\beta$ the free fermions on a circle get transformed into particles interacting via the trigonometric Calogero potential. The wave functions and the spectrum of the model are exactly known and can be conveniently written down in terms of Jack polynomials. We consider a transition amplitude like $W(\boldsymbol \phi|\boldsymbol \theta)$ and identify the DK transition in this case for the reunion matrix element. Let us emphasize that even in the interacting case we can transform the Fokker-Planck equation into the Schrödinger equation by a self-adjoint change of variables. The potentials in the FP equation and in the Schrödinger equation are related as the ordinary potential $V$ and the “superpotential” $W$: $$V({\bf x})=D(\nabla W({\bf x}))^2-D\,\Delta W({\bf x}) \label{eq:super}$$ Indeed, it is easy to see that the Schrödinger equation $$\cfrac{\pr \psi({\bf x},t)}{\pr t}=\cfrac{1}{2m} \Delta \psi({\bf x},t) - V({\bf x}) \psi({\bf x},t) \label{eq:super2}$$ becomes the FP equation $$\cfrac{\pr P({\bf x},t)}{\pr t}=D \Delta P({\bf x},t) + 2D \sum_i \cfrac{\pr}{\pr x_i} \left(\cfrac{\pr}{\pr x_i} W({\bf x}) P({\bf x},t) \right) \label{eq:super3}$$ upon the substitution $D=\frac{1}{2m}$ and $P({\bf x},t)=\psi({\bf x},t) e^{-W({\bf x})}$. Considering the Jack stochastic process instead of the Schur process for free fermions, we can use the result of [@cardy] and treat the Calogero Hamiltonian as the FP operator for the stochastic radial multiple SLE process in the disc. In this case the radial multiple SLE$_{\beta}$ process in the bulk of the disc corresponds to the Jack process at the disc boundary.
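The superpotential substitution can be verified symbolically. Conventions differ by factors of two between the drift term, the exponent, and the potential; one self-consistent choice (ours, for illustration) takes the FP drift $2D\,\partial_x\!\left(W'P\right)$ together with $P=\psi\,e^{-W}$ and $V=D(W')^2-D\,W''$, and then the two evolutions coincide identically for arbitrary $W$ and $\psi$:

```python
import sympy as sp

x, t, D = sp.symbols('x t D', positive=True)
W = sp.Function('W')(x)
psi = sp.Function('psi')(x, t)

# Fokker-Planck right-hand side acting on P = psi * exp(-W)
P = psi * sp.exp(-W)
fp_rhs = D * sp.diff(P, x, 2) + 2 * D * sp.diff(sp.diff(W, x) * P, x)

# Schrodinger right-hand side with V = D (W')^2 - D W''
V = D * sp.diff(W, x)**2 - D * sp.diff(W, x, 2)
schr_rhs = (D * sp.diff(psi, x, 2) - V * psi) * sp.exp(-W)

# the difference vanishes identically
assert sp.simplify(fp_rhs - schr_rhs) == 0
```

The check is purely algebraic, so it holds for the Calogero superpotential as well as for any other choice of $W$.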
Note that here our discussion of the two stochastic times becomes relevant: we consider the Jack process with respect to the stochastically quantized “holographic” time, which has the meaning of a radial distance in the disc. Recall that the Calogero-Sutherland model can be obtained from the 2D YM theory with a Wilson line inserted along the cylinder in a particular representation, which amounts to the interaction of the holonomy eigenvalues [@gn1; @gn2]. The Calogero-Sutherland potential reads V(\_i-\_j)= \^2 \_[i<j]{} The coupling $\nu=\beta(\beta-1)$ corresponds to a specific choice of the representation (the $\nu$-th power of the fundamental one). It is very well known that the eigenfunctions now are Jack polynomials $J^\beta_{\lam}$: ( [p]{} | )= J\^\_R(e\^[i \_1]{},…,e\^[i \_N]{}) \^(e\^[i ]{}) \[eq:jack\] In (\[eq:jack\]) we have again introduced an auxiliary Young diagram $R$, $n_i=p_i+i$. The energy, $E$, is given by the deformed quadratic Casimir of $R$: E=( )\^2 \_i n\_i(n\_i- 2 i + N +1 ) The Calogero superpotential, according to (\[eq:super\]), becomes: $$W=\gamma \sum_{i<j} \log \sin(\th_i-\th_j), \qquad 2D \gamma(\gamma-1)=\beta(\beta-1) \label{eq:calog-sup}$$ Now it is obvious that the partition function of the $\beta$-deformed YM on a cylinder (\[eq:al\_cyl\]) coincides with the transition amplitude for Calogero particles. The boundary holonomies give the boundary conditions for the particles, and the coupling constant, $g_C^2$, coincides with $t\left(\frac{2 \pi }{L} \right)^2$. Actually, the eigenstates of the Calogero model behave as anyons, see [@poly] for a review. So, it can be said that various deformations of the 2D YM theory are described by free particles with generalized statistics. Also, it is easy to see that the partition function of the $\beta$-deformed YM on $S^2$ corresponds to the reunion probability for all the particles to join at one point.
The DK phase transition in this case happens at the critical values of the parameters, given by ( )\^2 = The Fokker-Planck equation in the dimensionless time, $s$, associated with the Jack process, reads =\_[i=1]{}\^N + \_[i=1]{}\^N (P\_N(s,)) \[eq:fp2\] Eq.(\[eq:fp2\]) is a master equation for the so-called “Dyson Brownian particles” with the Langevin stochastic dynamics ${\bf x}(s)$ in quantized stochastic time, $s$, in the potential (\[eq:calog-sup\]): = - + \_i(s) \[eq:lan\] where $\eta_i(s)$ is an uncorrelated white noise, $\left<\eta_i(s)\eta_j(s')\right>= \delta(s-s')\,\delta_{ij}$. Writing $x_j$ in the form $x_j=e^{i\theta_j}$ and considering the hydrodynamic limit $N\to\infty$, $s\to\infty$ with $\frac{N}{s}=u={\rm const}$, one ends up with the Langevin equation =\_[jk]{} (\_j(s)-\_k(s)) + \_j(s) \[eq:fp3\] ### SLE versus Calogero Now we are in a position to discuss the connection between Dyson random walks in the thermodynamic limit and the multiple SLE stochastic process in the context of the DK transition. It was shown in [@cardy] that the Calogero model is related to the Brownian motion of $N$ interacting particles on the disc boundary and, simultaneously, to the multiple SLE process in which $N$ curves evolve from the boundary at $t=0$ inside the disc towards the origin (the origin is reached at $t\rightarrow \infty$), as is schematically shown in the [Fig.\[fig:sle\]]{}. ![Multiple radial SLE evolution for interacting Brownian trajectories, propagating from the external boundary inside the annulus towards the origin (the origin is reached at infinite time).[]{data-label="fig:sle"}](dk_f02a){width="7cm"} The SLE process can be considered as the evolution of the conformal mapping $g_t(z)$: =\_[j=1]{}\^N \_j(g\_t), \_j(g\_t)= -g\_t This multiple radial SLE evolution for interacting Brownian trajectories, propagating from the external boundary inside the annulus, is shown in the [Fig.\[fig:sle\]]{}.
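The repulsive character of the Dyson drift is easy to see numerically. In the zero-noise limit the angles obey $\dot\theta_j \propto \sum_{k\neq j}\cot\frac{\theta_j-\theta_k}{2}$ (a standard form of the Dyson flow on the circle; the prefactor below is an arbitrary choice for illustration), and an initially clustered configuration spreads out while the ordering of the particles is preserved:

```python
import numpy as np

def dyson_drift_step(theta, dt=1e-3, beta=2.0):
    """One explicit Euler step of the noiseless Dyson flow on the circle."""
    n = len(theta)
    drift = np.zeros(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                # periodic pairwise repulsion: cot of half the angle difference
                drift[j] += 0.5 * beta / np.tan(0.5 * (theta[j] - theta[k]))
    return theta + dt * drift

# start from a tight cluster of 6 particles on the circle
theta = np.linspace(0.0, 1.0, 6)
gaps = []
for _ in range(2000):
    theta = dyson_drift_step(theta)
    gaps.append(np.min(np.diff(theta)))
```

The minimal gap grows from its initial value as the configuration relaxes towards the equally spaced (equilibrium) distribution of eigenvalue angles.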
The corresponding Langevin equation reads = -\_[ij]{} (\_i(s) -\_j(s)) + \_j(s) \[eq:sle\] where $\eta_j(s)$ again describes the non-interacting Brownian dynamics of the $j$th particle ($1\le j\le N$). Since (\[eq:sle\]) and (\[eq:fp3\]) coincide, it was pointed out in [@cardy] that the radial SLE, the Calogero-Jack stochastic dynamics, and the Dyson gas dynamics for the Gaussian Unitary Ensemble in the hydrodynamic limit coincide, being different incarnations of the same process. The SLE time asymptotically behaves as $\log r$, where $r$ is the radius inside the annulus. It is also useful to discuss briefly this relation from the CFT perspective. The solution to the FP equation for the Jack process, “stationary” in the radial stochastic time, was identified [@cardy] with the conformal block where $\phi_{2,1}$ is the field in the CFT with $c=1-\frac{6(4-k)^2}{4k}$. For the Calogero coupling constant one has $\beta = \frac{8}{k}$. The energy level of the quantum Calogero FP Hamiltonian, $E$, corresponds to the conformal dimension of the particular operator $\Phi$ in the center of the disc, $$h_\Phi + \bar{h}_\Phi = E + {\rm const}$$ while the total momentum, $P$, in the Calogero stochastic model corresponds to the spin of the operator, $$h_\Phi - \bar{h}_\Phi = P$$ This presumably can be viewed from the holographic perspective, in which the conformal block in the boundary theory can be interpreted in terms of the $AdS_3$ bulk gravity. It would be interesting to extend this correspondence to the quantum $N$-body elliptic Calogero model. It is known that its wave functions (i.e. the probability from the FP viewpoint) coincide with the conformal block on the marked torus with insertions of $N$ operators $\phi_{2,1}$. From the holographic perspective this conformal block, at least in the particular semiclassical light-heavy limit, corresponds to the insertion of an angle-deficit state [@belavin], amounting to a kind of horizon in the bulk at finite stochastic time.
It should be emphasized that the DK phase transition discussed above is meaningful for the Calogero model only at a large number of particles $N$. Therefore, it would be interesting to map this phase transition onto the behavior of the radial SLE at a large number of evolving curves. For instance, if we consider the reunion probability in the Calogero FP Hamiltonian, it would mean that all curves start at the same boundary point and have the same angular coordinate at the reunion time. The straightforward physical meaning of the DK transition in SLE$_k$ is somewhat hidden; however, we can provide an intuitive interpretation in terms of the “orthogonality catastrophe” discussed above. It is also interesting to understand in detail the DK transition in terms of the $AdS_3$ gravity, having in mind the holographic approach. CS theory and Ruijsenaars-Schneider model ----------------------------------------- The Chern-Simons (CS) theory admits a fermionic representation which underlies the vicious walkers picture. The only difference is that the fermions in the CS case have a relativistic dispersion law, since the momenta live on a circle in this case. However, there are dualities which transfer the periodicity in momentum space into a periodicity in coordinate space. The partition function of the topological CS theory on $S^3$ corresponds to walkers on a periodic line, with extremities identically distributed within the segment $g_s$. There is no phase transition in this case. When $g_s\to 0$, the topological CS theory reduces to the topological 2D YM theory. On the VW side the initial and final distributions reduce to a point.
This chain of transformations is summarized in the following flowchart: ![Connection between CS, YM and VW for different distributions of extremities.](dk_f03){width="13cm"} When we switch on the nontrivial Hamiltonian in 2D YM, or in $q$-YM, the partition function becomes area–dependent, and the phase transition takes place at the critical point. On the VW side, the evolution goes from the cylinder with flat boundary distributions for the $q$-YM to point-like distributions for the pure YM case. For $p=\theta=0$, Eq. [(\[eq:dim\_q\])]{} coincides with the partition function of the CS theory on $\Sigma\times S^1$, while at $p\neq 0$ this equation matches the CS theory on the $S^1$–bundle over $\Sigma$. For $\Sigma=S^2$ this manifold is the lens space $S^3/Z_p$. In the CS theory the string coupling reads as g\_s=. Upon the Poisson resummation, the partition function for the sphere can be written in the following form Z\^[qYM]{}(g\_s, )=C \_[w,w’ W]{} (w)(w’) \_[[**n**]{}]{} \[eq:instanton\] where $W$ is the Weyl group. The insertion of two Wilson loops into the CS theory yields the trigonometric Ruijsenaars-Schneider (RS) system [@gn2]. The phase space of this system gets identified with the gauge connections on the torus with a marked point, where the Wilson line in a particular representation is inserted. In this case, the eigenstates are given by the Macdonald polynomials $M_R(x_1,\dots,x_N;q,t)$.
They can be diagonalized via the so-called Macdonald-Ruijsenaars operators, $\hat D_N^r, r=0,\dots,N$: D\^r\_N M\_R = D\^r\_N(R) M\_R The eigenvalues are neatly described by the following generating function: \_r X\^r D\^r\_N(R) = \_[i=1]{}\^[l(R)]{} (1-X q\^[n\_i]{} t\^[i-1]{}) The value of the Hamiltonian for a particular $M_R$ is given by: H=\_i ( q\^[n\_i]{} t\^[i-1]{} + q\^[-n\_i]{} t\^[1-i]{} ) Comparing these expressions with the 2D $(q,t)$-YM partition function on a cylinder (\[eq:qt\_cyl\]), we face the following problem: the expression $q^{\|R\|^2/2} t^{(\rho,R)}$ does not coincide with any of the RS Hamiltonians. Therefore, we have to study the evolution generated by this unusual Hamiltonian. Apart from this subtlety, we can again map the $(q,t)$-deformed YM onto the RS system. The combination of the area and the coupling constant, $p=Ag^2$, on the YM side is mapped to the time on the RS side. The $(q,t)$-deformed YM partition function on $S^2$ corresponds again to the reunion probability. Recalling Section \[sec:qt\], we conclude that the phase transition always occurs at time $t=p$ ($p>2$): \_2 t=-t\^2 \^2 \[eq:q-t2\] – compare to (\[eq:q-t\]). DK transition and duality ========================= From circle to line with harmonic trap. Fourth-order Hamiltonian. ---------------------------------------------------------------- So far we have considered the system of fermions or anyons on the circle, where the DK transition has been physically interpreted as a kind of orthogonality catastrophe for the out-of-equilibrium initial and final states. It is useful to exploit the duality formulated in [@nekrasov97], which relates the trigonometric Calogero model on a circle of radius $R$ to the rational Calogero model on an infinite line in a harmonic trap.
Consider two Hamiltonians for the $N$-particle system: on a circle ($H_{I,2}$), and on a line with an additional parabolic well ($H_{II,2}$), where: $$H_{I,2}= \sum_{i=1}^N \frac{p_i^2}{2} + \sum_{i\neq j}^N \frac{g^2}{4R^2\sin^2\frac{q_i-q_j}{2R}}, \qquad H_{II,2}= \sum_{i=1}^N \frac{p_i^2}{2} + \sum_{i\neq j}^N \frac{g^2}{(x_i-x_j)^2} + \omega^2 \sum_i x_i^2 \label{eq:nek}$$ The coefficient, $\omega$, in the parabolic potential is related to the radius, $R$, of the circle as $\omega=\frac{1}{R}$. It was proved that the coupling constants in the two systems coincide, i.e. $g_{I}=g_{II}$; therefore, if we choose, say, the coupling $g=1$ corresponding to the free fermions in one system, we obtain the free fermions in the second system as well. In what follows we consider just this case. Both systems have a group-theoretic origin and admit an interpretation as constrained free dynamics on group-like manifolds. The mapping between the corresponding phase spaces can be formulated in terms of the polar decomposition of the group elements. The phase space of system I is $T^*G\times C^N$, subject to the moment map constraint = p -g\^[-1]{}pg - v Choosing the diagonal matrix $g= \exp(\frac{1}{R}\, {\rm diag}(q_1,\dots q_N))$, the matrix $p$ gets identified with the Lax matrix for the system on the circle. System II has the phase space $T^*g\times C^N$, subject to the moment map constraint = \[P,Q\] - v The matrix $Q$ is diagonal, and introducing $Z = P +iQ$, the map between the two systems goes as follows: Z=\^[1/2]{} p\^[1/2]{} g p = Z Z\^[+]{} The enumeration of the Hamiltonians in the two systems gets shifted as follows [@nekrasov97]: H\_[II,2]{}= H\_[I,1]{} = P\_[I]{},H\_[I,2]{}= H\_[II,4]{} \[eq:nek2\] where $H_{I,k}$, $H_{II,k}$ and $P_{I}$, $P_{II}$ are, correspondingly, the $k$-th Hamiltonians and the total momenta of systems $I$ and $II$.
That is, the time and coordinate directions in the two systems are different, and the mapping of the evolution operators reads $$U_I = \exp\left(it_{I,1}H_{I,1} + it_{I,2}H_{I,2}\right), \qquad U_{II} = \exp\left(it_{II,2}H_{II,2} + it_{II,4}H_{II,4}\right)$$ Having both nontrivial “times” $t_1,t_2$ in system I, we get the non-vanishing “times” $t_2,t_4$ for the quadratic and quartic Hamiltonians in system II. This means that in system II we have the fermions perturbed by a Hamiltonian of fourth order in coordinates and momenta. With this duality in mind, we may ask about a possible DK-like transition for the system on the line in a harmonic trap. How could it be formulated, and what is the meaning of the winding on the line and of the winding-dominated phase? Naively, there is no evident place for winding in the harmonic potential on a line at all. For the system on a circle the winding is defined as $N_I=\int \dot{q}\,dt_{I,2}$; hence, in the dual system we have to consider the evolution of some periodic variable with respect to $t_{II,4}$. It is natural to choose the periodic angular variable on the phase plane. Let us emphasize that the “time” $t_{II,4}$ plays the role of the perturbation of the harmonic trap potential; therefore, the flow in the parameter space has to be considered. Fortunately, these subtle points fit together to provide a natural candidate for the winding number in the type II system. Recall that in Hamiltonian dynamics there is the so-called Hannay angle, which describes the nontrivial bundle of the angle variable over the parameter space. The action-angle variables can acquire a nontrivial holonomy when we move in the parameter space. It is the semiclassical counterpart of the Berry connection known in the quantum case. Typically, we have a nontrivial Hannay connection when Lagrangian submanifolds intersect in the phase space.
This is exactly what we have: the natural winding in the perturbed harmonic trap is $$N_{II}= \int \dot{\alpha}\; dt_{II,4}$$ where $\alpha$ is the angular variable on the phase space in system II. We presented this argument for a single degree of freedom; however, it seems to work for the multi-particle case as well. To suggest an interpretation of the DK transition, consider first the case of one degree of freedom. The condition that we start at a fixed value of $q$ in system I means that we consider the fixed-angle condition in system II, since for the case of fermions without interaction the momentum and coordinate in system I get mapped precisely into the action and angle in system II. Now switch on $t_{II,4}$ and follow the trajectory of the point in the phase space as a function of $t_{II,4}$. The “reunion” process in system I gets mapped to the process in system II in which the trajectory of the point reaches the same angle coordinate after the evolution in the parameter space. It is clear that the trajectory can wind around the origin, which yields some winding number. Certainly, in the quantum formulation of this problem we have some distribution of the winding numbers, and the very issue can be thought of as the investigation of the “orthogonality catastrophe” under the “evolution in the parameter space”. To some extent, it can be thought of as an example of RG dynamics. We have illustrated above the very meaning of the reunion process and the winding number in system II for the one-body problem, where the DK transition is absent. In the large-$N$ limit one can use the hydrodynamic approximation, and the system of free fermions on the line is described by the forced Hopf equation [@abanov3]. There is a natural Hall-like droplet before switching on the $t_{II,4}$ perturbation. The droplet shape evolves under the perturbation, and we can pose the “reunion” problem for the droplet state. The natural candidate reads as follows.
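The winding of the phase-space angle can be made concrete for one degree of freedom. For a harmonic oscillator weakly perturbed by a quartic term (an illustrative stand-in for the $t_{II,4}$ perturbation, with ad hoc parameters), the angle $\alpha=\arg(x+ip)$ accumulates close to one turn per period $2\pi$, and the perturbation shifts the winding rate:

```python
import math

def winding(lmbda=0.1, dt=1e-3, t_total=6 * math.pi):
    """Integrate x'' = -x - lmbda*x^3 with symplectic Euler and count
    windings of the phase-space angle arg(x + i p)."""
    x, p = 1.0, 0.0
    alpha = math.atan2(p, x)
    total = 0.0
    for _ in range(int(t_total / dt)):
        p += -dt * (x + lmbda * x**3)
        x += dt * p
        new_alpha = math.atan2(p, x)
        d = new_alpha - alpha
        # unwrap the jump across the branch cut of atan2
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        alpha = new_alpha
    return abs(total) / (2 * math.pi)
```

Over a time $6\pi$ the unperturbed oscillator winds three times, while a positive quartic perturbation stiffens the trap and increases the winding, which is the one-body shadow of the winding number $N_{II}$ discussed above.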
We fix some angle in the phase plane at the initial point and ask whether, after some time interval in $t_{II,4}$, the droplet will develop two supports at the fixed angle. The DK transition would mean that the sum of the individual windings of the fermions along the flow in the parameter space does not vanish in the strong coupling case. This picture strongly resembles the emergence of the two-cut support in the spectral density in the matrix model discussed in the related framework in [@gopakumar]. Certainly, this issue deserves further investigation. Vicious walks on a line in a harmonic trap. Ground state. --------------------------------------------------------- In this subsection we formulate one more way to get a DK-like phase transition in the system of fermions in a harmonic trap. The straightforward view on vicious walks on a line is as follows. Consider the system of $N$ fermion worldlines (vicious walkers) on a line in a parabolic confining potential $U(x_1,...x_N)=\omega^2\sum_{j=1}^N x_j^2$. The trajectories start from the configuration $\mathbf{x}(0)=(x_1(0),...,x_N(0))$ and arrive after time $t$ at the configuration $\mathbf{x}(t)=(x_1(t),...,x_N(t))$.
We are interested in the reunion probability, $P(t,\mathbf{x}(0),\mathbf{x}(t))$, which can be explicitly written as $$P(t,\mathbf{x}(0),\mathbf{x}(t)) = \sum_{\mathbf{n}} \Psi_N(\mathbf{n}|\mathbf{x}(0))\,\Psi^*_N(\mathbf{n}|\mathbf{x}(t))\, e^{-tE(\mathbf{n})} \qquad \[sw:01\]$$ where $\Psi_N(\mathbf{n}|\mathbf{x}(t))$ is a Slater determinant $$\Psi_N(\mathbf{n}|\mathbf{x}(t)) = \frac{1}{\sqrt{N!}}\,\det_{1\le j,k\le N} \psi(n_j|x_k(t)) \qquad \[sw:02\]$$ and $\psi(n|x)$ is a single-particle solution of the diffusion equation in a parabolic confining potential: $$-\frac{1}{2}\, \frac{\partial^2 \psi(n|x)}{\partial x^2} + \omega^2 x^2\, \psi(n|x) = E_n\, \psi(n|x) \qquad \[sw:03\]$$ Solving (\[sw:03\]), one gets $$\psi(n|x) = H_n(x)\, e^{-\omega^2 x^2/2} \qquad \[sw:04\]$$ where $H_n(x)$ are the Hermite orthogonal polynomials, which satisfy the three-term recurrence relation $$H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x) \qquad \[sw:05\]$$ The explicit expressions for the eigenvalue, $E_{{\bf n}}$, and the corresponding eigenfunction, $\Psi_N(\mathbf{n}|\mathbf{x}(t))$, in (\[sw:02\]) are $$\left\{ \begin{array}{l} E_{\mathbf{n}}\equiv E_{n_1,...,n_N}=\omega\sum_{k=1}^{N} n_k \\[6pt] \Psi_N(\mathbf{n}|\mathbf{x}(t)) = \frac{1}{\sqrt{N!}} \left|\begin{array}{cccc} H_{n_1}(x_1) & H_{n_1}(x_2) & ... & H_{n_1}(x_N)\\ H_{n_2}(x_1) & H_{n_2}(x_2) & ... & H_{n_2}(x_N)\\ \vdots & \vdots & & \vdots \\ H_{n_N}(x_1) & H_{n_N}(x_2) & ... & H_{n_N}(x_N) \end{array}\right| e^{-\frac{\omega^2}{2}\sum_{n=1}^N x^2_n(t)} \end{array} \right. \qquad \[sw:slat\]$$ Substituting (\[sw:slat\]) into (\[sw:01\]), we get the expression for the reunion probability, which simplifies essentially in the limit $t\to\infty$ because in this limit only the ground state wavefunction with ${\bf n}_{gr}=(n_1=0,n_2=1,...,n_N=N-1)$ survives in the sum (\[sw:01\]). Thus, in the limit $t\to\infty$ one has for the reunion probability $$\begin{gathered} \Psi_N(\mathbf{x}) \equiv \lim_{t\to\infty} P(t,\mathbf{x},\mathbf{x}) \propto \Psi_N({\bf n}_{gr}|\mathbf{x})\; \Psi_N^*({\bf n}_{gr}|\mathbf{x}) \propto \left|\begin{array}{cccc} 1 & 1 & ... & 1 \\ x_1 & x_2 & ... & x_N \\ \vdots & \vdots & & \vdots \\ x^{N-1}_1 & x^{N-1}_2 & ... & x^{N-1}_N \end{array}\right|^2 e^{-\omega^2\sum\limits_{n=1}^N x^2_n} \\ = \prod_{k>j} \big(x_k-x_j\big)^2 e^{-\omega^2\sum\limits_{n=1}^N x^2_n} \label{sw:06}\end{gathered}$$ Now we can ask a question about the "conditional reunion probability", $\Psi_N(x_1<x_2<...<x_N<L)$, where $L$ is the location of the upper boundary for the topmost fermionic path - see the [Fig.\[fig:line\]]{}. The bunch of $N$ fermion worldlines lives in the quadratic potential well, $U(x_1,...x_N)=\omega^2\sum_{j=1}^N x_j^2$, schematically shown by the gradient color in the [Fig.\[fig:line\]]{}, which prevents the fermion paths from escaping far from the region around $x=0$. So, one can say that the fermion trajectories are "softly" supported from below, while "rigidly" bounded from above. ![The bunch of long fermion paths on an infinite support in a quadratic potential. The topmost line is bounded from above at the distance $L$ from the origin.[]{data-label="fig:line"}](dk_line){width="10cm"} The critical behavior in this system is formally identical to the one in the bunch of fermion paths of finite length under the constraint $x_1<x_2<...<x_N<L$ on the position of the upmost line.
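A quick numerical confirmation (our own sketch, not from the text) of the reduction used in (\[sw:06\]): since $H_j$ has leading coefficient $2^j$, column operations collapse the determinant of the lowest Hermite polynomials to the Vandermonde product.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_slater_det(xs):
    """det[H_j(x_k)], j = 0..N-1: the polynomial part of the ground-state Slater determinant."""
    N = len(xs)
    M = np.array([[hermval(x, [0.0] * j + [1.0]) for x in xs] for j in range(N)])
    return np.linalg.det(M)

xs = np.array([-1.3, -0.2, 0.7, 2.1])
N = len(xs)
vandermonde = np.prod([xs[k] - xs[j] for j in range(N) for k in range(j + 1, N)])
# column reduction gives det[H_j(x_k)] = 2^{N(N-1)/2} * prod_{k>j} (x_k - x_j)
assert np.isclose(hermite_slater_det(xs), 2 ** (N * (N - 1) // 2) * vandermonde)
```

Squaring this identity and attaching the Gaussian weight is exactly what produces the measure in (\[sw:06\]).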
Comparing (\[sw:06\]) and (\[eq:reunion2\]), and taking into account that for the VW ensemble the transition occurs at the point given by (\[eq:crit\]), we can conclude that the DK transition point in the harmonic potential on the line in the limit $t\to\infty$ is defined by the relation $$\frac{\omega^2 L^2}{2N} =1 \qquad \[eq:crit2\]$$

2D Yang-Mills theory and vicious walks in the hydrodynamic approach
===================================================================

Complex Hopf equation
---------------------

So far we were concerned mainly with the asymptotic behavior of the normalized partition function of a 2D Yang-Mills theory on a cylinder, sphere or a disc with the $U(N)$ gauge group, or equivalently, of a reunion probability of $N$ one-dimensional directed vicious walks on a periodic lattice with a specific out-of-equilibrium distribution of extremities. We have shown that the meaning of the DK phase transition is different in the Yang-Mills and in the vicious walks terms: for 2D YM theory the DK transition is the transition between the strong and weak coupling regimes, while for the VW and SLE the DK transition can be interpreted as a kind of the "orthogonality catastrophe" [@abanov2], commented on below. The duality between "quantum mechanical" (Schrödinger) and "probabilistic" (Fokker-Planck) descriptions of the reunion probability, $P(t,\mathbf{x})\equiv P(t,x_1,...x_N)$, enables us to study the DK transition not only in terms of the normalized partition function of the system, but also in terms of correlation functions. Below we pay the most attention to the one-point correlation function, $p(t,x)$, defined as follows: $$p(t,x)=N^{-1}\int \prod_{j=1}^N dx_j\; P(t,x_1,...x_N)\sum_{j=1}^N \delta(x-x_j)$$ The one-point density can be expressed via the resolvent.
Supposing $x$ to be complex and performing the Hilbert transform, $$u(z,t)=\oint dx\, \frac{p(x,t)}{z-x}$$ where $p(x,t)$ is the one-point correlation function of the ensemble (\[eq:reunion2\]), we arrive in the thermodynamic limit $N\to\infty$ at the complex-valued Burgers-Hopf equation for the function $u(z,t)$ (see [@grm; @nowak1]) $$\partial_t u(z,t) + u(z,t)\,\partial_z u(z,t)=0 \qquad \[eq:burg\]$$ One can split the function $u(z,t)$ into real and imaginary parts, $u(z,t)= i\pi\rho(z,t) + v(z,t)$, where $\rho(x,t)$ is the density, and $v(x,t)$ is determined as $v(x,t)=\frac{\partial \Pi}{\partial x}$, where $\Pi(x,t)$ is the conjugate momentum: $$\{\rho(x,t), \Pi(y,t)\} = \delta(x-y) \qquad \[eq:complex\]$$ The equation (\[eq:complex\]) can be obtained from the collective field Hamiltonian of Das-Jevicki type [@jevicki] $$H(\rho, \Pi) = \int dx\, \rho(x)\left[\frac{1}{2}\big(\partial_x \Pi\big)^2 + \frac{\pi^2}{6}\,\rho^2(x)\right] \qquad \[eq:das\]$$ Still there is no viscosity in (\[eq:burg\])–(\[eq:das\]); however, below we comment on its role as the effect of $\frac{1}{N}$ corrections. What kind of critical behavior is known for the Hopf equation? The most famous pattern is the so-called "overturn" phenomenon. A smooth initial configuration in the Burgers-Hopf equation gets overturned at some finite time *in free space* and the corresponding solution becomes multivalued (see [@khanin] for a review). However, another critical pattern can occur when considering the Hopf equation *on a circle*. The initial condition, localized at some initial point, splits into two parts which spread along the circle in opposite directions as time evolves. At a finite time moment the two fronts, moving towards each other, collide somewhere. Such critical behavior is known in the hydrodynamic description of two Bose-Einstein condensates (BEC). There are several phenomena which can be registered at a collision of two condensates, including the creation of quantum shock waves and "vortex loops", as well as the so-called "soliton trains".
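The "overturn" time of the inviscid Hopf equation can be made concrete with the method of characteristics; this is our own minimal sketch, not a computation from the text. Along a characteristic $x(t)=x_0+u_0(x_0)t$ the profile is constant, so neighboring characteristics first cross at $t^*=-1/\min_x u_0'(x)$.

```python
import numpy as np

# For u_t + u u_x = 0 with initial profile u0(x) = -sin(x),
# the overturn (shock formation) time is t* = -1/min(u0') = 1.
x0 = np.linspace(0, 2 * np.pi, 100001)
u0 = -np.sin(x0)
du0 = np.gradient(u0, x0)        # numerical u0'(x)
t_star = -1.0 / du0.min()
print(t_star)                    # approx 1.0, up to discretization error
```

On the circle the same mechanism operates, but with a localized initial condition the two counter-propagating fronts can collide before any overturn, which is the pattern relevant for the DK transition.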
Let us note that the DK transition on an infinite line in a harmonic trap considered in the previous section cannot be interpreted as a "gap closure", and only the "overturn" phenomenon is expected at the critical point. The collision of fronts has been interpreted as the DK phase transition in [@grm], see also [@nowak1]. It was identified as follows. The density of the fluid $\rho(x)$ in the hydrodynamic description is dual to the saddle-point Young diagram $\rho_{Y}(x)$, $$\rho_{Y}(-\pi\rho_0(x))=x$$ where $\rho_0(x)$ is the solution to the equation of motion in the collective field theory taken at a middle time. In terms of the Young diagram, the DK transition corresponds to the emergence of the plateau region and the two-cut elliptic solution. It was argued in [@grm] that such a behavior can be interpreted as the "gap closure" in the fluid density on the circle at strong coupling, meaning that at some finite time the whole circle becomes filled by the fluid of the strong coupling phase. The time of the "gap closure" corresponds to the DK transition point. It is worth interpreting this phenomenon in terms of vicious Brownian walks, aka world lines of free fermions. In the case of free fermions on the circle, the critical behavior is formulated as follows. We start with a properly prepared initial configuration and ask the following question: what is the probability to reach the specific final state at some given time? It is this probability which enjoys the critical behavior in the "head-on collision" regime. This criticality is the manifestation of a sort of "incommensurability" between the distribution of extremities and the parameters of the system (world line length, diffusion coefficient, size of the lattice). Certainly, there is no such critical behavior at finite $N$; it is a purely collective effect emerging at $N\to \infty$. A related problem has been discussed in [@abanov2], where this formulation has been considered in the context of the "orthogonality catastrophe".
To explain it in hydrodynamic terms, it is convenient to use the fermionic language for the integrable hierarchies. The generic $\tau$-function admits a representation as a matrix element of fermionic operators $$\tau_n(t_k) = \langle n|\, e^{\sum_k t_k J_k}\, g\, |n\rangle, \qquad g = \exp\Big(\sum_{p,q} A_{p,q}\, \psi_p \psi^*_q\Big)$$ where the coherent state is parametrized by the $A_{p,q}$. There are several different representations of $\tau$-functions; however, in any case it is just some fermionic matrix element where the chiral fermions live on the punctured sphere. This formulation fits the problem under consideration. Indeed, we have matrix elements when the number of fermions is the same in the initial and final states and the fermionic initial and final states are specially prepared. It is convenient to consider the semiclassical limit, where once again we get the Burgers-Hopf equation for the dependence of the matrix element on the "time" variables [@abanov2]. In the quantum case the shock wave in the Hopf equation gets substituted by the quantum shock wave and the semiclassical description does not work in the critical region. If we consider the Calogero case, where the fermions get substituted by particles with anyonic statistics, the equation of motion in the collective field theory gets generalized to the Benjamin-Ono equation [@abanov1] $$\partial_t u = u\,\partial_z u + \frac{\beta}{2}\,\partial_z^2 u^{H}$$ where $u^{H}(t,z)$ is the Hilbert transform $$u^{H}(t,z)= \frac{1}{\pi}\,\mathrm{P}\!\int dz'\, \frac{u(t,z')}{z-z'}$$ Once again, there is a kind of "head-on collision" phenomenon; however, the interaction amounts to another "resolution of the singularity": instead of entering into the pure quantum region producing quantum shock waves, the system remains semiclassical and a new solution, the "solitonic train", is developed. Moreover, the critical value of the time at which the regime gets changed is finite. As we have seen above, the critical time in the Calogero case is the free one divided by the Calogero coupling constant, $t_{crit}^{(\beta)} = \beta^{-1} t_{crit}$. Hence, we can talk about the critical behavior in the coupling constant.
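The Hilbert transform entering the Benjamin-Ono equation is easy to realize numerically; this is our own sketch, not a construction from the text. On a $2\pi$-periodic grid it acts in Fourier space as multiplication by $-i\,\mathrm{sign}(k)$, and maps $\cos x \mapsto \sin x$.

```python
import numpy as np

def hilbert_periodic(f):
    """Periodic Hilbert transform via the Fourier multiplier -i*sign(k)."""
    fk = np.fft.fft(f)
    k = np.fft.fftfreq(len(f), d=1.0 / len(f))   # integer mode numbers
    return np.real(np.fft.ifft(-1j * np.sign(k) * fk))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
uH = hilbert_periodic(np.cos(x))
assert np.allclose(uH, np.sin(x), atol=1e-10)
```

The nonlocal term built this way is what regularizes the head-on collision in the Calogero case, replacing quantum shock waves by solitonic trains.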
It is this formulation as criticality in the coupling constant which is used in the similar situation for the nonlinear Schrödinger equation. The most difficult question concerns the interpretation of the strong coupling phase driven by the instantons. Since here we stay within the framework of the hydrodynamic description, the proper pattern for the instanton-driven phase has to be recognized. There are several conjectures concerning the strong coupling phase of the hydrodynamic description in the literature. It was suggested in [@nowak1] that a kind of turbulent behavior is developed, with inverse cascades. A bit different picture has been mentioned in [@nowak2], where the strong coupling phase was related to the formation of a kind of chiral condensate breaking the global $U(1)$ symmetry. We suggest in Section 6 that a kind of superfluid component is created upon the collision of two shocks (i.e. in the gap closure). This conjecture is based on the relation between the superfluid density and the winding distribution. Let us emphasize that we mean the so-called non-equilibrium superfluidity, which emerges in a state out of equilibrium. In our case we start with a very specific state confined in a narrow region. When the system gets released, we arrive at a very specific head-on collision phenomenon which forces the creation of the superfluid component. Nevertheless, the term "superfluidity" we exploit below should be used with some caution.

Hydrodynamic limit of VW with $N^{-1}$-corrections and generating function of area-preserving Brownian Excursion
---------------------------------------------------------------------------------------------------------------

It is useful to compare the effect of viscosity on both sides of the RW-YM correspondence. In 2D YM the viscosity has been identified as the effect of $1/N$ corrections in the collective field theory [@neuberger], $\nu = \frac{1}{N}$.
Let us argue that the viscosity can be considered as the $\frac{1}{N}$ effect on the RW side as well. To this aim it is convenient to consider the asymptotics [(\[eq:asymp1\])]{} which describes the scaling of the top line in a bunch of directed vicious walks. Proceeding as in [@spohn-pr-fer], take the ensemble of $N$ vicious walkers, define the averaged position of the top line and consider its fluctuations near the averaged position. In such a description all the vicious walkers lying below the top line play the role of a "mean field", which pushes the top line to some "atypical" equilibrium position, around which it fluctuates. This point of view is schematically illustrated by the [Fig.\[fig:area\]]{}, where in [Fig.\[fig:area\]]{}a we have a bunch of $N$ vicious walkers of length $t$ fluctuating within the strip of width $L$, while in [Fig.\[fig:area\]]{}b we have a single random walk of the same length $t$ with fixed area $A$ under the curve, fluctuating within the strip of width $L$. ![a) Reunion of a bunch of $N$ random walks of lengths $t$ each within a strip of width $L$, b) Probability of a Brownian excursion of length $t$ with fixed area $A$ measured in terms of filled plaquettes along the dotted lines under the curve within the strip of width $L$.[]{data-label="fig:area"}](dkaz_area){width="15cm"} It is natural to suppose that the fluctuations of the top line in the mean-field approximation have the same scaling as the fluctuations of the "inflated" Brownian excursion with fixed area under the path. One can actually show this following the line of reasoning of [@nowak1]. The solution of the inviscid Burgers equation (\[eq:burg\]) is $u_0(x,t=N) = \frac{x}{2N} \pm \frac{\sqrt{x^2-4N}}{2N}$ and gives the Wigner semicircle law centered at the point $\frac{x}{2N}$. One can smear the function $u_0(x,t)$ near the boundary value, $x=x_c = \pm 2\sqrt{N}$, adding by hand the Gaussian fluctuations, i.e.
passing to the Burgers equation with a weak negative diffusivity ($0<\nu \ll 1$), $$\partial_t u(x,t) + u(x,t)\,\partial_x u(x,t) = -\nu\, \partial^2_x u(x,t)$$ It is known that negative diffusivity is responsible for turbulence [@burgers]. Seeking weakly perturbed solutions of the Burgers equation near the top line, ($t=N$), in the form [@nowak1] $$\left\{ \begin{array}{ll} x=x_c + \nu^{\alpha}\, y = 2t^{1/2}+ \nu^{\alpha}\, y;\\ u(x,t) = \frac{x_c}{2t} + \nu^{\beta}\, w(y,t)= t^{-1/2}+\nu^{\beta}\, w(y,t) \end{array} \right.$$ and substituting the ansatz for $u(x,t)$ into the viscous Burgers equation, one gets the equation for $w(y,t)$, which for $\alpha=2/3$, $\beta=1/3$ and appropriate boundary conditions is transformed in the limit $\nu\to 0$ into the dimensionless Riccati equation [@nowak1] $$-yt^{-3/2}+ w^2 + \partial_y w=0, \qquad \[ricatti\]$$ having the solution (for $t=N$) $$w(z) = 2^{1/3}\,\frac{\mathrm{Ai}'(2^{-1/3}z)}{\mathrm{Ai}(2^{-1/3}z)}, \qquad z=yN^{-1/2}. \qquad \[ricatti2\]$$ One can recognize in [(\[ricatti2\])]{} the singular part of the grand partition function of the "area+length"-weighted Brownian excursions (Dyck paths). Thus, one can straightforwardly identify $Z(s,q)$ with $u(y,N)=N^{-1/2}+\nu^{1/3} w(y,N)$ under the following redefinitions: $$\left\{ \begin{array}{rcl} 1-q & \longleftrightarrow & \dots,\\ -s & \longleftrightarrow & 2^{-7/3}\,y\,N^{1/6}. \end{array} \right.$$ where $q$ is the fugacity of the area $A$ and $s$ is the fugacity for the length, $t$. The area fugacity defines the mean area under the trajectory, which is fixed by the total number of walkers. Indeed, the viscosity involved has the $1/N$ scaling behavior. The connection between the bunch of vicious walks in the strip and Dyck paths with fixed area in the strip is considered in Appendix B. We outline there the computation of the partition function $Z_{2n}(A)$ of a $2n$-step directed random walk with fixed area $A$ in the strip of width $L$ on the square lattice tilted by $\pi/4$, as shown in the [Fig.\[fig:area\]]{}.
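The Riccati structure above is linearized by the Airy function: with factors of $2$ absorbed into a rescaling of the variable, $w=\mathrm{Ai}'(z)/\mathrm{Ai}(z)$ turns $\partial_z w + w^2 = z$ into the Airy equation $\mathrm{Ai}''(z)=z\,\mathrm{Ai}(z)$, which is the origin of the Airy asymptotics quoted throughout. A numerical check of ours:

```python
import numpy as np
from scipy.special import airy

# Verify that w(z) = Ai'(z)/Ai(z) satisfies the Riccati equation w' + w^2 = z:
# w' = Ai''/Ai - (Ai'/Ai)^2 = z - w^2, using Ai'' = z*Ai.
z = np.linspace(-1.0, 1.0, 20001)   # interval free of zeros of Ai
Ai, Aip, _, _ = airy(z)
w = Aip / Ai
wprime = np.gradient(w, z)          # numerical derivative of w
residual = np.max(np.abs(wprime + w**2 - z))
assert residual < 1e-3
```

The same logarithmic derivative of Airy is what shows up as the singular part of the grand partition function of area-weighted excursions.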
DK transition for black hole partition function
===============================================

In this Section, using the relation between the magnetically charged extremal BH and 2D $q$-YM theory, we will make a conjecture concerning the meaning of the DK phase transition on the BH side and the Brownian stochastic dynamics of the branes representing the magnetic charges of the BH.

BH partition function in mixed representation
---------------------------------------------

Consider the partition function of the extremal $N=2$ black hole with the magnetic, $p_i$, and electric, $q_i$, charges. The extremal charged SUSY BH in four dimensions is created by $N$ magnetic $D4$-branes wrapped around the non-compact four-manifold $C_4 = O(-p) \rightarrow CP^1$ in the internal non-compact CY space $M= O(p-2) \times O(-p) \rightarrow CP^1$. The moduli of the extremal BH solution are fixed by the charges via the attractor equations. The topological string on the manifold $M$ engineers the $U(1)$ $N=1$ SYM theory on $R^4\times S^1$, and the parameter $p$ in the CY geometry corresponds to the coefficient in front of the $5D$ Chern-Simons term. The Kähler modulus of $CP^1$ corresponds to the $5D$ gauge coupling constant. The non-compact $D4$ brane corresponds, from the 5D gauge theory viewpoint, to a local infinitely heavy particle in $R^4$. $D2$ branes wrapped around $CP^1$ correspond to the gauge theory instantons which carry the electric charges $Q_E=pQ_{top}$ due to the Witten effect. Define the entropy as a function of the charges, $S\{p_i,q_i\}$, which counts the multiplicity of the bound states of electric $D0$ and $D2$ branes on the $D4$ brane. Formally we add the observables corresponding to the $D0$ and $D2$ branes into the topologically twisted $N=4$ SYM on the $D4$ brane with $S^2 \times C$ worldvolume. The entropy of the 4D BPS black hole [@osv] can be written in terms of the asymptotic values of the moduli in the vector multiplet, parameterized by the complex variables $X_i$.
The dual variables are defined through the prepotential $F_0$, $X_{D,i}= \frac{\partial F_0}{\partial X_i}$. The real parts of the periods are fixed by the attractor equations in terms of the magnetic and electric charges of the black hole $(p_i,q_i)$ $$p_{i}= \mathrm{Re}[CX_i], \qquad q_i=\mathrm{Re}[CX_{D,i}]$$ where $C$ is a constant. The Bekenstein-Hawking entropy reads $$S_{BH}= \frac{A}{4} = iC\bar{C}\,(\bar{X}_i X_{D,i} - X_i\bar{X}_{D,i})$$ and can also be written in terms of the holomorphic volume form of the Calabi-Yau $$S_{BH}= iC\bar{C}\int_{CY} \Omega\wedge\bar{\Omega}$$ Let us introduce the variables $\phi_i,\chi_i$ as the imaginary parts of the periods $$CX_i= p_i +\frac{i}{\pi}\,\phi_i, \qquad CX_{D,i}= q_i +\frac{i}{\pi}\,\chi_i$$ The variables $\phi_i,\chi_i$ have the meaning of the chemical potentials for the electric and magnetic charges, respectively. The BH entropy can be defined via the Legendre transform $$S_{BH}(p,q) = \mathcal{F}(\phi,p) -\phi_i \frac{\partial \mathcal{F}}{\partial \phi_i}$$ in terms of the prepotential taken in the mixed representation, which is related to the mixed partition function of the black hole $$Z_{BH}(\phi_i,p_i)= \sum_{q_i} \Omega(p_i,q_i)\, \exp(-\phi_i q_i)$$ as follows $$Z_{BH}(\phi_i,p_i) = \exp\big(\mathcal{F}(\phi_i ,p_i)\big)$$ One can also consider the dual polarization for the partition sum of the black hole, $Z_{D,BH}(\chi_i,q_i)$, which depends on the magnetic charges and the electric chemical potentials. Recall that the microcanonical entropy of the black hole is conventionally defined as $\log \Omega (q_i,p_i)$ and does not generically coincide with the Bekenstein-Hawking entropy. It is important that the black hole degeneracies have an interesting interpretation as the Wigner function of the Whitham dynamical system taken at the attractor point $$\Omega(q_i,p_i)= \int dy\; e^{ipy}\,\Psi(q+iy)\,\Psi(q-iy)$$ where $\Psi(q)$ can be considered as the wave function of the Whitham theory in the particular polarization. Equivalently, it has the meaning of the wave function of the topological string on the CY manifold. It is worth emphasizing that there are $D2$ branes with different orientations.
Later we shall be interested in the particular one representing the 2D YM instantons in the $D4$ worldvolume theory. A convenient analysis has been performed in [@pestun], where it was shown that the 2D instantons get lifted to the 't Hooft loops linked with $S^2$ in the $D4$ worldvolume theory.

$Z_{BH}= Z_{q2dYM}$
-------------------

The key point is the identification of the BH partition function in the mixed representation with the 2D $q$-YM on some Riemann surface. The terms in the action on the $D4$ brane wrapped around $C_4$ involve the chemical potentials for the electric charges $$S= \frac{1}{2g_s}\int_{C_4} \mathrm{Tr}\, F\wedge F + \frac{1}{g_s}\int_{C_4} \mathrm{Tr}\, F\wedge K$$ where $K$ is the Kähler class of the submanifold inside $C_4$. The path integral with this action turns out to coincide with the partition function of 2d $q$-YM and reads $$Z_{BH}=Z^{qYM}= \sum_{q_0,q_1} \Omega(q_0,q_1,N)\, \exp(-\phi_0 q_0 - \phi_1 q_1)$$ where $\Omega (q_0,q_1,N)$ can be identified as the degeneracy of $D0 \to D2 \to D4$ bound states with charges $(q_0,q_1,N)$ correspondingly. The partition function involves the chemical potentials for the $D0$ and $D2$ branes $$\phi_0= \frac{4\pi^2}{g_s}, \qquad \phi_1= \frac{2\pi\theta}{g_s}$$ The topological string coupling $g_s$ is related to the 2d YM coupling $g_{YM}$ via the relation $$g_{YM}^2= p\,g_s$$

DK for black hole
-----------------

Let us comment qualitatively on the following question, postponing a more detailed analysis for a separate study: "What happens with the magnetically charged BH at the transition critical line?". First consider the equation defining the critical line in the $q$-YM: $$A_{crit} = p^2 \left( 1+ \tan^2 \Big( \frac{\pi}{p} \Big) \right), \qquad p>2$$ where the $N$ magnetic $D4$ branes are wrapped around $O(-p)\rightarrow CP^1$ in the CY. The critical point in pure YM is restored in the $p\rightarrow \infty$ limit with $g_sp$ kept finite. As in the $q$-YM case we expect a chain of phase transitions when the different color orientations of "monopole" $D2$ branes, representing instantons in the 2D YM and 't Hooft loops in the 4D $SU(N)$ theory on $D4$ branes, dominate.
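The mixed-ensemble structure above, $Z_{BH}(\phi,p)=\sum_q \Omega(p,q)\,e^{-\phi q}$ with $S_{BH}$ obtained by a Legendre transform of $\log Z_{BH}$, can be illustrated with a toy spectrum of ours (binomial degeneracies, not the actual BH degeneracies): the Legendre transform of the mixed free energy recovers the microcanonical entropy $\log\Omega(q)$ to leading order.

```python
import numpy as np
from math import lgamma, log

# Toy degeneracies Omega(q) = C(n, q), so Z(phi) = (1 + e^{-phi})^n exactly.
n, q = 2000, 800
F = lambda phi: n * np.log1p(np.exp(-phi))   # F(phi) = log Z(phi)

# Saddle point: q = -dF/dphi = n/(e^phi + 1)  =>  phi = log((n-q)/q)
phi = log((n - q) / q)
S_legendre = F(phi) + phi * q                # S = F - phi * dF/dphi at the saddle
S_micro = lgamma(n + 1) - lgamma(q + 1) - lgamma(n - q + 1)   # log C(n, q)
assert abs(S_legendre - S_micro) / S_micro < 0.01
```

The residual mismatch is the usual $O(\log n)$ Gaussian-fluctuation correction, the same order at which the microcanonical and Bekenstein-Hawking entropies may differ.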
Let us emphasize that the 't Hooft loops are linked with the $D2$ branes wrapping $S^2$ [@pestun], that is, with the bulk instantons in the $5D$ gauge theory on $R^4 \times S^1$. It is interesting that we therefore get the linking of the electric and instanton degrees of freedom corresponding to the two types of $D2$ branes. In terms of the "monopole" $D2$ branes the DK phase transition sounds as follows. Consider the BH with large magnetic charge $Q_m=pN$ and vary the chemical potential for the $D0$ branes, which is the Kähler class of the base sphere. At some value of the chemical potential we fall into the strong coupling phase driven by the 't Hooft monopole loops in a representation with some charge vector. In the next Section we shall conjecture that a kind of superfluid component is developed. To the observer at the BH the transition looks like a sudden transition from the electrically neutral BH to a state with a Gaussian distribution of the charges corresponding to the 't Hooft loops on the $D4$ branes. We can speculate that the DK transition in the extremal magnetic BH corresponds to the transition discussed in [@bolognesi]. For the electrically charged BH there is the so-called superconducting phase transition (see [@hartnoll] for a review) which corresponds holographically to the transition to the superconducting state in the boundary theory. The physics in the bulk behind this transition allows the following interpretation. At some value of the chemical potential it becomes favorable for the BH to polarize the bulk, and the electric charge at the horizon gets screened, while an effective condensate of the charged scalar in the bulk emerges. In the magnetic case, we expect a similar picture: let us start with the magnetically charged BH and vary the parameters of the solution. At the critical point it becomes favorable to have a monopole wall instead of the extremal magnetic BH.
Similarly, we have magnetic polarization of the bulk, the horizon gets discharged, and a kind of monopole condensate emerges in the bulk. The transition in flat space takes place at $$v=c\,g_s$$ where $v$ is the VEV of the scalar. Therefore, we have some matching with the above conjecture. The values of the scalars from the vector multiplet are fixed by the attractor mechanism. On the other hand, in [@bolognesi] the scalars from the nonabelian group are fixed at the transition point as well.

BH and random walks
-------------------

Since we know the relation between the partition function of the $q$-YM and the Brownian reunion probability, we can ask about the stochastic random walk interpretation of the black hole entropy counting. Recall that the near-horizon geometry of the BH is $AdS_2\times S^2$ and the entropy counting is certainly related to the $AdS_2$ part of the geometry. Another inspiring relation concerns the interplay between the BH partition function and the $c=1$ string at the selfdual radius [@ooguri]. The fermionic representation of the $c=1$ model is related to the fermionic representation of the 2D YM theory on the sphere. So far, the random walk interpretation of the BH entropy concerns the behavior of the long string near the BH horizon at almost the Hagedorn temperature. The idea goes back to [@susskind], where it was suggested that the BH entropy comes from the degeneracy of the states of a single long string wrapping the BH stretched horizon. In [@polchinski], dealing with the representation of the string gas, it was argued that the single long string behaves as a random walker, and the string with unit winding number dominates near the Hagedorn transition [@kru]. This winding mode corresponds to the thermal scalar which becomes tachyonic at the Hagedorn temperature. The picture of the wound long string as a random walker has been generalized to more general geometries in [@zakharov].
In particular, the relation between the Hagedorn and Hawking temperatures plays the key role in establishing this correspondence for the non-extremal black holes [@zakharov]. The discussed picture becomes richer in our case of the extremal magnetically charged BH. First of all, let us emphasize that so far we considered the Brownian $D4$ branes which undergo stochastic dynamics. Moreover, we have a large number $N$ of Brownian branes. The Brownian $D4$ branes are extended in the CY space and the stochasticity comes from the interaction with the strings. Since we are looking at the extremal BH, the Hawking temperature vanishes. Each Brownian brane carries a huge multiplicity due to the bound states with $D2$ and $D0$ branes and additional 't Hooft loops. Since the mixed partition function of the BH, $Z(N+i\phi_0)$, can be identified with the reunion probability of $N$ stochastic fermions on a circle for the time $T$, $P_{reun} (N,T)$, to get the Bekenstein entropy as a function of the magnetic and electric charges, $S(N,Q_E)$, we have to perform the Laplace transform with respect to the electric chemical potential. On the other hand, since we have identified the electric chemical potential with the time of reunion on the RW side, we have to perform the Laplace transform with respect to the reunion time $T$ $$\int_0^{\infty} dT\, \exp(-ET)\, \mathrm{Tr}\left( e^{-TH} \right)$$ where $H$ is the Hamiltonian of the fermion system. Hence we get the resolvent for the system of $N$ non-interacting fermions, and the energy $E$ plays the role of the electric charge $Q_E$ on the BH side. The imaginary part of the resolvent is indeed the density of states at fixed "electric charge" $E$. We restrict ourselves here to these preliminary remarks concerning the mapping between the BH entropy and the reunion probability, postponing it to a separate study. Could we make any link of our picture with the previous long string one? The following possibility can be mentioned. We could expect that the stochastic Brownian description deals with a kind of holographic picture in the $AdS_2$ geometry.
The Brownian walker has been identified with the stochastic motion of the endpoint of the string extended in the radial coordinate from the boundary to the stretched horizon [@teaney; @hubeny1]. Since we have a large number $N$ of stochastic fermions at the boundary, we have large $N$ strings extended in the $AdS_2$, in line with the discussion of SLE above. In principle we could imagine that these strings join together in the bulk, forming one long string touching the boundary at $N$ points. In this case the winding of the $N$-fermion states corresponds to the winding of the single long string. However, certainly a much more detailed analysis is required to support this possibility.

Topological susceptibility – the way to superfluidity
=====================================================

Winding distribution in superfluid
----------------------------------

In this Section we conjecture that the instanton-driven strong coupling phase enjoys a kind of superfluid component. The ground for this conjecture comes from the particular representation of the density of the superfluid component [@pollock1; @pollock2; @prok] in terms of the microscopic degrees of freedom, which we shall briefly recall now. In the context of superfluidity there are two ways to introduce the superfluid density. One of them is defined in terms of the effective macroscopic low-energy mean-field theory described in terms of the complex field $\Psi(x)$, which can be considered as the condensate wave function. Its phase $\Phi$ provides the description of the superfluid component. In particular, the current is defined as $$J= \gamma\,|\Psi|^2 \big(\partial_x\Phi(x) - A_g(x)\big) \qquad \[flux\]$$ where $\gamma=\frac{\hbar}{m}$ and $A_g(x)$ is the auxiliary gauge field which is known as the graviphoton field.
It corresponds to the particular deformation of the flat metric $$ds^2=dt^2 -(d\mathbf{x}- \mathbf{A}_g\, dt)^2$$ and its curvature defines the angular velocity vector $\Omega_i$: $$\Omega_i = \epsilon_{ijk}\, \partial_j A_{g,k}$$ Recall that superfluidity can be considered as the nontrivial response to the external graviphoton field, in the same sense as superconductivity is the nontrivial response to the external electromagnetic field. The superfluid density is defined via the following term in the effective action $$F_{eff}= \frac{\rho_s}{2}\int dx\, (\partial_x \Phi)^2$$ Since we are dealing with a one-dimensional system, the validity of this definition requires the additional condition of long relaxation times of the current states labeled by the integer topological number $$I= \frac{1}{2\pi}\oint \partial_x \Phi\; dx$$ The superfluid velocity equals $$v_s= \gamma\, \partial_x \Phi$$ Contrary to the mean-field effective description of the superfluid density, the microscopic definition concerns the first-quantized language and involves the statistics of the total worldline winding number $M=\sum_{i=1}^{N}n_i$, where $n_i$ is the winding of the individual degree of freedom. To count the individual windings it is useful to introduce the Wilson loop of the graviphoton gauge field along the closed path $$\oint A_g\, dx =\phi_0 +2\pi K$$ where $K$ is an integer and $\phi_0$ is a twist. For the homogeneous system with periodic boundary conditions along the $x$-axis the twist corresponds to the particular gauge potential $$A_{g,x}=\phi_0/L_x$$ where $L_x$ is the system size in this direction. Equivalently, the twist can be defined via a nontrivial boundary condition for the wave function. According to (\[flux\]), the system features a uniform flux proportional to the twist $\phi_0$. In the case of the torus geometry the persistent current through the torus cross-section can be expressed as $$J= -\frac{\partial F}{\partial \phi_0}$$ where $F$ is the free energy of the system. The response of the system to the twist serves as a second tool to describe the superfluid component, and it relates the macroscopic and microscopic descriptions.
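The bridge between the macroscopic topological number $I$ and the microscopic total winding $M$ rests on Gaussian periodization (Poisson resummation of a $\theta$-function). A numerical check of ours, with an arbitrary positive constant $a$ standing in for the physical combination of $\rho_s$, $T$ and $L_x$:

```python
import numpy as np

# Poisson resummation:
#   sum_I exp(-a (2*pi*I - phi0)^2)
#     = (1 / (2*sqrt(pi*a))) * sum_M exp(-M^2/(4a) - i*M*phi0)
a, phi0 = 0.3, 0.7
I = np.arange(-200, 201)
M = np.arange(-200, 201)
lhs = np.exp(-a * (2 * np.pi * I - phi0) ** 2).sum()
rhs = np.exp(-M ** 2 / (4 * a) - 1j * M * phi0).sum() / (2 * np.sqrt(np.pi * a))
assert abs(lhs - rhs.real) < 1e-10 and abs(rhs.imag) < 1e-10
```

A Gaussian distribution of $I$ on one side thus translates into a Gaussian distribution of the microscopic winding $M$ on the other, which is exactly the signal used below to diagnose a non-vanishing superfluid density.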
Introduce the topological susceptibility $$\rho_s^{\phi} = \left.\frac{\partial^2 F}{\partial \phi_0^2}\right|_{\phi_0=0}$$ The statistical distribution of the macroscopic topological number $I$ in the $x$-direction in the superfluid is $$P(I) \propto \exp\left(- \frac{\rho_s\,(2\pi I)^2}{2TL_x} \right) \qquad \[prob\]$$ where $\rho_s$ is the density of the superfluid component and $T$ is the temperature. To fit together the microscopic and macroscopic descriptions it is convenient to exploit two representations of the $\theta$-functions which relate the statistical distributions of the macroscopic topological number $I$ and the total winding number $M$ in the microscopic description (for a positive constant $a$ set by the susceptibility) $$Z(\phi_0) \propto \sum_{I=-\infty}^{I=+\infty} \exp\left( -a(2\pi I -\phi_0)^2 \right) = \frac{1}{2\sqrt{\pi a}}\sum_{M=-\infty}^{M=+\infty} \exp\left( -\frac{M^2}{4a} -iM\phi_0 \right)$$ In what follows we shall use the occurrence of the Gaussian distribution over the total microscopic winding number $M$ in the instanton-driven phase above the DK transition point as the indication of a non-vanishing superfluid density. It is remarkable that the topological susceptibility is closely related to the density of the superfluid component [@pollock1; @pollock2]. In $d\ge 3$, $\rho_s^{\phi} = \rho_s$, while in lower dimensions they do not coincide; however, the relation holds [@ps] $$\rho_s^{\phi} = \rho_s \left( 1 - \dots \right) \qquad \[density\]$$ Therefore the non-vanishing topological susceptibility implies a non-vanishing density of the superfluid component. Let us emphasize that in one-dimensional space superfluidity is a subtle phenomenon, and not all typical superfluid phenomena hold in $1+1$. It was argued that the drag force could substitute for the order parameter [@caux]; however, the susceptibility as the measure of the superfluid density still works.

$\theta $-term versus twist in superfluid
-----------------------------------------

Let us demonstrate a close analogy between the twist and the conventional $\theta$-term in the 2D YM theory, which implies that the non-vanishing topological susceptibility in the 2D YM theory can be considered as the counterpart of the superfluid density.
The $\theta$-term can be added to the 2D YM action as a chemical potential for the topological charges of the abelian instantons, which are classical solutions to the equations of motion in the 2D YM on $S^2$. In the Hamiltonian framework the $\theta$-term enters in the following way: $$Z_g=\int \exp\left(\int_{\Sigma_g} \mathrm{tr}\left(i\Phi F + g_{YM}^2\, \Phi^2\right) + \frac{\theta}{2\pi}\int_{\Sigma_g}\mathrm{tr}\, F \right)$$ and the partition function reads $$Z_g(g_{YM},A,\theta)=\sum_{\{n_i\}}\Delta^{2-2g}(n)\, \exp\Big(-Ag_{YM}^2\sum_i n_i^2 -i\theta\sum_i n_i\Big)$$ \[eq:Z\_theta\] where the $\theta$ parameter plays the role of the chemical potential for the total $U(1)$ instanton number. What is the counterpart of the $\theta$ parameter for the vicious walkers? Comparison with the instanton representation of the YM partition function yields the following term in the Lagrangian of the particle system on a circle: $$\delta L = \frac{\theta}{2\pi}\sum_i \dot{\phi}_i$$ Therefore the YM-VW mapping implies that the $\theta$-term is the chemical potential for the total winding in the random-walk problem, similar to the twist $\phi_0$ mentioned above. Now we are in a position to utilize the relation (\[density\]) in the YM and VW frameworks. The key point is that the distribution of the microscopic total winding number in RW, or of the topological charge in 2D YM, can be obtained explicitly [@lw]. At $t<t_c=L^2/4N$, the probability for the system to have a non-zero winding is exponentially small, $$P(M=0)=1-O(e^{-cN}).$$ \[eq:prob\_subcrit\] Near the critical time, $t=t_c$, there are finite probabilities to get $M=0, M=1, M=-1$: $$P(M=0)=1- \cdots; \qquad P(M=1)= P(M=-1)= \cdots$$ \[eq:prob\_crit\] where $q(s)$ is the solution to the Painlevé II equation $$q''(s)= s\,q(s) +2\, q(s)^3$$ which has the Airy asymptotics $$q(s)\simeq \mathrm{Ai}(s),\qquad s\to+\infty$$ The variable $s$ is defined through the scaling $t=t_c\left( 1 - \cdots \right)$. Let us emphasize that $\left<M^2\right>\neq 0$ at the critical point. At $t>t_c$, the probability to have a specific total winding number $M$ is given by a Gaussian distribution centered at zero, $$P(M)=Ce^{-\kappa M^2}+O(N^{-1}).$$
\[eq:prob\_supercrit\] where $\kappa$ is an implicit function of $t$, given in terms of complete elliptic integrals: $$\kappa^{-1}=\cdots, \qquad t=8E(k)K(k)-(1-k^2)K^2(k), \qquad k' =\sqrt{1-k^2}.$$ \[eq:kappa\] Hence we can map the strong-coupling phase to a phase with superfluid density $$\rho_s^{-1}= \cdots$$ and the DK transition itself acquires the interpretation of a transition to the superfluid phase. Let us emphasize that our considerations can be regarded as evidence in favor of the superfluid interpretation of the DK transition; however, a more detailed study is required.

Comment on the BEC conjecture for Black Holes
---------------------------------------------

Recently a new conjecture concerning the microscopic degrees of freedom at the BH horizon has been formulated [@dvali]. It was suggested that the horizon can be interpreted as a BE condensate of gravitons at a quantum phase transition point. The BE condensate can be effectively described by the Gross-Pitaevskii Hamiltonian, or the nonlinear Schrödinger system, which can be thought of as the thermodynamic limit of the many-body Lieb-Liniger system on the circle $$H=\sum_i p_i^2 + c\sum_{i\neq j} \delta(x_i - x_j)$$ It is assumed that the coupling $c$ corresponds to attraction. It is known that the BEC undergoes a transition from the homogeneous phase to the inhomogeneous bright-soliton solution. The system can be solved via the Bethe ansatz equations, which read $$e^{ik_iL}= \prod_{j\neq i}\frac{k_i-k_j+ic}{k_i-k_j-ic}$$ and have recently been analyzed in the thermodynamic limit [@lieb]. It was argued that the saddle-point solution looks like the saddle-point solution in the 2d YM matrix-model representation of the partition sum, $$gk = 2\cdots$$ where $g= cLN={\rm const}$. Hence the transition to the inhomogeneous solitonic phase of the BEC can presumably be considered as an analogue of the DK phase transition, which, however, is of second order in the BEC case. BEC and superfluidity are close cousins, hence it is natural to look for a possible link with the current study.
We have nothing to say about the condensate of gravitons; however, the relation of the Lieb-Liniger system to Yang-Mills theory provides some interplay with the BH with large magnetic charge discussed above. The quantum Lieb-Liniger system is closely related to deformed quantum 2d topological gauge theories [@geras]. Namely, the Bethe ansatz equations define the ground state of the 2d topological YM theory coupled to an additional Higgs field in the adjoint representation. It can be considered as the large-$k$ limit of the Chern-Simons-Higgs system. The mass of the Higgs field corresponds to the coupling constant in the Lieb-Liniger system; therefore the decoupling of the Higgs field in the gauge theory corresponds to infinite coupling in the many-body system. It is known that in this limit the Lieb-Liniger system reduces to a system of free fermions via the Tonks-Girardeau transform, in agreement with the free-fermion representation of the CS and topological 2d YM theories without the Higgs field. We would like to keep the coupling finite and ask whether the Lieb-Liniger system has something to do with the BH horizon, as suspected in [@dvali]. The free-fermion limit certainly provides the starting point: we deal with the BH with large magnetic charge N. As before, we have to consider the ensemble of BHs fixing the chemical potentials of the electric charges. In this case we have to consider the refined BH ensemble considered in [@aganagic12], which involves an additional chemical potential responsible for the angular momentum of the BH. Once again, a kind of refined index is evaluated on the BH side. The proper limit of the BH parameters has to be identified to match the Lieb-Liniger system. To this aim, note that the LL system is closely related to the p-adic zonal spherical functions, while the Macdonald polynomials interpolate between the p-adic and quantum groups [@zabrodin].
The refined BH ensemble is related to the (q,t)-deformed 2d YM theory, and the LL system corresponds to a particular degeneration of parameters. Collecting these arguments together, we can suppose the following picture. Since the conventional gauge coupling in 2d YM is switched off $(p=0)$ [@geras] in the derivation of the LL system, we conclude that the chemical potential for the electric charge vanishes. However, the mass of the additional Higgs field is present, which implies that the chemical potential for the angular momentum is present. Hence we conjecture that the approach of [@dvali] could be related to a BH (or heavy particle with large entropy) with large magnetic charge N and a chemical potential for the angular momentum. The critical behavior in the LL system corresponds to some critical behavior of the ensemble of magnetic BHs with a chemical potential for the angular momentum. It would be interesting to clarify the physics behind this transition on the BH side.

Towards the Anderson localization?
----------------------------------

A challenging open question concerns the relation of the DK-type transition to Anderson localization. It is well known that the problem of the statistics of a bunch of vicious walks is deeply connected to the statistics of a directed polymer in a quenched random environment. In the replica calculation of the free energy, one can associate each random-walk trajectory with a particular replica. The effective interaction between replicas is induced by the presence of a common quenched environment. In [@kardar] it has been shown that the free energy, $F$, of a directed polymer of length $n$, averaged over an uncorrelated quenched random potential with Gaussian distribution, scales as $$F(n) = f_0\, n + f\, n^{1/3}$$ \[eq:free\] where $f_0$ and $f$ are constants independent of $n$.
The comprehensive analysis of the distribution function $P(F)$ was published in the pioneering works [@dotsenko] (see also [@ledoussal]), where the derivation of the distribution function $P(F)$ in the framework of the replica approach has been linked to the computation of correlation functions of a determinantal process (for determinantal processes see, for example, [@borodin]). The finite-size scaling behavior (\[eq:free\]), rewritten as $\Delta F(n) = F(n) - F_0(n)=f\, n^{1/3}$, where $F_0=f_0\, n$, can be qualitatively understood even without the heavy machinery, since the $n\to\infty$ tail of the distribution function, $$P(\Delta F) \sim e^{-f\, n^{1/3}}$$ \[eq:distr\] has a very transparent interpretation in terms of random walks in a quenched environment, as communicated to us by Victor Dotsenko [@dotsenko_private]. Namely, only in the one-dimensional case can the probability $P(\Delta F)$ be interpreted as the survival probability of a random walk in a quenched environment of randomly distributed traps. The typical size of a “trap-free cavity” is the typical distance between neighboring vicious walks. Now, the asymptotics (\[eq:distr\]) follows immediately from the Balagurov-Vaks derivation of the survival probability of a random walk on a line in an array of traps with Poisson distribution. Qualitatively, the result (\[eq:distr\]) can be obtained using the “optimal fluctuation” approach [@lifshitz], and is nothing else than a “Lifshitz tail” for some localization problem. The connection to one-dimensional Anderson localization becomes even more profound if one considers the Laplace transform of (\[eq:distr\]): $$P(s) = \int_{0}^{\infty} P(\Delta F)\, e^{-s n}\, dn \sim \exp\left(-\frac{c}{\sqrt{s}}\right)$$ \[eq:laplace\] where $c=c(f)>0$ is some numeric constant.
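The $n^{1/3}$ tail can be recovered from the optimal-fluctuation argument just mentioned. The following back-of-the-envelope sketch (with an illustrative trap density $\rho$ and diffusion coefficient $1/2$; the constants are not taken from the paper) balances the probability of finding a trap-free cavity of size $\ell$ against the cost of surviving inside it:

```latex
% survival in a cavity of size \ell: lowest diffusion mode, e^{-\pi^2 n/(2\ell^2)};
% probability of a trap-free cavity in a Poisson array of traps: e^{-\rho\ell}
P(n) \sim \exp\left(-\min_{\ell}\left[\rho\ell + \frac{\pi^2 n}{2\ell^2}\right]\right),
\qquad
\ell_* = \left(\frac{\pi^2 n}{\rho}\right)^{1/3},
\qquad
P(n) \sim \exp\left(-\frac{3}{2}\,\big(\pi^2\rho^2 n\big)^{1/3}\right)
```

which reproduces the stretched-exponential form (\[eq:distr\]) with $f=\frac{3}{2}(\pi^2\rho^2)^{1/3}$ in this convention.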
The physical interpretation of the Laplace transform (\[eq:laplace\]) is very straightforward: we expect the behavior (\[eq:laplace\]) for the distribution of free energies in a *canonical* ensemble of noninteracting directed polymers with an exponential distribution of lengths (controlled by the fugacity $s$), fluctuating in a quenched random environment.

Conclusion
==========

Our study provides a moderate step towards clarifying the universal aspects behind the DK phase transitions. Three systems were considered: large-$N$ perturbed topological gauge theories, stochastic dynamics of a large number $N$ of random walkers with and without interaction, and the entropy of extremal 4D BHs at large $N$ magnetic charge. Because of their diversity, the study provides complementary insights and intuition. We do not have a fully satisfactory picture, but our findings offer some new understanding of the phenomena. We tried to interpret in physical terms several seemingly different phenomena dealing with the DK-type transition, paying special attention to the construction of a unified picture. On the random-walk side we discussed the dependence of the DK phase transition on the out-of-equilibrium boundary conditions and reformulated the problem as a kind of orthogonality catastrophe for the system of free fermions. The generalization of the free-walkers case to a stochastic process with integrable interaction has been carried out, which is equivalent to the consideration of the matrix model with generic $\beta$-ensemble. We treated the Calogero and Ruijsenaars-Schneider Hamiltonians as the Fokker-Planck Hamiltonians for the stochastic many-body systems. We have treated the thermal and quantum noise on an equal footing, since in both cases the Langevin and FP equations rule the game. However, the interpretation of the DK transition is very different in the two cases.
In the thermal case we consider the process in real time, and the DK transition admits an interpretation as one or another version of the orthogonality catastrophe for the fermion gas in the simplest case. In the framework of stochastic quantization we deal with the stochastic, or holographic, time, and the DK transition acquires a new, somewhat counter-intuitive interpretation. It claims that some phenomenon happens at a *finite* holographic or stochastic time. It means the RG has some nontrivial feature at a finite RG scale. Since conventional stochastic quantization implies that, to get the correct measure in the path integral, we have to take the stochastic RG time to infinity, the nontrivial phenomenon at finite RG time warns us that the limiting procedure has to be carried out with additional care. We believe that quantization of the generic system via the matrix model and topological strings [@krefl; @marinoq] provides the proper language for these issues. Indeed, in this approach the quantization is performed within the $\beta$-ensemble of the large-N matrix model, which, as we have seen, can have a DK-like transition. On the other hand, the obstacle at finite RG holographic time is usually treated as the appearance of the BH horizon. However, as we have mentioned, it is necessary to distinguish between the worldsheet horizon and the horizon of the BH background, and it is the worldsheet horizon which seems to be responsible for the DK transition. In any case, the holographic interpretation of the DK transition certainly deserves further study, especially due to the relation with the multiple radial SLE stochastic process. We made a conjecture about the nature of the DK transition for the extremal BH with large magnetic charge. Certainly, additional work is required to verify it.
Potentially the question seems to be very important, since on the RW side we deal with free fermions, and one could ask whether the formation of the BH horizon for the magnetic BH can be described in simple fermionic terms. Since the only phenomenon which happens with free fermions is the DK transition, it is natural to suspect that it has something to do with horizon formation and elimination. The possible relation with the conjectured appearance of the superfluid component adds additional flavor to the problem, and probably some superfluid property of the BH horizon in the membrane paradigm can be suspected. The perturbed topological theories provide the playground for knot invariants, and the colored Hopf links occur in the stochastic process related to the amplitude in CS theory with particular boundary conditions [@deharo3]. It would be interesting to relate the DK critical behavior with the critical behavior found in [@bgn] for the ensemble of torus knots. They can be connected, since in the former case the single random walker propagates in the region fixed by the large-$N$ “background”. The appearance of knot invariants seems very interesting in the context of the magnetically charged BH, where the algebraic invariants are related via the random-walk picture to the BH entropy. This would be an analog of the relation of torus-knot invariants to particular observables in 5D SQCD via the instanton-torus-knot duality [@gmn]. In 5D SQCD we considered a point-like object inserted in $R^4$ propagating along the $M$-theory circle, while in the knot-BH study we should consider the 4d BH inserted at some point in $R^3$ and propagating in time. Finally, note the question concerning the $e^{-N}$ corrections, which we have almost not touched. They contain interesting physics in all their incarnations.
In particular, they correspond to the creation of baby Universes on the BH side [@baby], to the obstacle for the chiral factorization in the 2D YM theory, and to the effects of fermion tunnelling in the context of matrix models. It would be interesting to discuss these issues in the context of the DK transition and in the more general framework of stochastic quantization. The reader has certainly realized that the unifying theme of the different approaches and viewpoints considered in the paper is the formation of a kind of horizon, starting from an out-of-equilibrium state, during the stochastic evolution. We believe that the results of the paper and the conjectures made can be of use for forthcoming studies in this direction. We are grateful to K. Bulycheva for collaboration at an early stage of the work. We would like to thank A. Abanov, V. Dotsenko, D. Kharzeev, N. Nekrasov, G. Oshanin, J. Policastro, N. Prokof’ev, R. Santachiara, G. Schehr, N. Sopenko, P. Wiegmann, and A. Zhitnitsky for useful discussions. The work is supported by Russian Science Foundation grant 14-050-00150.

Appendix. Multiple vicious walkers and the Fokker-Planck equation
=================================================================

It has been pointed out in [@schehr] that, besides the reunion probability $P_N(T,\mathbf{0},\mathbf{0})$, one can similarly compute the joint probability distribution, $P_N(\tau_1,\mathbf{x},\mathbf{0},\mathbf{0})$, of $N$ one-dimensional vicious walks on the infinite line at some fixed time, $\tau_1=aT$, where $0\le a\le 1$, with the extremities at each end of the paths contracted to a point.
The function $P_N(\tau_1,\mathbf{x},\mathbf{0},\mathbf{0})$ reads $$P_N(\tau_1,\mathbf{x},\mathbf{0},\mathbf{0}) = P_N(\tau_1,\mathbf{0},\mathbf{x})\, P_N(\tau_2,\mathbf{x},\mathbf{0}) \propto \left(a(1-a)T^2\right)^{-N^2/2} \prod_{i<j}(x_i-x_j)^2\, e^{-\left(\frac{1}{2\tau_1}+\frac{1}{2\tau_2}\right)\sum_{i=1}^N x_i^2}$$ \[eq:joint\] Let us introduce the rescaled variable $z=T a(1-a)$, and consider a slightly more general probability distribution, namely $$\tilde{P}_N(z,\mathbf{x},\mathbf{0},\mathbf{0}) \propto z^{-N^2/2} \prod_{i<j}|x_i-x_j|^{2\beta}\, e^{-\frac{\beta}{2z}\sum_{i=1}^N x_i^2}$$ \[eq:joint2\] which can be rewritten as $$\tilde{P}_N(z,\mathbf{x},\mathbf{0},\mathbf{0})\equiv\tilde{P}^{(eq)}_N(z,\mathbf{x}(z)) \propto z^{-N^2/2}\, e^{-2\beta V(\mathbf{x})}; \qquad V(\mathbf{x})= -\sum_{i<j}\ln|x_i-x_j|+ \frac{1}{4z}\sum_{i=1}^N x_i^2$$ \[eq:joint3\] It has been shown in [@beenakker] that $\tilde{P}^{(eq)}_N(z,\mathbf{x}(z))$ in (\[eq:joint3\]) is the stationary Gibbs measure for the stochastic evolution of the nonstationary distribution function, $\tilde{P}_N(t,z,\mathbf{x})$, in time, $t$, $$\tilde{P}^{(eq)}_N(z,\mathbf{x})=\lim_{t\to\infty} \tilde{P}_N(t,z,\mathbf{x})$$ \[eq:eq\] where the function $\tilde{P}_N(t,z,\mathbf{x})$ satisfies the Fokker-Planck equation: $$\frac{\partial \tilde{P}_N}{\partial t}=\frac{1}{2\beta}\sum_{i=1}^N \frac{\partial^2 \tilde{P}_N}{\partial x_i^2} + \sum_{i=1}^N \frac{\partial}{\partial x_i}\left(\frac{\partial V}{\partial x_i}\,\tilde{P}_N(t,z,\mathbf{x})\right)$$ \[eq:fp\] Equation (\[eq:fp\]) can be rewritten as $$\frac{\partial \tilde{P}_N}{\partial t}={\cal L}\, \tilde{P}_N(t,z,\mathbf{x}); \qquad {\cal L}=\frac{1}{2\beta}\sum_{j=1}^N \frac{\partial}{\partial x_j}\left(e^{-2\beta V(\mathbf{x})}\frac{\partial}{\partial x_j}\,e^{2\beta V(\mathbf{x})}\,\cdot\right)$$ \[eq:L2\] Writing $\tilde{P}_N(t,z,\mathbf{x})$ as $$\tilde{P}_N(t,z,\mathbf{x}) = e^{-\beta V(\mathbf{x})}\, \tilde{W}(t,z,\mathbf{x})$$ \[eq:P-W\] and substituting this expression into (\[eq:fp\]), we get for the function $\tilde{W}(t,z,\mathbf{x})$ the nonstationary Schrödinger equation $$\frac{\partial \tilde{W}_N}{\partial t}=\frac{1}{2\beta}\sum_{j=1}^N\frac{\partial^2 \tilde{W}_N}{\partial x_j^2}-U(\mathbf{x})\, \tilde{W}_N(t,z,\mathbf{x})$$ \[eq:schr\] where the potential $U(\mathbf{x})$ has a supersymmetric structure $$U(\mathbf{x})= \frac{\beta}{2}\sum_{j=1}^N\left(\frac{\partial V}{\partial x_j}\right)^2- \frac{1}{2}\sum_{j=1}^N \frac{\partial^2 V}{\partial x_j^2}$$ \[eq:supersymm\] For the potential $V(\mathbf{x})$ defined in (\[eq:joint3\]) we have $$U(\mathbf{x})=(\beta-1)\sum_{i<j}\frac{1}{(x_i-x_j)^2}+\frac{\beta}{8z^2}\sum_{i=1}^N x_i^2- \frac{N((N-1)\beta+1)}{4z}$$ \[eq:U2\] One can rewrite the Schrödinger equation (\[eq:schr\]) with the potential (\[eq:U2\]) in the dimensionless time, $\sigma=t\beta^{-1}$.
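As a consistency check of the supersymmetric structure (a one-line verification sketch, assuming a convention with stationary measure $\propto e^{-2\beta V}$ and substitution $\tilde{P}_N=e^{-\beta V}\tilde{W}$), the Gibbs state is annihilated by the Schrödinger operator:

```latex
% acting on the equilibrium state e^{-\beta V} with the kinetic term:
\frac{1}{2\beta}\sum_j \partial_j^2\, e^{-\beta V}
= \sum_j \left[\frac{\beta}{2}\,(\partial_j V)^2 - \frac{1}{2}\,\partial_j^2 V\right] e^{-\beta V}
= U(\mathbf{x})\, e^{-\beta V}
```

so $e^{-\beta V}$ is a zero mode of $\frac{1}{2\beta}\Delta - U$, i.e. the equilibrium measure is the supersymmetric ground state, and the SUSY is unbroken at $t\to\infty$.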
Such a change of variables sets the diffusion coefficient equal to $\frac{1}{2}$, as it should be in a one-dimensional lattice system: $$\frac{\partial \tilde{W}_N}{\partial \sigma}= \frac{1}{2}\sum_{j=1}^N \frac{\partial^2 \tilde{W}_N}{\partial x_j^2} -\left(\beta(\beta-1)\sum_{i<j}^N\frac{1}{(x_i-x_j)^2}+ \frac{\beta^2}{8z^2}\sum_{i=1}^N x_i^2 - \frac{\beta\, c(N)}{2z}\right)\tilde{W}_N(\sigma,z,\mathbf{x})$$ \[eq:calogero\] where $c(N)=\frac{N((N-1)\beta+1)}{2}$ is independent of $\sigma$ and $\mathbf{x}$. Let us rescale $x_{j}=\sqrt{z}\, y_j$ for all $1\le j\le N$. Due to (\[eq:joint3\]), (\[eq:eq\]) and (\[eq:P-W\]), the equilibrium distribution of (\[eq:calogero\]) at $\sigma\to\infty$ satisfies the equation $$\frac{1}{2}\sum_{j=1}^N \frac{\partial^2 \tilde{W}_N}{\partial y_j^2} =\beta\left((\beta-1)\sum_{i<j}^N\frac{1}{(y_i-y_j)^2}+ \frac{\beta}{8}\sum_{i=1}^N y_i^2 - \frac{c(N)}{2}\right)\tilde{W}_N(\sigma,\mathbf{y})$$ \[eq:eq2-1\] whose asymptotic solution is $\tilde{W}^{(eq)}_N(z,\mathbf{y}) = \lim\limits_{\sigma\to\infty}\tilde{W}_N(\sigma,z,\mathbf{y}) = e^{-\beta V(\mathbf{y})}$ with $$V(\mathbf{y})= \frac{1}{4}\sum_{i=1}^N y_i^2-\sum_{i<j}\ln|y_i-y_j| + {\rm const}$$ \[eq:eq2-2\] Consider now the solution of the Fokker-Planck equation at intermediate times $\sigma$, at which the distribution function $\tilde{W}_N(\sigma,z,\mathbf{x})$ is not yet completely equilibrated. Performing the Laplace transform of the function $\tilde{W}_N(\sigma,z,\mathbf{y})$ with respect to $\sigma$, $$\tilde{W}_N(p,z,\mathbf{y})=\int_0^{\infty} \tilde{W}_N(\sigma,z,\mathbf{y})\,e^{-p\sigma}\, d\sigma$$ we arrive at the following Laplace-transformed Schrödinger equation (\[eq:calogero\]) for the function $\tilde{W}_N(p,z,\mathbf{y})$: $$\frac{1}{2}\sum_{j=1}^N \frac{\partial^2 \tilde{W}_N}{\partial y_j^2} =\beta\left((\beta-1)\sum_{i<j}^N\frac{1}{(y_i-y_j)^2}+ \frac{\beta}{8}\sum_{i=1}^N y_i^2 -\left(\frac{c(N)}{2}-\frac{z p}{\beta}\right) \right)\tilde{W}_N(p,z,\mathbf{y})$$ \[eq:noneq\] Now we can note that the time $\sigma$ is connected on average to the parameter $z$. Namely, comparing (\[eq:eq2-1\]) and (\[eq:noneq\]), we see that upon the identification $$\frac{c(N)}{2}-\frac{z p}{\beta}=z'\,\frac{c(N)}{2}$$ the equations (\[eq:eq2-1\]) and (\[eq:noneq\]) become equivalent. Thus, the *average* time, $\left<\sigma\right>\sim p^{-1}$, can be identified with the parameter $z'$ as follows $$\left<\sigma\right> \sim \frac{2z}{\beta\, c(N)\,(1-z')}$$ \[eq:time\] Since $z=Ta(1-a)$, we can rewrite (\[eq:time\]) as $$\left<\sigma\right> \sim \frac{2\,Ta(1-a)}{\beta\, c(N)\,\left(1-a'(1-a')\right)}, \qquad 1-a'(1-a')>0$$ \[eq:sigma\] Thus, if for the infinite time $\sigma\to\infty$ (i.e.
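The zero-mode property behind (\[eq:eq2-1\])-(\[eq:eq2-2\]) is easy to probe numerically. The sketch below (not from the paper; it assumes one consistent choice of conventions, $V(\mathbf{y})=\frac14\sum_i y_i^2-\sum_{i<j}\ln|y_i-y_j|$ with stationary measure $e^{-2\beta V}$, so the displayed coefficients are tied to that choice) compares a finite-difference Laplacian of $e^{-\beta V(\mathbf{y})}$ with the Calogero-type potential at a generic configuration:

```python
import math
import itertools

beta = 2.0
y0 = [0.3, 1.1, -0.7]   # a generic configuration, N = 3
N = len(y0)

def V(y):
    # V(y) = (1/4) sum y_i^2 - sum_{i<j} ln|y_i - y_j|  (assumed convention)
    v = 0.25 * sum(t * t for t in y)
    for i, j in itertools.combinations(range(len(y)), 2):
        v -= math.log(abs(y[i] - y[j]))
    return v

def W(y):
    return math.exp(-beta * V(y))

def laplacian(f, y, h=1e-4):
    # central-difference Laplacian over all coordinates
    out = 0.0
    for j in range(len(y)):
        yp = list(y); yp[j] += h
        ym = list(y); ym[j] -= h
        out += (f(yp) - 2 * f(y) + f(ym)) / h**2
    return out

cN = N * ((N - 1) * beta + 1) / 2
inv2 = sum(1.0 / (y0[i] - y0[j]) ** 2
           for i, j in itertools.combinations(range(N), 2))
lhs = 0.5 * laplacian(W, y0)
rhs = beta * ((beta - 1) * inv2 + beta / 8 * sum(t * t for t in y0) - cN / 2) * W(y0)
print(abs(lhs - rhs) < 1e-4 * abs(rhs))  # True: e^{-beta V} solves the stationary equation
```

The check passes for any configuration with distinct $y_i$, which is the numerical counterpart of the statement that the Gibbs measure is the supersymmetric ground state.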
for the complete equilibration) we have chosen the cross-section of the bunch of vicious walks at $aT$, then for the partial equilibration of the system at time $\left<\sigma\right>\sim p^{-1}$ we have to select a cross-section $a'$. The hidden supersymmetry of the Fokker-Planck equation, associated with the self-adjoint representation (\[eq:L2\]) and the corresponding Schrödinger equation (\[eq:schr\]) with the SUSY potential (\[eq:supersymm\]), was discovered in [@parisisur]. The Gibbs measure corresponds to the zero mode, which implies that the SUSY at $t\to\infty$ is unbroken. However, it was argued in [@feigel] that in some range of parameters the SUSY may fail, resulting in the absence of a stationary solution of the FP equation. The reason for the SUSY failure is the non-normalizability of the solution. It was argued in [@feigel] that the solution in the SUSY-broken phase corresponds to a steady state with a nontrivial current. Whether the SUSY breaking can be identified with the DK transition is an open and intriguing question.

Statistics of area-weighted Brownian excursion paths
====================================================

By definition, the Dyck path of length $2n$ on the square lattice starts at the origin $(0,0)$, ends at the point $(n,n)$ and consists of a union of sequential elementary “$\uparrow$” and “$\to$” steps, such that the path always stays above the diagonal of the square. The number of all Dyck paths of length $2n$ is given by the Catalan number, $C_n=\frac{\disp 1}{\disp n+1}\left(\begin{matrix} 2n \\ n \end{matrix}\right)\bigg|_{n\gg 1}\sim \disp \frac{2^{2n}}{n^{3/2}}$. The “magnetic” Dyck paths can be viewed as trajectories of a charged particle on a square lattice in an external transverse magnetic field (after a Wick rotation), where the motion of the particle is subject to two restrictions: it moves only up and right and never intersects the diagonal.
Calculating the action for such a particle, we see that $q=e^{iH}$ (where $H$ is the external magnetic field) is the fugacity of the area, $A$, under the Dyck path, and $s=e^{m}$ (where $m$ is the mass of the charged particle) is the fugacity of the path length, $n$. The information about the statistics of area-weighted Dyck paths can be easily extracted from the partition function of the grand canonical ensemble $$F(s^2,q) = \sum_{\rm All\ Dyck\ paths} q^A s^{2n}$$ \[eq:all\_dycks1\] This model can be referred to as the “chiral Hofstadter system”, considered in [@ouvry]. To proceed with the calculations, turn the lattice by $\pi/4$ and write the recursion relation for the partition function $Z_k(x;q)$ on the half-line, $x\ge 0$, where $x$ is the height of the path at step $k$. The area, $A$, below the path is counted as the sum of filled plaquettes (i.e. “heights”) between the path and the $x=0$ axis, as shown in [Fig.\[fig:area\]]{}. Each plaquette carries the weight $q$. The partition function of area-weighted Dyck paths, $Z_k(x,q)$, on a finite strip satisfies the recursion: $$\left\{ \begin{array}{l} Z_{k+1}(x,q) = q^{x-1}\, Z_k(x+1,q) + Z_k(x-1,q)\\ Z_k(0,q)=Z_k(L,q)=0\\ Z_{k=0}(x,q)=\delta_{x,1} \end{array} \right.$$ \[eq:01\] Recall that by definition [(\[eq:all\_dycks1\])]{}, $\disp F(s^2,q) = \sum_{n=0}^{\infty} Z_{2n}(1,q) s^{2n}$. Define ${\bf Z}_{2n}(q)=\big(Z_{2n}(1,q),Z_{2n}(2,q),...,Z_{2n}(L,q)\big)^{\intercal}$ and rewrite equation [(\[eq:01\])]{} in a matrix form: $${\bf Z}_{2n}(q) = T_L^{2n}(q)\, {\bf Z}_0$$ where $$T_{L}(q)=\begin{pmatrix} 0 & 1 & & & \\ 1 & 0 & q & & \\ & 1 & 0 & q^2 & \\ & & \ddots & \ddots & \ddots \\ & & & 1 & 0 \end{pmatrix}; \qquad {\bf Z}_0=\begin{pmatrix} 1\\ 0\\ 0\\ \vdots\\ 0 \end{pmatrix}$$ \[eq:02\] We are interested in the value $Z_{2n}(1,q)$, since at the very last step the trajectory returns to the initial point.
Evaluating powers of the matrix $T_{L}$, we can straightforwardly check that the values of $Z_{2n}(1;q)$ at $L\to\infty$ are given by the Carlitz–Riordan $q$-Catalan numbers [@carlitz1; @carlitz2], namely, $Z_{2n}(1,q) = C_{n}(q), (n=1,2,3,...)$. Recall that the $C_n(q)$ satisfy the recursion $$C_n(q) = \sum_{k=0}^{n-1}q^k\, C_k(q)\, C_{n-k-1}(q)$$ \[eq:04\] which is the $q$-extension of the standard recursion for the Catalan numbers. The generating function $$F(s^2,q)=\sum_{n=0}^{\infty}s^{2n}\, C_n(q)$$ obeys the functional relation $$F(s^2,q) = 1 + s^2 F(s^2,q)\, F(s^2q,q)$$ \[eq:05\] The solution of [(\[eq:05\])]{} at finite $L$ can be written as a continued-fraction expansion truncated at level $L$: $$F_{L}(s^2,q) = \cfrac{1}{1-\cfrac{s^2}{1-\cfrac{s^2 q}{1-\cfrac{s^2 q^2}{\,\ddots}}}}$$ \[eq:06\] At $L\to\infty$ one has $$F(s^2,q)=\lim_{L\to\infty}F_{L}(s^2,q)= \frac{A_q(s^2)}{A_q(s^2/q)}$$ where $A_q(s)$ is the $q$-Airy function, $$A_q(s)=\sum_{k=0}^{\infty}\frac{q^{k^2}(-s)^k}{(q;q)_k}; \qquad (t;q)_k=\prod_{j=0}^{k-1} (1-t q^j)$$ \[eq:07\] In the works [@prel0; @rich1; @rich2] it has been shown that in the double-scaling limit $q\to 1^-$ and $s\to \frac{1}{2}^{-}$ the function $F(s,q)$ has the following asymptotic form $$F(z) \sim F_{\rm reg}+2\,(1-q)^{1/3}\, \frac{{\rm Ai}'(4z)}{{\rm Ai}(4z)}; \qquad z=\frac{1-4s^2}{4\,(1-q)^{2/3}},$$ \[eq:asymp1\] where $F_{\rm reg}$ is the regular part at $\big(q\to 1^-,\, s\to \frac{1}{2}^{-}\big)$ and $\disp {\rm Ai}(z)=\frac{1}{\pi} \int_{0}^{\infty} \cos(\xi^3/3+\xi z)\, d\xi$ is the Airy function. The function $F(s,1)$ is the generating function of the undeformed Catalan numbers: $$F(s^2,q=1)= \frac{1-\sqrt{1-4s^2}}{2s^2}$$ \[eq:asymp2\] The generating function $F(s^2,1)$ is defined for $0<s<\frac{1}{2}$, and at the point $s=\frac{1}{2}$ the first derivative of $F(s^2,1)$ has a singularity, which is interpreted as the critical behavior. The limit $q=1$, $s\to\frac{1}{2}^-$ can also be read off from the asymptotic expression for $F(s,q)$, Eq.(\[eq:asymp1\]), at $q\to 1^-,\, s\to \frac{1}{2}^{-}$: $F(s,q)\big|_{q\to 1^-}\sim F_{\rm reg}-2\sqrt{1-4s^2}$, where $F_{\rm reg}=2$ at $s=\frac{1}{2}$.
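The identification $Z_{2n}(1,q)=C_n(q)$ is easy to verify numerically. The sketch below (not from the paper) iterates the height recursion (\[eq:01\]) with polynomial coefficients in $q$ and compares the result with the Carlitz–Riordan recursion (\[eq:04\]); polynomials are stored as coefficient lists $[c_0, c_1, \ldots]$ of powers of $q$:

```python
def poly_add(a, b):
    # add two polynomials in q given as coefficient lists
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def q_shift(a, k):
    # multiply a polynomial by q^k
    return [0] * k + a

def z_top(n, L=200):
    # iterate Z_{k+1}(x) = q^{x-1} Z_k(x+1) + Z_k(x-1) on the strip 0 < x < L
    Z = {1: [1]}  # Z_0(x) = delta_{x,1}
    for _ in range(2 * n):
        Znew = {}
        for x in range(1, L):
            t = []
            if x + 1 < L and x + 1 in Z:
                t = poly_add(t, q_shift(Z[x + 1], x - 1))
            if x - 1 in Z:
                t = poly_add(t, Z[x - 1])
            if t:
                Znew[x] = t
        Z = Znew
    return Z.get(1, [0])

def carlitz_riordan(n):
    # C_n(q) = sum_{k=0}^{n-1} q^k C_k(q) C_{n-k-1}(q),  C_0 = 1
    C = [[1]]
    for m in range(1, n + 1):
        cm = [0]
        for k in range(m):
            cm = poly_add(cm, q_shift(poly_mul(C[k], C[m - k - 1]), k))
        C.append(cm)
    return C[n]

for n in range(1, 7):
    assert z_top(n) == carlitz_riordan(n)
print(carlitz_riordan(3))  # -> [1, 2, 1, 1], i.e. C_3(q) = 1 + 2q + q^2 + q^3
```

At $q=1$ the coefficient sums reproduce the ordinary Catalan numbers, so the same code also checks the $q\to 1$ limit of the generating function.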
Note that the first non-singular term in this expression does not contain $q$, so it does not matter in which order the limit in (\[eq:asymp1\]) is taken. However, to define the double-scaling behavior and derive the Airy-type asymptotics, the simultaneous scaling in $s$ and $q$ is required. The Christoffel-Darboux formula applied to the generating function of the recursion (\[eq:01\]) allows one to compute the kernel $K(s,t)$ for the (1+1)D magnetic random walk in the generating-function form: $$\begin{gathered} K(s,t) = \lim_{L\to\infty} K_L(s,t) \equiv \lim_{L\to\infty} \sum_{n=1}^L F_n(s^2)F_n(t^2) \\ = \left. (s t)^2 q^{L-1} \frac{F_L(s,q)F_{L+1}(t,q) - F_{L+1}(s,q)F_{L}(t,q)}{t^2-s^2} \right|_{L\to\infty} \\ \propto \frac{s^2 A_q(s^2)A_q(t^2/q)-t^2A_q(t^2)A_q(s^2/q)}{t^2-s^2} \label{eq:kern}\end{gathered}$$ The kernel $K(s,t)$ in (\[eq:kern\]) has the structure typical of kernels of determinantal point processes (see [@boo] for a review) with one essential difference: in (\[eq:kern\]) the kernel determines the correlations at different fugacities $s$ and $t$, conjugated to different times, at *one point*, while in determinantal processes the kernel $K(x,y)$ determines the correlations *at different points*, $x$ and $y$, at the same time. [999]{} M. R. Douglas and V. A. Kazakov, “Large N phase transition in continuum QCD in two-dimensions,” Phys. Lett. B [**319**]{}, 219 (1993) \[hep-th/9305047\]. D. J. Gross and E. Witten, “Possible Third Order Phase Transition in the Large N Lattice Gauge Theory,” Phys. Rev. D [**21**]{}, 446 (1980). S. R. Wadia, “$N$ = Infinity Phase Transition in a Class of Exactly Soluble Model Lattice Gauge Theories,” Phys. Lett. B [**93**]{}, 403 (1980). doi:10.1016/0370-2693(80)90353-6 D. J. Gross and A. Matytsin, “Some properties of large N two-dimensional Yang-Mills theory,” Nucl. Phys. B [**437**]{}, 541 (1995) \[hep-th/9410054\]. B. Durhuus and P. Olesen, “The Spectral Density for Two-dimensional Continuum [QCD]{},” Nucl. Phys.
B [**184**]{}, 461 (1981). doi:10.1016/0550-3213(81)90230-3 D. Jafferis and J. Marsano, “A DK phase transition in q-deformed Yang-Mills on S\*\*2 and topological strings,” hep-th/0509004. N. Caporaso, M. Cirafici, L. Griguolo, S. Pasquetti, D. Seminara and R. J. Szabo, “Topological Strings, Two-Dimensional Yang-Mills Theory and Chern-Simons Theory on Torus Bundles,” Adv. Theor. Math. Phys.  [**12**]{}, 981 (2008) \[hep-th/0609129\].\ N. Caporaso, M. Cirafici, L. Griguolo, S. Pasquetti, D. Seminara and R. J. Szabo, “Topological strings and large N phase transitions. I. Nonchiral expansion of q-deformed Yang-Mills theory,” JHEP [**0601**]{}, 035 (2006) \[hep-th/0509041\].\ N. Caporaso, M. Cirafici, L. Griguolo, S. Pasquetti, D. Seminara and R. J. Szabo, “Topological strings and large N phase transitions. II. Chiral expansion of q-deformed Yang-Mills theory,” JHEP [**0601**]{}, 036 (2006) \[hep-th/0511043\]. X. Arsiwalla, R. Boels, M. Marino and A. Sinkovics, “Phase transitions in q-deformed 2-D Yang-Mills theory and topological strings,” Phys. Rev. D [**73**]{}, 026005 (2006) \[hep-th/0509002\].\ N. Caporaso, L. Griguolo, M. Marino, S. Pasquetti and D. Seminara, “Phase transitions, double-scaling limit, and topological strings,” Phys. Rev. D [**75**]{}, 046004 (2007) \[hep-th/0606120\]. M. Aganagic and K. Schaeffer, “Refined Black Hole Ensembles and Topological Strings,” JHEP [**1301**]{}, 060 (2013) \[arXiv:1210.1865 \[hep-th\]\]. Z. Kökényesi, A. Sinkovics and R. J. Szabo, “Refined Chern-Simons theory and (q, t)-deformed Yang-Mills theory: Semi-classical expansion and planar limit,” JHEP [**1310**]{}, 067 (2013) \[arXiv:1306.1707 \[hep-th\]\]. P. Forrester, S. Majumdar and G. Schehr, “Non-intersecting Brownian walkers and Yang-Mills theory on the sphere,” arXiv:1009.2362. G. Schehr, S. Majumdar, F. Comtet and P. Forrester, “Reunion Probability of N Vicious Walkers: Typical and Large Fluctuations for Large N,” arXiv:1210.4438. S. de Haro and M.
Tierz, “Brownian motion, Chern-Simons theory, and 2-D Yang-Mills,” Phys. Lett. B [**601**]{}, 201 (2004) \[hep-th/0406093\]. S. de Haro, “Chern-Simons theory, 2d Yang-Mills, and Lie algebra wanderers,” Nucl. Phys. B [**730**]{}, 312 (2005) \[hep-th/0412110\]. S. de Haro, “Chern-Simons theory in lens spaces from 2-D Yang-Mills on the cylinder,” JHEP [**0408**]{}, 041 (2004) \[hep-th/0407139\]. C. Vafa, “Two dimensional Yang-Mills, black holes and topological strings,” hep-th/0406058. H. Ooguri, A. Strominger and C. Vafa, “Black hole attractors and the topological string,” Phys. Rev. D [**70**]{}, 106007 (2004) \[hep-th/0405146\]. M. Aganagic, H. Ooguri, N. Saulina and C. Vafa, “Black holes, q-deformed 2d Yang-Mills, and non-perturbative topological strings,” Nucl. Phys. B [**715**]{}, 304 (2005) \[hep-th/0411280\]. A. Borodin and L. Petrov, “Integrable probability: From representation theory to Macdonald processes,” Probability Surveys [**11**]{}, 1-58 (2014), arXiv:1310.8007. G. Parisi and Y. Wu, Sci. Sin. [**24**]{}, 483 (1981). G. Marchesini, “A Comment on the Stochastic Quantization: The Loop Equation of Gauge Theory as the Equilibrium Condition,” Nucl. Phys. B [**191**]{}, 214 (1981). doi:10.1016/0550-3213(81)90296-0 G. Parisi and N. Sourlas, “Supersymmetric Field Theories and Stochastic Differential Equations,” Nucl. Phys. B [**206**]{}, 321 (1982). doi:10.1016/0550-3213(82)90538-7 M. V. Feigel’man and A. M. Tsvelik, “Hidden supersymmetry of stochastic dissipative dynamics,” Zh. Eksp. Teor. Fiz. [**83**]{}, 1430-1443 (1982). H. Nicolai, “On a New Characterization of Scalar Supersymmetric Theories,” Phys. Lett. B [**89**]{}, 341 (1980). S. Cecotti and L. Girardello, “Functional Measure, Topology and Dynamical Supersymmetry Breaking,” Phys. Lett. B [**110**]{}, 39 (1982). R. Dijkgraaf, D. Orlando and S. Reffert, “Relating Field Theories via Stochastic Quantization,” Nucl. Phys. B [**824**]{}, 365 (2010) S. Cecotti and C.
Vafa, “2d Wall-Crossing, R-Twisting, and a Supersymmetric Index,” arXiv:1002.3638 \[hep-th\]. J. L. Cardy, “SLE for theoretical physicists,” Annals Phys.  [**318**]{}, 81 (2005) doi:10.1016/j.aop.2005.04.001 \[cond-mat/0503313\]. G. Lifschytz and V. Periwal, “Schwinger-Dyson = Wheeler-DeWitt: Gauge theory observables as bulk operators,” JHEP [**0004**]{}, 026 (2000) doi:10.1088/1126-6708/2000/04/026 \[hep-th/0003179\]. D. Polyakov, “AdS / CFT correspondence, critical strings and stochastic quantization,” Class. Quant. Grav.  [**18**]{}, 1979 (2001) doi:10.1088/0264-9381/18/10/311 \[hep-th/0005094\]. E. Bettelheim, A. G. Abanov and P. Wiegmann, “Quantum Shock Waves: The case for non-linear effects in dynamics of electronic liquids,” Phys. Rev. Lett.  [**97**]{}, 246401 (2006) doi:10.1103/PhysRevLett.97.246401 \[cond-mat/0606778\].\ E. Bettelheim, A. G. Abanov and P. Wiegmann, “Nonlinear Dynamics of Quantum Systems and Soliton Theory,” J. Phys. A [**40**]{}, F193 (2007) doi:10.1088/1751-8113/40/8/F02 \[nlin/0605006 \[nlin-si\]\]. A. G. Abanov and P. B. Wiegmann, “Quantum hydrodynamics, quantum Benjamin-Ono equation, and Calogero model,” Phys. Rev. Lett.  [**95**]{}, 076402 (2005) doi:10.1103/PhysRevLett.95.076402 \[cond-mat/0504041\].\ A. G. Abanov, E. Bettelheim and P. Wiegmann, “Integrable hydrodynamics of Calogero-Sutherland model: Bidirectional Benjamin-Ono equation,” J. Phys. A [**42**]{}, 135201 (2009) doi:10.1088/1751-8113/42/13/135201 \[arXiv:0810.5327 \[cond-mat.str-el\]\]. J. P. Blaizot and M. A. Nowak, “Large N(c) confinement and turbulence,” Phys. Rev. Lett.  [**101**]{}, 102001 (2008) doi:10.1103/PhysRevLett.101.102001 \[arXiv:0801.1859 \[hep-th\]\]. J. P. Blaizot, M. A. Nowak and P. Warchoł, “Burgers-like equation for spontaneous breakdown of the chiral symmetry in QCD,” Phys. Lett. B [**724**]{}, 170 (2013) doi:10.1016/j.physletb.2013.06.022 \[arXiv:1303.2357 \[hep-ph\]\]. K. Bulycheva, A. Gorsky and S.
Nechaev, “Critical behavior in topological ensembles,” Phys. Rev. D [**92**]{}, no. 10, 105006 (2015) J. de Boer, E. P. Verlinde and H. L. Verlinde, “On the holographic renormalization group,” JHEP [**0008**]{}, 003 (2000) doi:10.1088/1126-6708/2000/08/003 \[hep-th/9912012\]. S. Nakamura and H. Ooguri, “Out of Equilibrium Temperature from Holography,” Phys. Rev. D [**88**]{}, no. 12, 126003 (2013) doi:10.1103/PhysRevD.88.126003 \[arXiv:1309.4089 \[hep-th\]\]. D. T. Son and D. Teaney, “Thermal Noise and Stochastic Strings in AdS/CFT,” JHEP [**0907**]{}, 021 (2009) doi:10.1088/1126-6708/2009/07/021 \[arXiv:0901.2338 \[hep-th\]\].\ J. Casalderrey-Solana, K. Y. Kim and D. Teaney, “Stochastic String Motion Above and Below the World Sheet Horizon,” JHEP [**0912**]{}, 066 (2009) doi:10.1088/1126-6708/2009/12/066 \[arXiv:0908.1470 \[hep-th\]\]. J. de Boer, V. E. Hubeny, M. Rangamani and M. Shigemori, “Brownian motion in AdS/CFT,” JHEP [**0907**]{}, 094 (2009) doi:10.1088/1126-6708/2009/07/094 \[arXiv:0812.5112 \[hep-th\]\]. V. E. Hubeny and M. Rangamani, “A Holographic view on physics out of equilibrium,” Adv. High Energy Phys.  [**2010**]{}, 297916 (2010) doi:10.1155/2010/297916 \[arXiv:1006.3675 \[hep-th\]\]. J. L. Cardy, “Calogero-Sutherland model and bulk boundary correlations in conformal field theory,” Phys. Lett. B [**582**]{}, 121 (2004) \[hep-th/0310291\]. A. Gorsky and N. Nekrasov, “Relativistic Calogero-Moser model as gauged WZW theory,” Nucl. Phys. B [**436**]{}, 582 (1995) A. Gorsky and N. Nekrasov, “Hamiltonian systems of Calogero type and two-dimensional Yang-Mills theory,” Nucl. Phys. B [**414**]{}, 213 (1994) doi:10.1016/0550-3213(94)90429-4 \[hep-th/9304047\]. A. L. Fitzpatrick, J. Kaplan and M. T. Walters, “Virasoro Conformal Blocks and Thermality from Classical Background Fields,” JHEP [**1511**]{}, 200 (2015) doi:10.1007/JHEP11(2015)200 \[arXiv:1501.05315 \[hep-th\]\].\ K. B. Alkalaev and V. A. 
Belavin, “Holographic interpretation of 1-point toroidal block in the semiclassical limit,” arXiv:1603.08440 \[hep-th\]. L. Susskind, “Some speculations about black hole entropy in string theory,” In \*Teitelboim, C. (ed.): The black hole\* 118-131 \[hep-th/9309145\]. G. T. Horowitz and J. Polchinski, “A Correspondence principle for black holes and strings,” Phys. Rev. D [**55**]{}, 6189 (1997) doi:10.1103/PhysRevD.55.6189 \[hep-th/9612146\]. M. Kruczenski and A. Lawrence, “Random walks and the Hagedorn transition,” JHEP [**0607**]{}, 031 (2006) \[hep-th/0508148\]. T. G. Mertens, H. Verschelde and V. I. Zakharov, “Random Walks in Rindler Spacetime and String Theory at the Tip of the Cigar,” JHEP [**1403**]{}, 086 (2014) \[arXiv:1307.3491 \[hep-th\]\].\ T. G. Mertens, H. Verschelde and V. I. Zakharov, “Near-Hagedorn Thermodynamics and Random Walks: a General Formalism in Curved Backgrounds,” JHEP [**1402**]{}, 127 (2014) \[arXiv:1305.7443 \[hep-th\]\]. J. Bec and K. Khanin, Burgers Turbulence, Phys. Rep., [**447**]{}, 1 (2007) A. Gorsky and A. Milekhin, “Condensates and instanton – torus knot duality. Hidden Physics at UV scale,” Nucl. Phys. B [**900**]{}, 366 (2015) doi:10.1016/j.nuclphysb.2015.09.015 \[arXiv:1412.8455 \[hep-th\]\]. A. Gorsky, A. Milekhin and N. Sopenko, “The Condensate from Torus Knots,” JHEP [**1509**]{}, 102 (2015) doi:10.1007/JHEP09(2015)102 \[arXiv:1506.06695 \[hep-th\]\]. S. Bolognesi, “Magnetic Bags and Black Holes,” Nucl. Phys. B [**845**]{}, 324 (2011) doi:10.1016/j.nuclphysb.2010.12.008 \[arXiv:1005.4642 \[hep-th\]\].\ S. Bolognesi and D. Tong, “Monopoles and Holography,” JHEP [**1101**]{}, 153 (2011) doi:10.1007/JHEP01(2011)153 \[arXiv:1010.4178 \[hep-th\]\]. T. Azuma, P. Basu and S. R. Wadia, “Monte Carlo Studies of the GWW Phase Transition in Large-N Gauge Theories,” Phys. Lett. B [**659**]{}, 676 (2008) doi:10.1016/j.physletb.2007.11.088 \[arXiv:0710.5873 \[hep-th\]\]. L. Alvarez-Gaume, P. Basu, M. Marino and S. R. 
Wadia, “Blackhole/String Transition for the Small Schwarzschild Blackhole of AdS(5)x S\*\*5 and Critical Unitary Matrix Models,” Eur. Phys. J. C [**48**]{}, 647 (2006) doi:10.1140/epjc/s10052-006-0049-x \[hep-th/0605041\]. S. A. Hartnoll, “Horizons, holography and condensed matter,” arXiv:1106.4324 \[hep-th\]. J. A. Minahan and A. P. Polychronakos, “Classical solutions for two-dimensional QCD on the sphere,” Nucl. Phys. B [**422**]{}, 172 (1994) \[hep-th/9309119\]. K. Liechty and D.Wang , “ Nonintersecting Brownian motions on the unit circle.”, arxiv:1312.7390 S. Giombi and V. Pestun, “The 1/2 BPS ’t Hooft loops in N=4 SYM as instantons in 2d Yang-Mills,” J. Phys. A [**46**]{}, 095402 (2013) \[arXiv:0909.4272 \[hep-th\]\]. E. Pollock and D. Ceperley. “Path-Integral Computation of Superfluid Densities”, Phys.Rev. B36, 8343 (1987) D. Ceperley and E. Pollock, “Path-Integral computation of the Low-Temperature Properties of Liquid 4He”, Phys.Rev.Lett.56, 351 (1986) B. Svistunov, E. Babaev and N. Prokof’ev, “Superfluid state of matter”, CRC Press, (2015) N. Prokof’ev and B. Svistunov , “Two definitions of Superfluid Density” DOI: 10.1103/PhysRevB.61.11282 G. Dvali and C. Gomez, “Black Holes as Critical Point of Quantum Phase Transition,” Eur. Phys. J. C [**74**]{}, 2752 (2014) \[arXiv:1207.4059 \[hep-th\]\]. D. Flassig, A. Franca and A. Pritzel , “Large-N ground state of the Lieb-Liniger model and Yang-Mills theory on a two-sphere” arXiv;1508.01515 A. A. Gerasimov and S. L. Shatashvili, “Two-dimensional gauge theories and quantum integrable systems,” Proceedings of Symposia in Pure Mathematics, May 25-29 2007, University of Augsburg, Germany \[arXiv:0711.1472 \[hep-th\]\].\ A. A. Gerasimov and S. L. Shatashvili, “Higgs Bundles, Gauge Theories and Quantum Groups,” Commun. Math. Phys.  [**277**]{}, 323 (2008) \[hep-th/0609024\]. A. Jevicki and B. Sakita, “The Quantum Collective Field Method and Its Application to the Planar Limit,” Nucl. Phys. B [**165**]{}, 511 (1980). 
doi:10.1016/0550-3213(80)90046-2 A. Gorsky and V. Lysov, “From effective actions to the background geometry,” Nucl. Phys. B [**718**]{}, 293 (2005) doi:10.1016/j.nuclphysb.2005.04.020 R. Gopakumar, “From free fields to AdS,” Phys. Rev. D [**70**]{}, 025009 (2004) doi:10.1103/PhysRevD.70.025009 \[hep-th/0308184\]. A. P. Polychronakos, “Generalized statistics in one-dimension,” hep-th/9902157. N. Nekrasov, “On a duality in Calogero-Moser-Sutherland systems,” hep-th/9707111. H. Neuberger, “Burgers’ equation in 2D SU(N) YM,” Phys. Lett. B [**666**]{}, 106 (2008) doi:10.1016/j.physletb.2008.06.064 \[arXiv:0806.0149 \[hep-th\]\].\ H. Neuberger, “Complex Burgers’ equation in 2D SU(N) YM,” Phys. Lett. B [**670**]{}, 235 (2008) doi:10.1016/j.physletb.2008.11.009 \[arXiv:0809.1238 \[hep-th\]\]. P.L. Ferrari, M. Praehofer, and H. Spohn, Stochastic Growth in One Dimension and Gaussian Multi-Matrix Models, In proceedings of the *14th International Congress on Mathematical Physics* (ICMP 2003), World Scientific (Ed. J.-C. Zambrini) (2006), 404-411, arXiv:math-ph/0310053 J.M. Burgers, *The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems*, (Springer: 1974) A. Yu. Cherny, J. Caux and J. Brand, “Theory of superfluidity and drag force in the one-dimensional Bose gas” Frontiers of Physics 7, 54 (2012) D. Krefl, “Non-Perturbative Quantum Geometry,” JHEP [**1402**]{}, 084 (2014) doi:10.1007/JHEP02(2014)084 M. Kardar, Replica Bethe ansatz studies of two-dimensional interfaces with quenched random impurities Nucl. Phys. B [**290**]{}, 582 (1987) Victor Dotsenko, Bethe anzats derivation of the Tracy-Widom distribution for one-dimensional directed polymers, arXiv:1003.4899 P.Calabrese and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011); arXiv:1204.2607 Victor Dotsenko, private communication I. M. Lifshitz, S. A. Gredeskul, and L. A. Pastur, *Introduction to the theory of disordered systems* (Wiley-Interscience, 1988)\ W. Kirsch and I. 
Veselic, Lifshitz Tails for a Class of Schrödinger Operators with Random Breather-Type Potential, Lett. Math. Phys. [**94**]{}, 27 (2010). A. Grassi, Y. Hatsuda and M. Marino, “Topological Strings from Quantum Mechanics,” arXiv:1410.3382 \[hep-th\]. P. G. O. Freund and A. V. Zabrodin, “Macdonald polynomials from Sklyanin algebras: a conceptual basis for the p-adics quantum group connection.,” Commun. Math. Phys.  [**147**]{}, 277 (1992) doi:10.1007/BF02096588 \[hep-th/9110066\]. M. Kulkarni and A. G. Abanov, “Cold Fermi-gas with long range interaction in a harmonic trap,” Nucl. Phys. B [**846**]{}, 122 (2011) doi:10.1016/j.nuclphysb.2010.12.015 \[arXiv:1006.0966 \[cond-mat.quant-gas\]\]. S. Dutta and R. Gopakumar, “Free fermions and thermal AdS/CFT,” JHEP [**0803**]{}, 011 (2008) doi:10.1088/1126-6708/2008/03/011 \[arXiv:0711.0133 \[hep-th\]\]. R. Dijkgraaf, R. Gopakumar, H. Ooguri and C. Vafa, “Baby universes in string theory,” Phys. Rev. D [**73**]{}, 066002 (2006) doi:10.1103/PhysRevD.73.066002 \[hep-th/0504221\].\ M. Aganagic, H. Ooguri and T. Okuda, “Quantum Entanglement of Baby Universes,” Nucl. Phys. B [**778**]{}, 36 (2007) doi:10.1016/j.nuclphysb.2007.04.006 \[hep-th/0612067\]. G. Schehr, S.N. Majumdar, A. Comtet, and J. Randon-Furlin, Exact distribution of the maximal height of p vicious walkers, arXiv:0807.0522 C. W. J. Beenakker and B. Rejaei, Random-matrix theory of parametric correlations in the spectra of disordered metals and chaotic billiards, arXiv:cond-mat/9310068 S. Mashkevich and S. Ouvry, Area Distribution of Two-Dimensional Random Walks on a Square Lattice J. Stat. Phys. [**137**]{}, 71-78 (2009) L. Carlitz and J. Riordan, Two element lattice permutation numbers and their $q$-generalization, Duke J. Math. [**31**]{}, 371-388 (1964) J. Fürlinger and J. Hofbauer, $q$-Catalan numbers, J. Comb. Th. A [**40**]{}, 248-264 (1985) T. Prellberg and R. 
Brak, Critical exponents from nonlinear functional equations for partially directed cluster models, J. Stat. Phys., [**78**]{}, 701-730 (1995) C. Richard and A. J. Guttmann, and I. Jensen, Scaling function and universal amplitude combinations for self-avoiding polygons, J.Phys. A: Math. Gen. [**34**]{}, L495-L501 (2001) C. Richard, Scaling Behaviour of Two-Dimensional Polygon Models, J. Stat. Phys., [**108**]{}, 459-493 (2002) A. Borodin, A. Okounkov, G. Olshanski, Asymptotics of Plancherel measures for symmetric groups, J. Amer. Math. Soc. [**13**]{} (2000), 481
---
abstract: 'We have derived theoretical expressions for the forces exerted on a so-called Wilhelmy plate, which we modeled as a quasi-2D flat and smooth solid plate immersed in a liquid pool of a simple liquid. All forces given by the theory, namely the local forces on the top, the contact line and the bottom of the plate as well as the total force, showed excellent agreement with the MD simulation results. The force expressions were derived by a purely mechanical approach, which is exact and ensures the force balance on control volumes set arbitrarily in the system, and they are valid as long as the solid-liquid (SL) and solid-vapor (SV) interactions can be described by mean fields. In addition, we revealed that the local forces around the bottom and top of the solid plate can be related to the SL and SV interfacial tensions $\gsl$ and $\gsv$, which was verified through comparison with the SL and SV works of adhesion obtained by thermodynamic integration (TI). These results confirm that $\gsl$ and $\gsv$ as well as the liquid-vapor interfacial tension $\glv$ can be extracted from a single equilibrium MD simulation, without the computationally demanding calculation of local stress distributions or the TI.'
author:
- Yuta Imaizumi
- Takeshi Omori
- Hiroki Kusudo
- Carlos Bistafa
- Yasutaka Yamaguchi
title: 'Wilhelmy equation revisited: a lightweight method to measure liquid-vapor, solid-liquid and solid-vapor interfacial tensions from a single molecular dynamics simulation'
---

The following article has been submitted to *The Journal of Chemical Physics*.

Introduction {#sec:intro}
============

The behavior of the contact line (CL), where a liquid-vapor interface meets a solid surface, has long been a topic of interest in various scientific and engineering fields because it governs wetting properties.
[@deGenne1985; @Ono1960; @Rowlinson1982; @Schimmele2007; @Drelich2019] By introducing the concept of interfacial tensions and the contact angle $\theta$, Young’s equation [@Young1805] is given by $$\gsl-\gsv+\glv \cos\theta = 0, \label{eq:Young}$$ where $\gsl$, $\gsv$ and $\glv$ denote the solid-liquid (SL), solid-vapor (SV) and liquid-vapor (LV) interfacial tensions, respectively. The contact angle is a common measure of wettability at the macroscopic scale. Young’s equation  was first proposed in 1805, before the establishment of thermodynamics, based on the wall-tangential force balance of the interfacial tensions exerted on the CL, [@Gao2009] whereas recently it has often been re-defined from a thermodynamic point of view instead of through the mechanical force balance. [@deGenne1985] Wetting is especially critical at the nanoscale, where the surface-to-volume ratio is large, *e.g.,* in the fabrication process of semiconductors, [@Tanaka1993] where the length scale of the structure has reached down to several nanometers. From a microscopic point of view, @Kirkwood1949 first provided the theoretical framework of surface tension based on statistical mechanics, and molecular dynamics (MD) and Monte Carlo (MC) simulations have been carried out for the microscopic understanding of wetting through the connection with the interfacial tensions.
[@Nijmeijer1990_theor; @Nijmeijer1990_simul; @Tang1995; @Gloor2005; @Ingebrigtsen2007; @Das2010; @Weijs2011; @Seveno2013; @Surblys2014; @Nishida2014; @Lau2015; @Yamaguchi2019; @Kusudo2019; @Bey2020; @Grzelak2008; @Leroy2009; @Leroy2010; @Kumar2014; @Leroy2015; @Ardham2015; @Kanduc2017; @Kanduc2017a; @Jiang2017; @Surblys2018; @Ravipati2018] Most of these works on a simple flat and smooth solid surface indicated that the apparent contact angle of the meniscus or droplet obtained in the simulations corresponded well to the one predicted by Young’s equation  using the interfacial tensions calculated in a mechanical and/or a thermodynamic manner, where Bakker’s equation and its extensions, which relate the stress distribution around an LV, SL or SV interface to the corresponding interfacial tension, have played a key role. [@Yamaguchi2019] On the other hand, on inhomogeneous or rough surfaces, the apparent contact angle did not seem to correspond well to the predicted one, [@Leroy2010; @Giacomello2016; @Zhang2015; @Zhang2019] because the pinning force exerted from the solid must be included in the wall-tangential force balance. [@Kusudo2019] The Wilhelmy method [@Wilhelmy1863] is one of the most common methods to experimentally measure the LV interfacial tension (surface tension) or the contact angle. [@Volpe2018] In this method, the force on a solid sample vertically immersed into a liquid pool is expressed from the force balance by $$\Lztotal = l \glv \cos \theta + mg - \rho gV, \label{eq:Wilhelmy_full}$$ where $\Lztotal$ is the total downward force (load) measured on the sample, the contact angle $\theta$ is defined on the liquid side, $l$ is the CL perimeter, $m$ is the sample mass, $V$ denotes the volume of the sample immersed in a liquid of density $\rho$, and $g$ stands for the acceleration of gravity.
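As a numerical illustration of this force balance, the measured load can be computed from Eq.  and inverted for $\cos\theta$; the plate dimensions and fluid properties below are hypothetical, water-like values, not taken from this work.

```python
import math

def wilhelmy_load(l, gamma_lv, theta, m, rho, V, g=9.81):
    # Total downward force on the sample:
    # L = l*gamma_lv*cos(theta) + m*g - rho*g*V
    return l * gamma_lv * math.cos(theta) + m * g - rho * g * V

def cos_theta_from_load(load, l, gamma_lv, m, rho, V, g=9.81):
    # Invert the balance for cos(theta) once m and V are known.
    return (load - m * g + rho * g * V) / (l * gamma_lv)

# Hypothetical sample: a 20 mm x 0.1 mm plate dipped in water.
l = 2 * (20e-3 + 0.1e-3)            # contact-line perimeter [m]
gamma_lv = 0.072                    # LV surface tension [N/m]
theta = math.radians(30.0)          # contact angle
m, rho, V = 1.0e-4, 1.0e3, 2.0e-9   # mass [kg], density [kg/m^3], immersed volume [m^3]
load = wilhelmy_load(l, gamma_lv, theta, m, rho, V)
```

In the macroscopic setting the gravity and buoyancy corrections must be kept; the nanoscale simplification discussed next amounts to dropping the $mg$ and $\rho g V$ terms.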
The history of the Wilhelmy method and practical issues, mainly from a macroscopic point of view, are well summarized in a review article. [@Volpe2018] At the nanoscale, the gravitational force and the buoyancy, the 2nd and 3rd terms on the RHS of Eq. , are negligible, and it follows that $$\xiztot \approx \glv \cos\theta, \label{eq:Wilhelmy}$$ where the force per CL length $\xiztot$ is defined by $$\xiztot \equiv \frac{\Lztotal}{l}. \label{eq:def_xiztot}$$ From Eq. , one can estimate an unknown $\glv$ from $\xiztot$ and $\theta$ determined by the apparent meniscus shape, or an unknown $\theta$ from $\xiztot$ and $\glv$ as a known physical property. The sign of $\xiztot$ is directly related to the wettability: the force is downward for a wettable solid sample with $\theta < \pi/2$. It is often modeled, typically with a macroscopic schematic illustrating the balance of forces acting on the solid sample, as if the solid sample is ‘pulled’ locally at the CL toward the direction tangential to the LV interface. In such a model, the wall-tangential component of this force, $l \glv \cos \theta$ in Eq. , seems to act on the solid locally at the CL; however, this is not correct from a microscopic point of view. [@Marchand2012; @Das2011; @Weijs2013] As a straightforward example, consider the case with $\theta = \pi/2$: such a model claims that the local wall-tangential force from the fluid around the CL must be zero because $\cos \theta=0$, whereas the fluid density $\rho$ along the wall-tangential direction $z$ changes with $\ptl \rho/\ptl z \ne 0$ around the CL, which should form an inhomogeneous force field for the solid in the $z$-direction. Probably due to the difficulty of direct experimental measurement, few studies have so far specifically examined the local force on the solid in comparison with Young’s equation.
Among them, Das et al.[@Das2011] and Weijs et al.[@Weijs2013] proposed a model that describes the local force on the solid around the CL per unit length as $\glv(1+\cos \theta)$, based on the density functional theory with the sharp-kink approximation. [@Merchant1992; @Getta1998] This model was later examined by MD simulations for a simple liquid. [@Seveno2013] In this work, we revisited the forces exerted on the Wilhelmy plate with non-zero thickness and derived theoretical expressions of the local forces on the CL and on the top and bottom of the plate as well as the total force on the plate. The derivations were done by a purely mechanical approach, which ensured the force balance on arbitrarily set control volumes, and the connection to thermodynamics was given by the extended Bakker equation. [@Yamaguchi2019] We also verified the present theoretical results by MD simulations. As a major outcome of the expressions of the local forces, we will show in this article that all the interfacial tensions involved in the system, $\glv$, $\gsl$ and $\gsv$, can be measured from a single equilibrium MD simulation without computationally demanding calculations.

Method {#sec:method}
======

MD Simulation
-------------

![\[Fig:system\] Equilibrium molecular dynamics (MD) simulation systems of a quasi-2D meniscus formed on a hollow rectangular solid plate dipped into a liquid pool of a simple Lennard-Jones (LJ) fluid: the Wilhelmy MD system. ](./fig01-system.eps){width="0.9\linewidth"}

We employed equilibrium MD simulation systems of a quasi-2D meniscus formed on a hollow rectangular solid plate (denoted as the ‘solid plate’ hereafter) dipped into a liquid pool of a simple fluid, as shown in Fig. \[Fig:system\]. We call this system the ‘Wilhelmy MD system’ hereafter. Generic particles interacting through a LJ potential were adopted as the fluid particles.
The 12-6 LJ potential given by $$\Phi^\mathrm{LJ}(r_{ij}) = 4\epsilon \left[ \left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^{6} + c_{2}^\mathrm{LJ}\left(\frac{r_{ij}}{\rc}\right)^2 + c_{0}^\mathrm{LJ} \right], \label{eq:LJ}$$ was used for the interaction between fluid particles, where $r_{ij}$ is the distance between particles $i$ at position $\bm{r}_{i}$ and $j$ at $\bm{r}_{j}$, while $\epsilon$ and $\sigma$ denote the LJ energy and length parameters, respectively. This LJ interaction was truncated at a cut-off distance of $\rc=3.5 \sigma$, and quadratic functions were added so that the potential and interaction force smoothly vanished at $\rc$. The constants $c_{2}^\mathrm{LJ}$ and $c_{0}^\mathrm{LJ}$ were given in our previous study. [@Nishida2014] Hereafter, fluid and solid particles are denoted by ‘f’ and ‘s’, respectively, and the corresponding combinations are indicated by subscripts. A rectangular solid plate in contact with the fluid was prepared by bending a honeycomb graphene sheet, where the solid particles were fixed at the positions of a 2D-hexagonal periodic structure with an inter-particle distance $r_\mathrm{ss}$ of 0.141 nm. The zigzag edge of the honeycomb structure was set parallel to the $y$-direction, with solid particles located at the edge to match the hexagonal periodicity. The right and left faces were set at $x=\pm x_\mathrm{s}$ parallel to the $yz$-plane, and the top and bottom faces were parallel to the $xy$-plane. Note that the distance between the left and right faces, $2x_\mathrm{s}\approx 1.7$ nm, was larger than the cutoff distance $\rc$. The solid-fluid (SF) interaction, which denotes the SL or SV interaction, was also expressed by the LJ potential in Eq. , where the length parameter $\sigma_\mathrm{sf}$ was given by the Lorentz mixing rule, while the energy parameter $\epsilon_\mathrm{sf}$ was varied in a parametric manner by multiplying the base value $\epsilon^{0}_\mathrm{sf}=\sqrt{\epsilon_\mathrm{ff}\epsilon_\mathrm{ss}}$ by an SF interaction coefficient $\eta$: $$\label{eq:def_eta} \epsilon_\mathrm{sf} = \eta \epsilon^{0}_\mathrm{sf}.$$ This parameter $\eta$ expressed the wettability: $\eta$ and the contact angle of a hemi-cylindrically shaped equilibrium droplet on a homogeneous flat solid surface have a one-to-one correspondence, [@Nishida2014; @Yamaguchi2019; @Kusudo2019] and we set $\eta$ between 0.03 and 0.15 so that the corresponding cosine of the contact angle $\cos \theta$ ranged from $-0.9$ to $0.9$. The definition of the contact angle is described later in Sec. \[sec:resdis\]. Note that because the solid-solid inter-particle distance $r_\mathrm{ss}$ shown in Table \[tab:table1\] was relatively small compared to the LJ length parameters $\sigma_\mathrm{ff}$ and $\sigma_\mathrm{fs}$, the surface is considered to be very smooth, and the wall-tangential force from the solid on the fluid, which induces pinning of the CL, is negligible. [@Yamaguchi2019; @Kusudo2019] In addition to these intermolecular potentials, we set a horizontal potential wall on the bottom (floor) of the calculation cell, fixed at $z=\zflr$ about 5.3 nm below the bottom of the solid plate, which interacted only with the fluid particles through a one-dimensional potential field $\Phi_\mathrm{flr}^\mathrm{1D}$ given as a function of the distance from the wall by $$\label{eq:potentialbath} \Phi_\mathrm{flr}^\mathrm{1D}(z'_{i})= 4\pi \rho_{n} \epsilon^{0}_\mathrm{sf} \sigma_\mathrm{sf}^{2} \left [ \frac{1}{5} \left( \frac{\sigma_\mathrm{sf} }{z'_{i} } \right)^{10} - \frac{1}{2} \left( \frac{\sigma_\mathrm{sf} }{z'_{i} } \right)^{4} + c_{2}^\mathrm{flr} \left(\frac{z'_{i}}{\zc^\mathrm{flr}}\right)^2 + c_{1}^\mathrm{flr} \left(\frac{z'_{i}}{\zc^\mathrm{flr}}\right) + c_{0}^\mathrm{flr} \right], \quad z'_{i}\equiv z_{i} - z_\mathrm{flr},$$ where $z_{i}$ is the $z$-position of fluid particle $i$. This potential wall mimicked the mean potential field created by a single layer of solid particles with a uniform area number density $\rho_{n}$. Similar to Eq. , the potential field in Eq.  was truncated at a cut-off distance of $\zc^\mathrm{flr}=3.5 \sigma_\mathrm{sf}$, and low-order polynomial terms were added so that the potential and interaction force smoothly vanished at $\zc^\mathrm{flr}$. As shown in Fig. \[Fig:system\], fluid particles were rather strongly attracted to this plane because it roughly corresponded to a solid wall showing complete wetting. With this setup, the liquid pool was stably maintained even when the liquid pressure was low with a highly wettable solid plate. Furthermore, we set another horizontal potential wall on the top (ceiling) of the calculation cell, fixed at $z=\zceil$ about 4.7 nm above the top of the solid plate, exerting a repulsive potential field $\Phi_\mathrm{ceil}^\mathrm{1D}$ on the fluid particles given by $$\label{eq:potentialbath_top} \Phi_\mathrm{ceil}^\mathrm{1D}(z''_{i})= 4\pi \rho_{n} \epsilon^{0}_\mathrm{sf} \sigma_\mathrm{sf}^{2} \left [ \frac{1}{5} \left( \frac{\sigma_\mathrm{sf} }{z''_{i} } \right)^{10} + c_{2}^\mathrm{ceil} \left(\frac{z''_{i}}{\zc^\mathrm{ceil}}\right)^2 + c_{1}^\mathrm{ceil} \left(\frac{z''_{i}}{\zc^\mathrm{ceil}}\right) + c_{0}^\mathrm{ceil} \right], \quad z''_{i}\equiv \zceil - z_{i},$$ where a cut-off distance of $\zc^\mathrm{ceil}= \sigma_\mathrm{sf}$ was set to express a purely repulsive potential wall.
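The smoothing constants such as $c_{2}^\mathrm{LJ}$ and $c_{0}^\mathrm{LJ}$ are cited from a previous study; assuming, as stated in the text, that they are chosen so that both the potential and the force vanish at the cut-off, they follow in closed form from $\Phi^\mathrm{LJ}(\rc)=0$ and $\mathrm{d}\Phi^\mathrm{LJ}/\mathrm{d}r|_{r=\rc}=0$. A minimal sketch (the values are derived here from those two conditions, not quoted from the paper):

```python
def lj_smoothing_coeffs(sigma, rc):
    # Solve phi(rc) = 0 and dphi/dr(rc) = 0 for c2 and c0 in
    # phi(r) = 4*eps*[(s/r)^12 - (s/r)^6 + c2*(r/rc)^2 + c0].
    A = (sigma / rc) ** 12
    B = (sigma / rc) ** 6
    c2 = 6.0 * A - 3.0 * B   # from the vanishing-force condition
    c0 = 4.0 * B - 7.0 * A   # from the vanishing-potential condition
    return c2, c0

def lj_phi(r, eps, sigma, rc, c2, c0):
    # Smoothed 12-6 LJ potential of Eq. (LJ).
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6
                        + c2 * (r / rc) ** 2 + c0)

def lj_dphi_dr(r, eps, sigma, rc, c2, c0):
    # Radial derivative of the smoothed potential.
    return 4.0 * eps * (-12.0 * sigma ** 12 / r ** 13
                        + 6.0 * sigma ** 6 / r ** 7
                        + 2.0 * c2 * r / rc ** 2)

# Reduced units (sigma = eps = 1) with the paper's cut-off rc = 3.5*sigma.
c2, c0 = lj_smoothing_coeffs(1.0, 3.5)
```

The same construction, with one extra linear coefficient, fixes the $c_{2}$, $c_{1}$, $c_{0}$ constants of the floor and ceiling walls.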
The periodic boundary condition was applied in the horizontal $x$- and $y$-directions, where the system size in the $y$-direction, $l_{y}\approx3.66$ nm, matched the hexagonal periodicity of the graphene sheet. The temperature of the system was maintained at a constant value $T_\mathrm{c} = 90$ K, above the triple-point temperature, [@Mastny2007] by velocity rescaling applied to the $x$- and $y$-velocity components of the fluid particles within 0.8 nm of the floor wall. Note that this region was sufficiently far from the bottom of the solid plate and no direct thermostatting was imposed around the solid plate, so that this temperature control had no effect on the present results. With this setting, a quasi-2D LJ liquid with a meniscus-shaped LV interface and the CL parallel to the $y$-direction was formed as an equilibrium state, as exemplified in Fig. \[Fig:system\], where a liquid bulk with an isotropic density distribution existed above the bottom wall by choosing a proper number of fluid particles $N_\mathrm{f}$, as shown in Fig. \[Fig:distribtution\]. We checked that the temperature was constant in the whole system after the equilibration run described below. Note also that in the present quasi-2D systems, effects of the CL curvature can be neglected. [@Boruvka1977; @Marmur1997line; @Ingebrigtsen2007; @Leroy2010; @Weijs2011; @Nishida2014; @Yamaguchi2019; @Kusudo2019] The velocity Verlet method was applied for the integration of the Newtonian equation of motion with a time increment of 5 fs for all systems. The simulation parameters are summarized in Table \[tab:table1\] together with the corresponding non-dimensional ones, normalized by the standard values based on $\epsilon_\mathrm{ff}$, $\sigma_\mathrm{ff}$ and $m_\mathrm{f}$.
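The partial velocity rescaling described above might look like the following sketch; this is not the authors' code, and the array shapes, floor position and selection width are illustrative assumptions. Only the in-plane velocity components of particles near the floor are rescaled toward the target temperature.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant [J/K]

def rescale_xy_near_floor(v, z, m, T_target, z_floor, width=0.8e-9):
    # Rescale only the x- and y-velocity components of fluid particles
    # within `width` of the floor wall so that their in-plane kinetic
    # temperature equals T_target. v: (N, 3) velocities [m/s], z: (N,) [m].
    sel = (z - z_floor) < width
    n = int(np.count_nonzero(sel))
    if n == 0:
        return v
    ke_xy = 0.5 * m * np.sum(v[sel, :2] ** 2)
    T_now = 2.0 * ke_xy / (2 * n * kB)   # 2n in-plane degrees of freedom
    out = v.copy()
    out[sel, :2] *= np.sqrt(T_target / T_now)
    return out
```

Particles outside the thermostatted slab, and all $z$-components, are left untouched, consistent with the statement that no direct thermostatting acts near the solid plate.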
The physical properties of each equilibrium system with various $\eta$ values were calculated as the time average of 40 ns, which followed an equilibration run of more than 10 ns.

  property                     value                                    unit   non-dim. value
  ---------------------------- ---------------------------------------- ------ ----------------
  $\sigma_\mathrm{ff}$         0.340                                    nm     1
  $\sigma_\mathrm{sf}$         0.357                                    nm     1.05
  $\epsilon_\mathrm{ff}$       $1.67 \times 10^{-21}$                   J      1
  $\epsilon^{0}_\mathrm{sf}$   $1.96\times 10^{-21}$                    J      1.18
  $\epsilon_\mathrm{sf}$       $\eta \times \epsilon^{0}_\mathrm{sf}$          
  $\eta$                       0.03 – 0.15                              -      -
  $m_\mathrm{f}$               $6.64 \times 10^{-26}$                   kg     1
  $T_\mathrm{c}$               90                                       K      0.703
  $N_\mathrm{f}$               10000 – 15000                            -      -

  : Simulation parameters.[]{data-label="tab:table1"}

Results and discussion \[sec:resdis\] {#sec:result}
=====================================

Contact angle and force on the solid plate
------------------------------------------

![ (a) Distribution of the time-averaged fluid density, (b) half side snapshot, and (c) distributions of the time-averaged downward force density acting on the solid plate and solid-fluid (SF) potential energy for the system with a SF interaction parameter $\eta$ of 0.15. []{data-label="Fig:distribtution"}](./fig02-dens-force-distributions.eps){width="0.9\linewidth"}

We calculated the distribution of the force exerted from the fluid on the solid particles by dividing the system into equal-sized bins normal to the $z$-direction, where a bin height $\delta z$ of 0.2115 nm was used considering the periodicity of the graphene structure. We defined the average force density $\dxizdz$ as the time-averaged total downward (in the $-z$-direction) force from the fluid on the solid particles in each bin divided by $2l_y \delta z$, where $l_y$ is the system width in the $y$-direction. Except at the top and bottom of the solid plate, $\dxizdz$ corresponds to the total downward force from both sides divided by the total surface area of both sides, i.e., the downward force per surface area.
We also calculated the average SF potential energy per area $\usf$, which was obtained by replacing the downward force with the SF potential energy in the same binning procedure. Figure \[Fig:distribtution\] shows the distribution of the time-averaged fluid density $\rho$ around the solid plate for the system with solid-fluid interaction parameter $\eta=0.15$ and a snapshot of the system. The time-averaged distributions of the downward force acting on the solid plate $\dxizdz$ and the SF potential energy $\usf$ are also displayed in the right panel. Multi-layered structures in the liquid, called adsorption layers, were formed around the solid plate and the potential wall on the bottom, and a liquid bulk with a homogeneous density was observed away from the potential wall, the solid plate and the LV interface. The downward force $\dxizdz$ on the solid plate in Fig. \[Fig:distribtution\] (c) was positive around the top as filled with brown, zero below the top down to around the CL, and had smoothly distributed positive values around the CL as filled with blue. Farther downward, it became zero again below the CL region, and showed a sharp change from positive to negative values as filled with red. On the SV interface between the plate top and the CL and on the SL interface between the CL and the plate bottom, the time-averaged downward force was zero. Regarding the SF potential energy, $\usf$ was constant in the regions where $\dxizdz=0$. This is because the time-averaged fluid density in these regions was homogeneous in the $z$-direction: $\ptl \rho / \ptl z =0$ was satisfied within the range where the intermolecular force from the fluid on the solid particles effectively reaches, and hence no surface-tangential force in the $z$-direction was exerted on the solid. This point will be described in more detail in Subsec. \[subsec:analytical\_xiz\].
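The binning that produces $\dxizdz$ can be sketched as follows; the per-particle force arrays are hypothetical stand-ins for quantities accumulated during a simulation, and only the normalization by $2 l_y \delta z$ follows the definition in the text.

```python
import numpy as np

def force_density_profile(z_solid, fz_on_solid, z_edges, ly):
    # Bin the downward (-z) force on the solid particles into slabs and
    # normalize by 2*ly*dz, the surface area of both plate faces per bin.
    dz = z_edges[1] - z_edges[0]
    downward = -np.asarray(fz_on_solid)   # downward force = minus z-component
    hist, _ = np.histogram(z_solid, bins=z_edges, weights=downward)
    return hist / (2.0 * ly * dz)

# Toy example: three solid particles, two bins of height 0.5 (arbitrary units).
z_solid = np.array([0.25, 0.25, 0.75])
fz = np.array([-1.0, -1.0, 2.0])          # z-forces from the fluid on the solid
profile = force_density_profile(z_solid, fz, np.array([0.0, 0.5, 1.0]), ly=1.0)
```

In an actual run the same histogram would be accumulated over time steps and divided by the number of samples to obtain the time average.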
Two such regions with zero downward force were formed for all systems in the present study, and thus the total downward force as the integral of $\dxizdz$ can be clearly separated into three local parts: $\xiztop$ around the top, $\xizcl$ around the contact line, and $\xizbot$ around the bottom. As indicated in Fig. \[Fig:distribtution\] (c), $\xiztop$ and $\xizcl$ are positive, downward forces, and $\xizbot$ is negative, an upward force. Note that the distributions of $\dxizdz$ and $\usf$ around the top and bottom have less physical meaning because the corresponding bins include the top and bottom faces, and these parts of $\usf$ are not displayed in the figure. However, the local integral of $\dxizdz$ indeed gave physical information about the force around the top and bottom parts. Note also that $\xi_{z}$ has the same dimension as surface tension, i.e., force per length. The LV interface had a uniform curvature away from the solid plate so as to minimize the LV interface area, one of the principal properties of surface tension. Considering the symmetry of the system, the hemi-cylindrical LV interface with a uniform curvature is symmetric between the solid plates over the periodic boundary in the $x$-direction. Regarding the SF interface position $\xsf$, which differs from the wall surface position $x_s$, we defined it as the limit position that the fluid could reach. With this definition, Young’s equation holds for quasi-2D droplets on a smooth and flat solid surface, as shown in our previous study. [@Yamaguchi2019] The $\xsf$ value was determined as $\xsf = 1.15$ nm from the density distribution, whereas the curvature radius $R$ was determined through the least-squares fitting of a circle on the density contour of $\rho=$ 400 kg/m$^{3}$ at the LV interface, excluding the region in the adsorption layers near the solid surface.
[@Nishida2014; @Yamaguchi2019; @Kusudo2019] We defined the apparent contact angle $\theta$ as the angle at $x=\xsf$ between the SF interface and the least-squares fit of the LV interface having a curvature $\chi\equiv \pm 1/R$, with $R$ being the curvature radius. Note that the sign $\pm$ corresponds to downward or upward convex LV interfaces, respectively. The relation between the SF interaction coefficient $\eta$ and the cosine of the contact angle $\cos \theta$ is shown in Appendix \[sec:appendix\_eta\_costheta\], and the following results are presented in terms of $\cos \theta$ instead of $\eta$.

![MD results of the local downward forces exerted around the top, the contact line and the bottom of the solid plate and their sum as a function of the cosine of the contact angle. Corresponding half-snapshots and density distributions for three cases are also displayed on the top. \[Fig:local-forces\]](./fig03-local-forces.eps){width="0.6\linewidth"}

Figure \[Fig:local-forces\] shows the above-defined local downward forces $\xiztop$, $\xizcl$ and $\xizbot$ and their sum $\xiztot \equiv \xiztop + \xizcl + \xizbot$ as functions of the cosine of the contact angle $\cos \theta$ obtained by the MD simulations. Corresponding half-snapshots and density distributions are also displayed on the top. The force around the top $\xiztop$ was almost zero except for the cases with small contact angle. This is obvious because almost no vapor particles were adsorbed on the top of the solid plate in the non-wetting cases, as seen in the top panel for $\eta = 0.03$. However, in the cases with large $\cos \theta$, $\xiztop$ had a non-negligible positive value, *i.e.,* a downward force comparable to $\xiztot$, because an adsorption layer was also formed at the SV interface, as seen in the top panel for $\eta = 0.15$.
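Returning to the contact-angle determination above, the circle fit and the angle extraction can be sketched numerically. The following is a minimal illustration, assuming an algebraic (Kåsa) least-squares circle fit applied to synthetic contour points; the sample values are hypothetical, and only the $\xsf$ and contour-level conventions follow the text. With the circle center on the symmetry plane, the apparent contact angle follows from $\cos\theta = (\xend-\xsf)/R$.

```python
import numpy as np

def fit_circle(x, z):
    # algebraic (Kasa) least-squares fit of the circle x^2 + z^2 + D*x + E*z + F = 0
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, zc = -0.5 * D, -0.5 * E
    return xc, zc, np.sqrt(xc**2 + zc**2 - F)

# synthetic points standing in for the rho = 400 kg/m^3 iso-density contour (nm);
# in practice these would come from the time-averaged density distribution
x_sf, x_end = 1.15, 7.5                   # SF interface and symmetry-plane positions
theta = np.deg2rad(60.0)                  # assumed apparent contact angle
R_true = (x_end - x_sf) / np.cos(theta)   # meniscus radius implied by the geometry
alpha = np.linspace(np.pi / 2, np.pi - theta, 60)
xs = x_end + R_true * np.cos(alpha)       # arc with its center on the symmetry plane
zs = 5.0 + R_true * np.sin(alpha)         # arbitrary apex height

xc, zc, R_fit = fit_circle(xs, zs)
cos_theta_fit = (x_end - x_sf) / R_fit    # angle measured against the SF interface
```

For noisy MD contours, points inside the adsorption layers would be excluded before the fit, as done in the text.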
Regarding the force around the contact line $\xizcl$, it was positive even for negative $\cos \theta$, meaning that the solid particles around the CL were always subject to a downward force from the fluid. In contrast to $\xiztop$ and $\xizcl$, which were both positive, $\xizbot$ was negative and its magnitude increased as $\cos \theta$ increased, meaning that an upward force expelling the bottom side was exerted from the liquid, and that this upward force was larger for larger SL interaction $\eta$. Finally, the sum of the above three, $\xiztot$, appears to be proportional to $\cos\theta$. We will show later that it actually deviates from a simple Wilhelmy relation .

Analytical expressions of the forces on the solid {#subsec:analytical_xiz}
-------------------------------------------------

![ Top, contact-line (middle), and bottom parts of the solid plate subject to downward forces $\xiztop$, $\xizcl$ and $\xizbot$ from the fluid, respectively, and the control volumes (CVs) surrounding the fluid particles in contact with these solid parts subject to upward forces $\Fztop$, $\Fzcl$ and $\Fzbot$ from the solid. \[Fig:controlvolume\] ](./fig04-controlvolume.eps){width="0.6\linewidth"}

### Definition of the solid-fluid forces

In order to elucidate the origin of the forces exerted on the solid, we examined the details of the forces $\xiztop$, $\xizcl$ and $\xizbot$ from the fluid, as well as the force balance on the control volumes (CVs) surrounding the fluid around the solid plate, taking the stress distribution in the fluid into account as in our previous studies. [@Yamaguchi2019; @Kusudo2019] We considered three CVs surrounding the fluid around the solid plate, shown with dotted lines in Fig. \[Fig:controlvolume\]: a CV on the top in the dark-yellow dotted line, one around the CL in the blue dotted line, and one on the bottom in the red dotted line.
All the CVs have their right face at the boundary of the system in the $x$-direction at $x=x_\mathrm{end}$, at which the symmetry of the physical values is satisfied, and the faces in contact with the solid are set at the limit that the fluid could reach. The remaining left faces of the top and bottom CVs are set at the center of the system, where the symmetry condition is satisfied. The $z$-normal faces are set respectively at $z=\zvblk$, $\zsv$, $\zsl$ and $\zlblk$, where $\zvblk$ and $\zlblk$ are at the vapor and liquid bulk heights, whereas $\zsv$ and $\zsl$ are set at the heights of the SV and SL interfaces, respectively, as shown in Fig. \[Fig:local-forces\], at which $\dxizdz = 0$ is satisfied. These heights can be set rather arbitrarily as long as the above conditions are satisfied. We denote by $\Fztop$, $\Fzcl$ and $\Fzbot$ the forces from the solid on the fluid in the top, middle and bottom CVs, respectively. In addition, we categorize the right half of the solid plate into top, middle and bottom parts, shown with dark-yellow, blue, and red solid lines, respectively, with $\zsv$ and $\zsl$ as the boundaries, as shown in Fig. \[Fig:controlvolume\], where the forces $\xiztop$, $\xizcl$ and $\xizbot$ in the $z$-direction are exerted from the fluid, respectively. Note specifically that $\xizcl \neq \Fzcl$, $\xizbot \neq \Fzbot$ and $\xiztop \neq \Fztop$ because, for instance, $\Fzcl$ also includes the forces from the top and bottom parts of the solid, whereas $\xizcl$ includes the forces from the fluid in the top and bottom CVs. In other words, the force between the middle solid part and the middle fluid CV is in an action-reaction relation, but $\Fzcl$ and $\xizcl$ include the different extra forces above. This will be described in more detail in the following. ![Region for the double integral of the mean field regarding the interaction between the solid plate and the fluid at heights $\zs$ and $\zf$, respectively. The geometrical relation is shown in the inset.
Three height ranges of ‘top,’ ‘cl,’ and ‘bot’ corresponding to those in Fig. \[Fig:controlvolume\] are depicted in color. The cutoff distance $\zc$ for $|\zf-\zs|$ is set depending on the lateral position $\xf - \xs$, and the solid-liquid interactions between the height ranges are categorized as filled regions or as regions surrounded by solid lines. \[Fig:meanpot\_integ\] ](./fig05-dbl-integral){width="0.8\linewidth"}

### Capillary force $\xizcl$ around the contact line based on a mean-field approach \[subsubsec:meanfield\]

We start by formulating the wall-tangential force $\xizcl$ on the solid particles on the right face of the solid plate. Taking into account that the solid is effectively smooth for the fluid particles, because the interparticle distance parameters $\sigma_\mathrm{ff}$ and $\sigma_\mathrm{sf}$ are sufficiently large compared to the distance $r_\mathrm{ss}$ between solid particles, $\xizcl$ can be analytically modeled by assuming mean fields of the fluid and solid. The mean number density per volume $\rnf(\zf,\xf)\ (=\rho/m_\mathrm{f})$ of the fluid is given as a function of the two-dimensional position $(\zf,\xf)$ of the fluid, whereas a constant mean number density per area $\rns$ of the solid is used, considering the present system with a solid plate of zero thickness without volume; however, the following derivation can easily be extended to a system with a solid with a volume and a density per volume in the range $x \leq \xs$, as long as the density is independent of $\zs$. We start from the potential energy on a solid particle at position $(\xs,\ys,\zs)$ due to a fluid particle at $(\xf,\yf,\zf)$ given by Eq. . We define $$\xpf = -\xds \equiv \xf-\xs, \quad \ypf = -\yds \equiv \yf-\ys, \quad \zpf = -\zds \equiv \zf-\zs \label{eq:def_relpos}$$ in the following.
Assuming that the fluid particles are homogeneously distributed in the $y$-direction with a number density $\rnf(\zf,\xf)$ per volume, the mean potential field from an infinitesimal volume segment of $\mathrm{d}\zf \times \mathrm{d}\xf$ on the solid particle is defined by using $\rnf(\zf,\xf)$ and the mean local potential $\phi(\zpf, \xpf)$ as $\rnf(\zf,\xf)\dzf \dxf \cdot \phi(\zpf, \xpf)$, where $\phi(\zpf, \xpf)$ is given by $$\phi(\zpf, \xpf) \equiv \int_{-\infty}^{\infty} \Phi_\mathrm{LJ}(r)\dypf \label{eq:def_localphi}$$ with $$r = \sqrt{\xpf^{2} + \ypf^{2} + \zpf^{2}},\quad \sigma = \sigma_\mathrm{sf},\quad \epsilon = \epsilon_\mathrm{sf}.$$ This schematic is shown in the inset of Fig. \[Fig:meanpot\_integ\]. Then, the local tangential force $f_{z}^\mathrm{s}(\zs,\zf,\xf)\dzf \dxf \dzs$ exerted on an infinitesimal solid area segment of $\mathrm{d}\zs$ from the present fluid volume segment is given by: $$\begin{aligned} \nonumber f_{z}^\mathrm{s}(\zs,\zf,\xf) \mathrm{d}\zf \mathrm{d}\xf \mathrm{d}\zs &= -\frac{\partial}{\partial \zs} \left[\rnf(\zf,\xf)\phi(\zpf, \xpf) \right] \mathrm{d}\zf \mathrm{d}\xf \cdot \rns\mathrm{d}\zs \\ &= -\rns \rnf(\zf,\xf) \frac{\partial \phi(\zpf, \xpf)}{\partial \zs} \mathrm{d}\zf \mathrm{d}\xf \mathrm{d}\zs, \label{eq:forcesegment}\end{aligned}$$ where $$f_{z}^\mathrm{s}(\zs,\zf,\xf) = -\rns \rnf(\zf,\xf) \frac{\partial \phi(\zpf, \xpf)}{\partial \zs} \label{eq:forcedensity}$$ denotes the tangential force density on the solid. Note that $\mathrm{d}\xf$ and $\mathrm{d}\xpf$ are identical because $\xs$ is a constant.
Since $\Phi_\mathrm{LJ}(r)$ is truncated at the cutoff distance $\rc$ in the present case, $$\begin{gathered} \phi \left(\zpf, \xpf\right) =0,\quad \frac{\ptl \phi\left(\zpf, \xpf\right)}{\ptl \zs} = 0 \label{eq:phi_limit} \\ \mathrm{for} \quad |\zpf| \geq \sqrt{\rc^{2} - \xpf^{2}} \equiv \zc(\xpf) \quad \mathrm{or} \quad \xpf \geq \rc \nonumber\end{gathered}$$ holds, where $\zc(\xpf)$ as a function of $\xpf$ denotes the cutoff with respect to $\zpf$. Indeed, this cutoff is not critical as long as $\phi\left(\zpf, \xpf\right)$ quickly vanishes with the increase of $r$, but we continue the derivation including the cutoff for simplicity. With the definition of $\xsf$ as the limit that the fluid could reach, it follows that $$\rnf=0 \quad \mathrm{for} \quad \xf < \xsf.$$ In addition, considering that $\phi(\zpf, \xpf)$ is an even function with respect to $\zpf$, $$\phi\left(\zpf, \xpf\right) = \phi(-\zpf, \xpf), \label{eq:phi_even}$$ it follows for the mean local potential $\phi$ that $$\frac{\partial \phi(\zpf, \xpf) }{\partial \zs} = -\frac{\partial \phi(-\zpf, \xpf) }{\partial \zs}, \label{eq:phi_der_oddfunc}$$ and $$\frac{\partial \phi(\zpf, \xpf) }{\partial \zs} = -\frac{\partial \phi(\zpf, \xpf) }{\partial \zf}, \label{eq:exchange_zs_zf}$$ where Eq.  is applied for the latter, which corresponds to the action-reaction relation between solid and fluid particles under a simple two-body interaction, $$f_{z}^\mathrm{f}(\zs,\zf,\xf) = -f_{z}^\mathrm{s}(\zs,\zf,\xf) = -\rns \rnf(\zf,\xf) \frac{\partial \phi(\zpf, \xpf)}{\partial \zf} \label{eq:forcedensity_f}$$ holds for the tangential force density on the fluid $f_{z}^\mathrm{f}$. Based on these properties, we now derive the analytical expression of $\xizcl$ as the triple integral of the local tangential force $f_z^\mathrm{s}$ in Eq.  around the CL, where the fluid density $\rnf$ decreases with the increase of $\zf$ within a certain range.
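The properties of the mean local potential used above, evenness in $\zpf$ and vanishing beyond the cutoff $\zc(\xpf)$, can be checked numerically. The sketch below works in reduced units and assumes a simply truncated 12-6 Lennard-Jones form for $\Phi_\mathrm{LJ}$; the parameter values are placeholders, not those of the simulations.

```python
import numpy as np

eps_sf, sig_sf, r_c = 1.0, 1.0, 3.5       # hypothetical SF LJ parameters (reduced units)

def phi_lj(r):
    # simply truncated 12-6 Lennard-Jones pair potential (smoothing near r_c omitted)
    v = 4.0 * eps_sf * ((sig_sf / r) ** 12 - (sig_sf / r) ** 6)
    return np.where(r < r_c, v, 0.0)

def phi_line(zp, xp, n=4001):
    # mean local potential phi(z', x') = int Phi_LJ(r) dy' (trapezoidal rule);
    # the integrand vanishes for |y'| > r_c, so a finite range suffices
    yp = np.linspace(-r_c, r_c, n)
    f = phi_lj(np.sqrt(xp**2 + yp**2 + zp**2))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(yp)))

z_c = np.sqrt(r_c**2 - 1.0**2)            # cutoff z_c(x') for x' = 1.0
```

The even symmetry of $\phi$ in $\zpf$, and the odd symmetry of its $\zs$-derivative inherited from it, are the properties exploited in the derivation that follows.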
Let this range be $\zsl + \zc \leq \zf \leq \zsv - \zc $ satisfying $$\frac{\ptl \rnf}{\ptl \zf} < 0 \quad (\zsl + \zc \leq \zf \leq \zsv - \zc ), \label{eq:ptl_rnf_lt_0}$$ and let $\rnf$ outside this range be given as a unique function of $\xf$ by $$\rnf(\zf,\xf) =\left\{ \begin{array}{cc} \rho_{V}^\mathrm{f(SL)}(\xf) & (\zsl - \zc < \zf < \zsl + \zc) \\ \rho_{V}^\mathrm{f(SV)}(\xf) & (\zsv - \zc < \zf < \zsv + \zc) \end{array} \right. \label{eq:rnf_at_SL_SV}$$ as shown in Fig. \[Fig:meanpot\_integ\]. Then, $\xizcl$ is expressed by $$\xizcl\equiv -\int_{\xsf}^{\xs+\rc} \left[ \int_{\zsl}^{\zsv}\left( \int_{-\zc}^{\zc} f_{z}^\mathrm{s}(\zs,\zpf,\xf) \mathrm{d}\zpf \right)\mathrm{d}\zs \right] \mathrm{d}\xf \label{eq:xizcl_tripleint}$$ as the triple integral of the force density $f_{z}^\mathrm{s}$ in Eq. , where the integration range of the double integral regarding $\zf$ and $\zs$ corresponds to the region filled with blue in Fig. \[Fig:meanpot\_integ\]. To obtain the double integral in the square brackets in Eq.  for the blue-filled region in Fig. \[Fig:meanpot\_integ\], we first calculate it in the region surrounded by the solid-blue line, add those in the vertically-hatched regions, and subtract those in the horizontally-hatched regions. Note that $\rnf(\zf,\xf) = \rho_{V}^\mathrm{f(SL)}(\xf)$ and $\rnf(\zf,\xf) = \rho_{V}^\mathrm{f(SV)}(\xf)$ are assumed for the hatched regions in the bottom-left and in the top-right, respectively, based on Eq. . The double integral for the region surrounded by the solid-blue line is $$\begin{aligned} \int_{\zsl}^{\zsv}\left( \int_{-\zc}^{\zc} f_{z}^\mathrm{s}\mathrm{d}\zds \right)\mathrm{d}\zf &= -\rns \int_{\zsl}^{\zsv}\!\!\!\! \rnf(\zf,\xf) \left( \int_{-\zc}^{\zc} \frac{\partial \phi(\zpf, \xpf)}{\partial \zs}\mathrm{d}\zds \right)\mathrm{d}\zf \nonumber \\ &= 0, \label{eq:dint_blue_fill}\end{aligned}$$ by using Eq. . Indeed, from Eq.
, the reaction force $-\Fzcl$ from the solid on the fluid around the CL in the blue-dotted line in Fig. \[Fig:controlvolume\] is obtained by further integrating Eq.  with respect to $\xf$, $$\begin{aligned} \int_{\xsf}^{\xs+\rc} \left[ \int_{\zsl}^{\zsv}\left( \int_{-\zc}^{\zc} f_{z}^\mathrm{s}\mathrm{d}\zds \right)\mathrm{d}\zf \right] \mathrm{d}\xf &= -\int_{\xsf}^{\xs+\rc} \left[ \int_{\zsl}^{\zsv}\left( \int_{-\zc}^{\zc} f_{z}^\mathrm{f}\mathrm{d}\zds \right)\mathrm{d}\zf \right] \mathrm{d}\xf \nonumber \\ &= -\Fzcl \nonumber \\ &= 0. \label{eq:Fzcl=0}\end{aligned}$$ The final equality means that no tangential force acts on the fluid there, as mentioned in our previous study. [@Yamaguchi2019] Regarding the bottom-left vertically-hatched region in Fig. \[Fig:meanpot\_integ\], the double integral is $$\begin{aligned} \int_{-\zc}^{0} \left( \int_{-\zpf}^{\zc} f_{z}^\mathrm{s}\mathrm{d}\zds \right)\mathrm{d}\zpf &= -\rns\rho_{V}^\mathrm{f(SL)}(\xf) \int_{-\zc}^{0} \left( \int_{-\zpf}^{\zc} \frac{\partial \phi(\zpf, \xpf)}{\partial \zs}\mathrm{d}\zds \right)\mathrm{d}\zpf \nonumber \\ &= \rns\rho_{V}^\mathrm{f(SL)}(\xf) \int_{-\zc}^{0} \phi(\zpf, \xpf)\mathrm{d}\zpf , \label{eq:dint_bl_vert}\end{aligned}$$ where $\phi(\zc, \xpf)=0$ and Eq.  are used for the 2nd equality. This region physically corresponds to the interaction between the blue solid part and the fluid in the red-dotted part in Fig. \[Fig:controlvolume\]. For the bottom-left horizontally-hatched region in Fig. \[Fig:meanpot\_integ\], it follows that $$\begin{aligned} \int_{0}^{\zc}\left( \int_{-\zc}^{-\zpf} f_{z}^\mathrm{s} \mathrm{d}\zds\right) \mathrm{d}\zf &= -\rns\rho_{V}^\mathrm{f(SL)}(\xf) \int_{0}^{\zc} \left( \int_{-\zc}^{-\zpf} \frac{\partial \phi(\zpf, \xpf)}{\partial \zs}\mathrm{d}\zds \right)\mathrm{d}\zpf \nonumber \\ &= -\rns\rho_{V}^\mathrm{f(SL)}(\xf) \int_{0}^{\zc} \phi(\zpf, \xpf)\mathrm{d}\zpf.
\label{eq:dint_bl_horiz}\end{aligned}$$ This region corresponds to the interaction between the red solid part and the fluid in the blue-dotted part in Fig. \[Fig:controlvolume\]. Hence, the net force due to the double integrals in the bottom-left hatched regions in Eqs.  and , integrated also in the $\xf$-direction, which we define by $\usl$, results in $$\usl\equiv \rns\int_{0}^{\rc} \left(\rho_{V}^\mathrm{f(SL)}(\xpf) \int_{-\zc(\xpf)}^{\zc(\xpf)} \phi(\zpf, \xpf)\mathrm{d}\zpf\right)\mathrm{d}\xpf. \label{eq:def_usl}$$ As a physical meaning, $\usl$ represents the SL potential energy density, *i.e.,* the potential energy per SL-interfacial area, at the SL interface away from the CL and from the bottom of the solid plate. Regarding the top-right hatched regions, the net force results in $-\usv$, with the SV potential energy per area given by $$\usv\equiv \rns\int_{0}^{\rc} \left(\rho_{V}^\mathrm{f(SV)}(\xpf) \int_{-\zc(\xpf)}^{\zc(\xpf)} \phi(\zpf, \xpf)\mathrm{d}\zpf\right)\mathrm{d}\xpf, \label{eq:def_usv}$$ which can be derived in a similar manner. Thus, it follows for the force $-\xizcl$ from the fluid on the solid around the CL that $$-\xizcl = -\Fzcl + \usl - \usv, \quad \xizcl = \Fzcl - \usl + \usv, \label{eq:xizcl_Fzcl}$$ and therefore, by using $\Fzcl=0$ in Eq. , $$ \xizcl = - \usl + \usv = (- \usl) - (-\usv) \label{eq:xizcl_eq_potdif}$$ is derived as the analytical expression of $\xizcl$, where the final expression is appended considering that the potential energy densities $\usl$ and $\usv$ are both negative. ![ Dependence of the SL and SV potential energy densities $\usl$ and $\usv$, as the potential energies per interfacial area, on the cosine of the contact angle $\cos \theta$, and comparison between the force on the solid around the CL $\xizcl$ and the difference of the potential energy densities $- \usl + \usv$.
[]{data-label="Fig:usl_usv"}](./fig06-xicl-MD-theor){width="0.8\linewidth"} Figure \[Fig:usl\_usv\] shows the dependence of the SL and SV potential energy densities $\usl$ and $\usv$, respectively, as the potential energies per interfacial area, on the cosine of the contact angle $\cos \theta$, and the comparison between the force on the solid around the CL $\xizcl$ and the difference of the potential energy densities $- \usl + \usv$. Very good agreement between $\xizcl$ and $- \usl + \usv$ is observed within the whole range of the contact angle, and this indicates that Eq. \[eq:xizcl\_eq\_potdif\] is applicable to the present system with a flat and smooth surface. It is also qualitatively apparent from Eq.  that $\xizcl$ is positive regardless of the contact angle, because the SF potential energy is smaller at the SL interface than at the SV interface. It is also interesting to note that for the very wettable cases with large $\cos \theta$, *i.e.,* large wettability parameter $\eta$, $\xizcl$ decreased with the increase of $\cos \theta$. This can be explained as follows: the changes of $-\usv$ and $-\usl$ are both due to the change of $\eta$ and of the fluid density, especially in the first adsorption layer, while the density change of the SL adsorption layer due to $\eta$ is rather small. Thus, for higher $\eta$ values, the effect on $-\usv$ of the density increase of the SV adsorption layer upon the increase of $\eta$ overcomes the increase of $-\usl$.

### Total force $\xiztot$ and local forces $\xizbot$ and $\xiztop$ on the bottom and the top

Before proceeding to the analytical expressions of $\xizbot$ and $\xiztop$, we derive their relations with $\Fzbot$ and $\Fztop$. Through the comparison between the regions of double integration for $\xizbot$ and $\Fzbot$ with respect to $\zf$ and $\zs$ in Fig.
\[Fig:meanpot\_integ\], the red-filled region and the one surrounded by the solid-red line, it is clear that the difference between $\xizbot$ and $\Fzbot$ corresponds to the integral over the hatched regions around $\zsl$ in the bottom-left. Thus, it follows that $$\xizbot = \Fzbot + \usl \label{eq:xizbot_Fzbot}$$ and $$\xiztop = \Fztop - \usv. \label{eq:xiztop_Fztop}$$ Note that the sum of Eqs. , and satisfies $$\xiztot = F_{z}^\mathrm{top} + F_{z}^\mathrm{cl} + F_{z}^\mathrm{bot}. \label{eq:xiztot=Fztot}$$ Considering this feature, we examine the total force $\xiztot$ and the local ones $\xizbot$ and $\xiztop$ on the bottom and the top. We consider the distribution of the two-dimensional fluid stress tensor $\bm{\tau}$ averaged in the $y$-direction by the method of plane (MoP) [@Thompson1984; @Yaguchi2010] based on the expression by @Irving1950 (IK), with which the exact force balance is satisfied for an arbitrary control volume bounded by a closed surface. The stress tensor component $\tau_{\alpha\beta}(x,z)$ denotes the stress in the $\beta$-direction exerted on an infinitesimal surface element with an outward normal in the $\alpha$-direction at position $(x,z)$. In the formulation of the MoP based on the IK expression, $\tau_{\alpha\beta}(x,z)$ consists of the time averages of the kinetic and inter-molecular interaction contributions, due to the molecular motion passing through the surface element and the intermolecular force crossing the surface element, respectively. For a single mono-atomic fluid component whose constituent particles interact through a pair potential, as in the present study, all force line segments between two fluid particles that cross the surface element are included in the latter. Note that, technically, the SF interaction can also be included in the inter-molecular force contribution of the MoP; however, only the FF interaction as the internal force is taken into account in the stress, and the SF contribution is treated as an external force in this study.
[@Nijmeijer1990_simul; @Schofield1982; @Rowlinson1993; @Yamaguchi2019; @Kusudo2019] With this setting, the stress is zero at the SF boundary for all CVs because no fluid particle exists beyond the boundary to contribute to the stress, neither through the kinetic nor through the inter-molecular interaction contribution. Hence, the force balance on each CV containing only fluid is satisfied by the sum of the stress surface integral and the external force from the solid. The force balance in the $z$-direction on the red-dotted CV in Fig. \[Fig:controlvolume\] is expressed by $$- \int_{0}^{\xend}\tau_{zz}(x,\zlblk) \drom x + \int_{\xsf}^{\xend}\tau_{zz}(x,\zsl) \drom x + \Fzbot = 0, \label{eq:forcebalance_botCV}$$ with the stress contributions from the bottom and top faces and the external force in the LHS, respectively, taking into account that $\tau_{xz}=0$ on the $x$-normal faces at $x=0$ and $x=\xend$ due to the symmetry, and also that the stress at the SF interface is zero. Similarly, the force balances in the $z$-direction on the blue-dotted CV and the dark-yellow-dotted CV in Fig. \[Fig:controlvolume\] are expressed by $$- \int_{\xsf}^{\xend}\tau_{zz}(x,\zsl) \drom x + \int_{\xsf}^{\xend}\tau_{zz}(x,\zsv) \drom x + \Fzcl = 0, \label{eq:forcebalance_midCV}$$ and $$- \int_{\xsf}^{\xend}\tau_{zz}(x,\zsv) \drom x + \int_{0}^{\xend}\tau_{zz}(x,\zvblk) \drom x + \Fztop = 0, \label{eq:forcebalance_topCV}$$ respectively. By taking the sum of Eqs. , and , and inserting Eq. , it follows for $\xiztot$ that $$\xiztot = \int_{0}^{\xend}\tau_{zz}(x,\zlblk) \drom x - \int_{0}^{\xend}\tau_{zz}(x,\zvblk) \drom x. \label{eq:xiztbot_stress}$$ Since the bottom face of the red-dotted CV and the top face of the dark-yellow-dotted CV in Fig.
\[Fig:controlvolume\] are respectively set in the liquid and vapor bulk regions under isotropic static pressures $\plblk$ and $\pvblk$ given by $$\plblk = -\tau_{xx}(x,\zlblk) = -\tau_{zz}(x,\zlblk), \label{eq:plblk_tauzlblk}$$ and $$\pvblk = -\tau_{xx}(x,\zvblk) = -\tau_{zz}(x,\zvblk), \label{eq:pvblk_tauzvblk}$$ the 1st and 2nd terms in the RHS of Eq.  write $$\int_{0}^{\xend}\tau_{zz}(x,\zlblk) \drom x = -\int_{0}^{\xend} \plblk \drom x = -\plblk \xend, \label{eq:stressint_bot}$$ and $$\int_{0}^{\xend}\tau_{zz}(x,\zvblk) \drom x = -\int_{0}^{\xend} \pvblk \drom x = -\pvblk \xend. \label{eq:stressint_top}$$ Thus, Eq.  results in the simple analytical expression $$\xiztot = (\pvblk-\plblk) \xend. \label{eq:xiztot_laplacepressure}$$ Furthermore, by applying the geometric relation $$\sin\left(\frac{\pi}{2} - \theta\right) = \cos \theta = \chi\left( \xend - \xsf \right) \label{eq:geom_curv}$$ with $\chi$ being the LV interface curvature, and the Young-Laplace equation for the pressure difference in Eq. : $$\pvblk - \plblk = \glv\chi = \frac{\glv\cos \theta}{\xend - \xsf}, \label{eq:Young-Laplace}$$ which hold irrespective of whether the LV interface is convex downward or upward, it follows from Eq.  that $$\xiztot = \frac{\xend}{\xend-\xsf} \glv \cos \theta, \label{eq:nano-Wilhelmy}$$ as another analytical expression of $\xiztot$, which includes the correction to Eq.  considering the effect of the Laplace pressure due to the finite system configuration with the periodic boundary condition. Note also that, from Eq. , by giving $\xend$ and $\xsf$, it is possible to estimate $\glv$ from the relation between $\xiztot$ and $\cos \theta$. ![Comparison of the total downward force $\xiztot$ on the solid plate directly obtained from MD with the analytical expression $(\pvblk-\plblk) \xend$ in Eq.  using the pressures $\plblk$ and $\pvblk$ measured on the bottom and top boundaries.
The Wilhelmy equation  using $\glv=9.79\times 10^{-3}$ N/m evaluated by the Young-Laplace equation  is also shown. \[Fig:comparison-wil-model\] ](./fig07-MD-wil){width="0.8\linewidth"} Figure \[Fig:comparison-wil-model\] shows the comparison of the total downward force $\xiztot$ on the solid plate directly obtained from MD with the analytical expression $(\pvblk-\plblk) \xend$ in Eq. , using the pressures $\plblk$ and $\pvblk$ measured on the bottom and top boundaries as the force per area exerted from the fluid on the potential walls. Clearly, $\xiztot$ and $(\pvblk-\plblk) \xend$ agree very well, because Eq.  is simply the force balance to be satisfied in equilibrium systems. Regarding the pressure, $\pvblk$ is almost constant, corresponding to the saturated vapor pressure at this temperature. In addition, a linear relation between $\plblk - \pvblk$ and $\cos \theta$ can be observed, which indicates that the Young-Laplace equation  is applicable at the present scale. We evaluated $\glv$ from this relation by least-squares fitting, and the resulting value was $\glv = (9.79 \pm 0.23) \times 10^{-3}$ N/m with $\xsf = 1.15$ nm and $\xend=7.5$ nm, which was indeed close to the value obtained by a standard mechanical process. [@Surblys2014] The standard Wilhelmy equation  using this value is also shown in Fig. \[Fig:comparison-wil-model\], indicating that $\glv$ would be overestimated with the standard Wilhelmy equation  in a small measurement system like the present one. Finally, we derive the analytical expressions of the local forces $\xizbot$ and $\xiztop$. For the derivation of $\xizbot$, we apply the extended Bakker’s equation for the SL relative interfacial tension [@Yamaguchi2019; @Kusudo2019] $$\gsl - \gs0 = \int_{\xsf}^{\xend} \left[\tau_{zz}(x,\zsl)-\tau_\mathrm{L}^\mathrm{blk}\right] \drom x \label{eq:bakker_SL}$$ to the 2nd term in the LHS of Eq.
, where $\gsl - \gs0$ is the SL interfacial tension relative to the interfacial tension between the solid and a fluid with only repulsive interaction (denoted by "0" to express the solid surface without adsorbed fluid particles). Then, it follows that $$\int_{\xsf}^{\xend} \tau_{zz}(x,\zsl) \drom x = \gsl - \gs0 - (\xend-\xsf)\plblk. \label{eq:stressint_SL}$$ By inserting Eqs. , and into Eq. , the analytical expression of $\xizbot$ writes $$\begin{aligned} \xizbot &= -\plblk \xend - [\gsl - \gs0 - (\xend-\xsf)\plblk] + \usl \nonumber \\ &= -\xsf \plblk - (\gsl - \gs0) + \usl. \label{eq:xizbot_final}\end{aligned}$$ Similarly, by applying the extended Bakker’s equation for the SV interfacial tension [@Yamaguchi2019; @Kusudo2019] $$\gsv - \gs0 = \int_{\xsf}^{\xend} \left[\tau_{zz}(x,\zsv)-\tau_\mathrm{V}^\mathrm{blk}\right] \drom x \label{eq:bakker_SV}$$ to Eq.  with Eq. , the analytical expression of $\xiztop$ writes $$\xiztop = \xsf\pvblk + (\gsv - \gs0) - \usv. \label{eq:xiztop_final}$$ To verify Eqs.  and , we compared the present results with $\xizbot$ and $\xiztop$ calculated using the corresponding SL and SV works of adhesion $\Wsl$ and $\Wsv$ obtained by the thermodynamic integration (TI) with the dry-surface (DS) scheme. [@Leroy2015; @Yamaguchi2019] The calculation details are shown in Appendix \[sec:appendix\_TI\]. By definition, the SL and SV interfacial tensions $\gsl$ and $\gsv$ are related to $\Wsl$ and $\Wsv$ by $$ W_\mathrm{SL} \equiv \gs0 + \gl0 - \gsl \approx \gs0 + \glv - \gsl \label{eq:W_sl}$$ and $$\Wsv \equiv \gs0 + \gv0 - \gsv \approx \gs0 - \gsv, \label{eq:W_sv}$$ respectively, where the approximation $ \gl0 \approx \glv $ for the interfacial tension $\gl0$ between the liquid and vacuum is used in Eq. , and $\gv0$ is set to zero in the final approximation in Eq. . Note that $\gl0$ or $\glv$ is included in $\Wsl$. From Eqs.  and , and from Eqs.
and , $\xizbot$ and $\xiztop$ are respectively rewritten as $$\xizbot \approx \Wsl -\plblk \xsf - \glv + \usl, \label{eq:xizbot_Wsl_for_fig8}$$ and $$\xiztop \approx \xsf\pvblk - \Wsv - \usv. \label{eq:xiztop_Wsv_for_fig8}$$

![Comparison of the downward forces $\xizbot$ and $\xiztop$ on the bottom and top of the solid plate directly obtained from MD with those evaluated using the works of adhesion $\Wsl$ and $\Wsv$ calculated by the thermodynamic integration (TI) using the dry-surface scheme shown in Appendix \[sec:appendix\_TI\]. The error bar for $\xizbot$ using $\Wsl$ in blue comes from the evaluation of $\glv$ from $\plblk$ and $\pvblk$ in Fig. \[Fig:comparison-wil-model\]. \[Fig:fig08-bot-top-Wil-TI\] ](./fig08-bot-top-Wil-TI){width="0.8\linewidth"}

Figure \[Fig:fig08-bot-top-Wil-TI\] shows the comparison of $\xizbot$ and $\xiztop$ directly obtained from MD with those evaluated by Eqs.  and  using the SL and SV works of adhesion $\Wsl$ and $\Wsv$, respectively, obtained by the TI with the DS scheme shown in Appendix \[sec:appendix\_TI\]. Note that, except for $\Wsl$ and $\Wsv$, we used the values of $\plblk$, $\pvblk$, $\xend$, $\usl$, and $\usv$ directly obtained from the present Wilhelmy MD simulations, as well as the $\glv$ value evaluated in Fig. \[Fig:comparison-wil-model\]. The error bars for $\xizbot$ using $\Wsl$ in blue mainly came from the error upon evaluating $\glv$. Note also that the TI calculation in Appendix \[sec:appendix\_TI\] for $\Wsl$ was carried out under a control pressure of 1 MPa, whereas that for $\Wsv$ was considered to be under the saturated vapor pressure at the present temperature. For both $\xizbot$ and $\xiztop$, the Wilhelmy MD and TI results agreed well, and this indicates the validity of the present analytical expressions.

Discussion
----------

We list the key issues for further application of the present expressions in the following. First, Eqs.
, and are about the force balance and should be satisfied in equilibrium systems without any restriction. In addition, Eqs. , and are about the relation between the solid-fluid and fluid-solid forces and should hold as long as the solid plate can be decomposed into the three parts without the interfaces overlapping. At both the SL and SV interfaces, which are between the CL and the plate bottom and between the CL and the plate top, respectively, a quasi-one-dimensional density distribution with $\ptl \rho/\ptl z=0$ can be assumed, and one can apply the mean-field approach described in Sec. \[subsubsec:meanfield\]. Furthermore, Eqs.  and are the extended Bakker’s equations [@Yamaguchi2019] for the SL and SV interfacial tensions. Hence, our analytical expressions with these equations are constructed by a purely mechanical approach and are exact, as observed in the comparisons in Figs. \[Fig:usl\_usv\] and \[Fig:comparison-wil-model\]. Another issue concerns the relation between Young’s equation  and the Wilhelmy equation  formulated with the Laplace pressure. Indeed, Eq.  holds irrespective of whether the CL is pinned or not, because this relation expresses a simple equilibrium force balance. In the present case, $\Fzcl=0$ in Eq.  is satisfied because the solid surface is flat and smooth, and Young’s equation holds. This can easily be proved by considering the force balance in Eq.  for the middle CV. In cases with $\Fzcl\neq 0$ because of a pinning force exerted on the fluid from the solid around the CL, *e.g.,* due to the boundary of wettability parallel to the CL in our previous research, [@Kusudo2019] Young’s equation should be rewritten including the pinning force. Even if such a wettability boundary were included in the present system, Eq.  would still be satisfied. In practice, such a pinning force, denoted by $\zpin$ in Ref.  as the downward force from the solid on the fluid around the CL, corresponds to $-\Fzcl$ here, and this can be extracted by Eq.
as $$-\zpin = \Fzcl = \xizcl + \usl - \usv. $$ Considering the above discussion, we summarize the procedure to extract the wetting properties. In a single Wilhelmy MD simulation, we can calculate:

1. the forces $\xiztop$, $\xizcl$ and $\xizbot$ on the three parts of the solid from the force-density distribution $\dxizdz$ in the surface-tangential direction,

2. the SF potential energy densities $\usl$ and $\usv$ on the solid per area at the SL and SV interfaces, respectively, from the distribution of the potential energy density $\usf$,

3. the bulk pressures $\pvblk$ and $\plblk$ measured on the top and bottom of the system, and

4. the contact angle $\theta$ from the density distribution.

From these quantities, the following physical properties can be obtained:

1. the SL relative interfacial tension $\gsl - \gs0$ from $\xizbot$, $\usl$, $\xsf$ and $\plblk$ using Eq. ,

2. the SV relative interfacial tension $\gsv - \gs0$ from $\xiztop$, $\usv$, $\xsf$ and $\pvblk$ using Eq. ,

3. the LV interfacial tension $\glv$ from $\pvblk$, $\plblk$, $\xsf$, the system size $\xend$ and the contact angle $\theta$ using Eq. , and

4. the pinning force $\Fzcl$ from Eq. , to be added to Young’s equation, which is zero in the case of a flat and smooth solid surface.

Related to the above procedure, it should also be noted that, surprisingly, the microscopic structure of the bottom face does not have a direct effect on the force $\xizbot$. This is similar to buoyancy, given by the 3rd term of the RHS of Eq. , which depends on the volume $V$ immersed in the liquid and is not directly related to the microscopic structure. Finally, we compare the present analytical expression of the contact-line force $\xizcl$ with an existing model by @Das2011, which states $$\xizcl = \gsv - \gsl + \glv =\glv (1+\cos\theta). \label{eq:das_model}$$ This model is derived based on the assumption that the densities of the liquid and vapor remain constant at their bulk values even close to the solid interface: the so-called sharp-kink approximation.
This is similar to the interface of two different solids whose densities and structures do not change upon contact. Even under this assumption, the force $\xizcl$ on the solid around the CL is expressed by Eq.  as the difference between the SL and SV potential energy densities $\usl$ and $\usv$. [@Das2011] The difference arises in the works of adhesion. Under the sharp-kink approximation, it is clear that the works of adhesion required to quasi-statically strip the liquid and vapor off the solid surface are equal to the difference of the solid-fluid potential energies after and before the procedure, $$\Wsl = 0 -\usl = -\usl, \quad \Wsv= 0 -\usv = -\usv \quad\mbox{(under the sharp-kink approx.),} \label{eq:wsl_usl_wsv_usv}$$ because the solid and fluid structures do not change during this procedure. Then, it follows from Eq.  that $$\xizcl = \Wsl - \Wsv \quad\mbox{(under the sharp-kink approx.),}$$ which indeed results in Eq.  with Eqs.  and . However, the density around the solid surface is not constant, as shown in the density distribution in Fig. \[Fig:distribtution\], and the difference of $\Wsl$ and $\Wsv$ is not directly related to the SL and SV potential energy densities $\usl$ and $\usv$ as in Eq. . In other words, the fluid can freely deform and can have an inhomogeneous density in the field formed by the solid at the interface so as to minimize its free energy at equilibrium, and this involves the entropy effect in addition to $\usl$ and $\usv$ as parts of the internal energies. [@Surblys2018]

Conclusion
==========

We have given theoretical expressions for the forces exerted on a Wilhelmy plate, which we modeled as a quasi-2D flat and smooth solid plate immersed in a pool of a simple liquid. By a purely mechanical approach, we have derived the expressions for the local forces on the top, the contact line (CL) and the bottom of the plate as well as the total force on the plate.
All forces given by the theory showed excellent agreement with the MD simulation results. In particular, we have shown that the local force on the CL is written as the difference of the potential energy densities between the SL and SV interfaces away from the CL, but not generally as the difference between the SL and SV works of adhesion. On the other hand, we have revealed that the local forces on the top and bottom of the plate can be related to the SV and SL works of adhesion, respectively. As the summation of these local forces, we have obtained a modified form of the Wilhelmy equation, which is consistent with the overall force balance on the system. The modified Wilhelmy equation includes a factor taking the plate thickness into account, whose effect can be significant in small systems such as the present one. Finally, we have shown that with these expressions for the forces, all the interfacial tensions $\gsl$ and $\gsv$ as well as $\glv$ can be extracted from a single equilibrium MD simulation without the computationally demanding calculation of the local stress distributions and the thermodynamic integrations.

We thank Konan Imadate for fruitful discussions. T.O. and Y.Y. are supported by JSPS KAKENHI Grant Nos. JP18K03929 and JP18K03978, Japan, respectively. Y.Y. is also supported by JST CREST Grant No. JPMJCR18I1, Japan.

Relation between the SL interaction parameter and the contact angle \[sec:appendix\_eta\_costheta\]
====================================================================================================

![Relation between the cosine of the apparent contact angle $\cos \theta$ of the meniscus and the SF interaction coefficient $\eta$. \[Fig:eta-costheta\] ](./fig-a01-eta-costheta){width="0.8\linewidth"}

In the main text, we summarized the results in terms of $\cos \theta$, the cosine of the apparent contact angle $\theta$ of the meniscus, while the SF interaction coefficient $\eta$ was the parameter varied in the MD simulations.
As described in the main text, we defined $\theta$ as the angle between the SF interface at $x=\xsf= 1.15$ nm and the extended cylindrical curved surface of the LV interface having a constant curvature, determined through a least-squares fit of a circle to the density contour of $\rho=$400 kg/m$^{3}$ at the LV interface, excluding the region in the adsorption layers near the solid surface. Figure \[Fig:eta-costheta\] shows the relation between the SL interaction parameter $\eta$ and the apparent contact angle $\theta$. The contact angle cosine $\cos \theta$ increased monotonically with $\eta$, and a unique relation between the two is obtained for the present range of $\eta$.

Thermodynamic integration (TI) with the dry-surface scheme \[sec:appendix\_TI\]
================================================================================

![Simulation systems for the calculation of the solid-liquid and solid-vapor works of adhesion by the thermodynamic integration (TI) through the dry-surface (DS) scheme. \[Fig:TI-systems\] ](./fig-a02-TI-systems){width="0.65\linewidth"}

We calculated the solid-liquid (SL) and solid-vapor (SV) works of adhesion $\Wsl$ and $\Wsv$, respectively, by thermodynamic integration (TI) [@Frenkel2007] through the dry-surface (DS) scheme [@Leroy2015] to compare with the relative SL and SV interfacial tensions obtained in the present Wilhelmy MD systems. Details of the DS scheme were basically the same as in our previous study. [@Yamaguchi2019] In the systems shown in Fig. \[Fig:TI-systems\], the liquid or vapor was quasi-statically stripped off the solid surface fixed on the bottom of the coordinate system, which had the same periodic honeycomb structure as the solid plate in the Wilhelmy MD system. The work of adhesion was calculated as the free energy difference between the states after and before this procedure, where the coupling parameter for the TI was embedded in the SF interaction parameter in the DS scheme.
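The circle-fit extraction of the apparent contact angle described above can be sketched as follows. This is a minimal sketch: the extraction of contour points from the density field is omitted, the Kasa least-squares fit is one possible choice, and the sign convention in `apparent_contact_angle` is an assumption that depends on which side of the plate the liquid lies:

```python
import numpy as np

def fit_circle(x, z):
    """Least-squares (Kasa) fit of a circle to contour points (x, z).

    Solves ||2*xc*x + 2*zc*z + c - (x^2 + z^2)|| for (xc, zc, c),
    with R = sqrt(c + xc^2 + zc^2). Returns (xc, zc, R).
    """
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    b = x**2 + z**2
    (xc, zc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xc, zc, np.sqrt(c + xc**2 + zc**2)

def apparent_contact_angle(x, z, x_sf):
    """Apparent contact angle where the fitted circle meets the plane x = x_sf.

    Assumed sign convention: the liquid lies on the low-x side of the plate;
    for the mirrored geometry, flip the sign of (x_sf - xc).
    """
    xc, zc, R = fit_circle(x, z)
    cos_theta = (x_sf - xc) / R  # projection of the radius at the contact point
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

For points lying exactly on a circle the Kasa fit is exact; for noisy density contours it remains a robust linear least-squares problem.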
For the calculation of $\Wsl$, an SL interface was formed between the liquid and the bottom solid as shown in Fig. \[Fig:TI-systems\] (a), with the wettability parameter $\eta$ corresponding to the Wilhelmy MD system. Periodic boundary conditions were employed in the $x$- and $y$-directions tangential to the solid surface. In addition, we set a piston at $z=\zpis$ above the liquid to attain a constant-pressure system. By allocating a sufficient number of fluid particles $N_\mathrm{f}$ and by setting the pressure $p_\mathrm{set}$ above the vapor pressure, a liquid bulk with a constant density was formed between the solid wall and the piston. We used 3000 fluid particles, and the system size was set as shown in Fig. \[Fig:TI-systems\] (a). We also controlled the temperature of the fluid particles within 0.8 nm of the top piston at $T_\mathrm{c}=90$ K, applying the thermostat to the velocity components in the $x$- and $y$-directions. We embedded a coupling parameter $\lambda$ into the SF interaction potential given in Eq.  as $$\label{eq:LJcouple} \Phi^\mathrm{DS}_\mathrm{sf}(r_{ij},\lambda) = (1-\lambda) \Phi^\mathrm{LJ}_\mathrm{sf}(r_{ij}),$$ and we obtained multiple equilibrium systems with various $\lambda$ values for $0 \leq \lambda < 1$ to numerically calculate the TI described below. Each system was obtained after a preliminary equilibration of 10 ns, and a time average over 20 ns was used for the analysis. The work of adhesion $\Wsl$ is defined as the minimum work per unit area needed to strip the liquid from the solid surface under constant $NpT$, and it can be calculated by TI along a reversible path between the initial and final states of the process. In the present DS scheme, this was achieved by first forming an SL interface and then weakening the SF interaction potential through the coupling parameter. We obtained equilibrium SL interfaces with the discrete coupling parameter $\lambda$ varied from 0 to 0.999.
Note that the maximum value of $\lambda$ was set slightly below 1 so that the SF interaction remained effectively repulsive; this value is denoted by $1^{-}$ hereafter. The difference of the SL interfacial Gibbs free energy $\Delta G_\mathrm{SL} \equiv G_\mathrm{SL}|_{\lambda=1^{-}} - G_\mathrm{SL}|_{\lambda=0}$ between the systems at $\lambda=0$ and $\lambda=1^{-}$ under constant $NpT$ was related to the difference in the interfacial energies as $$\begin{aligned} \nonumber W_\mathrm{SL} &\equiv& \frac{\Delta G_\mathrm{SL}}{A} = \gamma_\mathrm{S0} + \gamma_\mathrm{L0} - \gamma_\mathrm{SL} \\ \label{eq:W_sl_appendix} & \approx & \gamma_\mathrm{S0} + \gamma_\mathrm{LV} - \gamma_\mathrm{SL},\end{aligned}$$ where the vacuum phase was denoted by the subscript ‘0’, and $\gamma_\mathrm{S0}$ and $\gamma_\mathrm{L0}$ were the solid-vacuum and liquid-vacuum interfacial energies per unit area. Note that $\gamma_\mathrm{L0}$ was substituted by the liquid-vapor interfacial tension $\glv$ in the final approximation, considering that the vapor density is negligibly small. In the $NpT$ ensemble, the difference of the SL interfacial Gibbs free energy $\Delta G_\mathrm{SL}$ in Eq.  was calculated through the following TI: $$\begin{aligned} \Delta G &=& \int_0^{1^{-}} \frac{d G(\lambda)}{d \lambda} d \lambda = \int_0^{1^{-}} \angb{ \frac{\partial H}{\partial \lambda} } d \lambda \nonumber \\ &=& -\int_0^{1^{-}} \angb{ \sum_{i\in\mathrm{fluid}}^{N_\mathrm{f}} \sum_{j\in\mathrm{wall}}^{N_\mathrm{w}} \Phi^\mathrm{LJ}_\mathrm{sf}(r_{ij}) } d \lambda,\end{aligned}$$ $$\Delta G_\mathrm{SL} = \Delta G - A p_\mathrm{set} \left( \angb{ z_\mathrm{p} |_{\lambda =1^{-}} } - \angb{ z_\mathrm{p} |_{\lambda =0} } \right), \label{eq:DeltaGSL=DeltaG-work_piston}$$ where $H$ was the Hamiltonian, i.e., the internal energy of the system, and $N_\mathrm{w}$ was the number of wall particles. The ensemble average was substituted by the time average in the simulation, and is denoted by the angle brackets.
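Numerically, the TI above reduces to a quadrature over the sampled $\lambda$ values followed by subtraction of the piston work. A minimal sketch (the array names are hypothetical placeholders for the time averages measured in the simulations):

```python
import numpy as np

def work_of_adhesion_sl(lmbda, dHdl_mean, area, p_set, zp_end, zp_start):
    """W_SL from dry-surface TI samples.

    lmbda     : sampled coupling-parameter values, increasing in [0, 1^-]
    dHdl_mean : time-averaged <dH/dlambda> (= -<sum Phi_sf>) at each lambda
    area      : interface area A
    p_set     : piston pressure p_set
    zp_end, zp_start : mean piston heights at lambda = 1^- and lambda = 0
    """
    lmbda = np.asarray(lmbda)
    dHdl = np.asarray(dHdl_mean)
    # trapezoidal quadrature of the TI integrand over lambda -> Delta G
    dG = np.sum(0.5 * (dHdl[1:] + dHdl[:-1]) * np.diff(lmbda))
    # subtract the work exerted on the piston to obtain Delta G_SL
    dG_sl = dG - area * p_set * (zp_end - zp_start)
    return dG_sl / area
```

The SV case follows the same quadrature with the piston term absent (constant $NVT$).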
Note that to obtain $\Delta G_\mathrm{SL}$, the work exerted on the piston, $A p_\mathrm{set} \left( \angb{ z_\mathrm{p} |_{\lambda =1^{-}} } - \angb{ z_\mathrm{p} |_{\lambda =0} } \right)$, was subtracted in Eq.  from the change $\Delta G$ of the Gibbs free energy of the system including the piston. For the calculation of the SV work of adhesion $\Wsv$, we investigated the interfacial energy between the saturated vapor and the corresponding solid surface set on the bottom of the simulation cell, placing an additional particle bath on the top as shown in Fig. \[Fig:TI-systems\] (b). The periodic boundary conditions in the $x$- and $y$-directions, the temperature control and the placement of the solid surface were the same as in the SL system, whereas the particle bath was kept in place by a potential field at a fixed height sufficiently far from the solid surface. This potential field mimicked a completely wettable surface with an equilibrium contact angle of zero for the present potential parameters, so that a liquid film formed on the particle bath. With this setting, a solid-vapor interface with the same density distribution as that in the Wilhelmy MD system was achieved. We formed multiple equilibrium systems with various values of the coupling parameter $\lambda$ following the same recipe as for the SL systems. Similar to the calculation of $\Wsl$, the SV interface at $\lambda=0$ was divided into S0 and V0 interfaces at $\lambda = 1^{-}$ as shown in Fig. \[Fig:TI-systems\] (b), whereas the calculation systems for $\Wsv$ were under constant $NVT$.
Thus, the solid-vapor work of adhesion $W_\mathrm{SV}$ was given by the difference of the Helmholtz free energy $\Delta F$ per unit area, and was related to the difference in the interfacial energies as $$\begin{aligned} \nonumber W_\mathrm{SV} &\equiv& \frac{\Delta{F}}{A} = \gamma_\mathrm{S0} + \gamma_\mathrm{V0} - \gamma_\mathrm{SV} \\ \label{eq:W_sv_appendix} & \approx & \gamma_\mathrm{S0} - \gamma_\mathrm{SV},\end{aligned}$$ where $\gamma_\mathrm{V0}$ was set to zero in the final approximation. Using the $NVT$ canonical ensemble, $\Delta F$ in Eq.  was calculated through the TI as: $$\begin{aligned} \nonumber \Delta{F} &=& \int_0^{1^{-}} \frac{\partial F(\lambda)}{\partial \lambda} d \lambda =\int_0^{1^{-}} \angb{ \frac{\partial H}{\partial \lambda} } d \lambda \\ \label{eq:thermo_sv} &=& -\int_0^{1^{-}} \angb{ \sum_{i}^{N_\mathrm{f}} \sum_{j}^{N_\mathrm{w} } \Phi^\mathrm{LJ}_\mathrm{fw}(r_{ij}) } d \lambda.\end{aligned}$$

![Works of adhesion $\Wsl$ and $\Wsv$ calculated by the TI as a function of the solid-fluid interaction coefficient $\eta$. \[Fig:TI-results\] ](./fig-a03-TI-results){width="0.8\linewidth"}

Figure \[Fig:TI-results\] shows the SL and SV works of adhesion $\Wsl$ and $\Wsv$ calculated by the TI as a function of the solid-fluid interaction coefficient $\eta$. These values were used for the results shown in Fig. \[Fig:fig08-bot-top-Wil-TI\] through the $\eta$-$\cos \theta$ relation in Fig. \[Fig:eta-costheta\].

**DATA AVAILABILITY** The data that support the findings of this study are available from the corresponding author upon reasonable request.
--- abstract: 'Recently, it was shown that local variance maps of temperature anisotropy are simple and useful tools for the study of large scale hemispherical power asymmetry. This was done by studying the distribution of dipoles of the local variance maps. In this work, we extend the study of the dipolar asymmetry in local variance maps using foreground cleaned Planck 143 GHz and 217 GHz data to smaller scales. In doing so, we include the effect of the CMB Doppler dipole. Further, we show that it is possible to use local variance maps to measure the Doppler dipole in these Planck channel maps, after removing large scale features (up to $l=600$), at a significance of about $3 \sigma$. At these small scales, we do not find any power asymmetry in the direction of the anomalous large scale power asymmetry beyond that expected from cosmic variance. At large scales, we verify previous results i.e. the presence of hemispherical power asymmetry at a significance of at least $3.3 \sigma$.' author: - | Saroj Adhikari$^{1}$[^1]\ $^{1}$Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA title: Local variance asymmetries in Planck temperature anisotropy maps --- \[firstpage\] cosmic background radiation, methods: statistical Introduction ============ A number of studies have been performed that show approximately 3.5$\sigma$ hemispherical power asymmetry at large scales in WMAP and Planck CMB temperature fluctuations [@2004ApJ...605...14E; @2007ApJ...660L..81E; @PlanckCollaboration2013; @2013JCAP...09..033F; @2009ApJ...699..985H]. 
Recently, [@2014ApJ...784L..42A] used a conceptually simple pixel-space *local variance* method to demonstrate the presence of asymmetry at large scales in WMAP and Planck data at a significance of at least $3.3 \sigma$; they find that none of the 1000 isotropic Planck [[FFP6 ]{}]{}simulations has a local variance dipole amplitude equal to or greater than that found in the data for disk radii $6^\circ \leq {r_{\rm disk}}\leq 12^\circ$. In this method, a local variance map (${\textbf{m}_{\rm r}}$) is generated at a lower HEALPIX [@healpix] resolution $N_{\rm side}$ from a higher resolution CMB temperature fluctuation map by computing the temperature variance inside disks of a given radius ${r_{\rm disk}}$ centered on the center of each pixel of the HEALPIX map with resolution $N_{\rm side}$. Isotropic simulations are used to obtain the expected mean map (${\bar{\textbf{m}}_{\rm r}}$), and each map is normalized: $$\begin{aligned} {\textbf{m}_{\rm r}}^{n}&=&\frac{{\textbf{m}_{\rm r}}-{\bar{\textbf{m}}_{\rm r}}}{{\bar{\textbf{m}}_{\rm r}}}.\end{aligned}$$ Then, we obtain local variance dipoles by fitting for a dipole in each of these normalized maps (both simulations and Planck data) using the HEALPIX `remove_dipole` module, with inverse-variance weighting computed from the 1000 simulated local variance maps. In this paper, we denote the amplitude of a local variance dipole by ${A_{\rm LV}}$. The local variance dipole obtained from the data is then compared to the distributions obtained from simulations. In this work, we make use of the local variance method and extend the results obtained in [@2014ApJ...784L..42A] to smaller disk radii. The authors in [@2014ApJ...784L..42A] focused on large scale power asymmetry, and therefore only looked at large values of ${r_{\rm disk}}$. After confirming their results at large disk radii (${r_{\rm disk}}\geq 4^\circ$), we perform the same local variance dipole analysis using smaller disk sizes.
We find that, for smaller disk radii, the contribution of the *Doppler dipole* becomes increasingly significant. The Doppler dipole in the local variance map is an expected signal arising from our velocity with respect to the CMB rest frame. While the direction and magnitude of the CMB dipole have been known from previous CMB experiments [@2011ApJS..192...14J], the Doppler dipole signal in the temperature fluctuations is rather weak and was reported only recently by the Planck Collaboration [@2013arXiv1303.5087P] using harmonic space estimators [@2011JCAP...07..027A; @2011PhRvL.106s1301K]. We work towards detecting the expected Doppler dipole in the local variance maps after removing large scale features from the maps. Our goal is therefore twofold: first, to extend the local variance dipole study of the hemispherical power asymmetry to smaller disk radii and, second, to use the local variance method to detect the Doppler dipole, whose amplitude is much smaller but which is expected to contribute at all angular scales. Before presenting the details of the analysis and results, we would like to point out and clarify a difference between our analysis and that of Akrami et al. They used 3072 disks ($N_{\rm side}=16$ healpix map) for all sizes of disks they considered (${r_{\rm disk}}\geq1^\circ$). However, we find that for ${r_{\rm disk}}=1^\circ$ and $2^\circ$, 3072 disks are not enough to cover the whole sky. Therefore, we use $N_{\rm side}=32$ (12288 disks) for ${r_{\rm disk}}=2^\circ$ and $N_{\rm side}=64$ (49152 disks) for ${r_{\rm disk}}=1^\circ$. Once we do this, we find that, unlike the results in Akrami et al. (see Fig 2(a) and Table 1 in [@2014ApJ...784L..42A]), none of our 1000 isotropic simulations produces a local variance dipole amplitude larger than that of our foreground cleaned channel maps for ${r_{\rm disk}}=1^\circ, 2^\circ$.
In fact, the effect of the anomalous dipole (with respect to the isotropic case) can be observed for even smaller angular disk radii (see Figure \[fig:subdegrees\]). This result, however, is not surprising, because the local variance dipoles computed at smaller disk radii receive some contribution from the large scale anomalous power asymmetry at low $l$. Next, in section \[sec:sim\] we discuss in some detail the Planck CMB data and simulations used in this paper. Then, in section \[sec:results\] we present our results for both the anomalous dipole and the Doppler dipole, followed by discussions and a summary of our results in section \[sec:discussion\].

Simulations and data {#sec:sim}
====================

For our analysis, we use 1000 realizations of [[FFP6 ]{}]{}Planck simulations [^2] that include lensing and instrumental effects. The [[FFP6 ]{}]{}simulations (CMB and noise realizations separately) are provided at each Planck frequency. We generate local variance maps from which we obtain local variance dipole distributions following the method briefly introduced in the previous section (described in more detail in [@2014ApJ...784L..42A]). Since the [[FFP6 ]{}]{}simulations for CMB and noise processed through the component separation procedure are not yet publicly available, we generate foreground cleaned CMB channel maps at two frequencies, 143 GHz and 217 GHz, using the [SEVEM ]{}method [@2013arXiv1303.5072P; @Kim2013]. We use the same four template maps as the Planck Collaboration; these are difference maps of two nearby frequency channels: (30-44) GHz, (44-70) GHz, (545-353) GHz and (857-545) GHz. Before generating the four templates, we smooth the higher-frequency channel map of each pair to the resolution of the lower-frequency channel map: $a_{lm}^{\rm large}\rightarrow a_{lm}^{\rm large}\times B_{l}^{\rm small}/B_{l}^{\rm large}$, where $B_l$ is the beam transfer function of the given frequency channel map.
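The SEVEM-like cleaning, which subtracts a linear combination of templates with coefficients chosen to minimize the residual variance outside the mask, can be sketched as follows (the map arrays and mask are hypothetical inputs; variance minimization is equivalent to least squares on mean-removed data):

```python
import numpy as np

def sevem_like_clean(d_nu, templates, mask):
    """Clean channel map d_nu by subtracting the linear combination of
    template maps that minimizes the variance of the residual on the
    unmasked pixels (mask == True means 'use this pixel').

    Returns (cleaned full-sky map, fitted coefficients alpha)."""
    T_full = np.column_stack(templates)
    T = T_full[mask]
    d = d_nu[mask]
    # remove means so that least squares minimizes the residual variance
    Tc = T - T.mean(axis=0)
    dc = d - d.mean()
    alpha, *_ = np.linalg.lstsq(Tc, dc, rcond=None)
    return d_nu - T_full @ alpha, alpha
```

As in the text, the same `alpha` fitted on the data can then be reused to combine the simulated noise maps, so that data and simulations are processed identically.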
See Appendix C of [@2013arXiv1303.5072P] for details of this method. To summarize: the cleaned 143 GHz or 217 GHz map, $d_{\nu}^{\rm clean}$, is obtained by subtracting from the uncleaned channel map $d_\nu$ a linear combination of the templates $t_j$: $$\begin{aligned} d_{\nu}^{\rm clean} &=& d_\nu - \sum_{j=1}^{n_t} \alpha_j t_j.\end{aligned}$$ The coefficients $\alpha_j$ are obtained by minimizing the variance of $d_{\nu}^{\rm clean}$ outside of the mask used for our analysis. Once the linear coefficients for the four template maps are obtained, we use the same coefficients to process combinations of [[FFP6 ]{}]{}noise simulations in order to generate noise maps for our foreground cleaned simulated maps. We process the Planck maps and the simulated maps identically in each step. In addition to the isotropic local variance maps (obtained from isotropic realizations of CMB plus noise), we obtain two new sets of simulated local variance maps from two other models, both derived from the isotropic realizations as explained below:

**Doppler model**: The expected dipolar temperature modulation from the Doppler effect, with an amplitude $0.00123 b_\nu$ along the direction $(l,b)=(264^\circ, 48^\circ)$ [@2011ApJS..192...14J]. We take $b_\nu=1.96$ for the 143 GHz map and $b_\nu=3.07$ for the 217 GHz map [@2013arXiv1303.5087P]. The dipolar modulation is generated in pixel space simply as: $$\begin{aligned} {\left.{\frac{\Delta T}{T}}\right\vert}_{\rm dop}(\hat{n})&=&\left[1+1.23\times 10^{-3} b_{\nu} (\hat{n}\cdot\hat{p})\right] {\left.{\frac{\Delta T}{T}}\right\vert}_{\rm iso} (\hat{n}) \end{aligned}$$ ![The hemispherical asymmetry in $C_l$s generated by our modulation model, which modulates large scales (1.25$^\circ$ smoothing) with an amplitude $A_T=0.073$ in real space, in addition to the dipolar modulation at all scales due to the CMB Doppler effect.
This plot was generated using 1000 simulations for the 217 GHz channel.[]{data-label="fig:modulation"}](figures/mod125.pdf)

**Modulation model**: The Doppler model as described above, plus a large-angle modulation corresponding to a modulation at smoothing ${\rm fwhm}=1.25^\circ$, along the direction $(l,b)=(218^\circ, -20^\circ)$, with an amplitude $A_T=0.073$ [@PlanckCollaboration2013]. The map obtained is: $$\begin{aligned} {\left.{\frac{\Delta T}{T}}\right\vert}_{\rm mod}(\hat{n}) &=& {\left.{\frac{\Delta T}{T}}\right\vert}_{\rm dop} (\hat{n}) + A_T (\hat{n}\cdot\hat{p})\left.{\frac{\Delta T}{T}}\right\vert_{\rm dop}^{\rm smoothed}(\hat{n}) \nonumber\\ \label{eqn:modulation} \end{aligned}$$ in which the modulation is applied to the Doppler model only after smoothing at ${\rm fwhm}=1.25^\circ$. Here we have used the notation ${A_{\rm T}}$ for the amplitude of the modulation of the temperature fluctuations in the above equation, to distinguish it from the dipole modulation amplitude ${A_{\rm LV}}$ in the local variance maps. Our choice of the smoothing scale ${\rm fwhm}=1.25^\circ$ is guided by our attempt to fit the local variance asymmetry obtained in the data for the different ${r_{\rm disk}}$ we have considered. For example, we find that a larger ${\rm fwhm}=5^\circ$ modulation does not reproduce the local variance asymmetry in the data for ${r_{\rm disk}}=1^\circ$ or smaller. Similarly, using a smaller fwhm produces local variance asymmetry distributions with amplitudes too large to be consistent within $2\sigma$ with that seen in the data for some disk radii (${r_{\rm disk}}=1^\circ\; {\rm and\;} 4^\circ$, for example). Note that when we apply the modulation at large scales in this manner, we are using a filter that suppresses the modulation at scales much smaller than the specified smoothing fwhm. Therefore, by construction, the modulation is only generated at large scales.
In Figure \[fig:modulation\], we plot the asymmetry generated by our modulation model in harmonic space (up to $l=600$); the quantity plotted is: $$\begin{aligned} A_l = 2\frac{C_l^{+} - C_l^{-}}{C_l^{+}+C_l^{-}} \end{aligned}$$ where $+$ denotes the hemisphere centered at $(l,b)=(218^\circ,-20^\circ)$ and $-$ the opposite hemisphere. The $C_l$s for the relevant hemisphere are computed by masking the rest of the sky.

#### Masks: {#masks .unnumbered}

For each map, we use the union mask [U73 ]{}from the Planck Collaboration. This is the union of the qualitative masks for the four different component separation techniques employed by the Planck Collaboration, and it leaves approximately 73 percent of the sky unmasked. The template fitting in our SEVEM-like cleaning procedure is also done in the region outside this mask. The resulting clean maps at 143 GHz and 217 GHz (with the [U73 ]{}mask) are shown in Figure \[fig:cleanmaps\].

![image](figures/Aplotsdeg0_1_2.pdf) ![image](figures/Aplotsdeg0_4_2.pdf)

Results {#sec:results}
=======

First, let us note that we obtain results consistent with [@2014ApJ...784L..42A] for ${r_{\rm disk}}=4^\circ$ to $16^\circ$, in terms of both the significance of the magnitude and the direction of the anomalous dipole. Also, our modified set of simulations that includes temperature modulation at large angular scales produces dipole distributions consistent with the data. All results presented in this paper were obtained using the foreground cleaned 217 GHz maps unless otherwise stated. For the large scale results, i.e., those relating to the anomalous dipole, we obtain similar results using the 143 GHz maps.

Local variance dipole for ${r_{\rm disk}}=1^\circ, 2^\circ$
-----------------------------------------------------------

For ${r_{\rm disk}}=1^\circ, 2^\circ$, our results in Figure \[fig:onetwodegrees\] indicate that the effect of the Doppler dipole is larger than at larger disk radii.
Further, the results show that the modulation model we have chosen (modulated at large scales corresponding to a smoothing of ${\rm fwhm}=1.25^\circ$, with an amplitude $A_T=0.073$) is consistent with the data, while the no-modulation (isotropic) model is disfavored at approximately $3.5 \sigma$. Note that the distribution of ${A_{\rm LV}}$ is not exactly Gaussian, but we assume the distribution to be Gaussian to obtain standard deviation significance values as rough guides. Next, we extend the analysis to even smaller disk radii.

![image](figures/Aplotsdeg0_25_2.pdf) ![image](figures/Aplotsdeg0_18_2.pdf)

Local variance dipole for sub-degree disk radii
-----------------------------------------------

We now consider disks of radii ${r_{\rm disk}}=0.25^\circ$ and $0.18^\circ$. This requires that we increase the number of disks to cover the whole sky; we therefore use disks centered on the pixels of a healpix map with $N_{\rm side}=256$ (786432 disks). Figure \[fig:subdegrees\] shows our results for these two disk sizes. From the figures, it is clear that for these disk radii the effect of the Doppler dipole cannot be neglected when computing the significance of the dipole amplitude obtained from the data. If we do not include the Doppler effect, the local variance dipole in the Planck map is anomalous at approximately $3.8 \sigma$ for both ${r_{\rm disk}}=0.25^\circ$ and $0.18^\circ$, whereas with respect to the Doppler model distribution the significance drops to about $2\sigma$. Also, the direction of the dipole detected from the Planck maps gets closer to the Doppler dipole as the disk radius is decreased. This is illustrated in Figure \[fig:dirs\]. However, as seen in Figure \[fig:subdegrees\], the effect is still small, and the distributions of the isotropic simulations and the Doppler model simulations overlap considerably.
We will now investigate whether, by removing large scale features from the temperature fluctuation maps, it is possible to separate the Doppler-modulated local variance dipole distribution from the isotropic one, and therefore to measure the Doppler dipole in the Planck maps.

![image](figures/Bplotsdeg600_18_2.pdf) ![image](figures/Bplotsdeg600_8_2.pdf) ![image](figures/dirs.pdf)

The Doppler dipole in local variance maps
-----------------------------------------

We remove large scale features by simply applying a high-$l$ filter to a map, i.e., we set the low-$l$ $a_{lm}$ values to zero in harmonic space. Since we are looking for an expected signal that is a vector (i.e., to assess the significance of the measurement we need to consider both the direction and the magnitude of the dipole signals), we determine the distribution of the component of the fitted dipoles along the known CMB dipole direction. This was also the approach taken by the Planck Collaboration [@2013arXiv1303.5087P]; the quantity whose distribution they plot, called $\beta_{\parallel}$, is the component of their estimator in the CMB dipole direction. In Figure \[fig:doppler18\], we show results for our local variance analysis using disk radius ${r_{\rm disk}}=0.18^\circ$, but with CMB temperature maps in which $a_{lm}=0$ for $l\leq l_{\rm min}=600$, i.e., with large scale features removed up to $l=600$. The signal in the direction parallel to the CMB dipole is consistent with the Doppler model distribution, while the isotropic model is disfavored at $3.2\sigma$. When using a larger disk radius ${r_{\rm disk}}=8^\circ$ (also shown in Figure \[fig:doppler18\]), we measure the Doppler dipole signal at approximately $2.7\sigma$. We have repeated this analysis using other disk radii, different $l_{\rm min}$ and the 143 GHz channel map. The results are summarized in Table \[table:dopplerresults\].
Unexpectedly, we obtain higher amplitudes for the 143 GHz channel, but in most cases the dipole amplitude in the direction of the CMB dipole is within $2\sigma$ of the expected distribution obtained from the Doppler model. The only exception is the case of $l_{\rm min}=900$, 143 GHz, ${r_{\rm disk}}=0.18^\circ$, in which the signal in the data is at $3.1\sigma$ from the Doppler model distribution. In all cases, no amplitude in excess of $2\sigma$ is obtained in the two directions orthogonal to the CMB dipole and to each other (see Figure 2 of [@2013arXiv1303.5087P]).

----------- ------------------ ---------------------- ----------------------
 $l_{min}$   ${r_{\rm disk}}$   $217$ GHz              $143$ GHz
 600         0.18               0.0042 $(3.2\sigma)$   0.0057 $(3.4\sigma)$
 900         0.18               0.0027 $(2.9\sigma)$   0.0047 $(4.1\sigma)$
 600         8.0                0.0048 $(2.7\sigma)$   0.006 $(2.4\sigma)$
 900         8.0                0.0042 $(2.6\sigma)$   0.0071 $(2.9\sigma)$
----------- ------------------ ---------------------- ----------------------

: Summary of the Doppler dipole detection results; ${r_{\rm disk}}$ is given in degrees. The significance of detection is computed with respect to the corresponding isotropic distribution of local variance amplitudes (1000 simulations) in the CMB dipole direction.[]{data-label="table:dopplerresults"}

The local variance dipole directions obtained for the various cases are shown in Figure \[fig:dirs\].

Small scale power asymmetry
---------------------------

We can also investigate the presence of power asymmetry at small scales in the direction $(l,b)=(218^\circ, -20^\circ)$, using our small scale maps in which the $a_{lm}$’s up to $l=600$ are set to zero. A similar study of the power asymmetry at small scales (but in multipole space) was performed in [@2013JCAP...09..033F] using foreground cleaned SMICA maps; they report no such small scale asymmetry after accounting for a number of effects, including estimates of the power asymmetry due to the Doppler effect. They report an upper bound on the modulation amplitude of $0.0045\;(95\%)$ at these scales.
Another important previous work constraining the hemispherical power asymmetry at smaller scales is [@2009JCAP...09..011H], which uses quasars and reports $-0.0073<{A_{\rm T}}<0.012\; (95\%)$ at $k_{\rm eff} \approx 1.5 h {\rm Mpc^{-1}}$. When repeating the measurements shown in Figure \[fig:doppler18\], but now in the direction $(l,b)=(218^\circ, -20^\circ)$, we find that the local variance dipole amplitude component in the data is within the $1\sigma$ expectation from both the isotropic and the Doppler model distributions. This is shown in Figure \[fig:smallscale\]. Each value in the Doppler model distribution is essentially shifted to the right of the corresponding value in the isotropic distribution (see the inset in Figure \[fig:smallscale\]); this shift is proportional to the cosine of the angle between the direction of the large scale asymmetry and the direction of the CMB Doppler dipole. Therefore, we subtract this shift to obtain the value of the intrinsic local variance power asymmetry. Using ${r_{\rm disk}}=0.18^\circ$, we obtain (at $2\sigma$): $$\begin{aligned} {A_{\rm LV}}=(0.71\pm3.0)\times 10^{-3} \label{eqn:smallconstraint}\end{aligned}$$

![image](figures/B1plotsdeg600_18_2.pdf) ![image](figures/ss.pdf)

However, we can see from our results for the Doppler model distributions that the relation between the ${A_{\rm T}}$ of a dipole modulated model and the most likely value of ${A_{\rm LV}}$ obtained from the corresponding local variance maps is not entirely simple. This can be seen directly in Figure \[fig:doppler18\], in which the Doppler model had ${A_{\rm T}}=3.07\times 0.00123\approx0.0038$, whereas the obtained distributions for ${A_{\rm LV}}$ differ for the two cases with different ${r_{\rm disk}}$. To estimate the ${A_{\rm T}}$ constraint from the ${A_{\rm LV}}$ constraint in eqn \[eqn:smallconstraint\], let us assume that the relationship between them is linear for small values of ${A_{\rm T}}$, up to approximately ${A_{\rm T}}=0.0038$ (for a given frequency channel, ${r_{\rm disk}}$ and $l_{\rm min}$).
Then, for the case of the 217 GHz channel maps, ${r_{\rm disk}}=0.18^\circ$ and $l_{\rm min}=600$, we can use the correspondence between ${A_{\rm T}}$ and ${A_{\rm LV}}$ of the distributions from simulations in Figure \[fig:doppler18\], and translate the constraint in eqn \[eqn:smallconstraint\] to (at $2\sigma$): $$\begin{aligned} {A_{\rm T}}=(0.8\pm3.5)\times 10^{-3}\end{aligned}$$ To further check our translation between ${A_{\rm LV}}$ and ${A_{\rm T}}$, we repeated this small scale analysis with an $A_{\rm T, \textit{input}}=0.0008$ modulation model and recovered the intrinsic local variance dipole amplitude obtained in eqn \[eqn:smallconstraint\]. Using other, larger values of ${r_{\rm disk}}$ or the 143 GHz channel map, we obtained slightly weaker but consistent constraints on the small scale power asymmetry ${A_{\rm T}}$ in the direction of the large scale hemispherical power asymmetry.

Discussion and Summary {#sec:discussion}
======================

In this short work, we have used local variance maps to study the power asymmetry in Planck temperature anisotropy maps, extending the analysis of [@2014ApJ...784L..42A], which used the same method, to smaller scales. We have shown that the effect of the Doppler dipole is small for local variance dipole measurements at large disk radii (${r_{\rm disk}}\gtrsim 4^\circ$); we find that the peak of the Doppler model distribution for the 217 GHz case is less than $0.5\sigma$ away from that of the isotropic distribution for the ${r_{\rm disk}}=4^\circ$ result. The difference gets even smaller for larger values of ${r_{\rm disk}}$. Further, we have also shown that the Doppler dipole can be measured to a moderate significance using this method. The first measurement of the Doppler signal in the temperature fluctuations was done by the Planck Collaboration in [@2013arXiv1303.5087P].
The method of local variance is a pixel space method, so our measurement is complementary to that done by the Planck team using harmonic space estimators sensitive to the correlations between different multipoles induced by the Doppler effect. However, local variance dipoles are not sensitive to the aberration effect, and therefore our method is less sensitive than estimators that consider the aberration effect in addition to the dipolar power modulation. For our large scale asymmetry analysis, we find similar results using both the 217 GHz channel maps (reported in the figures of this paper) and the 143 GHz channel maps. This is in agreement with previous works [@2009ApJ...699..985H; @Hansen:2004vq] that have investigated the channel dependence of the hemispherical power asymmetry in WMAP maps. Throughout our analysis, we included a large scale modulation model in which temperature anisotropies were modulated only on large scales (obtained by Gaussian smoothing with FWHM $=1.25$ degrees), and found that the anomalous power asymmetry seen at all ${r_{\rm disk}}$ analyzed in this paper using the local variance method is consistent with this simple phenomenological model. This is informative about the scale dependence of the power asymmetry seen in the data, as our modulation model generates a scale dependent hemispherical asymmetry in the $C_l$s (see Figure \[fig:modulation\]). We summarize the important aspects of our work and results in the following points:

- We have verified the hemispherical power asymmetry results obtained in [@2014ApJ...784L..42A], with the exception of ${r_{\rm disk}}=1,2$ degrees, for which we have identified the need to use more disks in order to cover the whole sky.

- Once we use smaller disk radii in our local variance analysis, the effect of the dipolar Doppler modulation becomes increasingly important, which can be directly observed in our dipole amplitude distributions.
- After removing large scale features up to $l=l_{\rm min}=\{600,900\}$, we could detect the expected dipolar Doppler modulation in the Planck temperature anisotropy maps at a significance of approximately $3\sigma$.

- We have obtained a constraint on the dipolar modulation amplitude at small scales ($l>600$), in the direction of the large scale anomalous hemispherical power asymmetry, of ${A_{\rm T}}=0.0008\pm0.0035\;(2\sigma)$.

We expect this work to be useful for a better understanding of the power asymmetries that are known to exist in the CMB data. In particular, we hope that our work will shed more light on local variance statistics in CMB maps, which are already being used to compare theoretical models of power asymmetry with data [@2014arXiv1408.3057J]. While a satisfying theoretical explanation for the power asymmetry anomaly is still lacking in the literature, several interesting proposals exist [@2013PhRvD..87l3005D; @2013PhRvL.110a1301S; @2013JCAP...08..007L; @2008PhRvD..78l3520E; @2009PhRvD..80h3507E; @2013PhRvD..88h3527N; @2014MNRAS.442..670C]. The statistical fluke hypothesis also remains a possibility [@2011ApJS..192...17B]. More careful studies of both the data and the theoretical possibilities are important, since progress in any direction would teach us more about the fundamental statistical assumptions that we make about the universe on large scales.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported by the National Aeronautics and Space Administration under Grant No. NNX12AC99G issued through the Astrophysics Theory Program. The computations for the project were performed using computing resources at the Penn State Research Computing and Cyberinfrastructure (RCC).
The author would like to thank Yashar Akrami and Sarah Shandera for discussions and many useful suggestions, Donghui Jeong for useful suggestions on an earlier draft of this work, and Julian Borrill for help with getting the Planck [[FFP6 ]{}]{}simulation data. In addition, the author would like to thank Shaun Hotchkiss for a critical review and for suggesting improvements to the manuscript.

[Akrami]{} Y., [Fantaye]{} Y., [Shafieloo]{} A., [Eriksen]{} H. K., [Hansen]{} F. K., [Banday]{} A. J., [G[ó]{}rski]{} K. M., 2014, ApJ, 784, L42

[Amendola]{} L., [Catena]{} R., [Masina]{} I., [Notari]{} A., [Quartin]{} M., [Quercellini]{} C., 2011, JCAP, 7, 27

[Bennett]{} C. L., [Hill]{} R. S., [Hinshaw]{} G., et al., 2011, ApJS, 192, 17

[Chluba]{} J., [Dai]{} L., [Jeong]{} D., [Kamionkowski]{} M., [Yoho]{} A., 2014, MNRAS, 442, 670

[Dai]{} L., [Jeong]{} D., [Kamionkowski]{} M., [Chluba]{} J., 2013, Phys. Rev. D, 87, 123005

[Erickcek]{} A. L., [Hirata]{} C. M., [Kamionkowski]{} M., 2009, Phys. Rev. D, 80, 083507

[Erickcek]{} A. L., [Kamionkowski]{} M., [Carroll]{} S. M., 2008, Phys. Rev. D, 78, 123520

[Eriksen]{} H. K., [Banday]{} A. J., [G[ó]{}rski]{} K. M., [Hansen]{} F. K., [Lilje]{} P. B., 2007, ApJ, 660, L81

[Eriksen]{} H. K., [Hansen]{} F. K., [Banday]{} A. J., [G[ó]{}rski]{} K. M., [Lilje]{} P. B., 2004, ApJ, 605, 14

[Flender]{} S., [Hotchkiss]{} S., 2013, JCAP, 9, 33

[G[ó]{}rski]{} K. M., [Hivon]{} E., [Banday]{} A. J., [Wandelt]{} B. D., [Hansen]{} F. K., [Reinecke]{} M., [Bartelmann]{} M., 2005, ApJ, 622, 759

[Hansen]{} F. K., [Banday]{} A. J., [G[ó]{}rski]{} K. M., 2004, MNRAS, 354, 641

[Hirata]{} C. M., 2009, JCAP, 9, 11

[Hoftuft]{} J., [Eriksen]{} H. K., [Banday]{} A. J., [G[ó]{}rski]{} K. M., [Hansen]{} F. K., [Lilje]{} P. B., 2009, ApJ, 699, 985

[Jarosik]{} N., [Bennett]{} C. L., [Dunkley]{} J., et al., 2011, ApJS, 192, 14

[Jazayeri]{} S., [Akrami]{} Y., [Firouzjahi]{} H., [Solomon]{} A. R., [Wang]{} Y., 2014, ArXiv e-prints, [[1408.3057](http://arxiv.org/abs/1408.3057)]{}

[Kim]{} J., [Komatsu]{} E., 2013, Phys. Rev. D, 88, 101301

[Kosowsky]{} A., [Kahniashvili]{} T., 2011, Phys. Rev. Lett., 106, 191301

[Lyth]{} D. H., 2013, JCAP, 8, 7

[Namjoo]{} M. H., [Baghram]{} S., [Firouzjahi]{} H., 2013, Phys. Rev. D, 88, 083527

Planck Collaboration, 2013b, ArXiv e-prints, [[1303.5072](http://arxiv.org/abs/1303.5072)]{}

Planck Collaboration, 2013a, ArXiv e-prints, [[1303.5083](http://arxiv.org/abs/1303.5083)]{}

Planck Collaboration, 2013, ArXiv e-prints, [[1303.5087](http://arxiv.org/abs/1303.5087)]{}

[Schmidt]{} F., [Hui]{} L., 2013, Phys. Rev. Lett., 110, 011301

\[lastpage\]

[^1]: E-mail: sza5154@psu.edu

[^2]: <http://crd.lbl.gov/groups-depts/computational-cosmology-center/c3-research/cosmic-microwave-background/cmb-data-at-nersc/>
--- abstract: 'Measurements of elastic and inelastic cotunneling currents are presented on a two-terminal Aharonov–Bohm interferometer with a Coulomb blockaded quantum dot embedded in each arm. Coherent current contributions, even in magnetic field, are found in the nonlinear regime of inelastic cotunneling at finite bias voltage. The phase of the Aharonov–Bohm oscillations in the current exhibits phase jumps of $\pi$ at the onsets of inelastic processes. We suggest that additional coherent elastic processes occur via the excited state. Our measurement technique allows the detection of such processes on a background of other inelastic current contributions and contains information about the excited state occupation probability and the inelastic relaxation rates.' author: - 'Martin Sigrist,$^1$ Thomas Ihn,$^1$ Klaus Ensslin,$^1$ Matthias Reinwald,$^2$ and Werner Wegscheider$^2$' title: Coherent probing of excited quantum dot states in an interferometer --- Quantum dots (QDs) in the Coulomb blockade regime show well understood conductance resonances at low bias voltage when the gate voltage is swept [@Kouwenhoven97]. In interference experiments involving the Aharonov–Bohm (AB) effect a coherent current contribution was observed on such resonances [@Yacoby1995]. At increased tunnel coupling, higher order cotunneling [@Nazarov92] leads to a finite conductance between these resonances. At low bias, elastic cotunneling occurs which is energy conserving. Elastic cotunneling has also been shown to have a coherent contribution [@Sigrist2006]. At finite bias voltages, inelastic cotunneling is observed [@Franceschi2001] in which the tunneling process excites the QD. Inelastic cotunneling was used for studying Zeeman-splitting [@Kogan04] and the singlet–triplet gap [@Zumbuhl04]. If the QD is embedded in an AB interferometer, inelastic processes are not expected to contribute to interference, because the resulting excited dot state allows which-path detection. 
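As a rough numerical illustration of the AB periodicity exploited in such interferometers, one flux quantum $\phi_0=h/e$ threading the area enclosed by the two arms sets the oscillation period in magnetic field, $\Delta B=\phi_0/A$. The ring radius used below is a hypothetical value chosen only for illustration, not a measured device parameter.

```python
import math

# One flux quantum phi_0 = h/e through the enclosed area A gives the
# Aharonov-Bohm period Delta_B = phi_0 / A.
h = 6.62607015e-34          # Planck constant [J s]
e = 1.602176634e-19         # elementary charge [C]
phi_0 = h / e               # flux quantum [Wb], ~4.14e-15

r_eff = 250e-9              # hypothetical effective ring radius [m]
area = math.pi * r_eff**2   # enclosed area [m^2]

delta_B = phi_0 / area      # AB period in tesla
```

A submicron ring of this size thus gives a period of a few tens of mT, the scale typical of such devices.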
Here we present the observation of coherent contributions to the current at and beyond the onset of inelastic cotunneling. We show that the corresponding AB oscillations exhibit a phase change of $\pi$ at the bias voltage of the inelastic onset in most cases. An explanation of this finding requires the contribution of additional coherent elastic cotunneling processes through the involved excited state. The experiments therefore demonstrate the possibility of probing excited states and elastic cotunneling processes through them via the coherent current contribution in the nonlinear bias regime. ![\[figure1\] (a) SFM-micrograph of the structure (Details in the main text). (b) $G_\mathrm{sd}$ as a function of $V_\mathrm{pg1}$ and $V_\mathrm{pg2}$ representing the charge stability diagram of the two QDs. The finite-bias measurements in Fig.\[figure2\] were taken along the dashed lines.](fig1){width="3.1in"} The sample shown in Fig.\[figure1\](a) is based on a Ga\[Al\]As heterostructure with a two-dimensional electron gas (2DEG) 34 nm below the surface. It was fabricated by multiple-layer local oxidation with a scanning force microscope [@Sigrist2004]: The 2DEG is depleted below the oxide lines written on the GaAs surface \[bright lines in Fig.\[figure1\](a)\] thus defining the ring-interferometer. A Ti film evaporated on top is cut by local oxidation \[faint lines in Fig.\[figure1\](a)\] into mutually isolated top gates. A QD is embedded in each arm of the resulting AB-interferometer as indicated by the dots in Fig.\[figure1\](a). Direct tunneling between the two dots is suppressed by applying a negative voltage between the 2DEG and the metallic top gate, in contrast to previous experiments [@Sigrist2006]. In-plane gates pg1 and pg2 are used as plunger gates for dot 1 and 2, respectively. Topologically the sample is similar to those of Refs.  and . More details about the sample are found in Ref. . 
The source–drain two-terminal differential conductance, $G_{sd}=\partial I/\partial V_\mathrm{sd}$, was measured as indicated in Fig.\[figure1\](a) with low-frequency lock-in techniques at an electronic temperature of 120 mK. With the dots strongly coupled to the ring (open regime) and applying a magnetic field, $B$, normal to the 2DEG plane, we observe a periodically modulated conductance with an AB period of 22 mT, consistent with one magnetic flux quantum $\phi_0=h/e$ penetrating the area enclosed by the paths indicated in Fig.\[figure1\](a). The conductance $G_\mathrm{sd}$ of the system in the Coulomb blockade regime of the dots is plotted as a function of $V_\mathrm{pg1}$ and $V_\mathrm{pg2}$ in Fig.\[figure1\](b). The two families of parallel dark lines differing in slope are conductance resonances of dot 1 and dot 2. There is no apparent avoided crossing between the resonances, owing to the absence of tunnel coupling and an interdot/intradot capacitance ratio of less than 1/20. From the resonance heights we estimate that the coupling of dot 2 to the leads is more than one order of magnitude stronger than that of dot 1. This regime differs completely from the experiments in Ref. , because direct tunneling between the dots is absent and their coupling to the ring is much stronger. Along the dashed lines ‘a’ and ‘b’ in Fig.\[figure1\](b) we measured $G_\mathrm{sd}(V_\mathrm{sd})$. Along line ‘a’ the electron number changes in dot 1 while it is constant in dot 2, and vice versa along line ‘b’. The corresponding Coulomb blockade diamonds shown in Fig.\[figure2\] give a charging energy of about 0.7 meV and single-particle level spacings of about 0.1 meV. ![\[figure2\] (a) Differential conductance is measured as a function of DC source-drain voltage along trace *a* in Fig.\[figure1\](b). An inelastic onset independent of the electron number of dot 1 is superposed on the Coulomb diamonds.
(b) Differential conductance is measured as a function of bias voltage along trace *b* in Fig.\[figure1\](b). The inelastic onset can be linked to an excited state of dot 2.](fig2){width="3.1in"} The cotunneling current observed at the intersection of lines ‘a’ and ‘b’ in Fig.\[figure1\](b) can be seen in Fig.\[figure2\]. It shows $V_\mathrm{sd}$ thresholds for inelastic cotunneling in one of the two QDs. In Fig.\[figure2\](a) we observe a superposition of Coulomb diamonds and an inelastic cotunneling onset at about $|V_\mathrm{sd}^{dc}|=0.1$ meV (black arrows). It persists when the electron number in dot 1 is changed. The same onset, but now observed along trace ‘b’, is seen only in the central diamond, i.e. it depends on the electron number in dot 2 \[Fig.\[figure2\](b)\]. We conclude that inelastic cotunneling occurs in dot 2 beyond the bias threshold. The inelastic cotunneling onset connects to excited state resonances outside the Coulomb-blockaded region \[white arrows in Fig.\[figure2\](b)\]. However, only resonances with positive slope are observed and the corresponding resonances with negative slopes are missing, indicating asymmetric tunnel coupling. We have therefore fine-tuned the tunnel barriers in order to reach a $G_\mathrm{sd}$ trace as symmetric as possible in $V_\mathrm{sd}$ \[Fig.\[figure4\](a)\]. ![(a) $B$ dependence of the differential conductance for gate voltages set to the center of the hexagon in Fig.\[figure1\](b). Bottom trace: low DC bias voltage (square, c.f. Fig.\[figure2\]), top trace: high DC bias voltage (dot). (b) Normalized AB conductance $g_\mathrm{AB}(B,V_\mathrm{bias})$. []{data-label="figure3"}](fig3){width="3.1in"} Measurements of the AB effect in a magnetic field allow the detection of phase-coherent contributions to the cotunneling current. We have measured $G_\mathrm{sd}$ as a function of $B$ at the crossing point of lines ‘a’ and ‘b’ in Fig.\[figure1\](b) for a number of DC source–drain voltages. 
Two of these are displayed in Fig.\[figure3\](a). The lower trace corresponds to low DC bias voltage as marked by a square in Fig.\[figure2\]. AB oscillations with a maximum at zero $B$ and a period of 22 mT are observed confirming a phase-coherent contribution to the elastic cotunneling current. The upper trace in Fig.\[figure3\](a) taken at higher DC bias voltage \[dot in Fig.\[figure2\]\] involves inelastic cotunneling through dot 2. Also in this case AB oscillations are observed, but show a minimum at $B=0$. We find either maxima or minima at $B=0$, i.e., phase rigidity, for all investigated source–drain voltages (see below), in contrast to non Coulomb blockaded systems [@Leturcq06]. It is evident from the data that the participation of the inelastic cotunneling process does not hamper the occurrence of quantum interference. We emphasize that $G_\mathrm{sd}$ does not detect the total (energy integrated) DC current, but only a small (compared to temperature) energy window around the chemical potentials in source and drain. We analyze the data following Ref. by splitting the measured $G_\mathrm{sd}(B)$ into three additive contributions: a smoothly varying background conductance $G_\mathrm{bg}(B,V_\mathrm{sd})$, the coherent AB-contribution $G_\mathrm{AB}(B,V_\mathrm{sd})$, and a contribution with fluctuations much faster than the AB period. In Fig.\[figure3\](a) we have plotted $G_\mathrm{bg}+G_\mathrm{AB}$ (smooth, gray) on top of the measured $G_\mathrm{sd}$ traces (ragged, black). Small conductance fluctuations beyond the AB frequency that may arise due to interference effects in the contacts outside the system are filtered out with this procedure. ![\[figure4\] (a) Differential conductance and its derivative averaged over one AB period around zero magnetic field as a function of DC source-drain voltage. (b) Normalized amplitude of the AB oscillation at $B=0$ as a function of source–drain bias. (c) AB-phase at $B=0$. 
(d) Schematic of elastic cotunneling transport through dot 2 triggered at the inelastic onset.](fig4){width="3.1in"} Figure \[figure3\](b) displays the normalized AB conductance $g_\mathrm{AB}(B,V_\mathrm{sd})=G_\mathrm{AB}(B,V_\mathrm{sd})/G_\mathrm{bg}(B,V_\mathrm{sd})-1$. This quantity can take values in the interval $[-1,1]$ and, evaluated at an AB maximum or minimum, its modulus is related to the visibility of the AB oscillations. The visibility found in the measurement is always less than 0.1, a value comparable to other experiments [@Sigrist2004a; @Yacoby1995], but significantly lower than that observed in Ref. , where the tunnel coupling between the QDs was significant. At zero magnetic field, $g_\mathrm{AB}$ in Fig.\[figure3\](b) shows either maxima or minima \[see also Fig.\[figure3\](a)\]. Fig.\[figure4\](c) shows the phase $\varphi(B=0)$ of the oscillations as determined from a fit of $a\cos(\omega_\mathrm{AB}B+\varphi)$ to the data around $B=0$, with amplitude $a$ and phase $\varphi$ being fitting parameters, and $\omega_\mathrm{AB}=2\pi A/\phi_0$ ($A$ is the ring area). Several phase jumps between the two values $0$ and $\pi$ are observed, which correspondingly appear in Fig.\[figure3\](b) with changing $V_\mathrm{sd}$. The generalized Onsager symmetries imposed on the two-terminal measurement restrict the AB phase only at $B=0$ and at low bias to be either zero or $\pi$ [@Leturcq06]. The measurement shows that, even in the nonlinear regime, $\varphi$ in our system is very close to 0 or $\pi$ at $B=0$. Figures \[figure4\](a)–(c) relate $G_\mathrm{bg}(B=0,V_\mathrm{sd})$, $g_\mathrm{AB}(B=0, V_\mathrm{sd})$ and $\varphi(B=0,V_\mathrm{sd})$. $G_\mathrm{bg}$ in (a) shows an elastic cotunneling contribution at low bias and an inelastic cotunneling onset slightly below $\left|V_\mathrm{sd}\right|=0.1$ mV. Additional weaker shoulders in $G_\mathrm{bg}$ can be well detected as extrema in the derivative of $G_\mathrm{bg}$ shown in the same plot.
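The cosine fit described above can be sketched numerically: since the AB frequency is fixed by the measured 22 mT period, the model $a\cos(\omega_\mathrm{AB}B+\varphi)$ plus a constant background is linear in suitable parameters and needs no nonlinear optimizer. The conductance trace below is synthetic; its amplitude, background and noise level are illustrative assumptions, with only the 22 mT period taken from the measurement.

```python
import numpy as np

rng = np.random.default_rng(1)

# AB angular frequency fixed by the measured 22 mT period.
delta_B = 0.022                      # AB period [T]
omega_ab = 2 * np.pi / delta_B       # so that omega_ab * B = 2*pi*B*A/phi_0

# Synthetic conductance trace around B = 0.
B = np.linspace(-0.05, 0.05, 400)    # magnetic field [T]
a_true, phi_true = 0.05, np.pi       # phi = pi: AB minimum at B = 0
G = 1.0 + a_true * np.cos(omega_ab * B + phi_true) \
    + 0.005 * rng.normal(size=B.size)

# a*cos(w*B + phi) = c1*cos(w*B) + c2*sin(w*B), with c1 = a*cos(phi)
# and c2 = -a*sin(phi): ordinary linear least squares recovers (a, phi).
X = np.column_stack([np.ones_like(B), np.cos(omega_ab * B), np.sin(omega_ab * B)])
c0, c1, c2 = np.linalg.lstsq(X, G, rcond=None)[0]

a_fit = np.hypot(c1, c2)                    # oscillation amplitude
phi_fit = np.arctan2(-c2, c1) % (2 * np.pi)  # AB phase in [0, 2*pi)
```

The fitted phase clusters near 0 or $\pi$ for data respecting the Onsager symmetry, which is how the phase jumps in the text are quantified.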
Whenever $g_\mathrm{AB}(B=0,V_\mathrm{sd})$ in Fig.\[figure4\](b) crosses zero, the phase in Fig.\[figure4\](c) jumps rather abruptly between $0$ and $\pi$. The dashed vertical lines in Figs. \[figure4\](a)–(c) indicate that a correlation exists between some of these phase jumps and the inelastic cotunneling shoulders in $G_\mathrm{bg}$. At small $V_\mathrm{sd}$ there is an AB maximum ($\varphi=\pi$), but the phase jumps to $0$ at the first inelastic onset for both polarities. The AB phase jumps back to $\pi$ (AB maximum) at further increased $|V_\mathrm{sd}|$. Another phase jump is observed at the second inelastic onset at about $|V_\mathrm{sd}|=0.3$ meV. Again, $\varphi$ jumps back upon increasing $|V_\mathrm{sd}|$ further. In summary, we find the same AB phase at each of the two inelastic onsets, with different AB phases in between. From measurements of the same sample in different regimes we can say that most inelastic cotunneling onsets lead to a $\pi$ phase jump in the AB oscillations of the differential conductance, although there are occasional exceptions where no phase jump can be observed. The experiment raises the question of why quantum coherence is not impaired by the presence of inelastic cotunneling. Leaving dot 2 in an excited state after such a cotunneling event allows which-path detection. A possible scenario resolving this puzzle is shown in Fig.\[figure4\](d). An inelastic cotunneling process excites the dot and increases the occupation probability of the excited state. Starting from this state, coherent [*elastic*]{} cotunneling processes via the [*excited*]{} state in dot 2 can take place that interfere with elastic cotunneling processes through dot 1 and give rise to the observed AB oscillations. For such processes to occur, a significant population of the excited state is required.
The relaxation rate [*from*]{} the excited state to the ground state (by phonon emission or further inelastic electron tunneling) must be small compared to the rate bringing the QD from the ground to the excited state via inelastic cotunneling. Charge relaxation times in QDs have been measured to be of the order of $1-10$ ns and attributed to acoustic phonon emission [@Fujisawa02]. Relaxation times involving a spin-flip can be much longer [@Fujisawa02; @Hanson05]. Inelastic cotunneling relaxing the dot back to the ground state will have a similar time scale as the process exciting the dot. Once the above condition is fulfilled, elastic cotunneling through the excited state can take place. We estimate its contribution to the differential conductance to be typically comparable to that of zero bias elastic cotunneling through the ground state and to the inelastic contributions. This discussion makes clear that the coherent contribution to the tunneling current probes the occupation probability of the excited QD state and thereby gives information about the rates of inelastic processes. The scenario proposed here is the cotunneling analogue to the cotunneling mediated transport through excited states in the Coulomb-blockade regime reported recently [@Schleser2005]. It can be particularly strong, if the excited state transition has a significantly stronger tunnel coupling to the leads than the ground state transition. This is supported by the fact that we did not find AB oscillations in the regime of weak interdot [*and*]{} weak dot–ring coupling. Our experiment differs significantly from previous measurements addressing the electrostatic AB effect [@Nazarov1993]. A recent experiment on an AB ring [@vanderWiel2003] was interpreted in terms of this prediction, and an experiment on a Mach-Zender interferometer obtained similar results [@Neder2006]. 
Phenomenologically, these results show similar abrupt jumps by $\pi$ in the AB phase and oscillations of the visibility with $V_\mathrm{sd}$. An important property of our structure is the presence of the two quantum dots with discrete levels, which allows only cotunneling currents to flow. The close relation of some phase jumps to the addition of transport channels through one of the two dots is unlikely to occur by chance as a result of the electrostatic AB effect. In conclusion, we have shown that the measurement of the coherent contribution to the cotunneling current in an Aharonov–Bohm interference experiment can be used to detect coherent elastic cotunneling processes on a background of other inelastic processes. This coherent current contribution contains information about the occupation probability of the involved excited dot state and about relaxation times. The results give a new perspective on inelastic cotunneling onsets. The measurement technique can be employed for further studies of coherent tunneling and interference involving quantum dots. We thank Y. Meir for valuable discussions and appreciate financial support from the Swiss National Science Foundation (Schweizerischer Nationalfonds). [100]{} L.P. Kouwenhoven [*et al.*]{}, in [*Mesoscopic Electron Transport*]{}, edited by L.P. Kouwenhoven, G. Schön, and L.L. Sohn, NATO ASI, Ser. E, Vol. 345 (Kluwer, Dordrecht, 1997), pp. 105–214. A. Yacoby, M. Heiblum, D. Mahalu, H. Shtrikman, Phys. Rev. Lett. [**74**]{}, 4047 (1995). D.V. Averin, Yu.V. Nazarov, in [*Single Charge Tunneling: Coulomb Blockade Phenomena in Nanostructures*]{}, edited by H. Grabert and M.H. Devoret (Plenum Press and NATO Scientific Affairs Division, New York, 1992), p. 217. M. Sigrist [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 036804 (2006). S. De Franceschi [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 878 (2001). A. Kogan [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 166602 (2004). D.M. Zumbuhl [*et al.*]{}, Phys. Rev. Lett.
[**93**]{}, 256801 (2004). M. Sigrist [*et al.*]{}, Appl. Phys. Lett. [**85**]{}, 3558 (2004). A.W. Holleitner [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 256802 (2001). T. Hatano [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 066806 (2004); M.C. Rogge [*et al.*]{}, Appl. Phys. Lett. [**83**]{}, 1163 (2003). R. Leturcq, D. Sanchez, G. Götz, T. Ihn, K. Ensslin, D.C. Driscoll, A.C. Gossard, Phys. Rev. Lett. [**96**]{}, 126801 (2006). M. Sigrist [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 66802 (2004). T. Fujisawa, D.G. Austing, Y. Tokura, Y. Hirayama, S. Tarucha, Nature [**419**]{}, 278 (2002). R. Hanson [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 196802 (2005). R. Schleser, T. Ihn, E. Ruh, K. Ensslin, M. Tews, D. Pfannkuche, D.C. Driscoll, A.C. Gossard, Phys. Rev. Lett. [**94**]{}, 206805 (2005). Y.V. Nazarov, Phys. Rev. B [**47**]{}, 2768 (1993). W.G. van der Wiel, Y.V. Nazarov, S. de Franceschi, T. Fujisawa, J. Elzerman, E.W.G.M. Huizeling, S. Tarucha, L.P. Kouwenhoven, Phys. Rev. B [**67**]{}, 033307 (2003). I. Neder, M. Heiblum, Y. Levinson, D. Mahalu, V. Umansky, Phys. Rev. Lett. [**96**]{}, 016804 (2006).
--- author: - 'R.A.Marino' - 'A.Gil de Paz' - 'S.F.Sánchez' - 'P.Sánchez-Blázquez' - 'N.Cardiel' - 'A.Castillo-Morales' - 'S.Pascual' - 'J.Vílchez' - 'C.Kehrig' - 'M.Mollá' - 'J.Mendez-Abreu' - 'C.Catalán-Torrecilla' - 'E.Florido' - 'I.Perez' - 'T.Ruiz-Lara' - 'S.Ellis' - 'A.R.López-Sánchez' - 'R.M.González Delgado' - 'A.de Lorenzo-Cáceres' - 'R.García-Benito' - 'L.Galbany' - 'S.Zibetti' - 'C.Cortijo' - 'V.Kalinova' - 'D.Mast' - 'J.Iglesias-Páramo' - 'P.Papaderos' - 'C.J.Walcher' - 'J.Bland-Hawthorn' - 'the CALIFA Team[^1]' bibliography: - 'referencias.bib' date: 'Received July 2015; Accepted September 2015' title: | Outer-disk reddening and gas-phase metallicities:\ The CALIFA connection ---

Introduction
============

After the pioneering works on surface photometry of nearby galaxies by @pat40, @devac59, @1968adga.book.....S and @free70, it became accepted that galaxy disks follow an exponential light profile. The [*inside-out*]{} scenario of galaxy formation predicts that the outskirts of a galaxy need longer times for their assembly, resulting in an exponential decline of the radial light distribution and of the metal abundances [@1991ApJ...379...52W; @1998MNRAS.295..319M]. In the last two decades, especially with the advent of CCD imaging first and SDSS drift-scanning imaging more recently, we have learnt that the vast majority of nearby disks show changes in the slope of their surface brightness (SB, hereafter) radial profiles after several scale lengths, and these can be either [*down*]{}- or [*up-bending*]{}; such features will be called [*breaks*]{}[^2] hereafter. Note that the radial position of a break should not be affected by inclination effects, as suggested by @2009MNRAS.398..591S [@2014MNRAS.441.2809M].
@2005ApJ...626L..81E and @PT06 proposed a detailed classification of the different SB distributions into three general categories: (i) <span style="font-variant:small-caps;">Type i</span> (<span style="font-variant:small-caps;">Ti</span>) profiles that follow a single exponential law beyond the bulge area along the whole optical extension of the galaxy, (ii) <span style="font-variant:small-caps;">Type ii</span> (<span style="font-variant:small-caps;">Tii</span>) profiles that present a double exponential law with a [*down-bending*]{} beyond the break radius, and (iii) <span style="font-variant:small-caps;">Type iii</span> (<span style="font-variant:small-caps;">Tiii</span>) profiles that exhibit an [*up-bending*]{} in the outer part. The observational results obtained at high redshift also suggest that breaks are present in distant galaxies and that, once formed, they are long-lived [@2007MNRAS.374.1479G; @2009ApJ...705L.133M]. This variety of radial morphologies has been tentatively explained by different mechanisms: Outer Lindblad Resonances [OLR, @2012MNRAS.427.1102M; @2013ApJ...771...59M], the presence of a bar [@2006ApJ...645..209D] or of long-lived spiral arms [@2011MNRAS.412.1741S], a shrinking of the star-forming disk [@2009MNRAS.398..591S; @2015ApJ...800..120Z Z15 hereafter], changes in the star-formation triggering mechanisms [@1994ApJ...435L.121E], satellite accretion, or the existence of a star formation (SF) threshold radius beyond which only stellar migration would populate the outer disk [@2008ApJ...675L..65R R08 hereafter]. The recent finding of a reddening in the optical broad-band colors of 39 <span style="font-variant:small-caps;">Tii</span> profiles [@2008ApJ...683L.103B B08 hereafter] has provided a fundamental piece of evidence for the current scenario of galaxy disk formation and has posed a challenge to these mechanisms. B08 also found a characteristic minimum color associated with these [*U-shaped*]{} color profiles.
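The three SB profile types introduced above can be summarized with a simple piecewise exponential model. The sketch below uses illustrative parameter values (central SB, scale lengths, break radius), not fits to any data discussed here.

```python
import numpy as np

def sb_profile(r, mu0, h_in, r_break=None, h_out=None):
    """Surface brightness mu(r) in mag/arcsec^2 for an exponential disk.

    Type I  : single exponential (r_break is None).
    Type II : h_out < h_in, down-bending beyond r_break.
    Type III: h_out > h_in, up-bending beyond r_break.
    """
    f = 2.5 / np.log(10)             # 1.086 mag of dimming per scale length
    r = np.asarray(r, dtype=float)
    mu = mu0 + f * r / h_in
    if r_break is not None:
        mu_break = mu0 + f * r_break / h_in   # continuity at the break
        outer = r > r_break
        mu[outer] = mu_break + f * (r[outer] - r_break) / h_out
    return mu

r = np.linspace(0.0, 12.0, 121)      # radius (e.g. kpc; values illustrative)
mu_t1 = sb_profile(r, 20.0, 3.0)                           # Type I
mu_t2 = sb_profile(r, 20.0, 3.0, r_break=8.0, h_out=1.5)   # Type II
mu_t3 = sb_profile(r, 20.0, 3.0, r_break=8.0, h_out=6.0)   # Type III
```

Beyond the break, the Type II profile is fainter (larger magnitude) and the Type III profile brighter than the single-exponential Type I case.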
Such reddening in the optical colors is better explained as being due to a shrinking of the region where SF has taken place over time (Z15) or to stellar migration (R08). In particular, a minimum in the luminosity-weighted age (and resulting optical colors) results naturally from the theoretical predictions of the stellar migration scenario.\ A direct prediction of the somewhat naive [*inside-out*]{} disk-formation scenario, under the assumption of closed-box chemical enrichment, is the presence of a universal radial abundance/metallicity gradient in disk galaxies. This is indeed observed in most late-type galaxies for both the gas and the stellar populations, but it is still under debate whether this abundance gradient is valid for all disk galaxies and at all radii. On the other hand, not all theoretical models produce elemental abundance radial distributions as perfect exponential functions. In this regard, [@2015MNRAS.451.3693M] shows how the radial distributions of oxygen abundance for a sample of theoretical galaxies with different dynamical masses are better fitted by a curve than by a single straight line. The resulting distribution is flatter in the inner disk and flattens again in the outer regions of disks, mainly in the less massive galaxies. This behavior is a consequence of the ratio between the SF and the infall rates in each radial region, which, in turn, is defined by the surface stellar profile and the gas (molecular and diffuse) radial distributions. More interestingly, although the surface brightness profiles do not show any flattening associated with that of the oxygen abundances, the colors also show a [*U-shape*]{} at the outer regions of disks, especially for galaxies with masses similar to the Milky Way (MW).
In addition, several investigations both in our MW [@1996MNRAS.280..720V] and in nearby galaxies [@2009ApJ...695..580B; @2011MNRAS.415.2439R; @2011MNRAS.412.1246G; @bres12; @2012ApJ...754...61M; @seba12] have reported a shallower oxygen abundance gradient (or flattening) in the outskirts, beyond $\sim$2 effective radii, R$_{\rm{eff}}$. In general, these deviations from the universal abundance gradient are explained in terms of variations of the in-situ gas density or effective SF history, the presence of a bar, or coincidence with the corotation radius. Recently, @2013MNRAS.428..625S showed a clear correlation between this minimum in the oxygen distribution and the corotation radii for 27 galaxies, but the mechanisms causing such different behaviors are not yet fully understood.\ A fundamental question therefore arises from these results: are the breaks observed in the SB profiles and the flattening in the oxygen abundance gradients connected? In order to investigate the role of the ionized-gas metallicity in the nature of the observed changes in SB and colors, we analyze these properties in a large sample of nearby disk galaxies from the CALIFA Integral Field Spectroscopy (IFS) survey. Data and Analysis ================= The Sample ---------- For this work we have selected the 350 galaxies observed by the CALIFA survey [@seba12a] at the CAHA 3.5m telescope with PMAS (Potsdam Multi Aperture Spectrograph) in the PPak mode [@2006PASP..118..129K] and processed by the CALIFA v1$.$5 pipeline [@rubenDR2] up to September 2014. CALIFA is an IFS survey whose main aim is to acquire spatially resolved spectroscopic information of $\sim$600 galaxies in the Local Universe (0.005 $<{\it z}<$ 0.03), sampling their optical extension up to $\sim$2.5 R$_{\rm{eff}}$ along the major axis with a spatial resolution of FWHM 2.5$\arcsec$ (1$\arcsec$/spaxel), and covering the wavelength range 3700Å-7500Å.
By construction, our sample includes galaxies of any morphological type, being representative of all local galaxies between $-$23$<$M$_{abs,z}$$<$$-$18. Details on the data reduction are given in [@bernd13] and in [@rubenDR2], and more information on the mother sample can be found in . We exclude from our analysis all those galaxies classified as mergers (26/350 galaxies), as interactions are expected to flatten the metallicity profiles independently of the secular mechanisms put to test in this work [e$.$g$.$ @2015arXiv150603819B; @2010ApJ...710L.156R S14]. Our surface-photometry sample therefore consists of 324 CALIFA galaxies. Global properties for the galaxies in our sample, such as morphological type, stellar mass, distance, etc., were taken from . [l l | r | r r r | r r]{}\ & & & &\ Number & [TOT=324]{} & & &\ Frequency & & & &\ & & & & & & &\ Number & & & 81 & 31 & 60 & 83 & 16\ Frequency & & & 25.0 & 9.6 & 18.5 & 25.6 & 4.9\ $\mu_{0}$ & & 19.88$\pm$0.81 & 20.14$\pm$0.61 & 20.21$\pm$0.82 & 20.38$\pm$0.75 & 19.30$\pm$0.88 & 19.49$\pm$0.61\ \ Number & [TOT=131]{} & & 37 & 18 & 43 & 30 & 3\ Frequency & & & 28.2 & 13.7 & 32.8 & 23.0 & 2.3\ R$_{\rm{break}}$ & & & 1.43$\pm$0.48 & 1.43$\pm$0.37 & 1.47$\pm$0.38 & 1.50$\pm$0.49 & 1.50$\pm$0.47\ $\mu_{\rm{break}}$ & & & 22.18$\pm$0.81 & 22.21$\pm$0.97 & 22.46$\pm$0.71 & 22.70$\pm$0.51 & 22.01$\pm$0.49\ ([*g’*]{}- [*r’*]{})$_{\rm{break}}$ & & & 0.52$\pm$0.11 & 0.51$\pm$0.15 & 0.51$\pm$0.13 & 0.52$\pm$0.14 & 0.75$\pm$0.19\ [(12+log(O/H))]{}$_{\rm{break,\,N2}}$ & & & 8.50$\pm$0.08 & 8.46$\pm$0.11 & 8.52$\pm$0.08 & 8.51$\pm$0.09 & 8.58$\pm$0.09\ $^{\dagger}$ [For a detailed explanation of each category see the classification schema presented in Fig$.$4 of @PT06. 
]{}\ Surface brightness profiles and color gradients {#SBandColor} ----------------------------------------------- The SDSS [*g’*]{} and [*r’*]{} SB and ([*g’*]{}$-$[*r’*]{}) color profiles were derived using the DR10 data products [@2014ApJS..211...17A]; in particular, we use the [*swarp*]{} mosaicking code. We selected [*g’*]{} and [*r’*]{}$-$band data for two reasons: (i) they are deep enough to be sensitive to the outer part of galaxies and (ii) the breaks and corresponding [*U-shaped*]{} profiles were originally found in these SDSS bands (B08, Z15). We create 3’$\times$3’ postage-stamp images (as shown in Fig$.$1 for the galaxy NGC5980) and we estimate that our SB measurements are reliable up to 27$-$28mag/$\arcsec$$^{2}$ (note that DR10 images are sky subtracted contrary to the DR7 data used by B08 and that our faintest SB value for this analysis[^3] is 27mag/$\arcsec$$^{2}$). For all galaxies in the sample we mask all contaminating sources such as bright stars or background galaxies, and then we extract the flux in each band from the isophotal fitting provided by the IRAF task [*ellipse*]{}. Each isophote was computed varying both the ellipticity ($\epsilon$) and the position angle (PA) with a step of 1$\arcsec$. This approach should affect the color profiles less than a procedure in which $\epsilon$ and PA are kept fixed. The extracted fluxes were converted to AB magnitudes and corrected for Galactic extinction using the @1998ApJ...500..525S maps. Both [*g’*]{} (cyan circles) and [*r’*]{} (yellow circles) SB profiles are plotted in Fig$.$1 along with the resulting radial color profile. The details of the SB profile fitting procedure are given in Section \[radprofs\]. Oxygen abundance gradients {#metprofs} -------------------------- We obtain spectroscopic information for $\sim$15130 regions (or complexes) from our 324 CALIFA datacubes using <span style="font-variant:small-caps;">HIIexplorer</span>[^4].
Following the prescriptions described in S14 and the analysis presented in @my13, we compute the radial oxygen gradients for both the N2 (log(\[N[<span style="font-variant:small-caps;">ii</span>]{}\]$\lambda$6583/H$\alpha$)) and O3N2 (log((\[O[<span style="font-variant:small-caps;">iii</span>]{}\]$\lambda$5007/H$\beta$)/N2)) indicators, normalized at R$_{\rm{eff}}$ (see Fig$.$1). We refer to these calibrations as [*M13-N2*]{}[^5] and [*M13-O3N2*]{}[^6] hereafter, respectively. The disk effective radii (R$_{\rm{eff}}$) values for the galaxies analyzed in this work were taken from S14. For the current analysis, we use the results based on [*M13$-$N2*]{} as it provides a better match to the abundances obtained via [*T$_{e}$*]{}-based methods [@bres12; @my13]. Instead of a single fit [@2014AJ....148..134P S14], we perform two independent linear regressions in the inner and outer disk ranges on each side of the best-fitting SB break in the SDSS [*r’*]{}-band (see Section \[radprofs\]). This allows us to investigate whether or not there is a connection between SB and (O/H) breaks using a method that is less prone to the effects of outliers and the irregular sampling of the metallicity radial distribution provided by individual regions (compared to a direct double fit of the metallicities without priors). The oxygen abundance fits are computed including both systematic and random errors through Monte Carlo (MC) simulations. For each galaxy within our sample, we have performed 10$^{5}$ MC simulations to compute the difference in slopes and its uncertainty. We assume that the line fluxes are normally distributed according to their estimated uncertainties, that the metallicity has an intrinsic normal scatter of $\sigma$=0.0567 \[dex\], and that the break radius is also normally distributed.
This likely overestimates the uncertainties because part of the systematics in the [*M13-N2*]{} calibration might come from parameters that take the same value across the disk of a given galaxy but vary from galaxy to galaxy. ![Comparison of the position of the break in the surface brightness profiles derived for the SDSS [*g’*]{} and [*r’*]{} bands for the total of 324 galaxies analyzed in surface photometry in this work. The color coding is shown on the right and represents the offset from the 1:1 relation in units of the error of each individual point.](rbreaks_comparison.eps){width="0.8\hsize"} Results ======= Radial profile classification {#radprofs} ----------------------------- After excluding the bulge component, we have carried out a detailed analysis of the disk [*g’*]{}- and [*r’*]{}-band profiles. We identify the transition radius from the bulge to the disk in the profile (innermost point of our fitting range) as that where (1) there was an evident change in the isophote’s ellipticity and (2) the brightness of the extrapolated inner disk component was equal to or brighter than that of the bulge. Our procedure is aimed at deriving the broken exponential (including the position of the break radius) that best fits our SB profile via bootstrapping (see resulting best fits in the middle panel of Fig$.$1). Since the position of the break radius is found to be filter-independent (B08), our profile fitting and classification were performed on the [*r’*]{}-band data owing to their better S/N with respect to the [*g’*]{}-band data (where the breaks appear brighter). Nevertheless, as a consistency check, we initially derived the position of the break in both bands independently, finding very good overall agreement between the two break radii (see Fig$.$2).
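In magnitude units an exponential disk is a straight line, so a down-bending (<span style="font-variant:small-caps;">Tii</span>) profile reduces to two linear segments joined at the break radius. The fitting step described above can thus be sketched as follows (an illustrative pure-Python example on synthetic data; the radial grid, slopes, and noise level are hypothetical, and the actual analysis uses bootstrapping rather than this simple grid search):

```python
def ols(xs, ys):
    """Least-squares slope and intercept of y = m*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def rss(xs, ys):
    """Residual sum of squares of the best-fitting straight line."""
    m, b = ols(xs, ys)
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

def fit_broken_exponential(r, mu, min_pts=3):
    """Grid search over candidate break radii, fitting independent lines
    (exponentials in magnitude units) on each side of the break."""
    best = None
    for rb in r[min_pts:-min_pts]:
        inner = [(x, y) for x, y in zip(r, mu) if x < rb]
        outer = [(x, y) for x, y in zip(r, mu) if x >= rb]
        cost = rss(*zip(*inner)) + rss(*zip(*outer))
        if best is None or cost < best[0]:
            best = (cost, rb, ols(*zip(*inner))[0], ols(*zip(*outer))[0])
    return best[1:]  # (break radius, inner slope, outer slope)

# Synthetic Tii profile: break at 1.5 R_eff, fainter (steeper) outer disk.
radii = [0.1 * i for i in range(2, 31)]
mu_true = [20.0 + 1.5 * r if r < 1.5 else 22.25 + 3.0 * (r - 1.5) for r in radii]
mu_obs = [m + 0.01 * (-1) ** i for i, m in enumerate(mu_true)]  # tiny "noise"
r_break, slope_in, slope_out = fit_broken_exponential(radii, mu_obs)
```

In the real procedure, bootstrap repetitions of a fit of this kind provide the uncertainty on the break radius and on the two slopes.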
We find that in the [*r’*]{}-band only 16% of the CALIFA galaxies are well described by a single exponential law (<span style="font-variant:small-caps;">Ti</span> profiles), while the remaining 84% of galaxies are better described by a broken exponential. In particular, 53% of our disks present down-bending profiles and were classified as <span style="font-variant:small-caps;">Tii</span>, and the remaining 31% are <span style="font-variant:small-caps;">Tiii</span> (up-bending) profiles. Previous studies by @2005ApJ...626L..81E and @PT06 (among others) have proposed that, according to the presence of a bar and/or to the relative position of the break with respect to the bar, the <span style="font-variant:small-caps;">Tii</span> class could be divided into different subgroups. The up-bending <span style="font-variant:small-caps;">Tiii</span> breaks represent 31% of our sample and historically they are also subdivided according to the possible nature of the outer zone (spheroid- or disk-like outer region). Our statistical analysis is focused on the possible relation between the outer-disk reddening and the ionized-gas metallicity, but is not aimed at explaining in detail the physical nature of each subgroup of the <span style="font-variant:small-caps;">Tii</span> and the <span style="font-variant:small-caps;">Tiii</span> categories. To this end, in this study we consider only the main classes for the joint analysis of the stellar and gas profiles, so as to easily compare our results with previous findings. Finally, the results of our disk classification are presented in Table 1, including the detailed frequencies and the SB, color, and oxygen abundance measurements at the break radius for each subtype.
We conclude that our results are consistent with the previous classifications and that our breaks occur, as expected, at $\sim$2.5 scale-lengths (or $\sim$1.5$\times$R$_{\rm{eff}}$) on average, and also that the mean ([*g’*]{}$-$[*r’*]{}) color at R$_{\rm{break}}$ is similar to the one obtained by B08 for <span style="font-variant:small-caps;">Tii</span> disks ($\sim$0.5 mag, see Table 1), although with a significantly larger (&gt;2$\times$) sample. Note that the latter color value is an average observed measurement, so it has not been corrected for internal reddening, which could vary as a function of the galaxy inclination. The interplay between stellar light and abundance profiles ---------------------------------------------------------- The goal of this work goes beyond the disk classification of the CALIFA galaxies. Our main aim is to find possible connections between the stellar light colors and the gas metallicity in the external parts of disk galaxies. In order to ensure a good statistical sampling, we impose that our final sample must include only spiral galaxies that have a minimum of 5 regions beyond the break radius and also present a broken exponential light profile (i$.$e$.$ elliptical and <span style="font-variant:small-caps;">Ti</span> galaxies are excluded from the following analysis). The final sample comprises a total of 131 galaxies (98<span style="font-variant:small-caps;">Tii</span>+33<span style="font-variant:small-caps;">Tiii</span>) that fulfill these requirements; this reduces the number of regions used from the 15130 detected in the surface-photometry sample to 8653. We carry out two linear regressions in the same SB intervals to calculate the difference between the slopes of the outer and inner color profiles, $\Delta$$\alpha$$_{(g'- r')}$.
As described in Section \[metprofs\], we then apply the same analysis to the radial distribution of the oxygen abundance of regions and simultaneously fit the metallicity gradients within and beyond the [*r’*]{}-band R$_{\rm{break}}$ (obtaining the difference of the outer-to-inner oxygen slopes, $\Delta$$\alpha$$_{\rm(O/H)}$). We find a flattening or an inverted oxygen abundance trend beyond the break radius for 69/131 galaxies, which are the ones showing positive differences, $\Delta$$\alpha$$_{\rm(O/H)}$$\,>\,$0 (difference outer-inner). Negative values of $\Delta$$\alpha$$_{\rm(O/H)}$ indicate a relative drop in the external part of the oxygen radial profile (as most profiles show a negative internal metallicity gradient). The difference between the outer and the inner slopes ($\Delta$$\alpha$$_{\rm(O/H)}$) of our (O/H) fits is plotted in Fig$.$3 versus the color slope difference, $\Delta$$\alpha$$_{(g'- r')}$, along with their errors obtained through the propagation of the fitting uncertainties. In general, our best-fitting results are in agreement with the oxygen abundance slope distributions obtained by and @2015MNRAS.448.2030H. In the case of [*M13-N2*]{}, our mean values for the inner and the outer slopes are $-$0.044 and $-$0.036 \[dex/R$_{\rm{eff}}$\] (median $-$0.041 and $-$0.029), respectively. We find that 111/131 galaxies host color profiles that present a flattening or are [*U-shaped*]{}. In addition, 69 of these galaxies also show a change in the outer part of their oxygen gradients (50<span style="font-variant:small-caps;">Tii</span>+19<span style="font-variant:small-caps;">Tiii</span>). Our results suggest that [*U-shaped*]{} color profiles are more common in <span style="font-variant:small-caps;">Tii</span> than in <span style="font-variant:small-caps;">Tiii</span> galaxies.
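The slope-difference measurement described in Section \[metprofs\] (two independent linear fits on each side of the break, with MC perturbations of the abundances and of the break radius) can be sketched on synthetic data as follows (an illustrative pure-Python example; the radii, gradients, and number of realizations are hypothetical, and only the intrinsic scatter of 0.0567 dex is taken from the text):

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic abundance profile: inner gradient -0.10 dex/R_eff, flat outer disk.
R_BREAK = 1.5          # true break radius (R_eff), hypothetical
SIGMA = 0.0567         # intrinsic scatter (dex), value quoted in the text
radii = [0.1 * i for i in range(1, 31)]
oh_true = [8.6 - 0.10 * min(r, R_BREAK) for r in radii]

random.seed(42)
deltas = []
for _ in range(2000):  # MC realizations (10^5 in the actual analysis)
    rb = random.gauss(R_BREAK, 0.1)            # perturbed break radius
    pts = [(r, y + random.gauss(0.0, SIGMA))   # perturbed abundances
           for r, y in zip(radii, oh_true)]
    inner = [(r, y) for r, y in pts if r < rb]
    outer = [(r, y) for r, y in pts if r >= rb]
    if len(inner) < 3 or len(outer) < 3:
        continue  # require a minimum number of regions on each side
    deltas.append(ols_slope(*zip(*outer)) - ols_slope(*zip(*inner)))

mean_delta = sum(deltas) / len(deltas)
spread = (sum((d - mean_delta) ** 2 for d in deltas) / len(deltas)) ** 0.5
```

A positive mean $\Delta$$\alpha$$_{\rm(O/H)}$, as recovered here for a profile that flattens beyond the break, corresponds to a flattening (or inversion) of the outer gradient, while the spread of the MC realizations provides its uncertainty.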
However, although most <span style="font-variant:small-caps;">Tii</span> galaxies are barred or weakly barred galaxies, no correlation is found when we analyze the behavior of the whole sample with respect to the presence or not of a bar. When all galaxies are considered together, the probability that this distribution of color-gradient differences results from a galaxy population with no change in color gradient is smaller than 10$^{-4}$ ($i.$e$.$ none among the total of 10$^{4}$ MC runs used in this test). This result confirms the positive detection of the [*U-shaped*]{} color profiles within our sample. However, with regard to the change in metallicity gradient, while the distribution of gradient difference for <span style="font-variant:small-caps;">Tii</span> disks is compatible with the null hypothesis (the Kendall test $p$-value in this case is as high as 0.3), in the case of the <span style="font-variant:small-caps;">Tiii</span> disks there seems to be a correlation between color- and metallicity-gradient difference. Indeed, the $p$-value in this case for the null hypothesis is $<$0.02, so the presence of a positive correlation is supported by the data. The use of statistical tests, including non-parametric ones, is justified as the functional dependence between these properties is not known a priori. We have used three different tests for this purpose, namely the ones from Pearson, Spearman, and Kendall (see [@kendallgibbons] and references therein). All three tests yield very similar results in all correlations analyzed. Mass dependence of metallicity and color breaks ----------------------------------------------- Although <span style="font-variant:small-caps;">Tii</span> galaxies do not show a correlation between the change in color and metallicity gradient, it is worth analyzing whether these changes might be related to other properties of the disks, either global or spatially resolved.
The same can be said about the <span style="font-variant:small-caps;">Tiii</span> galaxies, where understanding the origin of the weak correlation between color and ionized-gas metallicity gradient would certainly require the analysis of other properties of these disks. Although splitting our samples of <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> galaxies by physical properties would certainly benefit from an even larger sample of objects, we explore in this section the presence of potential correlations between the change in color and ionized-gas metallicity gradients and some global properties of our sample. Due in part to the reduced size of the sample once it is split into <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span>-type galaxies, and also to the large uncertainty of the individual measurements of the change in color and metallicity gradient, only the most significant of these potential correlations would stand out. In this regard, after analyzing the relation between these changes in color and metallicity gradients and (1) the presence or lack of barred structures, (2) the morphological type, or (3) the galaxy stellar mass, only the latter is statistically significant within our sample, and only in the case of the <span style="font-variant:small-caps;">Tii</span> galaxies. In Fig$.$ 4 we represent the change in color gradient (left panel) and the change in ionized-gas metallicity gradient (right panel) as a function of the stellar mass (as provided by ). There is no clear dependence between the change in color gradient and the stellar mass, despite the obvious global reddening of the outer disks in our sample. The [*p*]{}-values derived are $\sim$0.9 (in all three tests carried out) in the case of the <span style="font-variant:small-caps;">Tii</span> galaxies and $\sim$0.4 in the case of the <span style="font-variant:small-caps;">Tiii</span> systems.
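For reference, the three correlation coefficients behind the quoted [*p*]{}-values can be written compactly (an illustrative pure-Python sketch valid for tie-free data; in practice a statistics library would be used, which also provides the associated [*p*]{}-values):

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(xs), rank(ys))

def kendall(xs, ys):
    """Kendall tau: (concordant - discordant pairs) / total pairs."""
    n = len(xs)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0 else -1  # no ties
    return s / (n * (n - 1) / 2)

# Monotonic but nonlinear toy data:
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]
r_p, r_s, tau = pearson(x, y), spearman(x, y), kendall(x, y)
```

The toy data illustrate the difference between the tests: the rank-based Spearman and Kendall coefficients equal 1 for any monotonically increasing relation, while the Pearson coefficient falls below 1 when the relation is nonlinear.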
However, in the case of the change in ionized-gas metallicity of <span style="font-variant:small-caps;">Tii</span> galaxies, we find that more massive systems have a rather more uniform negative metallicity gradient than low-mass galaxies. Thus, at masses below 10$^{10}$M$_{\odot}$ a drop in the outer-disk metallicity gradient is commonly found. In this case the [*p*]{}-values found are as low as 0.003 (Pearson), 0.018 (Spearman), or 0.010 (Kendall). Again, a larger number of systems, especially at the low-mass half of the distribution, would be desirable to confirm this relation. It is also worth noting that the outer-disk reddening is clear in <span style="font-variant:small-caps;">Tii</span> galaxies at all masses, while for <span style="font-variant:small-caps;">Tiii</span> galaxies this is only clear at stellar masses above 10$^{10}$M$_{\odot}$. The segregation between <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> galaxies below this stellar-mass value is very clear from this figure (left panel). The best linear fit (plotted in Fig$.$ 4) in the case of the change in the color gradients as a function of stellar mass for <span style="font-variant:small-caps;">Tii</span> (<span style="font-variant:small-caps;">Tiii</span>) galaxies yields a slope of $-$0.016$\pm$0.006 (0.044$\pm$0.005)\[dex/M$_{\odot}$\]. We also show in Fig$.$ 4 the corresponding linear fit of the aforementioned correlation between the change in metallicity gradient and stellar mass for <span style="font-variant:small-caps;">Tii</span> (<span style="font-variant:small-caps;">Tiii</span>) galaxies. The slope of the best fit is $-$0.06$\pm$0.02 ($-$0.05$\pm$0.02)\[dex/M$_{\odot}$\]. Discussion ========== Outer-disk properties --------------------- In order to interpret the nature of SB breaks in nearby galaxies we have carried out a joint analysis of CALIFA gas-phase metallicities and SDSS optical SB and colors.
Whatever the mechanisms responsible for such breaks are, they should also be able to explain the diversity of morphologies, colors, and metallicity gradients found in these, otherwise poorly understood, outskirts of disk galaxies. Moreover, any theoretical interpretation should also explain the results derived from this work (some of them already found by other authors), namely:\ (<span style="font-variant:small-caps;">i</span>) The percentage of SB profiles and mean break colors found confirm those reported by previous works [@2005ApJ...626L..81E; @PT06] and B08, this time using the well-defined and large sample of nearby galaxies from the CALIFA IFS survey.\ (<span style="font-variant:small-caps;">ii</span>) Most of the CALIFA <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> disk galaxies show a flattening and even a reversal of their color gradients (see also B08).\ (<span style="font-variant:small-caps;">iii</span>) The distribution of differences in the outer$-$inner (gas) metallicity gradient shows no correlation with the difference in color gradient in the case of the <span style="font-variant:small-caps;">Tii</span> disks, while there is a positive correlation between them (i$.$e$.$ a metallicity flattening) in the case of the <span style="font-variant:small-caps;">Tiii</span> disks.\ (<span style="font-variant:small-caps;">iv</span>) The change in the ionized-gas metallicity gradient at both sides of the SB breaks in <span style="font-variant:small-caps;">Tii</span> disk galaxies varies with the galaxy stellar mass ([*p*]{}-value$\sim$0.01) in the sense that the low-mass galaxies show a more significant metallicity flattening (i$.$e$.$ with respect to the inner gradient) than more massive systems.\ (<span style="font-variant:small-caps;">v</span>) At stellar masses below $\sim$10$^{10}$M$_{\odot}$, <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> galaxies
behave differently in terms of outer-disk reddening, with the latter showing little reddening or even a bluing in their color profiles.\ Note that despite the evidence provided for the presence of these trends, the scatter is large. This suggests that each subgroup is rather inhomogeneous and therefore likely includes galaxies with different spectro-photometric, chemical, and dynamical histories. A question naturally arises: can these observational results be reconciled within a single theoretical disk-formation scenario? Outer-disk formation scenarios ------------------------------ The level of detail reached by recent models of galaxy formation and evolution is finally allowing us to use outer disks as laboratories for a better understanding of the relative contribution of in-situ SF, stellar migration, and halo-gas and satellite accretion in shaping the observational properties of galaxy disks. Thus, @2009MNRAS.398..591S were able to estimate that 60% of the stars in the outskirts of their simulated disk were not formed in-situ but migrated from the inner to the outer (warped) disk, leaving an important imprint on the stellar metallicity gradient. The idealized models of R08 indicate that the [*U-shape*]{} and minimum in the color profile found by B08 are caused by a drop of the gas surface density, mainly due to changes in the angular momentum, and that stars migrate mainly due to churning effects [@2002MNRAS.336..785S]. The simulations of predict that secular processes (bars and spiral structures) could redistribute material towards several disk scale lengths (up to $\sim$10kpc). Z15 have recently proposed that most of the stars currently in the outer disks of a sample of galaxies observed with Pan-STARRS1 were not formed in-situ and that the pollution of their outskirts is due to the combination of radial migration plus a truncation of the SF beyond the R$_{\rm{break}}$.
These scenarios mainly differ in whether or not the effects of stellar migration dominate over those related to the time evolution of the size of the disk where star formation takes place. Implications on the evolution of disks -------------------------------------- Our results indicate that the majority of our disk galaxies show [*U-shaped*]{} color profiles, while more than half of them (69/131 disks in total present both features) also have flat or inverted oxygen metallicity gradients. A correlation between the two is found, but only in the case of the <span style="font-variant:small-caps;">Tiii</span> disks. <span style="font-variant:small-caps;">Tii</span> galaxies, on the other hand, where the outer-disk reddening is evident, do not follow such a trend, but when a metallicity flattening is present it becomes more severe as the stellar mass decreases. In this section we explore the implications (and constraints) of these results on the different theoretical scenarios proposed. Given the lack of correlation between the change in color and metallicity, and taking into account the typical sizes of our <span style="font-variant:small-caps;">Tii</span> disks, we infer that the change in metallicity associated with the observed color flattening cannot be larger than $\sim$0.4dex; otherwise, such a correlation should be present. Assuming such a maximum change in metallicity, we would expect a negligible associated change in the optical color of the stars. Thus, at a fixed age and SF timescale (from instantaneous to continuous), the change in ([*g’*]{}$-$[*r’*]{}) between e.g$.$ 12+log(O/H)=8.3 and 8.7 would be smaller than $\sim$0.07mag (SB09, ). So, our results indicate that metallicity alone (at least in the ionized-gas phase) would never explain the observed outer-disk color profiles of <span style="font-variant:small-caps;">Tii</span> galaxies.
Despite the correlation found between the two quantities, even in the case of the <span style="font-variant:small-caps;">Tiii</span> galaxies the amount of metallicity flattening does not seem to be enough to explain the reddening of their outer-disk optical colors. Besides, even if a stellar metallicity gradient were present, it is not obvious that it could have an immediate effect on the ionized-gas-phase abundances, especially in the case of oxygen, as this element is released almost exclusively by short-lived massive stars. Therefore, we conclude that our results are in agreement with recent findings regarding positive age gradients in outer disks [@2009ApJ...697..361V; @2012ApJ...752...97Y]. In this regard, the work by @2015arXiv150604157G on the stellar age radial profiles of 300 CALIFA galaxies (stacked by morphology and mass) has also shown a flattening of these profiles beyond 1.5-2 half-light radii (HLR). @2015arXiv150604157G find negative extinction and stellar metallicity gradients, which leaves age as the only plausible driver of the outer-disk reddening. Therefore, any scenario aimed at explaining the color profiles presented in this work should also predict a radial change in the luminosity-weighted age of the stellar populations in outer disks, since the radial variation of either extinction or metallicity (see above) cannot account for it. In principle, both the scenario where the radius of the disk where in-situ SF takes place shrinks with time (SB09) and the stellar migration scenario (R08) naturally predict a positive age gradient in the outer disks, and they are actually not mutually exclusive. We should note, however, that the use of ionized-gas metallicities could lead to more modest metallicity flattenings than those expected from the stars, due to the dilution of the enriched gas by low-metallicity (or even pristine) gas from the halo [@2013ApJ...772..119L], preferentially in the outer parts of the disk.
On the other hand, this could be compensated by the fact that this halo gas might have been previously polluted by metal-rich outflows originated during early phases of star formation in the disk [@bres12; @2011MNRAS.416.1354D]. The relative importance of such inflows of unpolluted versus enriched gas remains speculative, however, in the absence of clear evidence, e$.$g$.$ from a comparison of the metallicity of the old stellar population with that of the ionized gas. With the idea of overcoming those limitations, including also the potential contribution of satellite accretion to the population of the outer disks, a careful spectroscopic study of the stellar content in the outer parts of the CALIFA galaxies is being pursued by Ruiz-Lara et al. (in prep). Our results indicate that the interpretation of the colors and ionized-gas metallicities of outer disks might be different for <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> galaxies and, possibly, also for different stellar-mass ranges. The fact that virtually all <span style="font-variant:small-caps;">Tii</span> galaxies show a reddening in their outer-disk optical colors (independently of their stellar mass) already marks a clear difference with respect to the <span style="font-variant:small-caps;">Tiii</span> galaxies (see below). Besides, we find that the metallicity flattening (although not correlated with the reddening in color) in <span style="font-variant:small-caps;">Tii</span> objects is more evident at low stellar masses, something that is less clear in the case of <span style="font-variant:small-caps;">Tiii</span> galaxies.
Finally, it is also worth keeping in mind that the mere shape of the <span style="font-variant:small-caps;">Tii</span> profiles indicates that the amount of stars found (whose presence ought to be explained) beyond the SB break is smaller than that in the outer disks of <span style="font-variant:small-caps;">Tiii</span> galaxies, at least for the intermediate-to-high stellar masses where <span style="font-variant:small-caps;">Tii</span> and <span style="font-variant:small-caps;">Tiii</span> galaxies show similar changes in their outer-disk color gradient (see left panel of Fig$.$4). In the case of <span style="font-variant:small-caps;">Tii</span> galaxies we ought to explain (1) why, for a similar level of color reddening, the outer disks of low-mass systems show a more obvious metallicity flattening than high-mass ones, while (2) age is still the major driver of the radial change in color in either case. A possible explanation for the behavior observed in low-mass <span style="font-variant:small-caps;">Tii</span> galaxies is the presence of radial migration, possibly due to the mechanism known as [*churning*]{} [@2002MNRAS.336..785S], since these low-mass disks are expected to be kinematically cold (although some authors suggest that migration might be negligible in this case, [@2010ApJ...712..858G]). Unfortunately, current numerical simulations do not yet allow us to establish whether this mechanism should lead to a larger radial metal diffusion but a similar outer-disk color reddening compared with [*heating*]{}, which dominates the net stellar migration in more massive systems, once a large number of galaxies under different evolutionary conditions are considered (see e.g$.$ SB09).
One aspect that should be taken into account when considering the feasibility of these migration mechanisms to explain the ionized-gas metal abundances of disks is the fact that oxygen is virtually all released by massive stars, so the oxygen abundance of the ISM should not be altered by the presence of low-mass evolving stars that could migrate from the inner parts of the disks. However, since the oxygen abundances derived here rely on the intensity of the \[<span style="font-variant:small-caps;">Nii</span>\]6584Å/H$\alpha$ line ratio and on the empirical relation between the N/O and O/H abundance ratios, a flattening in nitrogen abundance (which could be produced in this case by migrating intermediate-mass stars; see [@2013MNRAS.436..934W]) would also lead to an apparent flattening of the derived oxygen abundances. Here again, the comparison of ionized-gas and stellar abundances of outer disks could provide further clues. According to the scenario of a shrinking star-forming disk, the outer disks are mainly populated by stars formed in situ. In that case we would expect that the drop in surface brightness would lead to a drop in the oxygen abundance (we are very close to the Instantaneous-Recycling Approximation in this case) even if a positive color gradient is present in these regions, which is what we find for the most massive <span style="font-variant:small-caps;">Tii</span> galaxies. Should this scenario be valid for all <span style="font-variant:small-caps;">Tii</span> galaxies in general, we should also be able to explain why in low-mass <span style="font-variant:small-caps;">Tii</span> galaxies we find a signal of flattening in the oxygen abundance, despite the drop in surface brightness. 
Possible explanations could be that stellar migration is also playing a role in this case (see above) or that these galaxies have experienced episodes of extended star formation (which have led to the oxygen enrichment) on top of a secular shrinking of the size of the disk where star formation takes place during the long quiescent episodes (see also the case of <span style="font-variant:small-caps;">Tiii</span> disk galaxies below).\ In the case of the <span style="font-variant:small-caps;">Tiii</span> galaxies, we find (1) a correlation between outer-disk reddening and ionized-gas metallicity flattening and (2) that galaxies with a low level of reddening (or even bluing) are typically low-mass systems. These results are compatible with a scenario where low-mass <span style="font-variant:small-caps;">Tiii</span> galaxies are systems that have recently experienced (or are currently experiencing) an episode of enhanced inside-out growth, as in the case of the Type-2 XUV disks [@2007ApJS..173..538T], with blue colors and relatively flat metallicity gradients [@bres12]. In low-mass galaxies the small change in the metallicity gradient across the SB break would be a consequence of their lower overall abundances and the presence of a rather homogeneous metallicity in outer disks. Indeed, recent cosmological hydrodynamical simulations by @2011MNRAS.416.1354D propose that accretion of IGM gas enriched by early outflows could be taking place in the outskirts of disks (see also [@2013ApJ...772..119L]). From the observational point of view, many results show signs of accretion of metal-rich gas in the outer disks of spiral galaxies [@2015MNRAS.449..867B; @2015MNRAS.450.3381L]. 
Finally, we cannot exclude that a fraction of the <span style="font-variant:small-caps;">Tiii</span> systems analyzed here could also be <span style="font-variant:small-caps;">Ti</span> disks (which are, indeed, also growing from inside out) with only a modest change in surface brightness at the break radius position. More massive <span style="font-variant:small-caps;">Tiii</span> galaxies, on the other hand, show a clear outer-disk reddening and a corresponding metallicity flattening (through the correlation described above). This can be explained by their having experienced episodes of enhanced inside-out growth (or, equivalently, XUV emission) in their outer disks in the past, which could have raised the oxygen abundance in these outer disks to the levels found in XUV disks [@bres12], but which have now decreased in frequency and/or strength. This is equivalent to a shrinking of the SF disk with time having occurred in the case of the massive ($\geq$10$^{10}$M$_{\odot}$) disks (see Z15 and references therein). In other words, our results indicate that the outer regions of spiral disks (at least the ones that are likely to have experienced outer-disk growth and, therefore, get classified as <span style="font-variant:small-caps;">Tiii</span>) also suffer from mass down-sizing effects. It would be worth exploring whether this effect might be related to different gas fractions in the outer disks of these objects. Finally, the possibility that stellar migration contributes significantly to the population of these shallow outer disks cannot be ruled out, at least in the case of the high-mass <span style="font-variant:small-caps;">Tiii</span> galaxies. The former interpretation, however, again allows placing all <span style="font-variant:small-caps;">Tiii</span> galaxies in the context of a common mass-driven evolutionary scenario. 
Conclusions =========== In this paper, we have explored the connections between the color and ionized-gas metallicity gradients in the external parts of the CALIFA disk galaxies. The main results of this paper are summarized as follows: - We find [*U-shaped*]{} color profiles for most <span style="font-variant:small-caps;">Tii</span> galaxies, with an average minimum [*(g’*]{}- [*r’)*]{} color of $\sim$0.5mag and an associated ionized-gas metallicity flattening in the case of the low-mass galaxies. - The distribution of differences in the outer$-$inner (gas) metallicity gradient shows no correlation with the difference in color gradient in the case of the <span style="font-variant:small-caps;">Tii</span> disks, while there is a positive correlation between them (i$.$e$.$ a metallicity flattening) in the case of the <span style="font-variant:small-caps;">Tiii</span> disks. - In the case of <span style="font-variant:small-caps;">Tiii</span> galaxies a positive correlation between the change in color and oxygen abundance gradient is found, with the low-mass <span style="font-variant:small-caps;">Tiii</span> ($<$10$^{10}$M$_{\odot}$) showing a weak color reddening or even a bluing. 
Our view on the origin of these results in the context of the evolution of the outskirts of disk galaxies is:\ (<span style="font-variant:small-caps;">i</span>) In the case of <span style="font-variant:small-caps;">Tii</span> galaxies, the observed color reddening could be explained by the presence of stellar radial migration.\ (<span style="font-variant:small-caps;">ii</span>) Alternatively, within the scenario of a shrinking star-forming disk, these galaxies should have experienced episodes of extended star formation (which have led to the oxygen enrichment) on top of a secular shrinking of the size of the SF disk.\ (<span style="font-variant:small-caps;">iii</span>) In the case of <span style="font-variant:small-caps;">Tiii</span> galaxies, a scenario where low-mass galaxies have recently undergone an enhanced inside-out growth is proposed in order to explain the overall (negative) oxygen abundance gradient and the outer-disk bluing.\ (<span style="font-variant:small-caps;">iv</span>) For more massive <span style="font-variant:small-caps;">Tiii</span> disks, the outer color reddening associated with a flattening in their oxygen gradients can be explained as due to a past inside-out growth that has now decreased in frequency and/or strength. Our results indicate that the outer regions of spiral disks also suffer from mass down-sizing effects.\ Our results show that the CALIFA ionized-gas metallicities alone are not enough to tackle these aspects; deeper IFS data for both the stellar and the gas components (MUSE, @2010SPIE.7735E..08B; MaNGA, @2015ApJ...798....7B; SAMI, @2012MNRAS.421..872C) should be analyzed in order to determine the relation between outer-disk (both gas and star) metallicity gradients and galaxy global properties, something that should allow establishing the mechanism(s) which dominate the photometric and chemical evolution of the outskirts of disk galaxies. 
We are grateful to the anonymous referee for constructive comments and suggestions. R.A. Marino is funded by the Spanish program of International Campus of Excellence Moncloa (CEI). This study makes use of the data provided by the Calar Alto Legacy Integral Field Area (CALIFA) survey (http://www.califa.caha.es). CALIFA is the first legacy survey being performed at Calar Alto. The CALIFA collaboration would like to thank the IAA-CSIC and MPIA-MPG as major partners of the observatory, and CAHA itself, for the unique access to telescope time and support in manpower and infrastructures. The CALIFA collaboration also thanks the CAHA staff for their dedication to this project. We thank Carmen Eliche-Moral and Antonio Cava for stimulating discussions at several points in the development of this work. We acknowledge support from the Plan Nacional de Investigación y Desarrollo funding programs, AyA2010-15081, AyA2012-30717 and AyA2013-46724P, of the Spanish Ministerio de Economía y Competitividad (MINECO), as well as from the DAGAL network from the People’s Program (Marie Curie Actions) of the European Union’s Seventh Framework Program FP7/2007-2013/ under REA grant agreement number PITN-GA-2011-289313. C$.$C$.$$-$T$.$ thanks the support of the Spanish Ministerio de Educación, Cultura y Deporte by means of the FPU fellowship program. CJW acknowledges support through the Marie Curie Career Integration Grant 303912. Support for LG is provided by the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. LG acknowledges support by CONICYT through FONDECYT grant 3140566. SFS thanks the CONACYT-125180 and DGAPA-IA100815 projects for providing him support in this study. JMA acknowledges support from the European Research Council Starting Grant (SEDmorph; P$.$I$.$ V$.$ Wild). PP is supported by FCT through the Investigador FCT Contract No. 
IF/01220/2013 and POPH/FSE (EC) by FEDER funding through the program COMPETE. He also acknowledges support by FCT under project FCOMP-01-0124-FEDER-029170 (Reference FCT PTDC/FIS-AST/3214/2012), funded by FCT-MEC (PIDDAC) and FEDER (COMPETE). [^1]: Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC). [^2]: Breaks and truncations are sometimes referred to as different phenomena, as explained in @2012MNRAS.427.1102M. In this study we will focus our attention on the innermost change in the SB profiles, happening at $\mu_{r}$=22.5 mag/arcsec$^{2}$. [^3]: This SB lower limit value ensures that our measurements are not affected by the contamination of the stellar halo, which starts to contribute at fainter SB and at radii larger than 20 kpc, nor are the colors affected by the extended wings of the SDSS PSF. [^4]: <span style="font-variant:small-caps;">HIIexplorer</span>: <http://www.caha.es/sanchez/HII_explorer/> [^5]: 12 + log(O/H) = 8.743\[$\pm$0.027\]$+$0.462\[$\pm$0.024\] $\times$ N2 [^6]: 12 + log(O/H)= 8.533\[$\pm$0.012\]$-$0.214\[$\pm$0.012\] $\times$ O3N2
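For reference, the two empirical abundance calibrations quoted in the footnotes can be evaluated directly from measured line fluxes. The short sketch below is our own illustration (not code from the paper), assuming the standard definitions N2 = log([NII]6584/Hα) and O3N2 = log(([OIII]5007/Hβ)/([NII]6584/Hα)):

```python
import math

def oh_n2(f_nii6584, f_halpha):
    """N2 calibration from the footnote:
    12 + log(O/H) = 8.743 + 0.462 * N2,
    with N2 = log10([NII]6584 / Halpha)."""
    return 8.743 + 0.462 * math.log10(f_nii6584 / f_halpha)

def oh_o3n2(f_oiii5007, f_hbeta, f_nii6584, f_halpha):
    """O3N2 calibration from the footnote:
    12 + log(O/H) = 8.533 - 0.214 * O3N2,
    with O3N2 = log10(([OIII]5007/Hbeta) / ([NII]6584/Halpha))."""
    o3n2 = math.log10((f_oiii5007 / f_hbeta) / (f_nii6584 / f_halpha))
    return 8.533 - 0.214 * o3n2
```

For example, a region with [NII]6584/Hα = 0.3 yields 12 + log(O/H) ≈ 8.50 from the N2 calibration.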
--- abstract: 'We present a systematic study of both nuclear radii and binding energies in (even) oxygen isotopes from the valley of stability to the neutron drip line. Both charge and matter radii are compared to state-of-the-art [*ab initio*]{} calculations along with binding energy systematics. Experimental matter radii are obtained through a complete evaluation of the available elastic proton scattering data of oxygen isotopes. We show that, in spite of a good reproduction of binding energies, [*ab initio*]{} calculations with conventional nuclear interactions derived within chiral effective field theory fail to provide a realistic description of charge and matter radii. A novel version of two- and three-nucleon forces leads to considerable improvement of the simultaneous description of the three observables for stable isotopes but shows deficiencies for the most neutron-rich systems. Thus, crucial challenges related to the development of nuclear interactions remain.' author: - 'V. Lapoux$^{1}$' - 'V. Somà$^{1}$' - 'C. Barbieri$^{2}$' - 'H. Hergert$^{3}$' - 'J. D. Holt$^{4}$' - 'S. R. Stroberg$^{4}$' title: 'Radii and Binding Energies in Oxygen Isotopes: A Challenge for Nuclear Forces' --- Our present understanding of atomic nuclei faces the following major questions. Experimentally, we aim (i) to determine the location of the proton and neutron drip lines [@Erler12; @Thoennessen2015], i.e. the limits in neutron number $N$ beyond which, for fixed proton number $Z$, with decreasing or increasing $N$, nuclei are no longer bound with respect to particle emission, and (ii) to measure nuclear structure observables offering systematic tests of microscopic models. While nuclear masses have been experimentally determined for the majority of known light and medium-mass nuclei [@AME12], measurements of charge and matter radii are typically more challenging. Charge radii for stable isotopes have been accessed in the past by means of electron scattering [@Hof56]. 
In recent years, laser spectroscopy experiments have allowed extending such measurements to unstable nuclei with lifetimes down to a few milliseconds [@Laser95]. Matter radii are determined by scattering with hadronic probes, which requires modeling of the reaction mechanism. Theoretically, intensive work has also been performed towards linking a universal description of atomic nuclei to elementary interactions [@Machleidt01; @QCDab; @EFTth11] amongst constituent nucleons and, ultimately, to the underlying theory of strong interactions, quantum chromodynamics (QCD). If accomplished, this [*ab initio*]{} description would be beneficial both for a deep understanding of known nuclei (stable and unstable, totalling around 3300) and to predict on reliable bases the features of undiscovered ones (a few thousand more are expected). Many of the latter are not, in the foreseeable future, within experimental reach, yet they are crucial to understanding nucleosynthesis phenomena, modelled using large sets of evaluated data and of calculated observables. The reliability of first-principles calculations depends upon a consistent understanding of fundamental observables: ground-state characteristics of nuclei related to their existence (masses, expressed as binding energies) and sizes (expressed as root-mean-square (rms) radii). Special interest resides in the study of masses and sizes for a given element along isotopic chains. Experimentally, their determination is increasingly difficult as one approaches the neutron drip line; as of today, the heaviest element with available data on all existing bound isotopes is oxygen ($Z$=8) [@AME12]. Using theoretical simulations, the link between nuclear properties and inter-nucleon forces can be explored for different [*N/Z*]{} values, thus critically testing both our knowledge of nuclear forces and many-body theories. 
In this Letter, we focus on oxygen isotopes for which, in spite of the tremendous progress of recent [*ab initio*]{} methods, a simultaneous reproduction of masses and radii has not yet been achieved. We present important findings from novel [*ab initio*]{} calculations along with a complete evaluation of matter radii, $r_m$, for stable and neutron-rich oxygen isotopes. Here, $r_m$ are deduced via a microscopic reanalysis of proton elastic scattering data sets. They complement charge radii $r_{\text{ch}}$, offering an extended comparison through the isotopic chain that allows testing state-of-the-art many-body calculations. We show that a recent version of two- and three-nucleon (2$N$ and 3$N$) forces leads to considerable improvement in the critical description of radii. A viable *ab initio* strategy consists in exploiting the separation of scales between QCD and (low-energy) nuclear dynamics, taking point nucleons as degrees of freedom. For decades, realistic $2N$ interactions were built from fitting scattering data, see, [*e.g.*]{}, [@Machleidt01]. However, model limitations were seen through discrepancies with experimental data, like underbinding of finite nuclei and inadequate saturation properties of extended nuclear matter. More recently, the approach consisted in using the principles of chiral effective field theory (EFT) to provide a systematic construction of nuclear forces, a well-founded starting point for structure calculations [@QCDab; @EFTth11]. Many-body techniques have, themselves, undergone major progress and extended their domain of applicability both in mass and in terms of accessible (open-shell) isotopes for a given element [@Soma11; @Her13; @Som13; @Bin14; @Her14; @Holt13a; @Holt14; @Hag14; @Bog14; @Jan14; @Sig15; @Dug15]. 
An emblematic case that has received considerable attention is oxygen binding energies, where several calculations have established the crucial role played by 3$N$ forces in the reproduction of the neutron drip line at $^{24}$O [@Ots10; @Hag12o; @Holt13b; @Her13; @Cip13; @Lah14; @Heb15]. The excellent agreement between experimental data and calculations based on a next-to-next-to-next-to-leading order (N$^3$LO) 2$N$ and N$^2$LO 3$N$ chiral interaction by Entem, Machleidt and others (EM) [@EM03; @Nav07; @Roth12] was greeted as a milestone for [*ab initio*]{} methods, even though a consistent description of nuclear radii could not be achieved at the same time [@Cipollone15]. Since then, this deficiency has remained a puzzle. Subsequent calculations of heavier systems [@Som13; @Bin14; @Her14] and infinite nuclear matter [@Car13; @Hag14nm] confirmed the systematic underestimation of charge radii, a sizable overbinding and too spread-out spectra, all pointing to an incorrect reproduction of the saturation properties of nuclear matter. While interactions with good saturation properties existed [@Heb11; @Cor14; @Simo16], this problem led to the focused development of a novel nuclear interaction, NNLO$_{\text{sat}}\,$ [@NNLOeks15], which includes contributions up to N$^2$LO in the chiral EFT expansion (both in the 2$N$ and 3$N$ sectors) and differs from EM in two main aspects. First, the optimization of the (“low-energy") coupling constants is performed simultaneously for 2$N$ and 3$N$ terms [@Car15]; EM, in contrast, optimizes 3$N$ forces subsequently. Second, in addition to observables from few-body ($A$=2,3,4) systems, experimental constraints from light nuclei (energies and charge radii in some C and O isotopes) are included in the optimization. This aspect departs from the strategy of EM, in which parameters in the $A$-body sector are fixed uniquely by observables in $A$-body systems. 
Although first applications point to good predictive power for ground-state properties [@NNLOeks15; @Hag15; @Ruiz16], the performance of the NNLO$_{\text{sat}}\,$ potential remains to be tested along complete isotopic chains. Here, we employ two different many-body approaches, self-consistent Green’s function (SCGF) and in-medium similarity renormalization group (IMSRG), each available in two versions. The first are based on standard expansion schemes and, thus, applicable only to closed-shell nuclei (e.g., not $^{18,20}$O): Dyson SCGF (DGF) [@Dic04] and single-reference IMSRG (SR-IMSRG) [@Tsu11] respectively. The second are built on Bogoliubov-type reference states and thus allow for a proper treatment of pairing correlations and systems displaying an open-shell character. These are labeled Gorkov SCGF (GGF) [@Soma11] and multireference IMSRG (MR-IMSRG) [@Her13], respectively. For the MR-IMSRG, the reference state is first projected on good proton and neutron numbers. Having different [*ab initio*]{} approaches at hand is crucial for benchmarking theoretical results and inferring as unbiased as possible information on the input forces. Moreover, while DGF, SR-IMSRG and MR-IMSRG feature a comparable content in terms of many-body expansion, GGF currently includes a lower amount of many-body correlations, which allows testing the many-body convergence [@Som13]. First, we compute binding energies $E_B$ for $^{14-24}$O for the two sets of 2$N$ and 3$N$ interactions with the four many-body schemes. EM is further evolved to a low-momentum scale $\lambda=1.88 - 2.0\,\text{fm}^{-1}$ by means of SRG techniques [@Bog07; @Bog10]. Results are displayed in Fig. \[fig:plotEbExpTheoOxyNNLO\]. For both interactions, different many-body calculations yield values of $E_B$ spanning intervals of up to 10 MeV, from 5 to 10$\%$ of the total. 
Compared to experimental binding energies, EM and NNLO$_{\text{sat}}\,$ perform similarly, following the trend of available data along the chain both in absolute and in relative terms. Overall, results shown in Fig. \[fig:plotEbExpTheoOxyNNLO\] confirm previous findings for EM and validate the use of NNLO$_{\text{sat}}$ along the isotopic chain. ![Oxygen binding energies. Results from SCGF (DGF and GGF) and IMSRG calculations with EM and NNLO$_{\text{sat}}\,$ are displayed along with experimental data.[]{data-label="fig:plotEbExpTheoOxyNNLO"}](FIG1artEBexpcalcEMandNNLOhkTOTShift.eps){width="8.0cm"} Now, we examine the nuclear charge observables. In addition to $r_{\text{ch}}$ radii, analytical forms of fitted experimental charge densities can be extracted from ($e$,$e$) cross sections. Standard forms include two- or three-parameter Fermi (2PF or 3PF) profiles [@Vries87]. By unfolding [@SatLove79] the finite size of the proton charge distribution \[whose $r_{\text{ch}}$ radius is 0.877(7) fm [@PDG2010]\], proton ground-state densities $\rho_p$ can be deduced, with the corresponding $r_p$ radius defined as the rms radius of the $\rho_p(r)$ distribution ($\sqrt{\langle r^2 \rangle}$). It should be underlined that, due to the various analysis techniques providing charge densities, the global systematic error on $r_p$ is significantly larger (roughly $0.05$ fm) than the one on single $r_{\text{ch}}$ values (of the order of 0.01 fm). For $^{16}$O, $r_{\text{ch}}$ was estimated to be 2.730 (25) fm [@Sick70] and 2.737 (8) fm [@Miska79; @Vries87]. Differences in $r_{\text{ch}}$ between $^{17,18}$O and $^{16}$O, $\Delta r_{\text{ch}}~=- 0.008 (7)$ and $+0.074 (8)$ fm [@Miska79], are affected by the same systematic errors. In this Letter, we determine matter radii via the proton probe. 
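As a numerical illustration of the unfolding step described above, the point-proton rms radius can be estimated from a measured charge radius by subtracting the proton's own charge radius in quadrature. This simple quadrature relation, which neglects the neutron charge distribution and relativistic corrections, is our sketch of the idea rather than the exact procedure used in the analysis:

```python
import math

def point_proton_radius(r_ch, r_proton=0.877):
    """Approximate point-proton rms radius (fm), obtained by unfolding the
    finite proton size from the measured charge radius in quadrature:
    <r_ch^2> ~ <r_p^2> + r_proton^2 (small corrections neglected)."""
    return math.sqrt(r_ch ** 2 - r_proton ** 2)

# With r_ch(16O) = 2.737 fm quoted in the text, this gives r_p ~ 2.59 fm,
# consistent with the r_p value listed for 16O in the table.
```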
We consider angular distributions of proton elastic scattering cross sections and compare data to calculations performed using a microscopic density-dependent optical model potential (OMP) inserted in the distorted wave Born approximation (DWBA). Recently, this type of analysis has been successfully applied to the case of helium isotopes, for which $r_m$ radii were extracted with uncertainties of the order of 0.1 fm [@EPJArmsHe]. We employ the energy- and density-dependent Jeukenne-Lejeune-Mahaux (JLM) potential [@JLM77b], derived from a $G$-matrix formalism and extensively tested in the analysis of nucleon scattering data for a wide range of nuclei. This complex potential depends only on the incident energy $E$ and on neutron and proton densities. Here, we use the standard form:\ $U_{\text{JLM}} (\rho,E) = \lambda_V V(\rho,E) + i \lambda_W W(\rho,E) $, with $\lambda_V=\lambda_W=1$.\ For $^{18-22}$O, nucleon separation energies are sufficiently high to exclude strong coupling effects to the continuum or to excited states, and the imaginary part is sufficient to include, implicitly, all other relevant coupled-channel effects. For the stable symmetric $^{16}$O, $r_m$ was extracted from combined ($e$,$e$), ($p$,$p$) and ($n$,$n$) data in Ref. [@Pet85] using the following procedure: the (3PF) density profile $\rho_p$ was deduced from electron scattering data [@Sick70], and the same profile was assumed for the neutron density distribution. This “experimental” matter density built from the ($e$,$e$) data was used to compute the potentials. This procedure was also followed for $^{17,18}$O, with the neutron density profiles initially taken as $(N/Z)\times\rho_p$ and then adjusted to reproduce elastic data on heavy ions [@SatLove79]. We refer to densities extracted in this way as the experimental (exp) ones, with $r_p$ values for $^{16-18}$O given in Table \[tab:ExpRmsEeSigpp\]. 
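The two-parameter Fermi (2PF) profiles mentioned above, and the rms radius associated with a given choice of half-density radius and diffuseness, can be sketched numerically as follows. This is a minimal illustration under our own assumptions; the parameter values are hypothetical, not those of the actual fits:

```python
import math

def fermi_2pf(r, c, a):
    """Unnormalized 2PF density rho(r) = 1 / (1 + exp((r - c)/a)),
    with half-density radius c and diffuseness a (both in fm)."""
    return 1.0 / (1.0 + math.exp((r - c) / a))

def rms_radius(c, a, r_max=20.0, n=4000):
    """rms radius of a spherical density:
    <r^2> = int r^4 rho dr / int r^2 rho dr (simple Riemann sum)."""
    dr = r_max / n
    num = den = 0.0
    for i in range(1, n + 1):
        r = i * dr
        rho = fermi_2pf(r, c, a)
        num += r ** 4 * rho * dr
        den += r ** 2 * rho * dr
    return math.sqrt(num / den)
```

Increasing either c or a increases the rms radius, so a family of (c, a) pairs can be scanned to bracket a target matter radius such as the value quoted below for $^{18}$O.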
We first performed OMP calculations for $^{18}$O and compared them to data collected at 35.2 $A\cdot$MeV in direct kinematics [@Fab80] and at 43 $A\cdot$MeV in inverse kinematics [@Khan00]. Starting from a 2PF profile fitted to the exp densities, by changing the two parameters governing size and diffuseness, we generated a family of densities that were then inserted into the OMP and fitted to the data. Since only the most forward angles have small global errors and are sensitive to the size of the nucleus, we limited our fit to 46$^{\circ}$ and 33$^{\circ}$ for the 35.2 and 43 $A\cdot$MeV data, respectively, i.e., to data with statistical + systematic errors below 10%. For these degrees of freedom (DOF), by keeping the curves falling within $\chi^2 / \text{DOF} <1$, we determined an associated matter radius $r_m = 2.75 (10)$ fm. The 2PF profiles with the same $r_m$ lead to very similar $\chi^2 / \text{DOF}$, signaling that calculations, in the region of forward angles, are rather insensitive to the diffuseness. As shown in Fig. \[fig:elasticOpp\], calculations are in good agreement with ($p$,$p$) data, which confirms the validity of the OMP approach provided that realistic densities are employed. We repeated the analysis using densities generated by Hartree-Fock BCS calculations [@Khan00] with Skyrme interactions, each associated with a different $r_m$. Results are very similar to the ones of Fig. \[fig:elasticOpp\], with $r_m = 2.77 (10)$ fm, close to the one from the exp densities. This validates the use of OMP calculations to estimate $r_m$ radii from ($p$,$p$) cross sections [@EPJArmsHe]. For unstable $^{20,22}$O, elastic proton scattering cross sections were measured using oxygen beams at 43 and 46.6 $A\cdot$MeV, respectively [@Khan00; @Bec06]. We performed OMP calculations with microscopic densities for $^{20,22}$O. Angular distributions up to 30$^{\circ}$ (for $^{20}$O) and 33$^{\circ}$ (for $^{22}$O) were considered for the fits. Results are displayed in Fig. 
\[fig:elasticOpp\]. In order to show the sensitivity to the microscopic inputs, we compare, for $^{22}$O, results with densities from the Sly4 [@Sly4] Skyrme interaction with those obtained with densities from Hartree-Fock-Bogoliubov calculations based on the Gogny D1S force [@DeGo80; @D1S91]. In both cases, ($p$,$p$) cross sections are well reproduced. Resulting $r_m$ radii are 2.90 fm in $^{20}$O along with 2.96 and 3.03 fm in $^{22}$O for Sly4 and D1S densities, respectively. The sensitivity study led us to the same range of $\pm 0.1$ fm, which is the uncertainty on our values throughout the ($p$,$p$) analysis. The results are summarized in Table \[tab:ExpRmsEeSigpp\].

| $A$ | 16 | 17 | 18 | 20 | 22 |
|---|---|---|---|---|---|
| $r_p$ | 2.59 (7) | 2.60 (8) | 2.68 (10) | | |
| $r_m$ ($\sigma_I$) | 2.54 (2) | 2.59 (5) | 2.61 (8) | 2.69 (3) | 2.88 (6) |
| $r_m$ ($p$,$p$) | 2.60 (8) | 2.67 (10) | 2.77 (10) | 2.9 (1) | 3.0 (1) |

Studying interaction cross sections ($\sigma_I$) [@Oza01] is another way of deducing matter radii. In Fig. \[fig:plotRmsExpTheoOxySigIpp\], we compare experimental $r_m$ radii for $^{16-22}$O from ($e$,$e$) and ($p$,$p$) to values obtained from $\sigma_I$ measurements [@Oza01; @Kan11] (see, also, Table \[tab:ExpRmsEeSigpp\]). ![Experimental values for the $r_m$ radii, deduced from $\sigma_I$, ($e$,$e$) and ($p$,$p$) measurements (see Table \[tab:ExpRmsEeSigpp\]). Blue lines show the $A^{1/3}$ behavior of the liquid drop model. []{data-label="fig:plotRmsExpTheoOxySigIpp"}](FIG3artRMSMatexpSigPpelec.eps){width="6cm"} While ($e$,$e$) and ($p$,$p$) provide a consistent set of $r_p$ and $r_m$ radii for $^{16-18}$O, this is not the case for $r_m$ values obtained from $\sigma_I$, usually extracted without including correlations in the target, which arguably influences scattering amplitudes. 
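The $A^{1/3}$ liquid-drop scaling shown in the figure can be reproduced with a one-line model. Anchoring $r_0$ to the $^{16}$O matter radius from the ($p$,$p$) analysis is our own illustrative normalization choice, not the one necessarily used for the plotted curves:

```python
def liquid_drop_rm(a_mass, r0):
    """Liquid-drop estimate of the matter radius: r_m = r0 * A**(1/3)."""
    return r0 * a_mass ** (1.0 / 3.0)

# Normalize to r_m(16O) = 2.60 fm from the (p,p) analysis in the table.
r0 = 2.60 / 16 ** (1.0 / 3.0)
predicted = {a: round(liquid_drop_rm(a, r0), 2) for a in (16, 18, 20, 22)}
# The measured (p,p) radii for 20O and 22O (2.9 and 3.0 fm) grow faster
# than this A**(1/3) trend, reflecting the developing neutron excess.
```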
Since our analysis of the stable isotopes, used as a reference, provides $r_m$ radii with an uncertainty of the order of 0.1 fm, we also conclude that uncertainties deduced from $\sigma_I$ are underestimated. Consequently, we focus on results obtained from ($e$,$e$) and ($p$,$p$) data for the comparison with theory. We start by analyzing calculations for proton and neutron radii, shown in Fig. \[fig:plotRmsExpTheoOxyNNLOpn\]. We notice that, for each interaction, there is good agreement between the various methods, which span 0.05 (0.1) fm when EM (NNLO$_{\text{sat}}$) is used. This shows that different state-of-the-art schemes achieve, for a given interaction, an uncertainty that is smaller than (i) experimental uncertainty and (ii) the uncertainty coming from the use of different interactions. Clear discrepancies are observed between radii computed with EM and NNLO$_{\text{sat}}$, with the former being systematically smaller by 0.2-0.3 fm. While EM largely underestimates data, $r_p$ values are well reproduced by NNLO$_{\text{sat}}$, keeping in mind that $r_{\text{ch}}$ of $^{16}$O is included in the NNLO$_{\text{sat}}\,$ fit. The performance of the interactions along the isotopic chain can be seen for matter radii, where in Fig. \[fig:plotRmsExpTheoOxyNNLO\] the evaluations from the ($p$,$p$) analysis are compared to GGF and MR-IMSRG. Similar conclusions are drawn by considering other schemes, e.g., see Fig. \[fig:plotRmsExpTheoOxyNNLOpn\], where rms radii computed with EM underestimate evaluated data by about 0.3 - 0.4 fm for all isotopes. ![Matter radii from our analysis and given in Tab. \[tab:ExpRmsEeSigpp\], compared to calculations with EM [@EM03; @Nav07; @Roth12] and NNLO$_{\text{sat}}\,$ [@NNLOeks15]. Bands span results from GGF and MR-IMSRG schemes. 
[]{data-label="fig:plotRmsExpTheoOxyNNLO"}](FIG5artRMSMatexpNNLOetEMhkTotband.eps){width="7cm"} Results significantly improve with NNLO$_{\text{sat}}$, although the description deteriorates towards the neutron drip line, with a discrepancy of about 0.2 fm in $^{22}$O. Recently, a similar effect was observed for the calcium isotopes [@Ruiz16]. These results underline the progress of nuclear [*ab initio*]{} calculations, which are able to address systematics of isotopic chains beyond light systems and, thus, provide critical feedback on the long-term developments of internucleon interactions. To this end, joint theory-experiment analyses are essential and have to start with a realistic description of both sizes and masses. In this work we focused on the oxygen chain, the heaviest one for which experimental information on both $E_B$ and radii is available up to the neutron drip line. We showed that nuclear sizes of unstable isotopes can be obtained through the ($p$,$p$) data analysis within 0.1 fm. The combined comparison of measured charge-matter radii and $E_B$ with [*ab initio*]{} calculations offers a unique insight into nuclear forces: the current standard EM yields an excellent reproduction of $E_B$ but significantly underestimates radii, whereas the unconventional NNLO$_{\text{sat}}$ clearly improves the description of radii. Our results raise questions about the choice of observables that should be included in the fit and the resulting predictive power whenever this strategy is followed. More precise information on oxygen radii, e.g., $r_{\text{ch}}$ via laser spectroscopy measurements, would allow confirming our ($p$,$p$) analysis and further refining the present discussion. Similar studies in heavier isotopes will also contribute to the systematic development of nuclear forces. 
Finally, we stress that a simultaneous reproduction of binding energies and radii in stable and neutron-rich nuclei is mandatory for reliable structure calculations, and even more so for reaction calculations. Scattering amplitudes and nucleon-nucleus interactions evolve as a function of the nuclear size, which should be taken into account consistently when more microscopic reaction approaches are considered.

Acknowledgements {#acknowledgements .unnumbered}
================

The [*Espace de Structure et de réactions Nucléaires Théorique*]{} ESNT (http://esnt.cea.fr) framework at CEA is gratefully acknowledged for supporting the project that initiated the present work. The authors would like to thank T. Duguet for useful discussions and P. Navr[á]{}til, A. Calci, S. Binder, J. Langhammer, and R. Roth for providing the interaction matrix elements used in the present calculations. C. B. is funded by the Science and Technology Facilities Council (STFC) under Grant No. ST/L005743/1. SCGF calculations were performed by using HPC resources from GENCI-TGCC (Contracts No. 2015-057392 and No. 2016-057392) and the DiRAC Data Analytic system at the University of Cambridge (under BIS National E-infrastructure capital Grant No. ST/J005673/1, and STFC Grants No. ST/H008586/1 and No. ST/K00333X/1). H. H. acknowledges support by the NSCL/FRIB Laboratory. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada. Computing resources for MR-IMSRG calculations were provided by the Ohio Supercomputing Center (OSC) and the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. J. Erler, N. Birge, M. Kortelainen, W. Nazarewicz, E. Olsen, A. M. Perhac, and M. Stoitsov, [*The limits of the nuclear landscape*]{}, Nature [**486**]{}, 509 (2012). M. Thoennessen, Int. J. Mod. Phys. E [**24**]{}, 1530002 (2015). G. Audi, M.
Wang, A.H. Wapstra, F.G. Kondev, M. MacCormick, X. Xu, and B. Pfeiffer, Chin. Phys. C [**36**]{}, 1287 (2012). R. Hofstadter, Rev. Mod. Phys. [**28**]{}, 214 (1956). J. Billowes and P. Campbell, J. Phys. G [**21**]{}, 707 (1995). R. Machleidt and I. Slaus, J. Phys. G [**27**]{}, (2001) R 69 (topical review), [*and references therein*]{}. E. Epelbaum, H.-W. Hammer, and U.-G. Mei[ß]{}ner, Rev. Mod. Phys. [**81**]{}, 1773 (2009). R. Machleidt and D. Entem, Phys. Rep. [**503**]{}, 1 (2011). V. Somà, T. Duguet, and C. Barbieri, Phys. Rev. C [**84**]{}, 064317 (2011). H. Hergert, S. Binder, A. Calci, J. Langhammer, and R. Roth, Phys. Rev. Lett. [**110**]{}, 242501 (2013). V. Somà, A. Cipollone, C. Barbieri, P. Navrátil, and T. Duguet, Phys. Rev. C [**89**]{}, 061301 (2014). S. Binder, J. Langhammer, A. Calci, and R. Roth, Phys. Lett. B [**736**]{}, 119 (2014). H. Hergert, S. K. Bogner, T. D. Morris, S. Binder, A. Calci, J. Langhammer, and R. Roth, Phys. Rev. C [**90**]{}, 041302 (2014). J. D. Holt, J. Menéndez, and A. Schwenk, Phys. Rev. Lett. [**110**]{}, 022502 (2013). J. D. Holt, J. Menendez, J. Simonis, and A. Schwenk, Phys. Rev. C [**90**]{}, 024312 (2014). G. Hagen, T. Papenbrock, M. Hjorth-Jensen, and D. J. Dean, Rep. Prog. Phys. [**77**]{}, 096302 (2014). S. K. Bogner, H. Hergert, J. D. Holt, A. Schwenk, S. Binder, A. Calci, J. Langhammer, and R. Roth, Phys. Rev. Lett. [**113**]{}, 142501 (2014). G. R. Jansen, J. Engel, G. Hagen, P. Navrátil, and A. Signoracci, Phys. Rev. Lett. [**113**]{}, 142502 (2014). A. Signoracci, T. Duguet, G. Hagen, and G. R. Jansen, Phys. Rev. C [**91**]{}, 064320 (2015). T. Duguet, J. Phys. G [**42**]{}, 025107 (2015). T. Otsuka, T. Suzuki, J. D. Holt, A. Schwenk, and Y. Akaishi, Phys. Rev. Lett. [**105**]{}, 032501 (2010). G. Hagen, M. Hjorth-Jensen, G. R. Jansen, R. Machleidt, and T. Papenbrock, Phys. Rev. Lett. [**108**]{}, 242501 (2012). J. D. Holt, J. Menéndez, and A. Schwenk, Eur. Phys. J. A [**49**]{}, 39 (2013). A. Cipollone, C. 
Barbieri, and P. Navrátil, Phys. Rev. Lett. [**111**]{}, 062501 (2013). T. A. Lähde, E. Epelbaum, H. Krebs, D. Lee, U.-G. Mei[ß]{}ner, and G. Rupak, Phys. Lett. B [**732**]{}, 110 (2014). K. Hebeler, J. D. Holt, J. Menéndez, and A. Schwenk, Annu. Rev. Nucl. Part. Sci. [**65**]{}, 457 (2015). D.R. Entem and R. Machleidt, Phys. Rev. C [**68**]{}, 041001 (2003). P. Navrátil, Few-Body Syst. [**41**]{}, 117 (2007). R. Roth, S. Binder, K. Vobig, A. Calci, J. Langhammer, and P. Navrátil, Phys. Rev. Lett. [**109**]{}, 052501 (2012). A. Cipollone, C. Barbieri, and P. Navrátil, Phys. Rev. C [**92**]{}, 014306 (2015). A. Carbone, A. Polls, and A. Rios, Phys. Rev. C [**88**]{}, 044302 (2013). G. Hagen, T. Papenbrock, A. Ekstr[ö]{}m, K. A. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, and C. J. Horowitz, Phys. Rev. C [**89**]{}, 014319 (2014). K. Hebeler, S. K. Bogner, R. J. Furnstahl, A. Nogga, and A. Schwenk, Phys. Rev. C [**83**]{}, 031301(R) (2011). L. Coraggio, J. W. Holt, N. Itaco, R. Machleidt, L. E. Marcucci, and F. Sammarruca, Phys. Rev. C [ **89**]{}, 044321 (2014). J. Simonis, K. Hebeler, J. D. Holt, J. Menéndez, and A. Schwenk, Phys. Rev. C [**93**]{}, 011302(R) (2016). A. Ekström, G. R. Jansen, K. A. Wendt, G. Hagen, T. Papenbrock, B. D. Carlsson, C. Forssén, M. Hjorth-Jensen, P. Navrátil, and W. Nazarewicz, Phys. Rev. C [**91**]{}, 051301 (2015). B. D. Carlsson, A. Ekstr[ö]{}m, C. Forssén, D. F. Str[ö]{}mberg, G. R. Jansen, O. Lilja, M. Lindby, B. A. Mattsson, and K. A. Wendt, Phys. Rev. X [**6**]{}, 011019 (2016). G. Hagen [*et al.*]{}, Nat. Phys. [**12**]{}, 186 (2015). R. F. Garcia Ruiz [*et al.*]{}, Nat. Physics [**12**]{}, 594 (2016). W. H. Dickhoff and C. Barbieri, Prog. Part. Nucl. Phys. [**52**]{}, 377 (2004). K. Tsukiyama, S. K. Bogner, and A. Schwenk, Phys. Rev. Lett. [**106**]{}, 222502 (2011). S. K. Bogner, R. J. Furnstahl, and R. J. Perry, Phys. Rev. C [**75**]{}, 061001(R) (2007). S. K. Bogner, R. J. Furnstahl, and A. Schwenk, Prog. Part. Nucl. 
Phys. [**65**]{}, 94 (2010). H. De Vries, C. W. De Jager, and C. De Vries, At. Data Nucl. Data Tables [**36**]{}, 495 (1987), [*and references therein*]{}. G. R. Satchler and W. G. Love, Phys. Rep. [**55**]{}, 183 (1979). K. Nakamura [*et al.*]{} (Particle Data Group), J. Phys. G [**37**]{}, 075021 (2010). I. Sick and J. S. McCarthy, Nucl. Phys. [**A150**]{}, 631 (1970). H. Miska, B. Norum, M. V. Hynes, W. Bertozzi, S. Kowalski, F. N. Rad, C. P. Sargent, T. Sasanuma, and B. L. Berman, Phys. Lett. [**83**]{}B, 165 (1979). V. Lapoux and N. Alamanos, Eur. Phys. J. A [**51**]{}, 91 (2015). J. P. Jeukenne, A. Lejeune, and C. Mahaux, Phys. Rev. C [**16**]{}, 80 (1977). J. S. Petler, M. S. Islam, R. W. Finlay, and F. S. Dietrich, Phys. Rev. C [**32**]{}, 673 (1985). B. Norum [*et al.*]{}, Phys. Rev. C [**25**]{}, 1778 (1982). A. Ozawa, T. Suzuki, and I. Tanihata, Nucl. Phys. [**A693**]{}, 32 (2001). E. Fabrici, S. Micheletti, M. Pignanelli, F. G. Resmini, R. De Leo, G. D’Erasmo, and A. Pantaleo, Phys. Rev. C [**21**]{}, 844 (1980). E. Khan [*et al.*]{}, Phys. Lett. B [**490**]{}, 45 (2000). E. Becheva [*et al.*]{} (MUST collaboration), Phys. Rev. Lett. [**96**]{}, 012501 (2006). E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, Nucl. Phys. [**A635**]{}, 231 (1998). J. Dechargé and D. Gogny, Phys. Rev. C [**21**]{}, 1568 (1980). J.-F. Berger, M. Girod, and D. Gogny, Comput. Phys. Commun. [**63**]{}, 365 (1991). R. Kanungo [*et al.*]{}, Phys. Rev. C [**84**]{}, 061304(R) (2011).
---
author:
- Nathalie Degenaar
- Rudy Wijnands
bibliography:
- '0654.bib'
date: 'Received 23 July 2008 / Accepted 11 December 2008'
title: 'The behavior of subluminous X-ray transients near the Galactic center as observed using the X-ray telescope aboard Swift'
---

Introduction
============

Our Galaxy harbors many X-ray transients that spend most of their time in a dim, quiescent state but occasionally experience bright X-ray outbursts (typically lasting weeks to months), during which their X-ray luminosity increases by more than a factor of 100. Many of these transient X-ray sources can be identified with compact objects (neutron stars or black holes) accreting matter from a companion star. In such systems, the X-ray outbursts are ascribed to a sudden strong increase in the accretion rate onto the compact object. X-ray transients can be classified based on their 2-10 keV peak luminosity[^1], $L_{X}^{\mathrm{peak}}$. The *bright* X-ray transients ($L_{X}^{\mathrm{peak}}=10^{37-39}$ erg s$^{-1}$) have been known and extensively studied since the early days of X-ray astronomy. However, in the past decade it became clear that a group of subluminous X-ray transients ($L_{X}^{\mathrm{peak}}<10^{37}$ erg s$^{-1}$) also exists, within which a distinction is made between *faint* [$L_{X}^{\mathrm{peak}}=10^{36-37}$ erg s$^{-1}$, e.g., @heise99; @zand01] and *very-faint* [$L_{X}^{\mathrm{peak}}=10^{34-36}$ erg s$^{-1}$, e.g., @sidoli99; @porquet05; @muno05_apj622; @wijn06_monit] systems. Although the faint to very faint X-ray transients exhibit qualitatively different behavior than the brighter systems [e.g., @cornelisse02; @okazaki01; @king00], this classification based on peak luminosities is not strict, and hybrid systems are known to exist [e.g., @wijn02]. In particular, the study of very-faint X-ray transients (VFXTs) is hampered by the sensitivity limitations of X-ray instruments, and consequently their nature is not well understood.
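The luminosity classes quoted above can be encoded directly. A minimal sketch (the function name and the handling of sources below the VFXT range are our own choices, not from the paper):

```python
def classify_transient(L_peak):
    """Classify an X-ray transient by its 2-10 keV peak luminosity
    L_peak (erg/s), following the classes quoted in the text."""
    if L_peak >= 1e37:
        return "bright"            # 10^37 - 10^39 erg/s
    elif L_peak >= 1e36:
        return "faint"             # 10^36 - 10^37 erg/s
    elif L_peak >= 1e34:
        return "very faint"        # 10^34 - 10^36 erg/s
    return "below VFXT range"      # fainter than the classes discussed
```

As the text notes, these boundaries are not strict: hybrid systems straddle them, so such a classifier is only a first-order label.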
To date, about 30 members are known, most of which are found very close to $\mathrm{Sgr~A}^{\ast}$ [within $\sim$ 10 arcminutes; @muno05_apj622], but this might be a selection effect arising from the many high-resolution X-ray observations of this region. Several VFXTs were found at larger distances from $\mathrm{Sgr~A}^{\ast}$ as well [e.g., @hands04; @heinke2008]. A significant fraction ($\sim 1/3$) of the VFXTs have exhibited type-I X-ray bursts [e.g., @cornelisse02] and can thus be identified with neutron stars accreting matter from, most likely, a low-mass (i.e., $M \lesssim 1 \mathrm{M_{\odot}}$) companion. The low outburst luminosities characteristic of VFXTs, combined with what is known about their duty cycles, imply that these low-mass X-ray binaries (LMXBs) have very low time-averaged mass accretion rates, which could challenge our understanding of their evolution [@king_wijn06]. There might also be other types of sources that can produce subluminous X-ray outbursts. It is conceivable that some systems are compact objects that are transiently accreting at a very low level from the strong stellar wind of a high-mass star or from the circumstellar matter around a Be star [e.g., @okazaki01]. In addition, some strongly magnetized neutron stars ($B \sim10^{14}-10^{15}$ G, magnetars) are observed to experience occasional X-ray outbursts with peak luminosities of $\sim10^{35}$ erg s$^{-1}$ [@ibrahim04; @muno07_magnetar] and can thus be classified as VFXTs. The cause of their outbursts is unknown, but it is likely related to magnetic field decay [e.g., @ibrahim04]. Furthermore, @mukai08 recently pointed out that classical novae can be visible as 2-10 keV X-ray sources with luminosities in the range of a few times $10^{34-35}~\mathrm{erg~s}^{-1}$ for weeks to months [see Fig. 1 of @mukai08]. The X-ray emission is thought to emerge from shocks within the matter that is ejected during the nova.
Here we present the analysis of seven X-ray transients that were found active during a monitoring campaign of the Galactic center (GC) by the X-ray telescope (XRT) aboard the *Swift* satellite [@kennea_monit], carried out in 2006 and 2007.

Observations and data analysis {#obs_ana}
==============================

The GC was monitored almost daily with the XRT aboard *Swift* from February 24, 2006, until November 2, 2007[^2], with the exclusion of the epochs from November 3, 2006, till March 6, 2007 (due to Solar constraints) and August 11 till September 26, 2007 [due to a safe-hold event; @swift_offline07]. Each *Swift*/XRT pointing typically lasted $\sim 1$ ksec, although occasionally longer exposures (up to $\sim 13$ ksec) were carried out. Most of the data were collected in Photon Counting (PC) mode, although sometimes an unusually high count rate (due to the occurrence of a type-I X-ray burst) induced an automated switch to the Windowed Timing (WT) mode. We obtained all observations of the 2006-2007 GC monitoring campaign from the *Swift* data archive. The XRT data were processed with the task `xrtpipeline` using standard quality cuts and event grades 0-12 in PC mode (0-2 in WT mode)[^3]. We searched the data for transient X-ray sources by comparing small segments of *Swift* data, spanning $\sim 5$ ksec, with one another. We found a total of seven different X-ray transients with peak luminosities $\gtrsim 10^{34}~\mathrm{erg~s}^{-1}$. The source coordinates and associated uncertainties of the detected transients were determined by running the XRT software task `xrtcentroid` on the data. The results are listed in Table \[tab:vfxts\]. A source was considered to be in quiescence when it was not detected, upon visual inspection, within a data bin of approximately 5 ksec. The unabsorbed 2-10 keV flux corresponding to this threshold depends on the assumed spectral model, but is roughly $2 \times 10^{-13}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$.
This translates into a luminosity of $\sim 1.5 \times 10^{33}~\mathrm{erg~s}^{-1}$ for a distance of 8 kpc. Figure 1 shows two 0.3-10 keV images from the *Swift*/XRT campaign, which covered a total field of $\sim 26' \times 26'$ of sky around $\mathrm{Sgr~A}^{\ast}$ (note that individual pointings have a smaller field of view, FOV). Figure \[fig:ds9-a\] displays a merged image of all PC mode observations carried out in 2006 and 2007. Apart from many persistent X-ray sources and strong diffuse emission around $\mathrm{Sgr~A}^{\ast}$, Fig. \[fig:ds9-a\] shows six different X-ray transients with peak luminosities $\gtrsim 10^{34}~\mathrm{erg~s}^{-1}$ (listed in Table \[tab:vfxts\]). Figure \[fig:ds9-b\] is a zoomed image of the inner region around $\mathrm{Sgr~A}^{\ast}$, taken from the epoch June 30 till November 2, 2006. This was the only episode during the entire 2006-2007 *Swift* monitoring campaign in which AX J1745.6-2901 was not active and a seventh active transient, CXOGC J174535.5-290124, could be detected. CXOGC J174535.5-290124 and AX J1745.6-2901 are so close together that *Swift* cannot spatially resolve the two sources when the latter, which is the brighter of the two, is active. Apart from CXOGC J174535.5-290124, Fig. \[fig:ds9-b\] also shows CXOGC J174540.0-290005, which lies north of $\mathrm{Sgr~A}^{\ast}$. We extracted source lightcurves and spectra (using *XSelect* version 2.3) from the event lists using a circular region with a radius of 10 or 15 pixels (the largest regions were used for the brightest sources). Spectra were extracted only from the data in which a source was active, whereas lightcurves were constructed from all data in which a source was in the FOV. Corresponding background lightcurves and spectra were averaged over a set of three nearby source-free regions, each of which had the same shape and size as the source region.
For none of the transients was it possible to use an annulus for the background subtraction, either because the objects were too close to the edge of the CCD or because an annular background region would encompass too much contamination from nearby X-ray sources or from the diffuse emission around $\mathrm{Sgr~A}^{\ast}$. The spectra were grouped, using the FTOOL `grppha`, to contain bins with a minimum number of 20 photons. Following event selection, exposure maps were generated with `xrtexpomap` to correct the spectra for fractional exposure loss due to bad columns on the CCD [@abbey06][^4]. The generated exposure maps were used as input to create ancillary response files (ARF) with `xrtmkarf`. We used the latest versions of the response matrix files (v10; RMF) from the CALDB database. For the brightest of the seven transients, AX J1745.6-2901 and GRS 1741.9-2853, the 2007 PC mode data were affected by pile-up. We attempted to correct for the consequent effect on the spectral shape and the loss in source flux using the same methods as described by @vaughan06[^5]. Using *XSPEC* [version 11.1; @xspec], we fitted all grouped spectra with a powerlaw continuum model modified by absorption. From these fits we deduced the 2-10 keV mean unabsorbed outburst flux for each source and combined this with the average 2-10 keV *Swift*/XRT count rate of the outburst to infer a flux-to-count rate conversion factor. This factor was then used to determine the 2-10 keV unabsorbed peak flux for each source from the maximum count rate observed.

| Name | R.A. (h m s) | Decl. $(^{\circ}~'~'')$ | Err. $('')$ | Comments/Association | References |
|------|--------------|-------------------------|-------------|----------------------|------------|
| AX J1745.6-2901 | 17:45:35.44 | $-$29:01:33.6 | 3.5 | Swift J174535.5-290135/CXOGC J174535.6-290133 | 1,2,3,4 |
| CXOGC J174535.5-290124 | 17:45:35.80 | $-$29:01:21.0 | 3.5 | New outburst from known X-ray transient | 4,5 |
| CXOGC J174540.0-290005 | 17:45:40.29 | $-$29:00:05.4 | 3.5 | Swift J174540.2-290005 | 6,7,8,9 |
| Swift J174553.7-290347 | 17:45:53.79 | $-$29:03:47.8 | 3.5 | New X-ray transient, CXOGC J174553.8-290346? | This work, 4 |
| Swift J174622.1-290634 | 17:46:22.14 | $-$29:06:34.7 | 3.6 | New X-ray transient | This work |
| GRS 1741.9-2853 | 17:45:02.43 | $-$28:54:50.0 | 3.5 | New outburst from known X-ray transient | 2,10,11,12,14 |
| XMM J174457-2850.3 | 17:44:57.30 | $-$28:50:20.8 | 4.0 | New outburst from known X-ray transient | 12,13,14 |

\[tab:vfxts\] The quoted coordinate errors refer to the $90\%$ confidence level and were calculated using the software tool `xrtcentroid`. References: 1=@kennea06_atel753, 2=@porquet07, 3=@maeda1996, 4=@muno04_apj613, 5=@wijn05_atel638, 6=@kennea06_atel920, 7=@kennea06_atel921, 8=@wang06_atel935, 9=@muno05_apj622, 10=@muno03_grs, 11=@wijnands07_atel1006, 12=@wijn06_monit, 13=@sakano05, 14=@muno07_atel1013.

Chandra data
------------

To obtain more accurate position information for the X-ray transients, we searched for archival *Chandra* data taken while the transients were in outburst. We found several *Chandra* observations at times when our seven *Swift* transients were active (see Table \[tab:chandra\]). We analyzed these *Chandra* data using the CIAO tools (version 4.0) and the standard *Chandra* analysis threads[^6]. The *Chandra* source positions and associated errors were determined using the tool `wavdetect` and are also listed in Table \[tab:chandra\].
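The flux-to-count-rate scaling and the flux-to-luminosity conversion used above can be sketched as follows (function names and the example numbers are ours; only the $2 \times 10^{-13}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$ threshold and the 8 kpc distance come from the text):

```python
import math

KPC_CM = 3.086e21  # centimeters per kiloparsec

def peak_flux(mean_flux, mean_rate, max_rate):
    """Scale the mean unabsorbed 2-10 keV outburst flux (erg/cm^2/s)
    to the peak flux, using the flux-to-count-rate conversion factor
    mean_flux / mean_rate (flux per count/s)."""
    return (mean_flux / mean_rate) * max_rate

def luminosity(flux, d_kpc=8.0):
    """Isotropic luminosity L = 4 * pi * d^2 * F at distance d_kpc."""
    d = d_kpc * KPC_CM
    return 4.0 * math.pi * d**2 * flux

# Detection threshold quoted in the text: ~2e-13 erg/cm^2/s at 8 kpc
# corresponds to ~1.5e33 erg/s.
L_thresh = luminosity(2e-13)
```

The same `luminosity` helper reproduces the mean and peak luminosities tabulated later from the corresponding unabsorbed fluxes.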
Time-averaged accretion rates {#subsec:accrates}
-----------------------------

Assuming that the observed transients are accreting neutron stars or black holes in X-ray binaries, we can estimate the mean accretion rate during an outburst, $\langle \dot{M}_{\mathrm{ob}} \rangle$, from the mean unabsorbed outburst flux. Following @zand07, we apply a correction factor of 3 to the mean 2-10 keV outburst luminosity (unabsorbed, inferred from spectral fitting) to obtain the 0.1-100 keV accretion luminosity $L_{\mathrm{acc}}$ (which is an approximation of the bolometric luminosity of the source). The mass-accretion rate during outburst is then estimated by employing the relation $\langle \dot{M}_{\mathrm{ob}} \rangle = R L_{\mathrm{acc}}/(GM)$, where $G=6.67 \times 10^{-8}~\mathrm{cm}^3~\mathrm{g}^{-1}~\mathrm{s}^{-2}$ is the gravitational constant. We adopt $M=1.4~\mathrm{M_{\odot}}$ and $R=10$ km for a neutron star accretor and $M=10~\mathrm{M_{\odot}}$ and $R=30$ km for the scenario of a black hole primary. Presuming that the observed outburst is typical, we convert the mass-accretion rate during outburst to a long-term averaged value, $\langle \dot{M}_{\mathrm{long}} \rangle$, by using the relation $\langle \dot{M}_{\mathrm{long}} \rangle=\langle \dot{M}_{\mathrm{ob}} \rangle \times t_{\mathrm{ob}} / t_{\mathrm{rec}}$, where $t_{\mathrm{ob}}$ is the outburst duration and $t_{\mathrm{rec}}$ is the system’s recurrence time, i.e., the sum of the outburst and quiescence timescales. The factor $t_{\mathrm{ob}}/t_{\mathrm{rec}}$ represents the duty cycle of the system. The calculation of the time-averaged accretion rate, as described above, is subject to several uncertainties. Both the translation from the observed 2-10 keV luminosity to the bolometric luminosity and the conversion to a mass-accretion rate are uncertain (the exact efficiency of converting gravitational potential energy to X-ray radiation is unknown).
Furthermore, many X-ray transients show irregular outburst and recurrence times, which makes it difficult to estimate their duty cycles, and what we observe over the course of a few years may not be typical of their long-term accretion history. However, the quasi-daily *Swift* monitoring observations of 2006-2007 provide a unique insight into the outburst behavior of these subluminous transients, allowing for a better estimate of their duty cycles than would be possible based on single, randomly spaced pointings alone. With the method described above, we can at least obtain an order-of-magnitude estimate of their time-averaged accretion rates. An important caveat is that accretion flows around low-luminosity (below a few percent of Eddington) black holes might be radiatively inefficient [e.g., @blandford99; @narayan]. If this is the case, the mass-accretion rate as inferred from the X-ray luminosity can be severely underestimated. Thus, in particular the values inferred for the black hole scenario should be considered with caution (see Sec. \[mdot\_estimate\]).

| Name | R.A. (h m s) | Decl. $(^{\circ}~'~'')$ | Err. $('')$ | Obs ID | Date |
|------|--------------|-------------------------|-------------|--------|------|
| AX J1745.6-2901 | 17:45:35.65 | $-$29:01:34.0 | 0.6 | 6639 | 2006-04-11 |
| CXOGC J174535.5-290124 | 17:45:35.56 | $-$29:01:23.9 | 0.6 | 6644 | 2006-08-22 |
| CXOGC J174540.0-290005 | 17:45:40.06 | $-$29:00:05.5 | 0.6 | 6646 | 2006-10-29 |
| Swift J174553.7-290347 | 17:45:53.94 | $-$29:03:46.9 | 0.6 | 6363 | 2006-07-17 |
| Swift J174622.1-290634 | 17:46:22.25 | $-$29:06:32.5 | 1.3 | 6642 | 2006-07-04 |

\[tab:chandra\] The quoted position uncertainties ($1 \sigma$) were calculated by taking the square root of the quadratic sum of the statistical error (from the `wavdetect` routine) and the uncertainty in the absolute astrometry [$0.6''$; @aldcroft00].
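The time-averaged accretion-rate estimate of Sec. \[subsec:accrates\] can be sketched as follows; the function is a minimal illustration in cgs units, with the neutron-star case and the AX J1745.6-2901 numbers used as the worked example:

```python
G = 6.674e-8           # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33        # solar mass, g
SEC_PER_YR = 3.156e7   # seconds per year
BOL_CORR = 3.0         # 2-10 keV -> 0.1-100 keV correction adopted in the text

def mdot_long_term(L_x, duty_cycle, M=1.4 * MSUN, R=1.0e6):
    """Long-term time-averaged accretion rate (Msun/yr) for a compact
    object of mass M (g) and radius R (cm), given the mean 2-10 keV
    outburst luminosity L_x (erg/s) and the duty cycle t_ob / t_rec."""
    L_acc = BOL_CORR * L_x           # approximate bolometric luminosity
    mdot_ob = R * L_acc / (G * M)    # accretion rate during outburst, g/s
    return mdot_ob * duty_cycle * SEC_PER_YR / MSUN

# AX J1745.6-2901: L_x ~ 2e36 erg/s with a 10-30% duty cycle yields
# ~(5-15)e-11 Msun/yr for a neutron star accretor, as quoted later.
mdot_lo = mdot_long_term(2e36, 0.10)
mdot_hi = mdot_long_term(2e36, 0.30)
```

For the black hole scenario one would pass `M=10 * MSUN` and `R=3.0e6`, with the radiative-efficiency caveat discussed above.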
X-ray lightcurves and spectra {#results}
=============================

![image](0654fi2a.eps){width="5.4cm"} ![image](0654fi2b.eps){width="5.4cm"} ![image](0654fi2c.eps){width="5.4cm"} ![image](0654fi2d.eps){width="5.4cm"} ![image](0654fi2e.eps){width="5.4cm"} ![image](0654fi2f.eps){width="5.4cm"} ![image](0654fi2g.eps){width="5.4cm"}

The background corrected lightcurves of the seven transients are displayed in Fig. \[fig:lc\] and their spectra are plotted in Fig. \[fig:spec\]. The X-ray properties of each individual source will be discussed below; a summary of the spectral parameters for all sources can be found in Table \[tab:spectra\]. All detected transients were heavily absorbed ($N_{H} \gtrsim 6 \times 10^{22}~\mathrm{cm}^{-2}$), consistent with what is observed for sources close to $\mathrm{Sgr~A}^{\ast}$. Therefore, throughout this paper we assume a distance of 8 kpc for all detected transients when calculating their 2-10 keV X-ray luminosities.

![image](0654fi3a.eps){width="4.1cm"} ![image](0654fi3b.eps){width="4.1cm"} ![image](0654fi3c.eps){width="4.1cm"} ![image](0654fi3d.eps){width="4.1cm"} ![image](0654fi3e.eps){width="4.1cm"} ![image](0654fi3f.eps){width="4.1cm"}

| Source | Year | $N_{\mathrm{H}}$ | $\Gamma$ | red. $\chi^{2}$ | $F_{\mathrm{X, abs}}$ | $F_{\mathrm{X, unabs}}$ | $F_{\mathrm{X, peak}}$ | $L_{\mathrm{X}}$ | $L_{\mathrm{X, peak}}$ |
|--------|------|------------------|----------|-----------------|-----------------------|-------------------------|------------------------|------------------|------------------------|
| AX J1745.6-2901 | 2006 | $23.1 \pm 1.3$ | $2.3 \pm 0.2$ | 1.11 | $14.7^{+0.3}_{-0.2}$ | $50.4^{+4.6}_{-3.8}$ | $120$ | $39$ | $92$ |
| | 2007 | $24.9^{+0.6}_{-0.7}$ | $2.8 \pm 0.1$ | 1.14 | $44.8^{+0.4}_{-0.3}$ | $205^{+9}_{-11}$ | $800$ | $160$ | $610$ |
| CXOGC J174535.5-290124 | 2006 | $14.2^{+5.1}_{-7.4}$ | $1.1^{+1.2}_{-1.1}$ | 1.40 | $1.25^{+0.25}_{-0.21}$ | $2.25^{+1.10}_{-0.43}$ | $4.0$ | $1.7$ | $3.0$ |
| CXOGC J174540.0-290005 | 2006 | $8.63^{+5.66}_{-5.00}$ | $1.4^{+1.0}_{-0.9}$ | 1.54 | $7.81^{+0.92}_{-0.88}$ | $12.5^{+4.9}_{-2.2}$ | $30$ | $9.6$ | $23$ |
| Swift J174553.7-290347 | 2006 | $24.4^{+10.1}_{-7.7}$ | $3.0^{+1.6}_{-1.1}$ | 1.23 | $1.53^{+0.22}_{-0.24}$ | $7.73^{+18.0}_{-3.49}$ | $26$ | $5.9$ | $20$ |
| Swift J174622.1-290634 | 2006 | $11.7^{+6.1}_{-3.9}$ | $3.3^{+1.4}_{-1.0}$ | 0.57 | $0.468 \pm 0.064$ | $1.55^{+2.06}_{-0.55}$ | $9.1$ | $1.2$ | $7.0$ |
| GRS 1741.9-2853 | 2006 | $14$ fix | $5.0^{+2.5}_{-2.7}$ | 0.84 | $0.646^{+0.424}_{-0.199}$ | $5.07^{+4.36}_{-2.38}$ | $12$ | $3.9$ | $9.2$ |
| | 2007 | $14.0^{+1.0}_{-0.9}$ | $2.6 \pm 0.2$ | 1.15 | $61.6 \pm 1.2$ | $175^{+17}_{-14}$ | $260$ | $130$ | $200$ |
| XMM J174457-2850.3 | 2007 | $6$ fix | $1.3$ fix | | $0.21$ | $0.29$ | $1.4$ | $0.22$ | $1.1$ |

\[tab:spectra\] Hydrogen column density $N_{\mathrm{H}}$ in units of $10^{22}~\mathrm{H~cm}^{-2}$. $F_{\mathrm{X, abs}}$ is the mean 2-10 keV absorbed outburst flux in units of $10^{-12}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$. $F_{\mathrm{X, unabs}}$ is the mean 2-10 keV unabsorbed outburst flux in units of $10^{-12}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$. $F_{\mathrm{X, peak}}$ is the unabsorbed peak flux observed during the outburst in units of $10^{-12}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$. The mean outburst X-ray luminosity $L_{\mathrm{X}}$, in units of $10^{34}~\mathrm{erg~s}^{-1}$, is calculated from the mean unabsorbed flux by adopting a distance of 8 kpc for all sources.
The peak X-ray luminosity $L_{\mathrm{X, peak}}$, in units of $10^{34}~\mathrm{erg~s}^{-1}$, is calculated from the peak unabsorbed flux by adopting a distance of 8 kpc for all sources. Fluxes for XMM J174457-2850.3 were deduced using PIMMS, with $N_{\mathrm{H}}$ and $\Gamma$ fixed at the values obtained by @sakano05.

AX J1745.6-2901 {#subsec:bron1}
---------------

The start of the *Swift*/GC monitoring observations in 2006 immediately revealed the new X-ray transient Swift J174535.5-290135, which is located $\sim 1.5'$ SE of $\mathrm{Sgr~A}^{\ast}$ [@kennea06_atel753]. This X-ray source remained active for approximately 16 weeks, until it returned to quiescence in late June 2006. Renewed activity of the system was reported in February 2007 [@wijnands07_atel1006; @kuulkers07_atel1005], and the *Swift*/GC monitoring observations suggest that it remained active for more than a year, as it was still detected when the campaign ended in November 2007 (see Fig. \[fig:lc\]). We note that the monitoring campaign continued in 2008 and that the source was detected throughout 2008. However, a detailed discussion of those observations is beyond the scope of this paper. The detection of eclipses with an 8.4 hour period in *XMM-Newton* observations [@porquet07] positively identifies Swift J174535.5-290135 with the *ASCA*-detected eclipsing X-ray burster AX J1745.6-2901 [@maeda1996]. In addition, the *Chandra* position of AX J1745.6-2901 (see Table \[tab:chandra\]) is consistent with that of the X-ray source CXOGC J174535.6-290133 [@muno03], which likely represents the quiescent counterpart of the system. Figure \[fig:lc\] displays the activity of AX J1745.6-2901 during the 2006-2007 *Swift* campaign. In 2006, the outburst reached a peak luminosity of $9.2 \times 10^{35}~\mathrm{erg~s}^{-1}$, while the average outburst luminosity was $3.9 \times 10^{35}~\mathrm{erg~s}^{-1}$ (both in the 2-10 keV energy band).
For an outburst duration of at least 16 weeks (AX J1745.6-2901 might have been active before the start of the *Swift* monitoring campaign), we can deduce a fluence of $\gtrsim 1.8 \times 10^{-4}~\mathrm{erg~cm}^{-2}$. In 2007, the system was active again, but with a higher average luminosity of $1.6 \times 10^{36}~\mathrm{erg~s}^{-1}$, and it reached a peak value of $6.1 \times 10^{36}~\mathrm{erg~s}^{-1}$ (both 2-10 keV). Different outburst luminosities have been reported for AX J1745.6-2901 in the past; in October 1993, the source was detected at a luminosity of $2 \times 10^{35}~\mathrm{erg~s}^{-1}$, while in October 1994 it became as bright as $9 \times 10^{35}~\mathrm{erg~s}^{-1}$ [both values are in the 3-10 keV band, @maeda1996]. Before and after the 6-week epoch in 2007 during which the *Swift* observatory was offline due to a safe-hold event [@swift_offline07 this corresponds to days 533-579 in the lightcurves displayed in Fig. \[fig:lc\]], AX J1745.6-2901 was active at similar count rates. We have inspected proprietary *XMM-Newton* data of the GC obtained on September 6, 2007 (Degenaar et al. in preparation), i.e., halfway through the interval during which the *Swift* observatory was offline. AX J1745.6-2901 was clearly detected during that observation, which demonstrates that the source remained active all through the 2007 *Swift* monitoring campaign. For an outburst duration of 34 weeks, the 2-10 keV fluence of the 2007 outburst is then $5.7 \times 10^{-3}~\mathrm{erg~cm}^{-2}$. However, this inferred value should be considered a lower limit, since we also found AX J1745.6-2901 to be active during all *Swift*/GC monitoring observations in 2008, at a flux similar to that of 2007 (the source was also reported active during *Chandra* observations carried out in 2008, see @heinke08 and @deeg08_atel_gc). This suggests that the outburst observed in 2007 continued in 2008 and thus has a duration of at least 1.5 years.
For that outburst length, the fluence increases to $6.5 \times 10^{-3}~\mathrm{erg~cm}^{-2}$ (2-10 keV), and it will become even larger if the outburst continues. Between 1999 and 2002, the GC was observed several times with *Chandra* [@muno03; @muno04_apj613]. Thus, if the observed long outburst duration of AX J1745.6-2901 is typical, the source likely resided in quiescence for at least 4 years. However, the quiescent timescale must be less than 13 years, the time since the *ASCA* discovery [@maeda1996]. Estimating the long-term time-averaged mass-accretion rate for AX J1745.6-2901 is difficult due to the different outburst durations and luminosities the system displays. To get a rough estimate, we will assume that an outburst duration of 1.5 years and a 2-10 keV outburst luminosity of $2 \times 10^{36}~\mathrm{erg~s}^{-1}$ are typical for the source. The duty cycle of this neutron star system then ranges from $10\%$ for $t_{\mathrm{q}}\sim 13$ yr up to $30\%$ for $t_{\mathrm{q}} \sim 4$ yr. This results in an estimated long-term time-averaged accretion rate of $\sim (5-15) \times 10^{-11}~\mathrm{M_{\odot}~yr}^{-1}$ (see Table \[tab:mdot\]). This value might be a lower limit, since AX J1745.6-2901 possibly exhibited more outbursts like the smaller one observed in 2006. On the other hand, the observed long outburst of 1.5 years might not be typical for the system, in which case this estimate would be an upper limit on the time-averaged mass-accretion rate. To compare the outburst spectrum of AX J1745.6-2901 with that of the likely quiescent counterpart of the source (CXOGC J174535.6-290133), we downloaded the reduced data of the *Chandra* monitoring campaign that are made available online[^7]. After the spectrum was grouped to contain at least 20 photons per bin, we fitted it with an absorbed powerlaw model with the hydrogen column density fixed at the 2006 outburst value ($N_{H}=23.1 \times 10^{22}~\mathrm{cm}^{-2}$).
This resulted in a powerlaw index of $\Gamma=1.8 \pm 0.5$ and an unabsorbed 2-10 keV flux of $7.7^{+0.4}_{-0.3} \times 10^{-14}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$. The inferred 2-10 keV luminosity is $5.9 \times 10^{32}~\mathrm{erg~s}^{-1}$. The quiescent spectrum is plotted in Fig. \[fig:spec\] along with the average outburst spectra of 2006 and 2007.\
**Type-I X-ray bursts**\
The *Swift*/GC monitoring observations detected two type-I X-ray bursts from AX J1745.6-2901. The times at which these bursts occurred are indicated in Fig. \[fig:lc\]. The first burst was observed on June 3, 2006, and had an exponential decay timescale of $\sim 10$ s (see Fig. 4a). Due to the sudden increase in count rate associated with the X-ray burst, the XRT instrument automatically switched from PC to WT mode. No burst data are available during this switch, which took about 3 seconds. We extracted the spectrum of the first 3 seconds of the observed burst peak and fitted it with an absorbed blackbody model, with the hydrogen column density fixed at $\mathrm{N_{H}}=23.1 \times 10^{22}~\mathrm{cm}^{-2}$, the value inferred from the mean outburst spectrum of 2006. This yielded $kT=1.7^{+1.9}_{-0.6}~\mathrm{keV}$ and an emitting radius of $10^{+12}_{-6}~\mathrm{km}$ (assuming $d=8~$kpc). The 0.01-100 keV peak flux inferred from our spectral fit is $1.3^{+2.1}_{-0.1} \times 10^{-8}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$ (corrected for absorption), which translates into an observed peak luminosity of $9.6 \times 10^{37}\mathrm{~erg~s}^{-1}$. However, the true burst peak was likely missed due to the automatic switch of XRT modes. If we extrapolate the burst lightcurve to the time $t=-3$ s (i.e., the time of the mode switch), we deduce a 0.01-100 keV peak luminosity of $1.3 \times 10^{38}\mathrm{~erg~s}^{-1}$.
Although the true peak of the type-I X-ray bursts will remain uncertain, it was likely close to the Eddington luminosity for a neutron star [$2.0 \times 10^{38}\mathrm{~erg~s}^{-1}$ for a hydrogen-rich and $3.8 \times 10^{38}\mathrm{~erg~s}^{-1}$ for a hydrogen-poor photosphere; e.g., @kuulkers03_xrb]. Another burst was observed on June 14, 2006, which had an exponential decay timescale of $\sim$20 s (see Fig. 4b). This time, no automated switch of XRT modes occurred, so that the burst was fully detected in the PC mode. Due to the high count rate associated with the burst, the PC image was severely piled-up and a proper spectral fitting of the burst peak was not possible. Therefore, we used the burst count rate to estimate the peak flux and luminosity. The observed count rates have to be corrected for the loss of photons caused by bad columns and pixels using an exposure map, and a pile-up correction needs to be applied. For the latter, we extracted the source photons from an annular source region, avoiding the piled-up inner pixels. We determined the proper correction factor for the observed PC count rate following analysis threads on the *Swift* webpages. This way, we found that the burst must have reached a peak count rate of $15~\mathrm{cnts~s}^{-1}$ in the PC mode. Employing PIMMS with a hydrogen column density of $N_{\mathrm{H}}=23.1 \times 10^{22}~\mathrm{cm}^{-2}$ (the 2006 outburst value) and temperatures of $kT=1.0-3.0~\mathrm{keV}$ (roughly the range inferred for the first burst), we can estimate an unabsorbed 0.01-100 keV flux of $(0.68-1.0) \times 10^{-8}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$. The corresponding 0.01-100 keV peak luminosity is $(0.52-1.1)\times 10^{38}\mathrm{~erg~s}^{-1}$, i.e., comparable to that of the X-ray burst that occurred on June 3, 2006.
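As a sanity check on these numbers, the quoted peak fluxes convert to luminosities via $L = 4\pi d^{2} F$ at the assumed distance of 8 kpc. A minimal sketch (our illustration, not part of the original analysis; the small offset from the quoted $9.6 \times 10^{37}~\mathrm{erg~s}^{-1}$ comes from rounding the flux to $1.3 \times 10^{-8}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$):

```python
import math

KPC_CM = 3.086e21              # cm per kiloparsec
d = 8.0 * KPC_CM               # assumed Galactic-center distance of 8 kpc

# Unabsorbed 0.01-100 keV peak flux of the June 3 burst (erg cm^-2 s^-1)
f_peak = 1.3e-8

# Isotropic peak luminosity: L = 4*pi*d^2 * F
l_peak = 4.0 * math.pi * d**2 * f_peak
print(f"peak luminosity ~ {l_peak:.2e} erg/s")   # ~1.0e38 erg/s

# Compare to the empirical Eddington limit for a hydrogen-rich photosphere
l_edd_h_rich = 2.0e38
print(f"fraction of the H-rich Eddington limit: {l_peak / l_edd_h_rich:.2f}")  # ~0.5
```

The same conversion applied to the PIMMS flux range of the second burst recovers the quoted $(0.52-1.1)\times 10^{38}~\mathrm{erg~s}^{-1}$.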
CXOGC J174535.5-290124
----------------------

By the beginning of August 2006, a transient was detected in outburst approximately $1.3'$ SE from $\mathrm{Sgr~A}^{\ast}$. The XRT position for this source, which is listed in Table \[tab:vfxts\], is only $\sim 14''$ away from the above-discussed AX J1745.6-2901, which had returned to quiescence a month earlier. We obtained an improved position for this transient from an archival *Chandra* observation performed on August 22, 2006 (see Table \[tab:chandra\]) and find that its coordinates are not consistent with the *Chandra* position of AX J1745.6-2901 (Table \[tab:chandra\]), but do coincide with that of the known X-ray transient CXOGC J174535.5-290124 [@muno05_apj622]. CXOGC J174535.5-290124 is a subluminous X-ray transient that was discovered during a *Chandra* campaign of the GC [@muno04_apj613]. Whereas the source was not detected in 1999 and 2000 [yielding a 2-8 keV upper limit for the quiescent luminosity of $L_{\mathrm{X}}<9 \times 10^{30}~\mathrm{erg~s}^{-1}$; @muno05_apj622], it was found in outburst with *Chandra* on several occasions between 2001 and 2005, displaying typical 2-8 keV luminosities of $10^{33-34}~\mathrm{erg~s}^{-1}$ [@muno05_apj622; @wijn05_atel638; @deeg08_atel_gc]. The source was also detected in outburst during *XMM-Newton* observations obtained in September 2006, when it displayed a 2-10 keV X-ray luminosity of $2 \times 10^{34}~\mathrm{erg~s}^{-1}$ [@wijn06_atel892]. This is in agreement with the 2006 *Swift* data of CXOGC J174535.5-290124 (see Table \[tab:spectra\]), which showed an average outburst luminosity of $1.7 \times 10^{34}~\mathrm{erg~s}^{-1}$ and an observed peak luminosity of $3.0 \times 10^{34}~\mathrm{erg~s}^{-1}$ (both in the 2-10 keV energy band). The source was observed in outburst until the *Swift* observations stopped in November 2006. The outburst of late 2006 thus had a duration of at least 12 weeks.
This yields a lower limit on the outburst fluence of $1.6 \times 10^{-5}~\mathrm{erg~cm}^{-2}$ in the 2-10 keV energy band. The nearby transient AX J1745.6-2901 is typically a factor 10-100 brighter during outburst than CXOGC J174535.5-290124, and due to their small separation, we cannot deduce any information on CXOGC J174535.5-290124 from *Swift* data when AX J1745.6-2901 is active. However, *Chandra* does have the required spatial resolution to separate these two transients, even when AX J1745.6-2901 is in outburst. Inspection of archival *Chandra* data of both 2006 and 2007 revealed that CXOGC J174535.5-290124 was in outburst simultaneously with AX J1745.6-2901 in April 2006 (Obs ID 6639), although it was not active during *Chandra* observations carried out in May, June and early July 2006. Thus, CXOGC J174535.5-290124 must have returned to quiescence by the end of April 2006, but it reappeared in August 2006, when *Swift* detected the source. Since AX J1745.6-2901 was continuously active during the *Swift*/GC monitoring observations of 2007, we cannot deduce any information on the activity of CXOGC J174535.5-290124 from the 2007 *Swift* data. The source was not found active in archival *Chandra* data obtained in February and July 2007, nor in proprietary *Chandra* observations carried out in March, April and May 2007 (Degenaar et al. 2008, in preparation). In 2008, CXOGC J174535.5-290124 was reported active during pointed *Chandra*/HRC-I observations performed on May 10-11 [@deeg08_atel_gc]. During that observation the 2-10 keV X-ray luminosity was approximately $2 \times 10^{33}~\mathrm{erg~s}^{-1}$, i.e., a factor of 10 lower than the outburst level detected with *Swift* in 2006. Despite its low peak luminosity, CXOGC J174535.5-290124 appears to be active quite regularly.
However, its duty cycle is not completely clear; the *Swift* observations show that in 2006 the system was in quiescence for about 3 months in between two outbursts, while *Chandra* data suggest that it was likely quiescent for more than 6 months in 2007 (if an outburst duration of $\sim 3$ months is typical). Tentatively assuming a recurrence time of 3-12 months and a typical outburst duration of 12 weeks, the duty cycle for CXOGC J174535.5-290124 is $\sim 20-50 \%$. The various detections of CXOGC J174535.5-290124 span luminosities of a few times $10^{33-34}~\mathrm{erg~s}^{-1}$, so we adopt a mean 2-10 keV outburst luminosity of $1 \times 10^{34}~\mathrm{erg~s}^{-1}$. This results in a long-term averaged accretion rate of $ 5 \times 10^{-13}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 1 \times 10^{-12}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for a neutron star primary or $ 7 \times 10^{-14}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 2 \times 10^{-13}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ in case of a black hole accretor (see Table \[tab:mdot\]).

CXOGC J174540.0-290005
----------------------

In late October 2006, @kennea06_atel920 reported on activity from an X-ray transient, Swift J174540.2-290005, located $\sim 20''$ N from $\mathrm{Sgr~A}^{\ast}$. In an archival *Chandra* observation performed on August 22, 2006, we find one X-ray source within the XRT error radius of Swift J174540.2-290005 (see Table \[tab:vfxts\]). The *Chandra* position of this source (see Table \[tab:chandra\]) is consistent with that of CXOGC J174540.0-290005 [@muno05_apj622], positively identifying Swift J174540.2-290005 with this *Chandra*-discovered X-ray transient. CXOGC J174540.0-290005 was detected in outburst only once before, in 2003, when it displayed a luminosity of $3.4 \times 10^{34}~\mathrm{erg~s}^{-1}$ [2-8 keV, @muno05_apj622].
This is a factor of a few lower than the peak luminosity of $2.3 \times 10^{35}~\mathrm{erg~s}^{-1}$ that was detected by *Swift* in 2006 (2-10 keV, see Table \[tab:spectra\]). @muno05_apj622 derived an upper limit for the quiescent luminosity of this system of $<4 \times 10^{31}~\mathrm{erg~s}^{-1}$ (2-8 keV). The 2006 outburst of CXOGC J174540.0-290005 lasted almost 2 weeks, and the inferred outburst fluence is $1.3 \times 10^{-5}~\mathrm{erg~cm}^{-2}$ (2-10 keV). No other outburst from CXOGC J174540.0-290005 was detected during the *Swift* observing campaign of 2006 and 2007. If the observed outburst duration of 2 weeks is typical for this source, then its outbursts are easily missed. However, CXOGC J174540.0-290005 was in the FOV during the entire *Swift* campaign of 2006, which encompassed almost daily observations and lasted for 35 weeks. No activity from the source was detected in the 33 weeks prior to the outburst that occurred in late October. Therefore, we can assume an upper limit on the duty cycle of $\lesssim 6\%$. This is consistent with the fact that no outburst was detected during the 2007 *Swift* monitoring observations. However, since the source was detected with *Chandra* in 2003 [@muno05_apj622], we can also assume that the recurrence time of the system is less than 3 years. From this we can deduce that the duty cycle is likely more than $1\%$. Using these two bounds combined with an averaged 2-10 keV outburst luminosity of $1 \times 10^{35}~\mathrm{erg~s}^{-1}$, we can put a limit on the time-averaged mass-accretion rate of $ 3 \times 10^{-13}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 1.5 \times 10^{-12}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for a neutron star compact primary, or $ 4 \times 10^{-14}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 2.1 \times 10^{-13}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ in case it is a black hole.
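Such limits follow from $\langle \dot{M} \rangle \approx d \times L_{\mathrm{bol}} R / (GM)$, where $d$ is the duty cycle and $L = GM\dot{M}/R$ is assumed during outburst. A rough sketch for the neutron-star case of CXOGC J174540.0-290005; the bolometric correction factor of $\sim 3$ applied to the 2-10 keV luminosity is our assumption, chosen because it reproduces the quoted limits:

```python
G = 6.674e-8             # gravitational constant (cgs)
MSUN = 1.989e33          # solar mass in g
SEC_PER_YR = 3.156e7

M_ns, R_ns = 1.4 * MSUN, 1.0e6   # 1.4 Msun, 10 km neutron star
L_x = 1e35                        # mean 2-10 keV outburst luminosity (erg/s)
F_BOL = 3.0                       # assumed bolometric correction factor

# Duty-cycle bounds: a 2-week outburst after 33 quiet weeks of monitoring
# (upper bound) vs. a recurrence time of less than 3 years (lower bound)
d_hi = 2.0 / 35.0        # ~6%
d_lo = 2.0 / 156.0       # ~1%

def mdot(duty):
    """Long-term mean accretion rate in Msun/yr, from L = G*M*Mdot/R."""
    mdot_ob = F_BOL * L_x * R_ns / (G * M_ns)   # g/s while in outburst
    return duty * mdot_ob * SEC_PER_YR / MSUN

print(f"{mdot(d_lo):.1e} - {mdot(d_hi):.1e} Msun/yr")  # ~3e-13 to ~1.5e-12
```

The black-hole limits quoted in the text and in Table \[tab:mdot\] follow analogously for the adopted $M=10~\mathrm{M_{\odot}}$, $R=30~\mathrm{km}$.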
Following the reported activity of Swift J174540.2-290005 [@kennea06_atel920], @wang06_atel935 performed IR observations of the source field on October 30-31, 2006. Since none of the sources within the XRT positional uncertainty showed the increase in IR brightness expected during an outburst [e.g., @clark00_IR; @russell06], @wang06_atel935 concluded that none of them was the counterpart to CXOGC J174540.0-290005. However, we note that the *Swift*/XRT data show that at the time of the reported IR observations, the X-ray outburst had already ceased and any correlated IR luminosity might have returned to its pre-outburst level accordingly.

Swift J174553.7-290347 {#subsec:bron4}
----------------------

A fourth X-ray transient, which we designate Swift J174553.7-290347 (see Table \[tab:vfxts\]), is located $\sim 4.5'$ SW from $\mathrm{Sgr~A}^{\ast}$ and was found active for a duration of approximately 2 weeks in June 2006 (see Fig. \[fig:lc\]). The source reached a peak luminosity of $ 2.0 \times 10^{35}~\mathrm{erg~s}^{-1}$, while the average outburst luminosity was $ 4.9 \times 10^{34}~\mathrm{erg~s}^{-1}$ (both 2-10 keV). We were able to obtain an improved position of Swift J174553.7-290347 from an archival *Chandra* observation carried out on June 17, 2006 (see Table \[tab:chandra\]). The source coordinates suggest a possible association with the *Chandra*-detected X-ray point source CXOGC J174553.8-290346 [@muno03], although the offset between the source positions is $\sim 1''$. During the *Chandra* campaign of the GC [@muno03; @muno04_apj613; @muno05_apj622], CXOGC J174553.8-290346 was detected as a low luminosity X-ray source ($L_{\mathrm{X}} \sim10^{32}~\mathrm{erg~s}^{-1}$, 2-8 keV) that showed no signs of long- or short-term variability [@muno03].
The spectral shape of CXOGC J174553.8-290346 is not reported in the literature, but the reduced *Chandra* data (both source and background spectra as well as the proper response files) from the campaign are made available online (see footnote \[foot:chan\]). For comparison with the current outburst data, we downloaded the reduced *Chandra* data and fitted the background-corrected spectrum with an absorbed powerlaw model (after the spectra were grouped to contain at least 20 photons per bin). With the absorption column density fixed at the outburst value of Swift J174553.7-290347, $N_{H}=24.4~\times 10^{22}~\mathrm{cm}^{-2}$, this results in a fit with an unusually steep spectrum; $\Gamma=5.5\pm 2.0$. The 2-10 keV unabsorbed X-ray flux for this fit is $5.0^{+6.7}_{-3.0} \times 10^{-14}~\mathrm{erg~cm}^{-2}~\mathrm{s}^{-1}$ and the associated X-ray luminosity would be $ 3.8 \times 10^{32}~\mathrm{erg~s}^{-1}$. Leaving the hydrogen column density as a free parameter results in a fit with $N_{H}=11.6^{+13.2}_{-7.0}~\times 10^{22}~\mathrm{cm}^{-2}$ and $\Gamma =3.1^{+1.5}_{-1.9}$ and an X-ray luminosity of $ \sim 6.8 \times 10^{31}~\mathrm{erg~s}^{-1}$ (2-10 keV). Both the *Swift* outburst spectrum of Swift J174553.7-290347 and the *Chandra* spectrum of CXOGC J174553.8-290346 are plotted in combination with the fitted models in Fig. \[fig:spec\] (the plotted spectral model for CXOGC J174553.8-290346 is for $N_{H}=24.4~\times 10^{22}~\mathrm{cm}^{-2}$). During the entire 2006-2007 *Swift* campaign, the new X-ray transient Swift J174553.7-290347 only displayed this 2-week outburst (see Fig. \[fig:lc\]), for which we can infer a 2-10 keV outburst fluence of $ 8.0 \times 10^{-6}~\mathrm{erg~cm}^{-2}$. The source was not detected during 22 weeks of consecutive observations in 2007, which we can use as a lower limit on the quiescent timescale of this source (and which is consistent with the 2006 behavior). Thus, the duty cycle of Swift J174553.7-290347 is likely less than $8\%$.
The estimate for the long-term average accretion rate of this transient is then $\lesssim 10^{-12}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for a neutron star X-ray binary or $\lesssim 2 \times 10^{-13}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ in case of an accreting black hole.

Swift J174622.1-290634
----------------------

Approximately $11'$ SW from $\mathrm{Sgr~A}^{\ast}$, the X-ray transient Swift J174622.1-290634 was active from mid-May until late June 2006 (see Fig. \[fig:lc\]). We obtained improved coordinates for this new X-ray transient from an archival *Chandra* observation carried out on July 7, 2006, during which the source was detected (see Table \[tab:chandra\]). This system cannot be identified with any known X-ray source [it was outside the FOV of the *Chandra* monitoring campaign of the GC; @muno03; @muno04_apj613; @muno05_apj622]. The average outburst luminosity during the *Swift*/XRT observations was $1.2 \times 10^{34}~\mathrm{erg~s}^{-1}$ and the observed peak luminosity was $7.0 \times 10^{34}~\mathrm{erg~s}^{-1}$ (both in the 2-10 keV energy band). The 2-10 keV outburst fluence for the 5-week outburst of Swift J174622.1-290634 is $5.0 \times 10^{-6}~\mathrm{erg~cm}^{-2}$. Swift J174622.1-290634 lies relatively far from $\mathrm{Sgr~A}^{\ast}$ and was not always within the FOV, due to varying pointing centers and roll-angles of the *Swift*/XRT observations. However, the observations were spread such that if the observed outburst duration of 5 weeks is typical for the source, any other outburst occurring during the 2006 monitoring campaign would have been detected by *Swift*/XRT. During the 6-week interval that the *Swift* observatory was offline in 2007, Swift J174622.1-290634 could in principle have experienced an accretion outburst of 5 weeks.
However, the system was not detected during *XMM-Newton* observations of the GC performed on September 6, 2007 (i.e., halfway through the interval that the *Swift* observatory was offline), indicating that this was not the case. We therefore assume that the source was in quiescence for the entire 2007 *Swift* monitoring campaign, which lasted for 31 weeks. The duty cycle of Swift J174622.1-290634 is thus likely less than $14\%$, which makes its time-averaged accretion rate $\lesssim 4 \times 10^{-13} ~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for an accreting neutron star or $\lesssim 6 \times 10^{-14} ~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for a black hole X-ray binary.

GRS 1741.9-2853
---------------

The neutron star X-ray transient GRS 1741.9-2853 (located $\sim 10 '$ NE from $\mathrm{Sgr~A}^{\ast}$) was in the FOV during most of the *Swift* monitoring observations (see Fig. \[fig:lc\]). The source has been detected in an active state several times since its initial discovery in 1990 [@sunyaev1990], displaying typical peak luminosities of a few times $10^{36}~\mathrm{erg~s}^{-1}$ [e.g., @muno03_grs; @wijn06_monit]. In September 2006, GRS 1741.9-2853 displayed some low-level activity, lasting approximately a week (see Fig. \[fig:lc\], $\sim 200$ days after the start of the monitoring observations). The source reached a peak luminosity of $8.9 \times 10^{34}~\mathrm{erg~s}^{-1}$ (2-10 keV), which is an order of magnitude lower than its full outburst luminosity, but still about 1000 times higher than its quiescent level [$\sim 10^{32}~\mathrm{erg~s}^{-1}$ in the 2-8 keV band; @muno03_grs]. The fluence of this small outburst is $2.6 \times 10^{-6}~\mathrm{erg~cm}^{-2}$ (2-10 keV). Renewed activity from GRS 1741.9-2853 was reported in early 2007, as observed with *INTEGRAL* [@kuulkers07_atel1008], *Swift* [@wijnands07_atel1006], *XMM-Newton* [@porquet07] and *Chandra* [@muno07_atel1013].
During its 2007 activity, three type-I X-ray bursts were reported [@wijnands07_atel1006; @porquet07] and several such thermonuclear bursts have been observed in the past [see @muno03_grs and references therein]. GRS 1741.9-2853 was seen active right from the start of the 2007 *Swift*/GC campaign on March 3, 2007. It remained active for approximately 5 weeks, displaying an average 2-10 keV outburst luminosity of $1.3 \times 10^{36}~\mathrm{erg~s}^{-1}$, until it returned to quiescence by the beginning of April. During this outburst, *Swift*/XRT detected a peak luminosity of $2.0 \times 10^{36}~\mathrm{erg~s}^{-1}$ (2-10 keV). For the observed outburst duration of 5 weeks, the 2-10 keV fluence of the 2007 outburst is $5.3 \times 10^{-4}~\mathrm{erg~cm}^{-2}$. However, GRS 1741.9-2853 was already seen active during *INTEGRAL* observations performed on February 15, i.e., 2 weeks before the start of the *Swift*/GC campaign. Moreover, @wijnands07_atel1006 noted that GRS 1741.9-2853 is located within the $3'$ error circle of an X-ray burst detected by the Burst Alert Telescope (BAT) onboard *Swift* on January 22, 2007 [@grb_grs07]. As there were no other sources detected within the BAT error circle, it is likely that GRS 1741.9-2853 was the origin of this burst, suggesting that the source was already active for over 8 weeks before the start of the *Swift*/GC campaign. Therefore, the outburst fluence inferred from the *Swift*/XRT observations should be considered a lower limit, and the true value might be $>1.4 \times 10^{-3}~\mathrm{erg~cm}^{-2}$ (2-10 keV) if the outburst lasted 13 weeks or longer. Small outbursts like the one occurring in 2006, with $L_{\mathrm{X}}^{\mathrm{peak}}\sim 10^{35}~\mathrm{erg~s}^{-1}$ and $t_{\mathrm{ob}}=1$ week, have a negligible effect on the total mass-accretion rate when compared to longer and brighter outbursts like the one observed in 2007.
Therefore, we will not include the 2006 outburst in calculating the mass-accretion rate for GRS 1741.9-2853 and assume a minimal quiescent timescale of 35 weeks (the span of the 2006 monitoring observations). Adopting a typical outburst duration of 13 weeks (which is likely the minimum duration of the 2007 outburst), we can then place an upper limit on the duty cycle of GRS 1741.9-2853 of $\lesssim 30 \%$. On the other hand, GRS 1741.9-2853 has been detected at 2-10 keV X-ray luminosities of $\sim 10^{36}~\mathrm{erg~s}^{-1}$ a total of five times since its initial discovery 18 years ago [see @wijn06_monit for the long-term lightcurve of this source, showing its various outbursts from 1990 till 2005]. Therefore, we assume a lower limit on the duty cycle of $\gtrsim 7\%$. Combining these bounds with a typical 2-10 keV outburst luminosity of $\sim 10^{36}~\mathrm{erg~s}^{-1}$, we estimate a long-term accretion rate of $ 2 \times 10^{-11}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 8 \times 10^{-11}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for GRS 1741.9-2853.

XMM J174457-2850.3 {#subsec:bron7}
------------------

XMM J174457-2850.3 is an X-ray transient located about $13.7\arcmin$ NE from $\mathrm{Sgr~A}^{\ast}$. The source was discovered in 2001 with *XMM-Newton* observations [@sakano05], at a peak luminosity of $ 5 \times 10^{34}~\mathrm{erg~s}^{-1}$ and with a quiescent luminosity of $1.2 \times 10^{32}~\mathrm{erg~s}^{-1}$ (both in the 2-10 keV energy range). Since then, the source has been repeatedly reported active at luminosities ranging from a few times $10^{33}~\mathrm{erg~s}^{-1}$ up to $\sim 10^{36}~\mathrm{erg~s}^{-1}$ [@wijn06_monit; @muno07_atel1013]. Due to its large angular separation from $\mathrm{Sgr~A}^{\ast}$, there were only 16 pointings (spaced between July and November 2007) during the *Swift*/GC campaign in which XMM J174457-2850.3 was in the FOV (see Fig. \[fig:lc\]).
Owing to the small number of photons, we could not fit the spectrum of XMM J174457-2850.3. We therefore employed PIMMS to convert the observed XRT count rates to fluxes using an absorbed powerlaw model with $N_{\mathrm{H}}=6.0 \times 10^{22}~\mathrm{cm}^{-2}$ and $\Gamma=1.3$ [as found by @sakano05]. During the first set of 6 observations (performed between July 5 and July 14, 2007; for a total of 6 ksec), the source had a 2-10 keV X-ray luminosity of $\sim 1.5 \times 10^{33}~\mathrm{erg~s}^{-1}$. This is at a similar level as was found for the source in February 2007 by @muno07_atel1013. However, on August 4, the source was clearly detected during a single $\sim 1.7~$ksec observation at $L_{\mathrm{X}} \sim 1.1 \times 10^{34}~\mathrm{erg~s}^{-1}$ (2-10 keV). XMM J174457-2850.3 was again in the FOV during a series of 6 *Swift* monitoring observations carried out between October 24 and November 2, 2007 (which had a total exposure time of $\sim 11.1~$ksec). At this time, the source activity was lower again; it displayed a 2-10 keV luminosity of $\sim 1.4 \times 10^{33}~\mathrm{erg~s}^{-1}$. It is possible that XMM J174457-2850.3 did not reach a luminosity exceeding $\sim 10^{34}~\mathrm{erg~s}^{-1}$ during the above-described episode. Nevertheless, the *Swift* monitoring observations show that if the source went into an active state around July-August 2007, the outburst was shorter than $\sim 3$ months, since the source was detected at lower luminosities again in late October 2007. For an average 2-10 keV outburst luminosity of $\sim 10^{36}~\mathrm{erg~s}^{-1}$ (the maximum value ever observed for this source), the 2-10 keV fluence of this possible outburst would have been $\lesssim 7.5 \times 10^{-4}~\mathrm{erg~cm}^{-2}$. If the system was active at a 2-10 keV luminosity of $\sim 10^{34}~\mathrm{erg~s}^{-1}$ for three months, the outburst fluence decreases to $\lesssim 7.5 \times 10^{-6}~\mathrm{erg~cm}^{-2}$ (in the 2-10 keV band).
XMM J174457-2850.3 has been detected above its quiescent level several times since its discovery 7 years ago. However, it is unclear whether the source always reaches full outburst with $L_{\mathrm{X}} \sim 10^{36}~\mathrm{erg~s}^{-1}$, or undergoes enhanced levels of activity with luminosities of several times $\sim 10^{33-34}~\mathrm{erg~s}^{-1}$. This makes it difficult to estimate its recurrence time and outburst duration. Since 2001, XMM J174457-2850.3 has been detected at outburst luminosities of $\sim 10^{36}~\mathrm{erg~s}^{-1}$ five times. If we roughly assume a typical outburst duration of $\lesssim 3$ months, a lower limit for the duty cycle of the system is $\gtrsim 5 \%$. For an upper limit on the activity of XMM J174457-2850.3, we may crudely estimate that it goes into outburst twice a year, in which case the duty cycle is almost $50\%$. Adopting a typical outburst luminosity of $\sim 10^{36}~\mathrm{erg~s}^{-1}$ then results in an estimated long-term mass-accretion rate of $1 \times 10^{-11}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 1 \times 10^{-10}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ for a neutron star compact primary, or $2 \times 10^{-12}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1} \lesssim \langle \dot{M}_{\mathrm{long}} \rangle \lesssim 2 \times 10^{-11}~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ in case it is a black hole.
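Throughout this section, outburst fluences follow from multiplying the mean unabsorbed 2-10 keV flux by the outburst duration. As an illustrative cross-check (our sketch, not part of the original analysis; small offsets from the quoted values reflect rounding of the mean flux), consider the 2007 outburst of GRS 1741.9-2853:

```python
import math

KPC_CM = 3.086e21
d = 8.0 * KPC_CM                     # assumed distance of 8 kpc
WEEK_S = 7 * 86400.0                 # seconds per week

L_mean = 1.3e36                      # mean 2-10 keV outburst luminosity (erg/s)
f_mean = L_mean / (4.0 * math.pi * d**2)   # mean unabsorbed flux (erg cm^-2 s^-1)

fluence_5wk = f_mean * 5 * WEEK_S    # the 5-week outburst covered by Swift/XRT
fluence_13wk = f_mean * 13 * WEEK_S  # if active since the January 2007 BAT burst

print(f"5 weeks:  {fluence_5wk:.1e} erg/cm^2")   # ~5e-4, cf. quoted 5.3e-4
print(f"13 weeks: {fluence_13wk:.1e} erg/cm^2")  # ~1.3e-3, cf. quoted >1.4e-3
```

The same conversion underlies the fluence entries in Table \[tab:mdot\].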
Discussion {#sec:discuss}
==========

  Source                   Year   $t_{\mathrm{ob}}$   F                               $\langle \dot{M}\rangle _{\mathrm{NS}}$   $\langle \dot{M} \rangle _{\mathrm{BH}}$
  ------------------------ ------ ------------------- ------------------------------- ----------------------------------------- ------------------------------------------
  AX J1745.6-2901          2006   $>16$               $\gtrsim 1.8 \times 10^{-4}$                                              
                           2007   $>78$               $\gtrsim 6.5 \times 10^{-3}$    $\sim (5-15) \times 10^{-11}$             
  CXOGC J174535.5-290124   2006   $>12$               $\gtrsim 1.6 \times 10^{-5}$    $\sim (5-13) \times 10^{-13}$             $\sim (7-18) \times 10^{-14}$
  CXOGC J174540.0-290005   2006   2                   $1.3\times 10^{-5}$             $\sim (3-15) \times 10^{-13}$             $\sim (4-21) \times 10^{-14}$
  Swift J174553.7-290347   2006   2                   $8.0\times 10^{-6}$             $\lesssim 1 \times 10^{-12}$              $\lesssim 2 \times 10^{-13}$
  Swift J174622.1-290634   2006   5                   $5.0\times 10^{-6}$             $\lesssim 4 \times 10^{-13}$              $\lesssim 6 \times 10^{-14}$
  GRS 1741.9-2853          2006   1                   $2.6 \times 10^{-6}$                                                      
                           2007   $>13$               $\gtrsim 1.4 \times 10^{-3}$    $\sim (2-8) \times 10^{-11}$              
  XMM J174457-2850.3       2007   $<12$               $\lesssim 7.5 \times 10^{-4}$   $\sim (1-10) \times 10^{-11}$             $\sim (2-20) \times 10^{-12}$

\[tab:mdot\]

$t_{\mathrm{ob}}$: outburst duration in weeks. F: outburst fluence in units of $\mathrm{erg~cm}^{-2}$ in the 2-10 keV energy band, calculated by multiplying the mean unabsorbed outburst flux by the outburst duration. $\langle \dot{M}\rangle _{\mathrm{NS}}$: estimated long-term averaged accretion rate ($\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$) for a neutron star with $M=1.4~\mathrm{M_{\odot}}$ and $R=10~\mathrm{km}$. $\langle \dot{M}\rangle _{\mathrm{BH}}$: estimated long-term averaged accretion rate ($\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$) for a black hole with $M=10~\mathrm{M_{\odot}}$ and $R=30~\mathrm{km}$. AX J1745.6-2901 and GRS 1741.9-2853 both display type-I X-ray bursts and are thus confirmed neutron star systems.
\[tab:obdurations\]

We have presented the spectral analysis of seven X-ray transients that were found to be active during a monitoring campaign of the field around $\mathrm{Sgr~A}^{\ast}$ using *Swift*/XRT, carried out in 2006-2007. Two new transients were discovered (Swift J174622.1-290634 and Swift J174553.7-290347) and renewed activity from five known sources was observed (AX J1745.6-2901, CXOGC J174535.5-290124, CXOGC J174540.0-290005, GRS 1741.9-2853 and XMM J174457-2850.3). Adopting source distances of 8 kpc, we can infer peak luminosities in the range of $\sim 1 \times 10^{34} - 6 \times 10^{36}~\mathrm{erg~s}^{-1}$ in the 2-10 keV energy band. The two transients AX J1745.6-2901 and GRS 1741.9-2853 are hybrid systems that display very-faint outbursts with 2-10 keV peak luminosities of $L_{\mathrm{X}}<10^{36}~\mathrm{erg~s}^{-1}$, as well as outbursts with luminosities in the range of $10^{36-37}~\mathrm{erg~s}^{-1}$, which are classified as faint. The other five systems display 2-10 keV peak luminosities of $10^{34-36}~\mathrm{erg~s}^{-1}$, i.e., in the very-faint regime. We have observed a large variation in spectral properties, outburst luminosities and outburst durations (see Tables \[tab:spectra\] and \[tab:mdot\]). In that respect, the subluminous transients are not different from the well-known bright systems.

The nature of the detected transients
-------------------------------------

AX J1745.6-2901 and GRS 1741.9-2853 are both known X-ray bursters, which makes it very likely that these are neutron stars in LMXBs, since type-I X-ray bursts have never been detected from high-mass X-ray binaries. For AX J1745.6-2901 an LMXB nature is confirmed by its orbital period of 8.4 hours. The nature of the remaining five transients is unknown. However, XMM J174457-2850.3, CXOGC J174535.5-290124 and CXOGC J174540.0-290005 have all been in outburst more than once in the past decade.
This likely rules out a white dwarf accretor, since recurrent novae display outburst cycles of decades rather than years. This suggests a neutron star or black hole nature for XMM J174457-2850.3, CXOGC J174535.5-290124 and CXOGC J174540.0-290005. For CXOGC J174540.0-290005, the observations reported by @wang06_atel935 did not detect a near-IR counterpart, although they would have detected a main-sequence star down to spectral type B5. This suggests that if CXOGC J174540.0-290005 is an X-ray binary, it is likely an LMXB. The two new transients Swift J174553.7-290347 and Swift J174622.1-290634 were observed at 2-10 keV peak luminosities of $2.0 \times 10^{35}~\mathrm{erg~s}^{-1}$ and $7.0 \times 10^{34}~\mathrm{erg~s}^{-1}$. Although such luminosities are quite uncommon for white dwarf systems, @mukai08 showed a few examples of classical novae that reach peak values of several times $10^{34-35}~\mathrm{erg~s}^{-1}$. Thus, in the absence of other outbursts from Swift J174553.7-290347 and Swift J174622.1-290634, we cannot exclude the possibility that these two systems harbor accreting white dwarfs.

Subluminous X-ray transients in quiescence {#quiescence}
------------------------------------------

The quiescent luminosity of X-ray transients sometimes holds clues to the nature of the system. The *ASCA* burster AX J1745.6-2901 is very likely associated with CXOGC J174535.6-290133, which was detected several times with *Chandra* at a level of a few times $10^{32}~\mathrm{erg~s}^{-1}$ (see Sect. \[subsec:bron1\]). This is consistent with the neutron star nature of AX J1745.6-2901, since black hole systems with an orbital period of $\sim$ 8 hours are significantly fainter [e.g., @narayan97; @menou99; @lasota07]. GRS 1741.9-2853 is also a confirmed neutron star system and displays a similar quiescent level of $\sim 10^{32}~\mathrm{erg~s}^{-1}$ [2-8 keV, @muno03_grs].
The possible quiescent counterpart of the new subluminous X-ray transient Swift J174553.7-290347, the *Chandra*-detected X-ray source CXOGC J174553.8-290346, displays a 2-10 keV X-ray luminosity of $\sim 7 \times 10^{31}-4 \times 10^{32}~\mathrm{erg~s}^{-1}$, depending on the assumed spectral model (see Sect. \[subsec:bron4\]). The quiescent luminosity of XMM J174457-2850.3 is also in this regime; $\sim 10^{32}~\mathrm{erg~s}^{-1}$ [2-10 keV; @sakano05]. If Swift J174553.7-290347 and XMM J174457-2850.3 are X-ray binaries, their quiescent luminosities are relatively high and might point towards a neutron star nature [e.g., @lasota07], although the orbital period of both these systems is unknown. We note that the absorption towards our transients is very high ($> 6 \times 10^{22}~\mathrm{cm}^{-2}$). Therefore, any thermal emission from the neutron star surface cannot be observed and we can only detect contributions from a powerlaw component, which is frequently observed for neutron stars at similarly low quiescent luminosities [e.g., @jonker07_eos]. Two other transients, CXOGC J174535.5-290124 and CXOGC J174540.0-290005, were not detected in quiescence, but have upper limits on their luminosities of $< 9 \times 10^{30}~\mathrm{erg~s}^{-1}$ and $< 4 \times 10^{31}~\mathrm{erg~s}^{-1}$ respectively [2-8 keV; @muno05_apj622]. Such low quiescent luminosities are more common for black hole X-ray binaries than for neutron star systems [e.g., @garcia01; @lasota07; but see @jonker06; @jonker07].

The outbursts of subluminous X-ray transients {#subsec:outbursts}
---------------------------------------------

The disk instability model [e.g., @king98; @dubus99; @lasota01] provides a framework to describe the outburst cycles of transient LMXBs. However, it is unclear why some X-ray transients, such as the ones discussed here, undergo outbursts with very low peak luminosities.
AX J1745.6-2901 has a known orbital period of 8.4 hours, which allows for a maximum luminosity of $\sim 2 \times 10^{38}~\mathrm{erg~s}^{-1}$ [for a hydrogen-dominated disk and a neutron star mass of $1.4~\mathrm{M_{\odot}}$; @lasota07]. Yet, its observed peak luminosity is over an order of magnitude lower (see Table \[tab:spectra\]). Since AX J1745.6-2901 displays eclipses, we must be viewing the system at high inclination. For several eclipsing X-ray binaries, observations suggest that they are intrinsically bright but appear faint because the bright center of the system is blocked by the outer edge of the disk and the corona [e.g., @parmar00; @kallman03; @muno05_apj633]. This may also be the case for AX J1745.6-2901, for which @maeda1996 derived an inclination angle of $i \sim 70^{\circ}$. To include inclination effects, the observed X-ray luminosity should be corrected by a factor $\xi_{p}$, which relates to the inclination, $i$, as $\xi_{p}^{-1}= 2 |\cos i|$ [@fujimoto88; @lapidus85]. In 2007, AX J1745.6-2901 displayed a 2-10 keV peak luminosity of $6.1 \times 10^{36}~\mathrm{erg~s}^{-1}$, which corrects to $9.2 \times 10^{36}~\mathrm{erg~s}^{-1}$ for the suggested inclination of $i \sim 70^{\circ}$ ($\xi_{p} \sim 1.5$). It is thus conceivable that AX J1745.6-2901 is a bright X-ray transient that is obscured due to line-of-sight effects, although it would still seem to be at the lower end of the luminosity range for bright systems (peak luminosities of $\sim 10^{37-39}~\mathrm{erg~s}^{-1}$ in the 2-10 keV energy band). For comparison, the quasi-persistent neutron star system MXB 1659-29 has an orbital period of 7.1 hours [@mxb1659_eclipses], which is close to that of AX J1745.6-2901.
However, MXB 1659-29 displays an average 2-10 keV outburst luminosity of $7 \times 10^{36} ~(d/10 ~\mathrm{kpc})~\mathrm{erg~s}^{-1}$ [@oosterbroek01_mxb; @sidoli01_mxb], which is about a factor of 4 higher than the average 2-10 keV outburst luminosity observed for AX J1745.6-2901 in 2007; $1.6 \times 10^{36} ~\mathrm{erg~s}^{-1}$. Possibly, the inclination of AX J1745.6-2901 is somewhat higher than the $i \sim 70^{\circ}$ suggested by @maeda1996. AX J1745.6-2901 might have a subluminous appearance due to line of sight effects, but it is important to note that statistical arguments show that such effects cannot account for the entire population of subluminous X-ray transients, and that most systems must have low intrinsic luminosities [see the discussion of @wijn06_monit]. Although taking into account inclination effects potentially pushes AX J1745.6-2901 into the regime of bright X-ray transients, this does not provide an explanation for the peculiar outburst behavior of the source. As discussed in Sect. \[subsec:bron1\], the system was likely in quiescence for several years before it was seen active in 2006 for more than 4 months. At that time, the source reached a peak luminosity of $9.2 \times 10^{35}~\mathrm{erg~s}^{-1}$, which would classify the system as very-faint. However, after several months of quiescence (see Fig. \[fig:lc\]), the source reappeared displaying a peak luminosity of $6.1 \times 10^{36}~\mathrm{erg~s}^{-1}$ (i.e., in the faint regime) and remained active for over 1.5 years (see Sect. \[subsec:bron1\]). Thus, the outburst observed in 2006 was subluminous by about a factor of 6 compared to the 2007 outburst, yet the system maintained this low luminosity for months. It is unclear if this behavior can be explained in terms of a disk instability model. In 1993 and 1994, different outburst luminosities of $2 \times 10^{35}~\mathrm{erg~s}^{-1}$ and $9 \times 10^{35}~\mathrm{erg~s}^{-1}$ were reported for AX J1745.6-2901 [3-10 keV, @maeda1996]. 
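For reference, the two anisotropy factors used in this section, $\xi_{p}^{-1}=2|\cos i|$ for the disk emission and (further below) $\xi_{b}^{-1}=0.5+|\cos i|$ for the burst emission [@fujimoto88; @lapidus85], can be evaluated directly. A minimal numerical sketch, illustrative only and not part of the original analysis:

```python
import math

def xi_p(i_deg):
    """Anisotropy factor for the persistent disk emission: xi_p^{-1} = 2|cos i|."""
    return 1.0 / (2.0 * abs(math.cos(math.radians(i_deg))))

def xi_b(i_deg):
    """Anisotropy factor for the burst (surface) emission: xi_b^{-1} = 0.5 + |cos i|."""
    return 1.0 / (0.5 + abs(math.cos(math.radians(i_deg))))

i = 70.0                      # inclination suggested by Maeda et al. (1996)
L_peak_2007 = 6.1e36          # observed 2-10 keV peak luminosity in 2007 (erg/s)
print(round(xi_p(i), 2))      # 1.46, quoted as ~1.5 in the text
print(round(xi_b(i), 2))      # 1.19, quoted as ~1.2
print(L_peak_2007 * xi_p(i))  # ~8.9e36 erg/s, i.e. ~9e36 erg/s after rounding
```

The burst correction is smaller than the disk correction at the same inclination, which is why the inferred burst peaks change only modestly while the outburst luminosity correction is more substantial.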
These 1993 and 1994 detections are separated on a time scale similar to that of the *Swift* detections, suggesting that the behavior observed in 2006 and 2007 could be typical for the source. GRS 1741.9-2853 also displayed two separate outbursts with very different characteristics in terms of peak luminosity and outburst duration during the *Swift*/XRT monitoring observations. A short, $\sim1-$week outburst was detected in 2006, which had a 2-10 keV peak luminosity of $9.2 \times 10^{34}~\mathrm{erg~s}^{-1}$. A few months later, the source exhibited a much longer ($\gtrsim 13$ weeks) outburst, which reached a peak luminosity of $2.0 \times 10^{36}~\mathrm{erg~s}^{-1}$ (2-10 keV). Possibly, the short 2006 outburst of GRS 1741.9-2853 was an X-ray precursor for the 2007 outburst. Such behavior is observed for several bright X-ray transients [both neutron star and black hole systems, see @chen97 and references therein]. Both Swift J174553.7-290347 and CXOGC J174540.0-290005 displayed short, $\sim$ 2-week outbursts that had an average luminosity of a few times $10^{34}~\mathrm{erg~s}^{-1}$. This kind of activity resembles the small accretion outburst of GRS 1741.9-2853 in 2006 (see Fig. \[fig:lc\]), but for these two systems no longer outbursts have been observed. XMM J174457-2850.3 seems to undergo X-ray activity at different luminosity levels as well (see Sect. \[subsec:bron7\]). It is unclear what causes the varying accretion luminosities. However, this phenomenon is also observed for bright X-ray transients and is thus not restricted to the subluminous systems discussed here. Current disk instability models do not provide an obvious explanation for accretion outbursts that last several years, rather than the usual weeks to months, as observed for AX J1745.6-2901. A few bright systems are known to undergo quasi-persistent outbursts [see e.g., @wijnands04_quasip]. There are also two X-ray transients that exhibit prolonged outbursts at low luminosities.
XMMU J174716.1-281048 has likely been continuously active since its initial discovery in 2003, displaying a typical 2-10 keV luminosity of a few times $10^{34}~\mathrm{erg~s}^{-1}$ [e.g., @delsanto07; @degenaar07_xmmsource]. Furthermore, AX J1754.2-2754 recently made a transition to quiescence [@bassa08], after exhibiting an accretion outburst with a 2-10 keV luminosity of several times $10^{34-35}~\mathrm{erg~s}^{-1}$, which likely lasted for 7-8 years [@sakano02; @delsanto07_ascabron; @chelovekov07_ascabron]. This source was again found active in July 2008 [@jonker08]. The detection of type-I X-ray bursts identifies both these systems as neutron star LMXBs, just like AX J1745.6-2901. X-ray bursts from subluminous X-ray transients ---------------------------------------------- The properties of type-I X-ray bursts are set by the conditions in the flash layer such as the temperature, thickness, hydrogen abundance and the fraction of carbon-nitrogen-oxygen (CNO) elements in the layer [e.g., @fujimoto81; @bildsten98; @peng2007]. These conditions can vary drastically as the mass-accretion rate onto the neutron star ($\dot{M}$) varies, which results in flashes with different characteristics for different $\dot{M}$ regimes [e.g., @fujimoto81; @peng2007; @cooper07]. The *Swift*/XRT monitoring observations of 2006 caught two type-I X-ray bursts from AX J1745.6-2901 (see Sect. \[subsec:bron1\]). The average 2-10 keV luminosity of the 2006 outburst was $3.9 \times 10^{35}~\mathrm{erg~s}^{-1}$, from which we can estimate an instantaneous mass accretion rate onto the neutron star of $\sim1 \times 10^{-10}~\mathrm{M_{\odot}~yr}^{-1}$. If we include a correction factor to account for inclination effects, as discussed in Sect. \[subsec:outbursts\], this value increases to $\sim1.5 \times 10^{-10}~\mathrm{M_{\odot}~yr}^{-1}$. The bursts had a duration of $50-60~\mathrm{seconds}$ (see Fig. 4), which suggests triggering in a mixed hydrogen/helium environment. 
This is in line with the classical predictions for the estimated mass-accretion rate [e.g., @fujimoto81]. We discussed in Sect. \[subsec:bron1\] that the type-I X-ray bursts observed from AX J1745.6-2901 have 0.01-100 keV peak luminosities of $\sim 10^{38}~\mathrm{erg~s}^{-1}$, close to the Eddington limit of a neutron star. These peak values should also be corrected for the inclination effects discussed in Sect. \[subsec:outbursts\]. However, the X-ray burst (originating from the neutron star surface) and the outburst emission (emerging from the accretion disk) are attributed to geometrically different regions and may therefore have different degrees of isotropy [@fujimoto88; @lapidus85]. Although the X-ray burst emission will be partly intercepted and re-radiated by the accretion disk, it was shown that its degree of anisotropy is smaller than that of the emission coming from the accretion disk [@fujimoto88; @lapidus85]. Inclination effects are expected to reduce the X-ray burst emission by a factor $\xi_{b}^{-1}= 0.5 + |\cos i |$ [@fujimoto88; @lapidus85]. For the suggested inclination of $i=70^{\circ}$ [@maeda1996], we thus obtain a correction factor of $\xi_{b}=1.2$. This implies peak luminosities for the type-I X-ray bursts observed from AX J1745.6-2901 on June 3 and June 14, 2006, of $(1.2-1.6) \times 10^{38}~\mathrm{erg~s}^{-1}$ and $(0.62-1.3) \times 10^{38}~\mathrm{erg~s}^{-1}$ (0.01-100 keV), respectively. This is below, but close to, the Eddington luminosity for a neutron star [$2.0 \times 10^{38}\mathrm{~erg~s}^{-1}$ for a hydrogen-rich and $3.8 \times 10^{38}\mathrm{~erg~s}^{-1}$ for a hydrogen-poor photosphere; e.g., @kuulkers03_xrb]. Long-term average accretion rates {#mdot_estimate} --------------------------------- Presuming that the detected transients are accreting systems, we attempted to estimate their long-term time-averaged accretion rates using the method described in Sect. \[subsec:accrates\].
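The accretion-rate estimates quoted in this paper rest on converting an X-ray luminosity into a mass-accretion rate via $L_{\mathrm{acc}} = GM\dot{M}/R$. A minimal sketch for the 2006 outburst of AX J1745.6-2901, assuming canonical neutron star parameters and a bolometric correction factor of $\sim3$ (the latter is our assumption for illustration; the actual prescription of Sect. \[subsec:accrates\] is not reproduced here):

```python
G = 6.674e-8        # gravitational constant (cgs)
M_SUN = 1.989e33    # solar mass (g)
SEC_PER_YR = 3.156e7

def mdot_msun_per_yr(L_x, bol_corr=3.0, M=1.4 * M_SUN, R=1e6):
    """Accretion rate from L_acc = G*M*Mdot/R, i.e. Mdot = L_acc*R/(G*M).

    L_x is a 2-10 keV luminosity in erg/s; bol_corr converts it to a total
    accretion luminosity. The factor ~3 is an assumption made here for
    illustration, not a value taken from the text.
    """
    mdot_g_per_s = bol_corr * L_x * R / (G * M)
    return mdot_g_per_s * SEC_PER_YR / M_SUN

# Average 2-10 keV luminosity of the 2006 outburst of AX J1745.6-2901:
print(mdot_msun_per_yr(3.9e35))  # ~1e-10 Msun/yr
```

With these assumptions the result reproduces the quoted instantaneous rate of $\sim1 \times 10^{-10}~\mathrm{M_{\odot}~yr}^{-1}$.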
We explored the scenarios of both neutron star and black hole accretors (except for AX J1745.6-2901 and GRS 1741.9-2853, since these are confirmed neutron star systems), which resulted in the estimated long-term mass-accretion rates[^8] listed in Table \[tab:obdurations\]. The two confirmed neutron star systems, AX J1745.6-2901 and GRS 1741.9-2853, have the highest estimated accretion rates of the seven discussed transients ($\sim 10^{-11}-10^{-10}~\mathrm{M_{\odot}~yr}^{-1}$). This arises from the fact that GRS 1741.9-2853 is active quite regularly and AX J1745.6-2901 can be in outburst for a very long time (over 1.5 years). The regime estimated for these two sources can be well explained within current LMXB evolution models. The same is likely true for XMM J174457-2850.3, which was active several times since its discovery in 2001 and has an estimated long-term mass-accretion rate of $\gtrsim 10^{-11}~\mathrm{M_{\odot}~yr}^{-1}$. The estimates for the remaining four systems, CXOGC J174553.5-290124, CXOGC J174540.0-290005, Swift J174553.7-290347 and Swift J174622.1-290634, are much lower; $\lesssim 10^{-12}~\mathrm{M_{\odot}~yr}^{-1}$ for accreting neutron stars and even an order of magnitude lower for black hole X-ray binaries, $\lesssim 10^{-13}~\mathrm{M_{\odot}~yr}^{-1}$ (see Table \[tab:obdurations\]). Comparing our results with a theoretical toy-model of @king_wijn06, who explored the mechanism of Roche-lobe overflow at low accretion rates, suggests that if these transients are LMXBs, their low time-averaged mass-accretion rates might be difficult to explain without invoking exotic scenarios such as accretion from a planetary donor or an intermediate mass black hole as the accreting primary [@king_wijn06]. These are thus interesting systems to track and monitor in the future. Apart from evolutionary scenarios and line-of-sight effects, there are other possible explanations for the subluminous X-ray appearance of these transients.
For example, in particular for the systems containing a black hole, the liberated accretion power may not be primarily dissipated as X-rays but rather via radiatively inefficient flows [e.g., @blandford99; @fender03; @narayan]. Furthermore, in neutron star systems the “propeller mechanism” can possibly operate, so that only a small fraction of the mass transferred from the donor can be accreted onto the neutron star [e.g., @illarionov1975; @alpar2001; @romanova2005]. The discussed examples of AX J1745.6-2901, GRS 1741.9-2853 and XMM J174457-2850.3 illustrate that X-ray transients can display different behavior in terms of peak luminosity, outburst duration and recurrence time from year to year. It is currently not understood whether these variations should be attributed to, e.g., changes in the mass-transfer rate from the donor star or to instabilities in the accretion disk. Such issues need to be resolved before we can fully comprehend the nature of subluminous X-ray transients. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank Anna Watts for commenting on an early version of this manuscript and an anonymous referee for giving valuable suggestions. We acknowledge the use of public data from the *Swift* data archive. This work was supported by the Netherlands Organization for Scientific Research (NWO). [^1]: All fluxes and luminosities quoted in this paper are for the 2-10 keV energy band, unless otherwise stated. [^2]: The campaign continues in 2008, but a detailed discussion of the 2008 data is beyond the scope of this paper. [^3]: See http://heasarc.gsfc.nasa.gov/docs/swift/analysis for standard *Swift* analysis threads. [^4]: See also http://www.swift.ac.uk/XRT.shtml.\[foot:expo\] [^5]: See also http://www.swift.ac.uk/pileup.shtml.\[foot:pileup\] [^6]: Listed at http://asc.harvard.edu.
[^7]: Available at http://www.astro.psu.edu/users/niel/galcen-xray-data/galcen-xray-data.html [@muno04_apj613].\[foot:chan\] [^8]: Note the caveat mentioned for the black hole cases in Sect. \[subsec:accrates\].
--- abstract: 'We construct the fundamental solution (the heat kernel) $p^{\kappa}$ to the equation $\partial_t =\LL^{\kappa}$, where under certain assumptions the operator $\LL^{\kappa}$ takes the form, $$\LL^{\kappa}f(x):= \int_{{\mathbb{R}}^d}( f(x+z)-f(x)- {\mathds{1}}_{|z|<1} \left<z,\nabla f(x)\right>)\kappa(x,z)J(z)\, dz\,.$$ We concentrate on the case when the order of the operator is positive and smaller than or equal to 1 (but without excluding higher orders up to 2). Our approach rests on imposing conditions on the expression $$\int_{r{\leqslant}|z|<1} z \kappa(x,z)J(z)dz .$$ The result is new even for the $1$-stable L[é]{}vy measure $J(z)=|z|^{-d-1}$.' address: | Karol Szczypkowski\ Wydział Matematyki, Politechnika Wrocławska\ Wyb. Wyspiańskiego 27\ 50-370 Wrocław\ Poland author: - Karol Szczypkowski title: 'Fundamental solution for super-critical non-symmetric L[é]{}vy-type operators' --- [**AMS 2010 Mathematics Subject Classification**]{}: Primary 60J35, 47G20; Secondary 60J75, 47D03. [**Keywords and phrases:**]{} heat kernel estimates, Lévy-type operator, non-symmetric operator, non-local operator, non-symmetric Markov process, Feller semigroup, Levi’s parametrix method. Introduction ============ This paper is a sequel to [@GS-2018], where among other things the sub-critical case was covered. It improves and widely extends the results of [@PJ], see also [@CZ-new]. We note that the present paper is by no means included in the general setting of [@MR3652202]. We start by introducing the notation necessary to formulate main results.
Let $d\in\N$ and $\nu:[0,\infty)\to[0,\infty]$ be a non-increasing function satisfying $$\int_{{{{\mathbb{R}}^{d}}}} (1\land |x|^2) \nu(|x|)dx<\infty\,.$$ We consider $J: {{{\mathbb{R}}^{d}}}\to [0, \infty]$ such that for some ${\gamma_0}\in [1,\infty)$ and all $x\in {{{\mathbb{R}}^{d}}}$, $$\label{e:psi1} {\gamma_0}^{-1} \nu(|x|){\leqslant}J(x) {\leqslant}{\gamma_0}\nu(|x|)\,.$$ Further, suppose that $\kappa(x,z)$ is a Borel function on ${\mathbb{R}}^d\times {{{\mathbb{R}}^{d}}}$ such that $$\label{e:intro-kappa} 0<\kappa_0{\leqslant}\kappa(x,z){\leqslant}\kappa_1\, ,$$ and for some $\beta\in (0,1)$, $$\label{e:intro-kappa-holder} |\kappa(x,z)-\kappa(y,z)|{\leqslant}\kappa_2|x-y|^{\beta}\, .$$ For $r>0$ we define $$h(r):= \int_{{{{\mathbb{R}}^{d}}}} \left(1\land \frac{|x|^2}{r^2}\right) \nu(|x|)dx\,,\qquad \quad K(r):=r^{-2} \int_{|x|<r}|x|^2 \nu(|x|)dx\,.$$ The above functions play a prominent role in the paper. Our main assumption is *the weak scaling condition* at the origin: there exist ${\alpha_h}\in (0,2]$ and $C_h \in [1,\infty)$ such that $$\label{eq:intro:wlsc} h(r){\leqslant}C_h\,\lambda^{{\alpha_h}}\,h(\lambda r)\, ,\quad \lambda{\leqslant}1, r{\leqslant}1\, .$$ In a similar fashion we consider the existence of ${\beta_h}\in (0,2]$ and $c_h\in (0,1]$ such that $$\label{eq:intro:wusc} h(r){\geqslant}c_h\,\lambda^{{\beta_h}}\,h(\lambda r)\, ,\quad \lambda{\leqslant}1, r{\leqslant}1\, .\\$$ Further, suppose there are (finite) constants ${\kappa_3}, {\kappa_4}{\geqslant}0$ such that $$\begin{aligned} \label{e:intro-kappa-crit} \sup_{x\in{{{\mathbb{R}}^{d}}}}\left| \int_{r{\leqslant}|z|<1} z\, \kappa(x,z) J(z)dz \right| &{\leqslant}{\kappa_3}rh(r)\,, \qquad r\in (0,1],\\ \left| \int_{r{\leqslant}|z|<1} z\, \big[ \kappa(x,z)- \kappa(y,z)\big] J(z)dz \right| &{\leqslant}{\kappa_4}|x-y|^{\beta} rh(r)\,, \qquad r\in (0,1]. \label{e:intro-kappa-crit-H}\end{aligned}$$ We consider two sets of assumptions: ($\Qa$) – hold, ${\alpha_h}=1$; and hold; ($\Qb$)
– hold, $0<{\alpha_h}{\leqslant}{\beta_h}<1$ and $1-{\alpha_h}<\beta \land {\alpha_h}$; and hold. Finally, the operator we discuss is of the form $$\begin{aligned} \LL^{\kappa}f(x)&:= \int_{{\mathbb{R}}^d}( f(x+z)-f(x)- {\mathds{1}}_{|z|<1} \left<z,\nabla f(x)\right>)\kappa(x,z)J(z)\, dz \,. \label{e:intro-operator-a1-crit1}\end{aligned}$$ To be more specific, we apply the above operator (in a strong or weak sense) only when it is well defined according to the following definition. We denote by $\LL^{\kappa,\varepsilon}f$ the expression with $J(z)$ replaced by $ J(z){\mathds{1}}_{|z|>\varepsilon}$, $\varepsilon \in [0,1]$. Let $f\colon {{{\mathbb{R}}^{d}}}\to {\mathbb{R}}$ be a Borel measurable function. Strong operator : \ The operator $\LL^{\kappa}f$ is well defined if the corresponding integral converges absolutely and the gradient $\nabla f(x)$ exists for every $x\in{{{\mathbb{R}}^{d}}}$. Weak operator : \ The operator $\LL^{\kappa,0^+}f$ is well defined if the limit exists for every $x\in{{{\mathbb{R}}^{d}}}$, $$\LL^{\kappa,0^+}f(x):=\lim_{\varepsilon \to 0^+}\LL^{\kappa,\varepsilon}f(x)\,,$$ where for $\varepsilon \in (0,1]$ the (strong) operators $\LL^{\kappa,\varepsilon}f$ are well defined. The operator $\LL^{\kappa,0^+}$ is an extension of $\LL^{\kappa,0}= \LL^{\kappa}$, meaning that if $\LL^{\kappa}f$ is well defined, then so is $\LL^{\kappa,0^+}f$ and $\LL^{\kappa,0^+}f=\LL^{\kappa}f$. Therefore, it is desirable to prove the existence of a solution to the equation $\partial_t=\LL^{\kappa}$ and the uniqueness of a solution to $\partial_t=\LL^{\kappa,0^+}$. Here are our main results. \[t:intro-main\] Assume $\Qa$ or $\Qb$. Let $T>0$.
There is a unique function $p^{\kappa}(t,x,y)$ on $(0,T]\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$ such that - For all $t\in(0,T]$, $x,y\in {{{\mathbb{R}}^{d}}}$, $x\neq y$, $$\label{e:intro-main-1} \partial_t p^{\kappa}(t,x,y)=\LL_x^{\kappa,0^+}p^{\kappa}(t,x, y)\,.$$ - The function $p^{\kappa}(t,x,y)$ is jointly continuous on $(0,T]\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$ and for any $f\in C_c^{\infty}({{{\mathbb{R}}^{d}}})$, $$\label{e:intro-main-5} \lim_{t\to 0^+}\sup_{x\in {{{\mathbb{R}}^{d}}}}\left| \int_{{{{\mathbb{R}}^{d}}}}p^{\kappa}(t,x,y)f(y)\, dy-f(x)\right|=0\, .$$ - For every $t_0\in (0,T)$ there are $c>0$ and $f_0\in L^{1}({{{\mathbb{R}}^{d}}})$ such that for all $t\in (t_0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\label{e:intro-main-2} |p^{\kappa}(t,x,y)|\le c f_0(x-y)\,,$$ and $$\label{e:intro-main-4} |\LL_x^{\kappa, \varepsilon}p^{\kappa}(t,x,y)|{\leqslant}c \,,\qquad \varepsilon \in (0,1]\,.$$ - For every $t\in (0,T]$ there is $c>0$ such that for all $x,y\in{{{\mathbb{R}}^{d}}}$, $$\label{e:intro-main-a1} |\nabla_x p^{\kappa}(t,x,y)|{\leqslant}c\,.$$ In the next theorem we collect more qualitative properties of $p^{\kappa}(t,x,y)$. To this end, for $t>0$ and $x\in {\mathbb{R}}^d$ we define [*the bound function*]{}, $$\label{e:intro-rho-def} {\Upsilon}_t(x):=\left( [h^{-1}(1/t)]^{-d}\land \frac{tK(|x|)}{|x|^{d}} \right) .$$ \[t:intro-further-properties\] Assume $\Qa$ or $\Qb$. The following hold true. 1. (Non-negativity) The function $p^{\kappa}(t,x,y)$ is non-negative on $(0,\infty)\times{{{\mathbb{R}}^{d}}}\times{{{\mathbb{R}}^{d}}}$. 2. (Conservativeness) For all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, $$\int_{{{{\mathbb{R}}^{d}}}}p^{\kappa}(t,x,y) dy =1\, .$$ 3. (Chapman-Kolmogorov equation) For all $s,t > 0$, $x,y\in {\mathbb{R}}^d$, $$\int_{{\mathbb{R}}^d}p^{\kappa}(t,x,z)p^{\kappa}(s,z,y)\, dz =p^{\kappa}(t+s,x,y)\, .$$ 4. 
(Upper estimate) For every $T>0$ there is $c>0$ such that for all $t\in (0,T]$, $x,y\in {{{\mathbb{R}}^{d}}}$, $$p^{\kappa}(t,x,y) {\leqslant}c {\Upsilon}_t(y-x)\, .$$ 5. (Fractional derivative) For every $T>0$ there is $c>0$ such that for all $t\in (0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |\LL_x^{\kappa } p^{\kappa}(t, x, y)|{\leqslant}c t^{-1}{\Upsilon}_t(y-x)\,.\end{aligned}$$ 6. (Gradient) For every $T>0$ there is $c>0$ such that for all $t\in (0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\left|\nabla_x p^{\kappa}(t,x,y)\right|{\leqslant}c\! \left[h^{-1}(1/t)\right]^{-1} {\Upsilon}_t(y-x)\,.$$ 7. (Continuity) The function $\LL_x^{\kappa} p^{\kappa}(t,x,y)$ is jointly continuous on $(0,\infty)\times {{{\mathbb{R}}^{d}}}\times{{{\mathbb{R}}^{d}}}$. 8. (Strong operator) For all $t>0$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\partial_t p^{\kappa}(t,x,y)= \LL_x^{\kappa}\, p^{\kappa}(t,x,y)\,.$$ 9. (Hölder continuity) For all $T>0$, $\gamma \in [0,1] \cap[0,{\alpha_h})$, there is $c>0$ such that for all $t\in (0,T]$ and $x,x',y\in {{{\mathbb{R}}^{d}}}$, $$\left|p^{\kappa}(t,x,y)-p^{\kappa}(t,x',y)\right| {\leqslant}c (|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma} \big( {\Upsilon}_t(y-x)+ {\Upsilon}_t(y-x') \big).$$ 10. (Hölder continuity) For all $T>0$, $\gamma \in [0,\beta)\cap [0,{\alpha_h})$, there is $c>0$ such that for all $t\in (0,T]$ and $x,y,y'\in {{{\mathbb{R}}^{d}}}$, $$\left|p^{\kappa}(t,x,y)-p^{\kappa}(t,x,y')\right| {\leqslant}c (|y-y'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma} \big( {\Upsilon}_t(y-x)+ {\Upsilon}_t(y-x') \big).$$ The constants in [(4) – (6)]{} may be chosen to depend only on $d, {\gamma_0}, \kappa_0, \kappa_1, \kappa_2, {\kappa_3}, {\kappa_4}, \beta, {\alpha_h}, C_h, h, T$. The same holds for [(9)]{} and [(10)]{} but with additional dependence on $\gamma$. 
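For orientation, the bound function ${\Upsilon}_t$ can be made explicit in the model case $\nu(r)=r^{-d-\alpha}$ with $\alpha\in(0,2)$; a sketch, with constants depending on $d$ and $\alpha$ suppressed:

```latex
% Model case \nu(r) = r^{-d-\alpha}, \alpha \in (0,2). The substitution x = r y gives
\[
h(r)=r^{-\alpha}h(1), \qquad K(r)=\frac{\omega_d}{2-\alpha}\,r^{-\alpha},
\qquad h^{-1}(1/t)=\big(h(1)\,t\big)^{1/\alpha},
\]
% so the bound function becomes, up to constants,
\[
{\Upsilon}_t(x)\;\asymp\; t^{-d/\alpha}\land \frac{t}{|x|^{d+\alpha}}\, ,
\]
% the classical two-sided profile of the isotropic \alpha-stable heat kernel.
```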
For $t>0$ we define $$\label{e:intro-semigroup} P_t^{\kappa}f(x)=\int_{{{{\mathbb{R}}^{d}}}} p^{\kappa}(t,x,y)f(y)\, dy\, ,\quad x\in {{{\mathbb{R}}^{d}}}\, ,$$ whenever the integral exists in the Lebesgue sense. We also put $P_0^{\kappa}=\mathrm{Id}$, the identity operator. \[thm:onC0Lp\] Assume $\Qa$ or $\Qb$. The following hold true. 1. $(P^{\kappa}_t)_{t{\geqslant}0}$ is an analytic strongly continuous positive contraction semigroup on $(C_0({{{\mathbb{R}}^{d}}}),\|\cdot\|_{\infty})$, 2. $(P^{\kappa}_t)_{t{\geqslant}0}$ is an analytic strongly continuous semigroup on every $(L^p({{{\mathbb{R}}^{d}}}),\|\cdot\|_p)$, $p\in [1,\infty)$, 3. Let $(\mathcal{A}^{\kappa},D(\mathcal{A}^{\kappa}))$ be the generator of $(P_t^{\kappa})_{t{\geqslant}0}$ on $(C_0({{{\mathbb{R}}^{d}}}),\|\cdot\|_{\infty})$.\ Then 1. $C_0^2({{{\mathbb{R}}^{d}}}) \subseteq D(\mathcal{A}^{\kappa})$ and $\mathcal{A}^{\kappa}=\LL^{\kappa}$ on $C_0^2({{{\mathbb{R}}^{d}}})$, 2. $(\mathcal{A}^{\kappa},D(\mathcal{A}^{\kappa}))$ is the closure of $(\LL^{\kappa}, C_c^{\infty}({{{\mathbb{R}}^{d}}}))$, 3. the function $x\mapsto p^{\kappa}(t,x,y)$ belongs to $D(\mathcal{A}^{\kappa})$ for all $t>0$, $y\in{{{\mathbb{R}}^{d}}}$, and $$\mathcal{A}^{\kappa}_x\, p^{\kappa}(t,x,y)= \LL_x^{\kappa}\, p^{\kappa}(t,x,y)=\partial_t p^{\kappa}(t,x,y)\,,\qquad x\in{{{\mathbb{R}}^{d}}}\,.$$ 4. Let $(\mathcal{A}^{\kappa},D(\mathcal{A}^{\kappa}))$ be the generator of $(P_t^{\kappa})_{t{\geqslant}0}$ on $(L^p({{{\mathbb{R}}^{d}}}),\|\cdot\|_p)$, $p\in [1,\infty)$.\ Then 1. $C_c^2({{{\mathbb{R}}^{d}}}) \subseteq D(\mathcal{A}^{\kappa})$ and $\mathcal{A}^{\kappa}=\LL^{\kappa}$ on $C_c^2({{{\mathbb{R}}^{d}}})$, 2. $(\mathcal{A}^{\kappa},D(\mathcal{A}^{\kappa}))$ is the closure of $(\LL^{\kappa}, C_c^{\infty}({{{\mathbb{R}}^{d}}}))$, 3.
the function $x\mapsto p^{\kappa}(t,x,y)$ belongs to $D(\mathcal{A}^{\kappa})$ for all $t>0$, $y\in{{{\mathbb{R}}^{d}}}$, and in $L^p({{{\mathbb{R}}^{d}}})$, $$\mathcal{A}^{\kappa} \, p^{\kappa}(t,\cdot,y)= \LL^{\kappa}\, p^{\kappa}(t,\cdot,y)=\partial_t p^{\kappa}(t,\cdot,y)\,.$$ Finally, we provide a lower bound for the heat kernel $p^{\kappa}(t,x,y)$. For abbreviation we write ${\sigma}$ for the set of constants $({\gamma_0},\kappa_0,\kappa_1,{\kappa_3},{\alpha_h}, C_h,h)$. \[thm:lower-bound\] Assume $\Qa$ or $\Qb$. The following hold true. - There are $T_0=T_0(d,\nu,{\sigma},\kappa_2,{\kappa_4}, \beta)>0$ and $c=c(d,\nu,{\sigma}, \kappa_2, {\kappa_4}, \beta)>0$ such that for all $t\in (0,T_0]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\label{e:intro-main-11} p^{\kappa}(t,x,y){\geqslant}c\left( [h^{-1}(1/t)]^{-d}\wedge t \nu \left( |x-y|\right)\right).$$ - If additionally $\nu$ is positive, then for every $T>0$ there is $c=c(d,T,\nu,{\sigma},\kappa_2,{\kappa_4}, \beta)>0$ such that holds for $t\in(0,T]$ and $x,y\in{{{\mathbb{R}}^{d}}}$.\ - If additionally there are $\bar{\beta}\in [0,2)$ and $\bar{c}>0$ such that $\bar{c} \lambda^{d+\bar{\beta}} \nu (\lambda r) {\leqslant}\nu(r)$, $\lambda {\leqslant}1$, $r>0$, then for every $T >0$ there is $c=c(d,T,\nu,{\sigma},\kappa_2,{\kappa_4},\beta,\bar{c},\bar{\beta})>0$ such that for all $t\in(0,T]$ and $x,y\in{{{\mathbb{R}}^{d}}}$, $$\label{e:intro-main-111} p^{\kappa}(t,x,y) {\geqslant}c {\Upsilon}_t(y-x)\,.$$ \[rem:smaller\_beta\] If , hold, then $|\kappa(x,z)-\kappa(y,z)|{\leqslant}(2\kappa_1 \vee \kappa_2)|x-y|^{\beta_1}$ for every $\beta_1 \in [0,\beta]$. Our results allow us to solve [*the martingale problem*]{} for the operator $(\LL^{\kappa}, C_c^{\infty}({{{\mathbb{R}}^{d}}}))$ uniquely. They also have applications to [*the Kato class*]{} of the semigroup $(P^{\kappa}_t)_{t{\geqslant}0}$. For details see [@GS-2018 Remark 1.5 and 1.6].
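For completeness, Remark \[rem:smaller\_beta\] admits a short justification; a sketch:

```latex
% Case |x-y| <= 1: since \beta_1 <= \beta, we have |x-y|^{\beta} <= |x-y|^{\beta_1}, so
\[
|\kappa(x,z)-\kappa(y,z)| \;{\leqslant}\; \kappa_2 |x-y|^{\beta} \;{\leqslant}\; \kappa_2 |x-y|^{\beta_1} .
\]
% Case |x-y| > 1: then |x-y|^{\beta_1} >= 1, and the two-sided bound on \kappa gives
\[
|\kappa(x,z)-\kappa(y,z)| \;{\leqslant}\; 2\kappa_1 \;{\leqslant}\; 2\kappa_1 |x-y|^{\beta_1} .
\]
% Combining both cases yields the constant 2\kappa_1 \vee \kappa_2.
```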
There has recently been a lot of interest in constructing semigroups for L[é]{}vy-type operators [@MR3353627], [@MR3652202], [@MR3500272], [@MR3817130], [@GS-2018], [@MR2163294], [@MR2456894], [@FK-2017], [@BKS-2017], [@KR-2017], [@MR3294616], [@MR3544166], [@PJ], [@CZ-new], [@CZ-survey]. Such operators arise naturally due to the Courr[è]{}ge-Waldenfels theorem [@MR1873235 Theorem 4.5.21], [@MR3156646 Theorem 2.21]. In general, those operators are not symmetric, so the $L^2$-theory or Dirichlet forms do not apply in this context. In addition, here we deal with another difficulty that comes from a possible non-symmetry of the internal structure, that is, from the non-symmetry of the L[é]{}vy measure $\kappa(x,z)J(z)dz$. It may cause a non-zero [*internal drift*]{} that emerges from the compensation term $\int_{|z|<1}z \kappa(x,z)J(z)dz$ in . Note that $$\begin{aligned} \label{eq:L_split} \LL^{\kappa}f(x)= \int_{{{{\mathbb{R}}^{d}}}}( f(x+z)-f(x)- {\mathds{1}}_{|z|<r} \left<z,\nabla f(x)\right>)\,\kappa(x,z)J(z) dz \nonumber \\ +\left(\int_{{{{\mathbb{R}}^{d}}}} z \left( {\mathds{1}}_{|z|<r}-{\mathds{1}}_{|z|<1} \right) \kappa(x,z)J(z) dz\right) \cdot \nabla f(x)\,.\end{aligned}$$ The influence of the internal drift (usually at a time scale $r=h^{-1}(1/t)$) may differ according to the order of the operator, which we measure by the growth of the function $h$ in and . For instance, if the order is greater than one, i.e., if , , and with ${\alpha_h}>1$ hold, then by Lemma \[lem:int\_J\] the inequalities and are automatically satisfied. This tacitly facilitates the analysis of the sub-critical non-symmetric case, see [@GS-2018], [@MR3652202], [@PJ], [@CZ-new], [@MR1744782]. This is no longer the case in general if ${\alpha_h}=1$ (the critical case) or ${\alpha_h}< 1$ (the super-critical case), which makes the study of the operator harder. In those cases the first order drift term is not necessarily suppressed by the non-local part of the operator.
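To see why ${\alpha_h}>1$ makes the cancellation conditions automatic, here is a sketch of a dyadic estimate of the kind behind Lemma \[lem:int\_J\], assuming only the upper bounds on $\kappa$ and $J$ and the weak scaling condition with ${\alpha_h}>1$:

```latex
% Step 1: for |z| >= s the integrand of h equals \nu(|z|), hence
\[
\int_{|z|{\geqslant}s}\nu(|z|)\,dz \;{\leqslant}\; h(s), \qquad s>0 .
\]
% Step 2: cover {r <= |z| < 1} by dyadic annuli {2^k r <= |z| < 2^{k+1} r} and use
% the scaling bound h(2^k r) <= C_h 2^{-k \alpha_h} h(r):
\[
\Big|\int_{r{\leqslant}|z|<1} z\,\kappa(x,z)J(z)\,dz\Big|
\;{\leqslant}\;\gamma_0\kappa_1\sum_{k{\geqslant}0} 2^{k+1}r\, h(2^{k}r)
\;{\leqslant}\;2\gamma_0\kappa_1 C_h\, rh(r)\sum_{k{\geqslant}0} 2^{k(1-{\alpha_h})} .
\]
% The geometric series converges precisely when \alpha_h > 1, giving the rate
% r h(r) without any cancellation assumption on \kappa.
```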
Note that under the symmetry condition, i.e., the symmetry of $J$ and $\kappa(x,z)=\kappa(x,-z)$, $x,z\in{{{\mathbb{R}}^{d}}}$, we have $$\begin{aligned} \label{eq:0} \sup_{r\in (0,1]} \sup_{x\in{{{\mathbb{R}}^{d}}}}\left| \int_{r{\leqslant}|z|<1} z \kappa(x,z)J(z)dz \right| =0.\end{aligned}$$ Therefore it may seem natural to impose as an assumption in a general (non-symmetric) case if, say, ${\alpha_h}=1$ in , as was done in [@PJ] and [@CZ-new] for the $1$-stable L[é]{}vy case $J(z)=|z|^{-d-1}$. However, in view of our results, we see that such an assumption is excessively restrictive. Indeed, for $\nu(r)=r^{-d-1}$ the inequalities and read as $$\begin{aligned} \sup_{x\in{{{\mathbb{R}}^{d}}}}\left| \int_{r{\leqslant}|z|<1} z\, \kappa(x,z) J(z)dz \right| &{\leqslant}c \,, \qquad r\in (0,1],\\ \left| \int_{r{\leqslant}|z|<1} z\, \big[ \kappa(x,z)- \kappa(y,z)\big] J(z)dz \right| &{\leqslant}c |x-y|^{\beta}\,, \qquad r\in (0,1]. \end{aligned}$$ Similarly to , the conditions and prescribe certain cancellations in their left-hand sides. The admissible rate $rh(r)$ is inherited from the L[é]{}vy measure $J(z)dz$, see and the definition of $h$. For the next example let us consider $\nu(r)=r^{-d-1}\log(2+1/r)$, which is slightly more singular at zero than in the $1$-stable L[é]{}vy case. Then holds with ${\alpha_h}=1$, but not with any ${\alpha_h}>1$, and holds with every ${\beta_h}>1$, but not with ${\beta_h}=1$. We also have that $\nu(r)$ is comparable with $r^{-d}h(r)$, see [@GS-2018 Lemma 5.3 and 5.4]. Thus and respectively allow logarithmic unboundedness as $r\to 0$ as follows $$\begin{aligned} \sup_{x\in{{{\mathbb{R}}^{d}}}}\left| \int_{r{\leqslant}|z|<1} z\, \kappa(x,z) J(z)dz \right| &{\leqslant}c \log(2+1/r) \,, \qquad r\in (0,1],\\ \left| \int_{r{\leqslant}|z|<1} z\, \big[ \kappa(x,z)- \kappa(y,z)\big] J(z)dz \right| &{\leqslant}c |x-y|^{\beta} \log(2+1/r)\,, \qquad r\in (0,1].
\end{aligned}$$ On the other hand, if – hold with $0<{\alpha_h}{\leqslant}{\beta_h}<1$, then $rh(r)\to 0$ whenever $r\to 0$. For instance, if $\nu(r)=r^{-d-\alpha}$ and $\alpha \in (1/2,1)$ (to have $1-\alpha<\alpha$, see $\Qb$), then $r h(r)=r^{1-\alpha}h(1)$. The tool used in this paper is the parametrix method, proposed by E. E. Levi [@zbMATH02644101] to solve elliptic Cauchy problems. It was successfully applied in the theory of partial differential equations [@zbMATH02629782], [@MR1545225], [@MR0003340], [@zbMATH03022319], with an overview in the monograph [@MR0181836], as well as in the theory of pseudo-differential operators [@MR2093219], [@MR3817130], [@MR3652202], [@FK-2017], [@MR3294616]. In particular, operators comparable in a sense with the fractional Laplacian were intensively studied [@MR0492880], [@MR616459], [@MR972089], [@MR1744782], [@MR2093219], also very recently [@MR3500272], [@PJ], [@CZ-new], [@KR-2017]. More detailed historical comments on the development of the method can be found in [@MR0181836 Bibliographical Remarks] and in the introductions of [@MR3652202] and [@BKS-2017]. Basically we follow the scheme of [@GS-2018], which in turn was motivated by [@MR3817130] and [@MR3500272]. The fundamental solution $p^{\kappa}$ is expected to be given by $$\begin{aligned} p^{\kappa}(t,x,y)= p^{\mathfrak{K}_y}(t,x,y)+\int_0^t \int_{{{{\mathbb{R}}^{d}}}}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dzds\,,\end{aligned}$$ where $q(t,x,y)$ solves the equation $$\begin{aligned} q(t,x,y)=q_0(t,x,y)+\int_0^t \int_{{{{\mathbb{R}}^{d}}}}q_0(t-s,x,z)q(s,z,y)\, dzds\,,\end{aligned}$$ and $q_0(t,x,y)=\big(\LL_x^{{\mathfrak K}_x}-\LL_x^{{\mathfrak K}_y}\big) p^{\mathfrak{K}_y}(t,x,y)$. Here $p^{\mathfrak{K}_w}$ is the heat kernel of the L[é]{}vy operator $\LL^{\mathfrak{K}_w}$ obtained from the operator $\LL^{\kappa}$ by freezing its coefficients: $\mathfrak{K}_w(z)=\kappa(w,z)$.
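The integral equation for $q$ is solved, as is standard in the parametrix method, by Picard iteration; a sketch of the usual series (the paper's precise statements may differ):

```latex
% Picard iteration for q = q_0 + q_0 * q (time-space convolution):
\[
q(t,x,y)=\sum_{n=0}^{\infty} q_n(t,x,y),
\qquad
q_n(t,x,y):=\int_0^t\!\!\int_{{\mathbb{R}}^{d}} q_0(t-s,x,z)\,q_{n-1}(s,z,y)\,dz\,ds,
\quad n{\geqslant}1 .
\]
% Convergence of the series follows from sub-convolution bounds on q_0;
% substituting the series into the right-hand side recovers the equation term by term.
```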
In our setting we draw the initial knowledge of $p^{{\mathfrak K}_w}$ from [@GS-2017], which we then exploit in Sections \[sec:analysis\_LL\] and \[sec:analysis\_LL\_2\] to establish further properties. We would like to stress that whenever referring to [@GS-2017] we mean the first version of the preprint. Already in this preliminary part we essentially incorporate and , which differs from [@GS-2018]. We also see the effect of the internal drift and the fact that the order of the operator does not have to be strictly larger than one, e.g., Theorem \[thm:delta\_crit\]. In Section \[sec:q\] we carry out the construction of $p^{\kappa}$. In view of future developments the following remark is notable. We emphasize that the construction of $p^{\kappa}$ is possible, and many preliminary facts hold true, under a weaker assumption: – hold, ${\alpha_h}\in (0,1]$; and hold. In particular, see Lemma \[l:estimates-q0-crit1\], Theorem \[t:definition-of-q-crit1\], Lemma \[lem:phi\_cont\_xy-crit1\], Lemma \[l:phi-y-abs-cont-crit1\] and . The subsequent non-trivial step is to verify that $p^{\kappa}$ is the actual solution. To this end, in Section \[sec:phi\] we need extra constraints which eventually result in $\Qa$ and $\Qb$, see for instance Lemma \[e:L-on-phi-y-crit1\]. In Section \[sec:p\_kappa\] we collect initial properties of $p^{\kappa}$. In Section \[sec:Main\] we establish a nonlocal maximum principle, analyze the semigroup $(P_t^{\kappa})_{t{\geqslant}0}$, complement the fundamental properties of $p^{\kappa}$ and prove Theorems \[t:intro-main\]–\[thm:lower-bound\]. Section \[sec:appA\] contains auxiliary results. Proofs that are the same as in [@GS-2018] are reduced to a minimum; we only point out which facts are needed. Other related papers treat for instance (symmetric) singular L[é]{}vy measures [@BKS-2017], [@KR-2017] or (symmetric) exponential L[é]{}vy measures [@KL-2018].
We also list some papers that use different techniques to associate a semigroup with an operator, such as symbolic calculus [@MR0367492], [@MR0499861], [@MR666870], [@MR1659620], [@MR1254818], [@MR1917230], [@MR2163294], [@MR2456894], Dirichlet forms [@MR2778606], [@MR898496], [@MR2492992], [@MR2443765], [@MR2806700] or perturbation series [@MR1310558], [@MR2283957], [@MR2643799], [@MR2876511], [@MR3550165], [@MR3295773]. For probabilistic methods and applications we refer the reader to [@MR3022725], [@MR3544166], [@MR1341116], [@MR3765882], [@K-2015], [@KR-2017]. Throughout the article $\omega_d=2\pi^{d/2}/\Gamma(d/2)$ is the surface measure of the unit sphere in ${\mathbb{R}}^d$. By $c(d,\ldots)$ we denote a generic positive constant that depends only on the listed parameters $d,\ldots$. As usual $a\land b=\min\{a,b\}$ and $a\vee b = \max\{a,b\}$. We use “$:=$” to denote a definition. In what follows the constants ${\gamma_0}$, $\kappa_0$, $\kappa_1$, $\kappa_2$, $\beta$, ${\kappa_3}$, ${\kappa_4}$, ${\alpha_h}$, $C_h$, ${\beta_h}$, $c_h$ can be regarded as fixed. We will also need a non-increasing function $${\Theta}(t):= 1+\ln\left(1 \vee \left[h^{-1}(1/t)\right]^{-1}\right),\qquad t>0\,.$$ Excluding Sections \[sec:Main\] and \[sec:appA\], we explicitly formulate all assumptions in lemmas, corollaries, propositions and theorems. [**In the whole Section \[sec:Main\] we assume that either $\Qa$ or $\Qb$ holds**]{}. Analysis of the heat kernel of $\LL^{\mathfrak{K}}$ {#sec:analysis_LL} =================================================== Assume that a function $\mathfrak{K}\colon {{{\mathbb{R}}^{d}}}\to [0,\infty)$ is such that $$\begin{aligned} \label{ineq:k-bounded} 0<\kappa_0 {\leqslant}\mathfrak{K}(z) {\leqslant}\kappa_1\,,\end{aligned}$$ and $$\begin{aligned} \label{ineq:k-int_control} \left| \int_{r{\leqslant}|z|<1} z\, \mathfrak{K}(z) J(z)dz \right|{\leqslant}{\kappa_3}rh(r)\,, \qquad r\in (0,1].\end{aligned}$$ For $J(z)$ satisfying and we consider an operator (cf.
) $$\LL^{\mathfrak{K}}f(x):= \int_{{{{\mathbb{R}}^{d}}}}( f(x+z)-f(x)- {\mathds{1}}_{|z|<1} \left<z,\nabla f(x)\right>)\,\mathfrak{K}(z)J(z)\, dz \,.$$ The operator uniquely determines a L[é]{}vy process and its density $p^{\mathfrak{K}}(t,x,y)=p^{\mathfrak{K}}(t,y-x)$ (see [@GS-2018 Section 6]; in particular, [@GS-2018 (96)] holds by , [@GS-2018 (86)] and ). To simplify the notation we introduce $$\begin{aligned} \delta_{1.r}^{\mathfrak{K}} (t,x,y;z)&:=p^{\mathfrak{K}}(t,x+z,y)-p^{\mathfrak{K}}(t,x,y)-{\mathds{1}}_{|z|<r}\left< z,\nabla_x p^{\mathfrak{K}}(t,x,y)\right>,\end{aligned}$$ and $\delta^{\mathfrak{K}}=\delta_{1.1}^{\mathfrak{K}}$. Thus we have $$\begin{aligned} \LL_x^{\mathfrak{K}_1} \,p^{\mathfrak{K}_2}(t,x,y)=\int_{{{{\mathbb{R}}^{d}}}}\delta^{\mathfrak{K}_2} (t,x,y;z)\, \mathfrak{K}_1(z)J(z)dz\,.\end{aligned}$$ The result below is the initial point of the whole paper. \[prop:gen\_est\_crit\] Assume , , , . For every $T>0$ and $\bbbeta\in \mathbb{N}_0^d$ there exists a constant $c=c(d,T,\bbbeta,{\sigma})$ such that for all $t\in (0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |\partial_x^{\bbbeta} p^{\mathfrak{K}}\left(t,x,y\right)|{\leqslant}c \left[h^{-1}(1/t) \right]^{-|\bbbeta|} {\Upsilon}_t(y-x)\,.\end{aligned}$$ The result follows from [@GS-2017 Theorem 5.6 and Remark 5.7]. \[prop:gen\_est\_low-crit1\] Assume , , , . For every $T,\theta>0$ there exists a constant $\tilde{c}=\tilde{c}(d,T,\theta,\nu,{\sigma})$ such that for all $t\in (0,T]$ and $|x-y|{\leqslant}\theta h^{-1}(1/t)$, $$\begin{aligned} p^{\mathfrak{K}}\left(t,x,y\right){\geqslant}\tilde{c} \left[ h^{-1}(1/t)\right]^{-d}\,.\end{aligned}$$ We use [@GS-2017 Corollary 5.11] with $x-y- t{b}_{[h_0^{-1}(1/t)]}$ in place of $x$ as we have by that $|t{b}_{[h_0^{-1}(1/t)]}|{\leqslant}a h_0^{-1}(1/t)$ for $a=a(d,T,{\sigma})$. To shorten the notation we define the following expressions. 
For $t>0$, $x,y,z\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} {\mathcal{F}}_{1}&:={\Upsilon}_t(y-x-z){\mathds{1}}_{|z|{\geqslant}h^{-1}(1/t)}+ \left[ \left(\frac{|z|}{h^{-1}(1/t)} \right)^2 \land \left(\frac{|z|}{h^{-1}(1/t)} \right) \right] {\Upsilon}_t(y-x),\\ {\mathcal{F}}_{2}&:={\Upsilon}_t(y-x-z){\mathds{1}}_{|z|{\geqslant}h^{-1}(1/t)}+ \left[ \left(\frac{|z|}{h^{-1}(1/t)}\right)\wedge 1\right] {\Upsilon}_t(y-x).\end{aligned}$$ Hereinafter we add arguments $(t,x,y;z)$ when referring to functions given above. \[lem:pk-collected\] Assume , , , . For every $T>0$ there exists a constant $c=c(d,T,{\sigma})$ such that for all $r>0$, $t\in (0,T]$, $x,x',y,z\in{{{\mathbb{R}}^{d}}}$ we have $$\begin{aligned} \left|p^{\mathfrak{K}}(t,x+z,y)-p^{\mathfrak{K}}(t,x,y)\right|&{\leqslant}c\, {\mathcal{F}}_2(t,x,y;z)\,,\label{ineq:est_diff_1} \\ \left|\nabla_x p^{\mathfrak{K}}(t,x+z,y)-\nabla_x p^{\mathfrak{K}}(t,x,y)\right|&{\leqslant}c \left[h^{-1}(1/t)\right]^{-1} {\mathcal{F}}_2(t,x,y;z)\,,\label{ineq:est_grad_1}\\ |\delta_{1.r}^{\mathfrak{K}}(t,x,y;z)| &{\leqslant}c \big( {\mathcal{F}}_{1}(t,x,y;z){\mathds{1}}_{|z|<r}+{\mathcal{F}}_{2}(t,x,y;z){\mathds{1}}_{|z|{\geqslant}r}\big)\,, \label{ineq:est_delta_1_crit}\end{aligned}$$ and whenever $|x'-x|<h^{-1}(1/t)$, then $$\begin{aligned} \label{ineq:diff_delta_1_crit} |\delta_{1.r}^{\mathfrak{K}}(t,x',y;z)-\delta_{1.r}^{\mathfrak{K}}(t,x,y;z)| {\leqslant}c\left(\frac{|x'-x|}{h^{-1}(1/t)}\right) \big( {\mathcal{F}}_{1}(t,x,y;z){\mathds{1}}_{|z|<r}+{\mathcal{F}}_{2}(t,x,y;z){\mathds{1}}_{|z|{\geqslant}r}\big)\,.\end{aligned}$$ The inequalities follow from Proposition \[prop:gen\_est\_crit\], cf. [@GS-2018 Lemma 2.3–2.8]. 
Due to [@GS-2018 Corollary 5.10] the inequalities and can be written equivalently as $$\begin{aligned} \left|p^{\mathfrak{K}}(t,x',y)-p^{\mathfrak{K}}(t,x,y)\right|&{\leqslant}c \left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \big( {\Upsilon}_t(y-x') + {\Upsilon}_t(y-x)\big)\,, \\ \left|\nabla_x p^{\mathfrak{K}}(t,x',y)-\nabla_x p^{\mathfrak{K}}(t,x,y)\right|&{\leqslant}c \left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \left[h^{-1}(1/t)\right]^{-1}\big( {\Upsilon}_t(y-x') + {\Upsilon}_t(y-x)\big)\,.\end{aligned}$$ We choose the former form to reduce the number of cases to discuss when integrating those expressions. On the other hand, from the latter we easily get what follows (cf. [@GS-2018]). \[lem:pkw\_holder\] Assume , , , . For every $T>0$ there exists a constant $c=c(d,T,{\sigma})$ such that for all $t\in(0,T]$, $x,x',y,w \in {{{\mathbb{R}}^{d}}}$ and $\gamma\in [0,1]$, $$\begin{aligned} |p^{\mathfrak{K}}(t,x',y)-p^{\mathfrak{K}}(t,x,y) | {\leqslant}c (|x-x'|^{\gamma}\land 1) \left[h^{-1}(1/t)\right]^{-\gamma} \big( {\Upsilon}_t(y-x') + {\Upsilon}_t(y-x)\big).\end{aligned}$$ In the next lemma we estimate $\LL_x^{\mathfrak{K}_1} p^{\mathfrak{K}}(t,x,y)$. \[lem:Lkp\_abs\] Assume , and let $\mathfrak{K}$, $\mathfrak{K}_1$ satisfy , . For every $T>0$ there exists a constant $c=c(d,T,{\sigma})$ such that for all $t\in (0,T]$, $x,y,w\in{{{\mathbb{R}}^{d}}}$ we have $$\begin{aligned} \label{ineq:Lkp_abs} \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}} (t,x,y;z) \, \mathfrak{K}_1(z) J(z)dz \right| {\leqslant}c t^{-1} {\Upsilon}_t(y-x)\,.\end{aligned}$$ Let ${\rm I}$ be the left hand side of . We note that the integral defining ${\rm I}$ converges absolutely.
Using with $r=h^{-1}(1/t)$, and Lemma \[lem:pk-collected\], $$\begin{aligned} {\rm I} &{\leqslant}c\int_{|z|{\geqslant}h^{-1}(1/t)} {\mathcal{F}}_{2} (t,x,y;z) \, \mathfrak{K}_1(z) J(z)dz +c \int_{|z|< h^{-1}(1/t)} {\mathcal{F}}_{1} (t,x,y;z) \, \mathfrak{K}_1(z) J(z)dz\\ &\quad + \left| \int_{{{{\mathbb{R}}^{d}}}} z \left({\mathds{1}}_{|z|<h^{-1}(1/t)} - {\mathds{1}}_{|z|<1}\right) \mathfrak{K}_1(z) J(z)dz\right| |\nabla_x p^{\mathfrak{K}}(t,x,y)|\,.\end{aligned}$$ By and Proposition \[prop:gen\_est\_crit\] the last term is bounded by $c\, t^{-1}{\Upsilon}_t(y-x)$. The same holds for the first two terms by , , [@GS-2018 Lemma 5.1 and 5.9]. \[thm:delta\_crit\] Assume , , , . For every $T>0$ the inequalities $$\begin{aligned} &\int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}} (t,x,y;z)|\, J(z)dz {\leqslant}c\, {\vartheta}(t)\, t^{-1} {\Upsilon}_t(y-x)\,, \label{e:fract-der-est1-crit}\\ \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}} (t,x',y;z)-&\delta^{\mathfrak{K}} (t,x,y;z)|\, J(z)dz {\leqslant}c \left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) {\vartheta}(t)\, t^{-1} \big( {\Upsilon}_t(y-x') + {\Upsilon}_t(y-x)\big),\nonumber \end{aligned}$$ hold for all $t\in(0,T]$, $x,x',y\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c=c(d,T,{\sigma})$ if ${\alpha_h}=1$, 2. ${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c=c(d,T,{\sigma},{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$. Let $r=h^{-1}(1/t)$.
By we get $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} &|\delta^{\mathfrak{K}} (t,x,y;z)|\, J(z)dz \\ &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} |\delta_{1.r}^{\mathfrak{K}} (t,x,y;z)|\, J(z)dz + \int_{{{{\mathbb{R}}^{d}}}} |z| \left| {\mathds{1}}_{|z|<r}-{\mathds{1}}_{|z|<1} \right| J(z)\, dz\, |\nabla_x p^{\mathfrak{K}}(t,x,y)| \\ &{\leqslant}c \int_{|z|{\geqslant}h^{-1}(1/t)} {\mathcal{F}}_{2} (t,x,y;z) \, J(z)dz +c \int_{|z|< h^{-1}(1/t)} {\mathcal{F}}_{1} (t,x,y;z) \, J(z)dz\\ &+\int_{{{{\mathbb{R}}^{d}}}} |z| \left| {\mathds{1}}_{|z|<h^{-1}(1/t)}-{\mathds{1}}_{|z|<1} \right| J(z) dz\, \left[h^{-1}(1/t)\right]^{-1} {\Upsilon}_t(y-x)\,.\end{aligned}$$ The first inequality follows from [@GS-2018 Lemma 5.1 and 5.9] and Lemma \[lem:int\_J\]. Now we prove the second inequality. If $|x'-x|{\geqslant}h^{-1}(1/t)$, then $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \left(|\delta^{\mathfrak{K}} (t,x',y;z)|+|\delta^{\mathfrak{K}} (t,x,y;z)|\right)J(z)dz {\leqslant}c \, {\vartheta}(t) t^{-1} \left( {\Upsilon}_t(y-x')+{\Upsilon}_t(y-x) \right)\,.\end{aligned}$$ If $|x'-x|< h^{-1}(1/t)$ we rely on , and again [@GS-2018 Lemma 5.1 and 5.9] and Lemma \[lem:int\_J\]. \[lem:rozne\_1\] Assume , and let $\mathfrak{K}_1$, $\mathfrak{K}_2$ satisfy , . For all $t>0$, $x,y\in{{{\mathbb{R}}^{d}}}$ and $s\in (0,t)$, $$\begin{aligned} \frac{d}{d s} \int_{{{{\mathbb{R}}^{d}}}} &p^{\mathfrak{K}_1}(s,x,z) p^{\mathfrak{K}_2}(t-s,z,y)\,dz\\ &= \int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_1}p^{\mathfrak{K}_1}(s,x,z) \, p^{\mathfrak{K}_2}(t-s,z,y)\,dz - \int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_1}(s,x,z)\, \LL_z^{\mathfrak{K}_2} p^{\mathfrak{K}_2}(t-s,z,y) \,dz\,,\end{aligned}$$ and $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \LL^{\mathfrak{K}}_x p^{\mathfrak{K}_1}(s,x,z) p^{\mathfrak{K}_2}(t-s,z,y)\,dz = &\int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_1}(s,x,z) \, \LL_z^{\mathfrak{K}} p^{\mathfrak{K}_2}(t-s,z,y)\,dz\,.\end{aligned}$$ The proof is the same as in [@GS-2018 Lemma 2.10]. 
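For orientation on the two choices of ${\vartheta}$ in Theorem \[thm:delta\_crit\], the following sketch compares them under the illustrative stable-like scaling $h^{-1}(1/t)=t^{1/\alpha}$; this scaling, and the parameter values used, are assumptions of the snippet, not taken from the text.

```python
import math

# Compare vartheta(t) = Theta(t) (the case alpha_h = 1) with
# vartheta(t) = t [h^{-1}(1/t)]^{-1} (the case alpha_h <= beta_h < 1),
# under the assumed stable-like scaling h^{-1}(1/t) = t^{1/alpha}.

def h_inv_of_1_over_t(t, alpha):
    return t ** (1.0 / alpha)

def theta(t, alpha):
    # Theta(t) = 1 + ln(1 v [h^{-1}(1/t)]^{-1}); non-increasing in t
    return 1.0 + math.log(max(1.0, 1.0 / h_inv_of_1_over_t(t, alpha)))

def vartheta_subcrit(t, alpha):
    # t [h^{-1}(1/t)]^{-1} = t^{1 - 1/alpha}
    return t / h_inv_of_1_over_t(t, alpha)

# Theta grows only logarithmically as t -> 0+ and equals 1 for t >= 1:
print(theta(1.0, 1.0))             # 1.0
print(theta(math.exp(-2.0), 1.0))  # approximately 3.0
# for alpha = 3/4 the sub-critical vartheta blows up like t^{-1/3}:
print(vartheta_subcrit(0.001, 0.75))
```

So in the critical case ${\alpha_h}=1$ the singularity of the bound is only logarithmically worse than $t^{-1}{\Upsilon}_t(y-x)$, while for ${\alpha_h}<1$ it is polynomially worse.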
We do not use Theorem \[thm:delta\_crit\] here, but only the fact that for every $0<t_0<T$ there exists a constant $c=c(d,T,t_0,{\sigma})$ such that for all $t\in[t_0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{ineq:aux_Q0} \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_1} (t,x,y;z)|\, J(z)dz {\leqslant}c\, t^{-1}{\Upsilon}_t(y-x){\leqslant}c t_0^{-1}{\Upsilon}_{t_0}(y-x)\,,\end{aligned}$$ which is valid under the assumptions of the lemma, see . Analysis of the heat kernel of $\LL^{\mathfrak{K}_w}$ {#sec:analysis_LL_2} ===================================================== Consider $J(z)$ satisfying and , and $\kappa(x,z)$ satisfying and . For a fixed $w\in {{{\mathbb{R}}^{d}}}$ we define $\mathfrak{K}_w(z)=\kappa(w,z)$, for which then and hold. Let $p^{\mathfrak{K}_w}(t,x,y)$ be the heat kernel of the operator $\LL^{{\mathfrak K}_w}$ as in Section \[sec:analysis\_LL\]. That procedure is known as freezing the coefficients of the operator $\LL^{\kappa}$ given in . For all $t>0$, $x,y,w\in{{{\mathbb{R}}^{d}}}$, $$\label{eq:p_gen_klas} \partial_t p^{\mathfrak{K}_w}(t,x,y)= \LL_x^{\mathfrak{K}_w} p^{\mathfrak{K}_w}(t,x,y)\,,$$ where for every $w'\in{{{\mathbb{R}}^{d}}}$ we have $$\LL_x^{\mathfrak{K}_{w'}} p^{\mathfrak{K}_w}(t,x,y) =\int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_w}(t,x,y;z) \kappa(w',z)J(z)dz\,.$$ We will often use the decomposition in the form $$\begin{aligned} \LL_x^{\mathfrak{K}_w} p^{\mathfrak{K}}(t,x,y) =&\int_{{{{\mathbb{R}}^{d}}}} \delta_{1.r}^{\mathfrak{K}}(t,x,y;z)\, \kappa(w,z)J(z) dz \\ &+\left(\int_{{{{\mathbb{R}}^{d}}}} z \left( {\mathds{1}}_{|z|<r}-{\mathds{1}}_{|z|<1} \right) \kappa(w,z)J(z)\, dz\right) \cdot \nabla_x p^{\mathfrak{K}}(t,x,y)\,.\end{aligned}$$ First we deal with $ \big( \LL_x^{\mathfrak{K}_{w'}}-\LL_x^{\mathfrak{K}_w} \big)p^{\mathfrak{K}}(t,x,y) $. Assume $\Qzero$ and let $\mathfrak{K}$ satisfy , .
For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4})$ such that for all $t\in (0,T]$, $x,y, w, w'\in{{{\mathbb{R}}^{d}}}$ we have $$\begin{aligned} \label{ineq:Lkp_abs-H} \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}} (t,x,y;z) \, \left( \kappa(w',z)- \kappa(w,z)\right) J(z)dz \right| {\leqslant}c \left( |w'-w|^{\beta}\land 1 \right) t^{-1} {\Upsilon}_t(y-x)\,.\end{aligned}$$ If $|w'-w|{\geqslant}1$ we apply . Let ${\rm I}$ be the left hand side of and $|w'-w|< 1$. We also note that the integral defining ${\rm I}$ converges absolutely. Using with $r=h^{-1}(1/t)$, and , $$\begin{aligned} {\rm I} &{\leqslant}c\int_{|z|{\geqslant}h^{-1}(1/t)} {\mathcal{F}}_{2} (t,x,y;z) \, |\kappa(w',z)- \kappa(w,z)| J(z)dz\\ &\quad+c \int_{|z|< h^{-1}(1/t)} {\mathcal{F}}_{1} (t,x,y;z) \, |\kappa(w',z)- \kappa(w,z)| J(z)dz\\ &\quad + \left| \int_{{{{\mathbb{R}}^{d}}}} z \left({\mathds{1}}_{|z|<h^{-1}(1/t)} - {\mathds{1}}_{|z|<1}\right) \big( \kappa(w',z)- \kappa(w,z)\big) J(z)dz\right| |\nabla_x p^{\mathfrak{K}}(t,x,y)|\,.\end{aligned}$$ By and Proposition \[prop:gen\_est\_crit\] the last term is bounded by $|w'-w|^{\beta}t^{-1}{\Upsilon}_t(y-x)$. The same is true for the first two terms by , , [@GS-2018 Lemma 5.1 and 5.9]. We prove the estimate for $\big(\LL_{x'}^{\mathfrak{K}_{w'}}-\LL_{x'}^{\mathfrak{K}_w} \big) p^{\mathfrak{K}}(t,x',y)-\big(\LL_{x}^{\mathfrak{K}_{w'}}-\LL_{x}^{\mathfrak{K}_w} \big) p^{\mathfrak{K}}(t,x,y)$. Assume $\Qzero$ and let $\mathfrak{K}$ satisfy , . 
For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4})$ such that for all $t\in(0,T]$, $x,x',y, w, w'\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{ineq:Lkp_abs-H-H} &\left| \int_{{{{\mathbb{R}}^{d}}}} \left( \delta^{\mathfrak{K}} (t,x',y;z)-\delta^{\mathfrak{K}} (t,x,y;z) \right) \left( \kappa(w',z)- \kappa(w,z)\right) J(z)dz \right| \nonumber \\ &\qquad {\leqslant}c \left(\frac{|x'-x|}{h^{-1}(1/t)} \land 1\right) \left( |w'-w|^{\beta}\land 1 \right) t^{-1} \big( {\Upsilon}_t(y-x') + {\Upsilon}_t(y-x)\big)\,.\end{aligned}$$ If $|x'-x|{\geqslant}h^{-1}(1/t)$ we apply . Let ${\rm I}$ be the left hand side of and $|x'-x|< h^{-1}(1/t)$. By with $r=h^{-1}(1/t)$ and , $$\begin{aligned} {\rm I} &{\leqslant}c \left(\frac{|x'-x|}{h^{-1}(1/t)}\right) \int_{|z|{\geqslant}h^{-1}(1/t)} {\mathcal{F}}_{2} (t,x,y;z) \, |\kappa(w',z)- \kappa(w,z)| J(z)dz\\ &\quad+c \left(\frac{|x'-x|}{h^{-1}(1/t)}\right) \int_{|z|< h^{-1}(1/t)} {\mathcal{F}}_{1} (t,x,y;z) \, |\kappa(w',z)- \kappa(w,z)| J(z)dz\\ &\quad + \left| \int_{{{{\mathbb{R}}^{d}}}} z \left({\mathds{1}}_{|z|<h^{-1}(1/t)} - {\mathds{1}}_{|z|<1}\right) \big( \kappa(w',z)- \kappa(w,z)\big) J(z)dz\right| |\nabla_{x'} p^{\mathfrak{K}}(t,x',y)-\nabla_{x}p^{\mathfrak{K}}(t,x,y)|\,.\end{aligned}$$ By and we bound the last expression by $(|w'-w|^{\beta}\land 1 ) (|x'-x|/h^{-1}(1/t)) t^{-1}{\Upsilon}_t(y-x)$. For the first two terms we rely on , , [@GS-2018 Lemma 5.1 and 5.9]. \[prop:Hcont\_kappa\_crit1\] Assume $\Qzero$. 
For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4})$ such that for all $t\in (0,T]$, $x,y,w,w'\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |p^{\mathfrak{K}_{w'}}(t,x,y)-p^{\mathfrak{K}_w}(t,x,y)| &{\leqslant}c\, (|w'-w|^{\beta}\land 1)\,{\Upsilon}_t(y-x)\,,\\ |\nabla_x p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_x p^{\mathfrak{K}_w}(t,x,y)| &{\leqslant}c (|w'-w|^{\beta}\land 1) \left[h^{-1}(1/t)\right]^{-1} {\Upsilon}_t(y-x) \,,\\ \left| \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_{w'}}(t,x,y)- \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_{w}}(t,x,y)\right| &{\leqslant}c (|w'-w|^{\beta}\land 1)\, t^{-1}{\Upsilon}_t(y-x) \,. \end{aligned}$$ Moreover, for every $T>0$ the inequality $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_{w'}} (t,x,y;z)-\delta^{\mathfrak{K}_w} (t,x,y;z)| \,J(z)dz &{\leqslant}c (|w'-w|^{\beta}\land 1) \,{\vartheta}(t)\, t^{-1}{\Upsilon}_t(y-x) \,. \label{e:delta-difference-abs-crit1}\end{aligned}$$ holds for all $t\in (0,T]$, $x,y,w,w'\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4})$ if ${\alpha_h}=1$, 2. ${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$. In what follows we use [@GS-2018 Corollary 5.14, Lemma 5.6] and the monotonicity of $h^{-1}$ without further comment, cf.
[@GS-2018 Theorem 2.11].\ (i) As in [@GS-2018], using Lemma \[lem:rozne\_1\] we get $$\begin{aligned} p^{\mathfrak{K}_1}(t,x,y)-p^{\mathfrak{K}_2}(t,x,y) &= \lim_{\varepsilon_1 \to 0^+} \int_{\varepsilon_1}^{t/2} \int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_1}(s,x,z) \left( \LL_z^{\mathfrak{K}_1} - \LL_z^{\mathfrak{K}_2}\right) p^{\mathfrak{K}_2}(t-s,z,y)\,dzds\\ &+ \lim_{\varepsilon_2\to 0^+ } \int_{t/2}^{t-\varepsilon_2} \int_{{{{\mathbb{R}}^{d}}}} \left( \LL_x^{\mathfrak{K}_1} - \LL_x^{\mathfrak{K}_2}\right) p^{\mathfrak{K}_1}(s,x,z) p^{\mathfrak{K}_2}(t-s,z,y)\,dzds \,.\end{aligned}$$ By Proposition \[prop:gen\_est\_crit\] and , $$\begin{aligned} &\int_{\varepsilon}^{t/2} \int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_{w'}}(s,x,z)\, | \!\left( \LL_z^{\mathfrak{K}_{w'}} - \LL_z^{\mathfrak{K}_w}\right) p^{\mathfrak{K}_w}(t-s,z,y)|\,dzds\\ &{\leqslant}c\, (|w'-w|^{\beta}\land 1) \int_{\varepsilon}^{t/2} \int_{{{{\mathbb{R}}^{d}}}} {\Upsilon}_s (z-x)\, (t-s)^{-1}{\Upsilon}_{t-s}(y-z) \,dzds\\ &{\leqslant}c\, (|w'-w|^{\beta}\land 1)\, {\Upsilon}_t(y-x) \int_{\varepsilon}^{t/2} t^{-1}ds\,.\end{aligned}$$ Similarly, $$\begin{aligned} &\int_{t/2}^{t-\varepsilon} \int_{{{{\mathbb{R}}^{d}}}} |\!\left( \LL_x^{\mathfrak{K}_{w'}} - \LL_x^{\mathfrak{K}_w}\right) p^{\mathfrak{K}_{w'}}(s,x,z) |\, p^{\mathfrak{K}_w}(t-s,z,y)\,dzds {\leqslant}c \, (|w'-w|^{\beta}\land 1)\, {\Upsilon}_t(y-x)\,.\end{aligned}$$ \(ii) Let $w_0\in{{{\mathbb{R}}^{d}}}$ be fixed. Define $\mathfrak{K}_0(z)=(\kappa_0/(2\kappa_1)) \kappa(w_0,z)$ and $\widehat{\mathfrak{K}}_w (z)=\mathfrak{K}_w(z)- \mathfrak{K}_0(z)$.
By the construction of the L[é]{}vy process we have $$\begin{aligned} \label{eq:przez_k_0-impr} p^{\mathfrak{K}_w}(t,x,y)=\int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_0}(t,x,\xi) p^{\widehat{\mathfrak{K}}_w}(t,\xi,y)\,d\xi\,.\end{aligned}$$ Then by and Proposition \[prop:gen\_est\_crit\], $$\begin{aligned} |\nabla_x p^{\mathfrak{K}_{w'}}(t,x,y)-\nabla_x p^{\mathfrak{K}_w}(t,x,y)| & {\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left| \nabla_x p^{\mathfrak{K}_0}(t, x,\xi) \right| \left| p^{\widehat{\mathfrak{K}}_{w'}}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_w}(t,\xi,y)\right| d\xi\\ &{\leqslant}c (|w'-w|\land 1) \left[h^{-1}(1/t)\right]^{-1} {\Upsilon}_t(y-x)\,.\end{aligned}$$ \(iii) By we have $$\begin{aligned} \delta^{\mathfrak{K}_{w'}} (t,x,y;z)-\delta^{\mathfrak{K}_w} (t,x,y;z) = \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_0}(t,x,\xi;z) \left(p^{\widehat{\mathfrak{K}}_{w'}}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_w}(t,\xi,y)\right) d\xi.\end{aligned}$$ Then by , $$\begin{aligned} \left| \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_{w'}}(t,x,y)- \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_{w}}(t,x,y)\right| &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left| \LL_x^{\mathfrak{K}_x}p^{\mathfrak{K}_0}(t,x,\xi ) \right| \left| p^{\widehat{\mathfrak{K}}_{w'}}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_w}(t,\xi,y)\right| d\xi\\ &{\leqslant}c (|w'-w|\land 1)\, t^{-1} {\Upsilon}_{t}(y-x)\,.\end{aligned}$$ \(iv) By Theorem \[thm:delta\_crit\], $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} &|\delta^{\mathfrak{K}_{w'}} (t,x,y;z)-\delta^{\mathfrak{K}_w} (t,x,y;z)| \,J(z)dz \\ &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left( \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_0}(t,x,\xi;z)| \,J(z)dz\right) \left| p^{\widehat{\mathfrak{K}}_{w'}}(t,\xi,y)-p^{\widehat{{\mathfrak{K}}}_w}(t,\xi,y)\right| d\xi\\ &{\leqslant}c (|w'-w|\land 1)\int_{{{{\mathbb{R}}^{d}}}} {\vartheta}(t) t^{-1}{\Upsilon}_t(\xi-x) {\Upsilon}_t(y-\xi)\, d\xi \\ &{\leqslant}c(|w'-w|\land 1) {\vartheta}(t) t^{-1} 
{\Upsilon}_t(y-x)\,.\end{aligned}$$ \[lem:cont\_frcoef\] Assume $\Qzero$. The functions $p^{\mathfrak{K}_w}(t,x,y)$ and $\nabla_x p^{\mathfrak{K}_w}(t,x,y)$ are jointly continuous in $(t, x, y,w) \in (0,\infty)\times ({{{\mathbb{R}}^{d}}})^3$. The function $\LL_x^{\mathfrak{K}_{v}} p^{\mathfrak{K}_{w}}(t,x,y)$ is jointly continuous in $(t,x,y,w,v)\in (0,\infty)\times ({{{\mathbb{R}}^{d}}})^4$. Furthermore, $$\begin{aligned} \label{e:some-estimates-2c-crit1} \lim_{t \to 0^+ } \sup_{x\in{{{\mathbb{R}}^{d}}}} \left| \int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_y}(t,x,y)\, dy -1\right|=0\,.\end{aligned}$$ The result follows from Proposition \[prop:Hcont\_kappa\_crit1\], [@GS-2018 Lemma 6.1], and Lemma \[l:convolution\], cf. [@GS-2018 Lemma 3.1, 3.2 and 3.4]. \[e:some-estimates-2bb-crit1\] Assume $\Qzero$. Let $\beta_1\in [0,\beta]\cap [0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ such that for all $t\in (0,T]$, $x\in{{{\mathbb{R}}^{d}}}$, $$\left|\int_{{{{\mathbb{R}}^{d}}}} \nabla_x p^{\mathfrak{K}_y} (t,x,y)\,dy \right| {\leqslant}c\! \left[h^{-1}(1/t)\right]^{-1+\beta_1}\,.$$ The inequality results from , Proposition \[prop:Hcont\_kappa\_crit1\] and Lemma \[l:convolution\], cf. [@GS-2018 Lemma 3.4]. \[l:some-estimates-3b-crit1\] Assume $\Qzero$. Let $\beta_1\in [0,\beta]\cap [0,{\alpha_h})$. For every $T>0$ the inequality $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \left|\int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_y} (t,x,y;z) \,dy \right| J(z)dz &{\leqslant}c\, {\vartheta}(t)\, t^{-1}\left[h^{-1}(1/t)\right]^{\beta_1}, \end{aligned}$$ holds for all $t\in (0,T]$, $x\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ if ${\alpha_h}=1$, 2. ${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$.
The proof requires the use of , , and Lemma \[l:convolution\], cf. [@GS-2018 Lemma 3.4]. \[l:some-estimates-3b-crit1-impr\] Assume $\Qzero$. Let $\beta_1\in [0,\beta]\cap [0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ such that for all $t\in (0,T]$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \left| \int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_y} (t,x,y) \,dy \right| &{\leqslant}c t^{-1}\left[h^{-1}(1/t)\right]^{\beta_1}\,.\end{aligned}$$ By Proposition \[prop:Hcont\_kappa\_crit1\] we have $$\begin{aligned} \left| \int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_y} (t,x,y) \,dy \right|= \left| \int_{{{{\mathbb{R}}^{d}}}}\left( \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_y} (t,x,y)- \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_x} (t,x,y) \right)dy \right| {\leqslant}c \int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{\beta_1}}(t,x-y)dy\,.\end{aligned}$$ The result follows from Lemma \[l:convolution\](a). Levi’s construction of heat kernels {#sec:constr} =================================== We use the notation of the previous section. For $\gamma,\beta\in {\mathbb{R}}$ we introduce the following function $$\begin{aligned} \label{def:err} {\rho_{\gamma}^{\beta}}(t,x):= \left[h^{-1}(1/t)\right]^{\gamma} \left(|x|^{\beta}\land 1\right) t^{-1} {\Upsilon}_t(x)\,.\end{aligned}$$ Construction of $q(t,x,y)$ {#sec:q} -------------------------- For $(t,x,y)\in (0,\infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$ define $$\begin{aligned} q_0(t,x,y):= \int_{{{{\mathbb{R}}^{d}}}}\delta^{\mathfrak{K}_y}(t,x,y;z)\left(\kappa(x,z)-\kappa(y,z)\right)J(z)dz = \big(\LL_x^{{\mathfrak K}_x}-\LL_x^{{\mathfrak K}_y}\big) p^{\mathfrak{K}_y}(t,x,y)\,.$$ \[l:estimates-q0-crit1\] Assume $\Qzero$. 
For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2, {\kappa_4}){\geqslant}1$ such that for all $\beta_1\in[0,\beta]$, $t\in (0,T]$ and $x,x',y,y'\in{{{\mathbb{R}}^{d}}}$ $$\begin{aligned} \label{e:q0-estimate-crit1} |q_0(t,x,y)|{\leqslant}c {\rho_{0}^{\beta_1}}(t,y-x)\,,\end{aligned}$$ and for every $\gamma\in [0,\beta_1]$, $$\begin{aligned} &|q_0(t,x,y)-q_0(t,x',y)|\nonumber\\ &{\leqslant}c \left(|x-x'|^{\beta_1-\gamma}\land 1\right)\left\{\left({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\right)(t,x-y) +\left({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\right)(t,x'-y)\right\},\label{e:estimate-step3-crit1}\end{aligned}$$ and $$\begin{aligned} &|q_0(t,x,y)-q_0(t,x,y')|\nonumber \\ &{\leqslant}c \left(|y-y'|^{\beta_1-\gamma}\land 1\right)\left\{\left({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\right)(t,x-y) +\left({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\right)(t,x-y')\right\}. \label{e:estimate-q0-2-crit1}\end{aligned}$$ \(i) follows from .\ (ii) For $|x-x'|{\geqslant}1$ the inequality holds by and [@GS-2018 (92)]: $$\begin{aligned} |q_0(t,x,y)| {\leqslant}c {\rho_{0}^{\beta_1}}(t,y-x) {\leqslant}c\left[ h^{-1}(1/T)\vee 1\right]^{\beta_1-\gamma} {\rho_{\gamma-\beta_1}^{\beta_1}}(t,y-x)\,.\end{aligned}$$ For $1{\geqslant}|x-x'|{\geqslant}h^{-1}(1/t)$ the result follows from and $$\begin{aligned} |q_0(t,x,y)| {\leqslant}c {\rho_{0}^{\beta_1}}(t,y-x) = c \left[ h^{-1}(1/t)\right]^{\beta_1-\gamma} {\rho_{\gamma-\beta_1}^{\beta_1}}(t,y-x) {\leqslant}c |x-x'|^{\beta_1-\gamma} {\rho_{\gamma-\beta_1}^{\beta_1}}(t,y-x)\,.\end{aligned}$$ Now, and provide that $$\begin{aligned} & |q_0(t,x,y)-q_0(t,x',y)|=\left|\int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_y} (t,x,y;z)(\kappa(x,z)-\kappa(y,z))\,J(z)dz\right.\\ & \hspace{0.1\linewidth} -\left. 
\int_{{{{\mathbb{R}}^{d}}}}\delta^{\mathfrak{K}_y}(t,x',y;z)(\kappa(x',z)-\kappa(y,z))\,J(z)dz\right|\\ & \hspace{0.05\linewidth} {\leqslant}\left| \int_{{{{\mathbb{R}}^{d}}}}\left( \delta^{\mathfrak{K}_y}(t,x,y;z)-\delta^{\mathfrak{K}_y}(t,x',y;z)\right) \left(\kappa(x,z)-\kappa(y,z)\right) J(z)dz\right| \\ & \hspace{0.1\linewidth} + \left| \int_{{{{\mathbb{R}}^{d}}}}\left( \delta^{\mathfrak{K}_y} (t,x',y;z)\right) \left(\kappa(x,z)-\kappa(x',z)\right) J(z)dz\right|\\ & \hspace{0.1\linewidth} + c \left(|x-x'|^{\beta_1}\land 1\right)\int_{{{{\mathbb{R}}^{d}}}}|\delta^{\mathfrak{K}_y}(t,x',y;z)|\,J(z)dz\\ {\leqslant}c &\left(|x-y|^{\beta_1}\land 1\right) \left(\frac{|x-x'|}{h^{-1}(1/t)} \land 1\right) \big({\rho_{0}^{0}} (t,x-y)+{\rho_{0}^{0}}(t,x'-y)\big) + c \left(|x-x'|^{\beta_1}\land 1\right) {\rho_{0}^{0}}(t,x'-y).\end{aligned}$$ Applying $(|x-y|^{\beta_1}\land 1){\leqslant}(|x-x'|^{\beta_1}\land 1) + (|x'-y|^{\beta_1}\land 1)$ we obtain $$\begin{aligned} |q_0(t,x,y)-q_0(t,x',y)|{\leqslant}\ &c \left(\frac{|x-x'|}{h^{-1}(1/t)} \land 1\right) \big({\rho_{0}^{\beta_1}} (t,x-y)+{\rho_{0}^{\beta_1}}(t,x'-y)\big)\\ &+c \left(|x-x'|^{\beta_1}\land 1\right){\rho_{0}^{0}}(t,x'-y).\end{aligned}$$ Thus in the last case $|x-x'|{\leqslant}h^{-1}(1/t)\land 1$ we have $|x-x'|/ h^{-1}(1/t){\leqslant}|x-x'|^{\beta_1-\gamma} \left[h^{-1}(1/t)\right]^{\gamma-\beta_1}$ and $|x-x'|^{\beta_1}{\leqslant}|x-x'|^{\beta_1 -\gamma} \left[h^{-1}(1/t)\right]^{\gamma}$.\ (iii) We treat the cases $|y-y'|{\geqslant}1$ and $1{\geqslant}|y-y'|{\geqslant}h^{-1}(1/t)$ like in part (ii). 
Now note that by $\delta^{\mathfrak{K}}(t,x,y;z)=\delta^{\mathfrak{K}}(t,-y,-x;z)$, , and Proposition \[prop:Hcont\_kappa\_crit1\], $$\begin{aligned} &|q_0(t,x,y)-q_0(t,x,y')|\\ &{\leqslant}\left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_y}(t,x,y;z)\left(\kappa(y',z)-\kappa(y,z)\right)J(z)dz\right| \\ & \ \ \ +\left| \int_{{{{\mathbb{R}}^{d}}}}\left(\delta^{\mathfrak{K}_y}(t,x,y;z)-\delta^{\mathfrak{K}_y}(t,x,y';z)\right)\left(\kappa(x,z)-\kappa(y',z)\right)J(z)dz \right|\\ &\ \ \ +\left|\int_{{{{\mathbb{R}}^{d}}}}\left(\delta^{\mathfrak{K}_y}(t,x,y';z)-\delta^{\mathfrak{K}_{y'}}(t,x,y';z)\right)\kappa(x,z) J(z)dz\right|\\ &\ \ \ +\left|-\int_{{{{\mathbb{R}}^{d}}}}\left(\delta^{\mathfrak{K}_y}(t,x,y';z)-\delta^{\mathfrak{K}_{y'}}(t,x,y';z)\right)\kappa(y',z) J(z)dz \right|\\ &{\leqslant}c \left( |y-y'|^{\beta_1}\land 1\right) {\rho_{0}^{0}}(t,x-y)\\ &\quad +c \left( |x-y'|^{\beta_1}\land 1\right) \left(\frac{|y-y'|}{h^{-1}(1/t)} \land 1\right) \left({\rho_{0}^{0}}(t,x-y)+{\rho_{0}^{0}}(t,x-y')\right)\\ &\quad + c \left( |y-y'|^{\beta_1}\land 1\right) {\rho_{0}^{0}}(t,x-y') \,.\end{aligned}$$ Applying $(|x-y'|^{\beta_1}\land 1){\leqslant}(|x-y|^{\beta_1}\land 1) + (|y-y'|^{\beta_1}\land 1)$ we obtain $$\begin{aligned} |q_0(t,x,y)-q_0(t,x,y')|{\leqslant}\ & c \left(\frac{|y-y'|}{h^{-1}(1/t)} \land 1\right) \big({\rho_{0}^{\beta_1}} (t,x-y)+{\rho_{0}^{\beta_1}}(t,x-y')\big)\\ &+c \left(|y-y'|^{\beta_1}\land 1\right) \big( {\rho_{0}^{0}}(t,x-y)+{\rho_{0}^{0}}(t,x-y')\big).\end{aligned}$$ This proves in the case $|y-y'|{\leqslant}h^{-1}(1/t)\land 1$. For $n\in \N$ and $(t,x,y)\in (0, \infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$ we inductively define $$q_n(t,x,y):=\int_0^t \int_{{{{\mathbb{R}}^{d}}}}q_0(t-s,x,z)q_{n-1}(s,z,y)\, dzds\,.$$ \[t:definition-of-q-crit1\] Assume $\Qzero$. 
The series $q(t,x,y):=\sum_{n=0}^{\infty}q_n(t,x,y)$ is absolutely and locally uniformly convergent on $(0, \infty)\times {\mathbb{R}}^d \times {\mathbb{R}}^d$ and solves the integral equation $$\begin{aligned} \label{e:integral-equation-crit1} q(t,x,y)=q_0(t,x,y)+\int_0^t \int_{{{{\mathbb{R}}^{d}}}}q_0(t-s,x,z)q(s,z,y)\, dzds\, .\end{aligned}$$ Moreover, for every $T> 0$ and $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$ there is a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1)$ such that on $(0,T]\times{{{\mathbb{R}}^{d}}}\times{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{e:q-estimate-crit1} |q(t,x,y)|{\leqslant}c \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(t,x-y)\,,\end{aligned}$$ and for any $\gamma\in (0,\beta_1]$ there is $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1,\gamma)$ such that on $(0, T]\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} &|q(t,x,y)-q(t,x',y)|\nonumber\\ &{\leqslant}c \left(|x-x'|^{\beta_1-\gamma}\land 1\right) \left\{\big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x-y)+\big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x'-y)\right\}\,, \label{e:difference-q-estimate-crit1}\end{aligned}$$ and $$\begin{aligned} &|q(t,x,y)-q(t,x,y')|\nonumber\\ &{\leqslant}c \left(|y-y'|^{\beta_1-\gamma}\land 1\right) \left\{\big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x-y)+\big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x-y')\right\}\,. \label{e:difference-q-estimate_1-crit1}\end{aligned}$$ The proof fully relies on Lemma \[l:estimates-q0-crit1\] and \[l:convolution\], and is the same as in [@GS-2018 Theorem 3.7]. 
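As a simple sanity check of the inductive scheme, consider the scalar caricature in which all space variables and kernels are dropped and $q_0\equiv 1$ (purely for illustration); the iterated integrals are then explicit:

```latex
% scalar caricature: q_0 \equiv 1, q_n(t) = \int_0^t q_{n-1}(s)\,ds
q_n(t) = \frac{t^n}{n!}\,, \qquad
q(t) = \sum_{n=0}^{\infty} q_n(t) = e^t\,,
\qquad\text{and indeed}\qquad
q(t) = 1 + \int_0^t q(s)\, ds\,.
```

The factorial gain produced by the iterated time integration is the same mechanism that makes the series $\sum_n q_n$ above converge locally uniformly.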
Properties of $\phi_y(t,x,s)$ and $\phi_y(t,x)$ {#sec:phi} ----------------------------------------------- Let $$\label{e:phi-y-def} \phi_y(t,x,s):=\int_{{{{\mathbb{R}}^{d}}}} p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dz, \quad x \in {{{\mathbb{R}}^{d}}}, \,\, 0< s<t\,,$$ and $$\label{e:def-phi-y-2} \phi_y(t,x):=\int_0^t \phi_y(t,x,s)\, ds =\int_0^t \int_{{{{\mathbb{R}}^{d}}}}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dzds\, .$$ \[lem:phi\_cont\_xy-crit1\] Assume $\Qzero$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1)$ such that for all $t\in (0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |\phi_y(t,x)|{\leqslant}c t \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(t,x-y)\,.\end{aligned}$$ For any $T>0$ and $\gamma \in [0,1]\cap [0,{\alpha_h})$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1,\gamma)$ such that for all $t\in (0,T]$, $x,x',y\in {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |\phi_{y}(t,x)-\phi_{y}(t,x')|&{\leqslant}c (|x-x'|^{\gamma}\land 1) \, t \left\{ \big( {\rho_{\beta_1-\gamma}^{0}}+{\rho_{-\gamma}^{\beta_1}}\big)(t,x-y)+ \big( {\rho_{\beta_1-\gamma}^{0}}+{\rho_{-\gamma}^{\beta_1}}\big)(t,x'-y) \right\}.\end{aligned}$$ For any $T>0$ and $\gamma \in (0,\beta)$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1,\gamma)$ such that for all $t\in (0,T]$, $x,y,y'\in {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} |\phi_{y}(t,x)-\phi_{y'}(t,x)|&{\leqslant}c (|y-y'|^{\beta_1-\gamma}\land 1)\, t \left\{ \big( {\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x-y)+ \big( {\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(t,x-y') \right\}.\end{aligned}$$ The proof fully relies on Lemma \[lem:pkw\_holder\], Proposition \[prop:gen\_est\_crit\], Theorem \[t:definition-of-q-crit1\] and Lemma \[l:convolution\], and is the same as in [@GS-2018 Lemma 3.8]. \[lem:phi\_cont\_joint-crit1\] Assume $\Qzero$. 
The function $\phi_y(t,x)$ is jointly continuous in $(t,x,y)\in (0,\infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$. The proof is the same as in [@GS-2018 Lemma 3.9] and relies on Proposition \[prop:gen\_est\_crit\], , [@GS-2018 (94)], Lemma \[l:convolution\] and  \[lem:cont\_frcoef\], [@GS-2018 Lemma 5.6 and 5.15]. Assume $\Qzero$. For all $0<s<t$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{eq:grad_phi_pomoc-crit1} \nabla_x \phi_y(t,x,s)=\int_{{{{\mathbb{R}}^{d}}}} \nabla_x p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dz\,,\end{aligned}$$ $$\begin{aligned} \label{e:L-on-phi-y2-crit1} \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s) =\int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_x}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dz\,.\end{aligned}$$ We get by , , Lemma \[l:convolution\] and the dominated convergence theorem. Now, by and , $$\begin{aligned} \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s) =\int_{{{{\mathbb{R}}^{d}}}} \left(\int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z} (t-s, x,z;w) q(s,z, y) \,dz\right) \kappa(x,w)J(w) dw\,. \label{e:L-on-phi-y2-first-crit1}\end{aligned}$$ Finally, we use Fubini’s theorem justified by , and Lemma \[l:convolution\](b). \[lem:int\_grad\_phi\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1)$ such that for all $t\in (0,T]$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \int_0^t \left| \nabla_x \phi_y(t,x,s) \right| ds \, dy{\leqslant}c \left[h^{-1}(1/t)\right]^{-1+\beta_1}\,.\end{aligned}$$ First we assume that $1-{\alpha_h}<\beta_1$ and we let $\gamma \in (0,\beta_1)$ satisfying $1-{\alpha_h}<\beta_1-\gamma$. 
By , Proposition \[prop:gen\_est\_crit\], , Lemma \[e:some-estimates-2bb-crit1\] and , $$\begin{aligned} \left| \nabla_x \phi_y(t,x,s) \right| &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left| \nabla_x p^{\mathfrak{K}_z}(t-s,x,z) \right| \left| q(s,z,y) -q(s,x,y)\right| dz\\ &\quad + \left| \int_{{{{\mathbb{R}}^{d}}}} \nabla_x p^{\mathfrak{K}_z}(t-s,x,z)\, dz \right| \left| q(s,x,y)\right|\\ &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} (t-s){\rho_{-1}^{\beta_1-\gamma}}(t-s, x-z) \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,z-y)\,dz \\ &\quad +\int_{{{{\mathbb{R}}^{d}}}} (t-s){\rho_{-1}^{\beta_1-\gamma}}(t-s, x-z) \,dz\, \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,x-y) \\ &\quad+ \left[ h^{-1}(1/(t-s))\right]^{-1+\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,x-y)\,.\end{aligned}$$ Finally, we integrate in $y$ over ${{{\mathbb{R}}^{d}}}$ using Lemma \[l:convolution\](a) and then in $s$ over $(0,t)$ using [@GS-2018 Lemma 5.15]. In the case $\beta_1 {\leqslant}1-{\alpha_h}$ we use the monotonicity of $h^{-1}$. \[l:gradient-phi-y-crit1\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta)$ such that for all $t \in(0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{e:gradient-phi-y-crit1} &\nabla_x\phi_y(t,x)=\int_0^t \int_{{{{\mathbb{R}}^{d}}}} \nabla_x p^{\mathfrak{K}_z}(t-s,x,z) q(s,z,y)\, dzds\,,\\ \nonumber\\ \label{e:gradient-phi-y-estimate-crit1} &\left|\nabla_x\phi_y(t,x) \right|{\leqslant}c \!\left[ h^{-1}(1/t)\right]^{-1} t \,{\rho_{0}^{0}}(t,x-y)\,.\end{aligned}$$ The proof is similar to that of [@GS-2018 Lemma 3.10] and rests on , Proposition \[prop:gen\_est\_crit\], , Lemma \[l:convolution\], [@GS-2018 (93), (94), Lemma 5.3 and 5.15, Proposition 5.8], , Lemma \[e:some-estimates-2bb-crit1\], and the fact that ${\alpha_h}>1/2$. \[lem:some-est\_gen\_phi\_xy-crit1\] Assume $\Qzero$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$.
For all $T>0$, $\gamma \in(0,\beta_1]$ the inequalities $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}}\left(\int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_z} (t-s, x,z;w)||q(s,z, y)| \,dz\right) \kappa(x,w)J(w) dw\nonumber \hspace{0.15\linewidth}\\ {\leqslant}c_1\int_{{{{\mathbb{R}}^{d}}}} {\vartheta}(t-s){\rho_{0}^{0}}(t-s, x-z) \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,z-y)\,dz\,, \label{e:Fubini1-crit1} \\ \nonumber \\ \int_{{{{\mathbb{R}}^{d}}}} \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w)q(s,z,y)\,dz \right| \kappa(x,w)J(w)dw {\leqslant}c_2 \big( {\rm I}_1+{\rm I}_2+{\rm I}_3 \big), \label{ineq:some-est_gen_phi_xy-crit1}\end{aligned}$$ where $$\begin{aligned} {\rm I}_1+{\rm I}_2+{\rm I}_3:= & \int_{{{{\mathbb{R}}^{d}}}} {\vartheta}(t-s) {\rho_{0}^{\beta_1-\gamma}}(t-s,x-z) \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,z-y) \,dz \\ & + {\vartheta}(t-s) (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma} \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,x-y) \\ & + \,{\vartheta}(t-s) (t-s)^{-1}\left[h^{-1}(1/(t-s))\right]^{\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,x-y)\,,\end{aligned}$$ hold for all $0<s<t{\leqslant}T$, $x,y\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c_1=c_1(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$, $c_2=c_2(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma)$ if ${\alpha_h}=1$, 2. ${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c_1=c_1(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,{\beta_h},c_h)$, $c_2=c_2(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma,{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$. The inequality follows from , and . Next, let ${\rm I}_0$ be the left hand side of . 
By , , , , Lemma \[l:some-estimates-3b-crit1\] and \[l:convolution\](a), $$\begin{aligned} {\rm I}_0&{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_z}(t-s,x,z;w)| |q(s,z,y)-q(s,x,y)|\,dz\, \kappa(x,w)J(w)dw\\ &\quad + \int_{{{{\mathbb{R}}^{d}}}} \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w) \,dz\right| \kappa(x,w)J(w)dw \, |q(s,x,y)|\\ &{\leqslant}c \int_{{{{\mathbb{R}}^{d}}}} \left( \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_z}(t-s,x,z;w)|\,J(w)dw \right) \left(|x-z|^{\beta_1-\gamma}\land 1\right) \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,z-y) \,dz \\ &\quad + c \int_{{{{\mathbb{R}}^{d}}}} \left( \int_{{{{\mathbb{R}}^{d}}}} |\delta^{\mathfrak{K}_z}(t-s,x,z;w)|\,J(w)dw \right) \left(|x-z|^{\beta_1-\gamma}\land 1\right) dz\, \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,x-y) \\ &\quad + c \, {\vartheta}(t-s)\, (t-s)^{-1}\left[h^{-1}(1/(t-s))\right]^{\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,x-y) {\leqslant}c ({\rm I}_1+{\rm I}_2+{\rm I}_3)\,.\end{aligned}$$ \[ineq:I\_0\_oszagorne-crit1\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. For any $\beta_1\in (0,\beta]$ such that $1-{\alpha_h}<\beta_1<{\alpha_h}$ and $0<\gamma_1{\leqslant}\gamma_2{\leqslant}\beta_1$ satisfying $$1-{\alpha_h}<\beta_1-\gamma_1\,,\qquad\qquad 2\beta_1-\gamma_2<{\alpha_h}\,,$$ the inequality $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \int_0^t &\left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w)q(s,z,y)\,dz \right| ds\,\kappa(x,w)J(w)dw\nonumber \\ &\hspace{0.3\linewidth}{\leqslant}c \,{\vartheta}(t) \big({\rho_{0}^{\beta_1}}+{\rho_{\gamma_1}^{\beta_1-\gamma_1}}+{\rho_{\beta_1+\gamma_1-\gamma_2}^{0}}\big)(t,x-y)\,,\end{aligned}$$ holds for all $t\in(0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma_1,\gamma_2)$ if ${\alpha_h}=1$, 2. 
${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma_1,\gamma_2,{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$. Let ${\rm I}_0$ be the left hand side of . In two cases discussed below we apply Lemma \[l:convolution\](b), the monotonicity of $h^{-1}$ and ${\Theta}$ (see also Lemma \[lem:cal\_TCh\]), and $\Ab$ of [@GS-2018 Lemma 5.3]. For $s\in (0,t/2]$ we use to get $$\begin{aligned} {\rm I}_0& {\leqslant}c\, {\vartheta}(t-s)\bigg\{\left( (t-s)^{-1}\left[h^{-1}(1/(t-s))\right]^{\beta_1}+(t-s)^{-1}\left[h^{-1}(1/s)\right]^{\beta_1} +s^{-1}\left[h^{-1}(1/s)\right]^{\beta_1} \right) \\ &\hspace{0.52\linewidth} \times \,{\rho_{0}^{0}}(t,x-y) + (t-s)^{-1}{\rho_{0}^{\beta_1}}(t,x-y) \bigg\}\\ &{\leqslant}c\,{\vartheta}(t) \bigg\{\left( t^{-1}\left[h^{-1}(1/t)\right]^{\beta_1} +s^{-1} \left[h^{-1}(1/s)\right]^{\beta_1} \right) {\rho_{0}^{0}}(t,x-y) + t^{-1}{\rho_{0}^{\beta_1}}(t,x-y) \bigg\}.\end{aligned}$$ For $s\in (t/2,t)$ we use with $\gamma=\gamma_1$. 
Then $$\begin{aligned} {\rm I}_1 & {\leqslant}c {\vartheta}(t-s)\bigg\{ (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma_1} \left[h^{-1}(1/s)\right]^{\gamma_1} + s^{-1} \left[h^{-1}(1/s)\right]^{\beta_1+\gamma_1-\gamma_2} \\ &\hspace{0.32\linewidth}+ (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{2\beta_1-\gamma_2} \left[h^{-1}(1/s)\right]^{\gamma_1-\beta_1} \bigg\} {\rho_{0}^{0}}(t,x-y)\\ &\quad + c {\vartheta}(t-s) (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma_1} \left[h^{-1}(1/s)\right]^{\gamma_1-\beta_1} {\rho_{0}^{\beta_1}}(t,x-y)\\ &\quad +c {\vartheta}(t-s) s^{-1} \left[h^{-1}(1/s)\right]^{\gamma_1} {\rho_{0}^{\beta_1-\gamma_1}}(t,x-y) \\ & {\leqslant}c {\vartheta}(t-s) \bigg\{ (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma_1} \left[h^{-1}(1/t)\right]^{\gamma_1} + t^{-1} \left[h^{-1}(1/t)\right]^{\beta_1+\gamma_1-\gamma_2}\\ &\hspace{0.32\linewidth}+(t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{2\beta_1-\gamma_2} \left[h^{-1}(1/t)\right]^{\gamma_1-\beta_1} \bigg\} {\rho_{0}^{0}}(t,x-y)\\ &\quad + c {\vartheta}(t-s) (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma_1} \left[h^{-1}(1/t)\right]^{\gamma_1-\beta_1} {\rho_{0}^{\beta_1}}(t,x-y)\\ &\quad +c {\vartheta}(t-s) t^{-1} \left[h^{-1}(1/t)\right]^{\gamma_1} {\rho_{0}^{\beta_1-\gamma_1}}(t,x-y)\,.\end{aligned}$$ Next, like above with [@GS-2018 (94)], $$\begin{aligned} {\rm I}_2 & {\leqslant}c \, {\vartheta}(t-s) (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma_1} \big({\rho_{\gamma_1}^{0}}+{\rho_{\gamma_1-\beta_1}^{\beta_1}}\big)(t,x-y)\,.\end{aligned}$$ Similarly, ${\rm I}_3{\leqslant}c \,{\vartheta}(t-s) (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(t,x-y)$. 
Finally, by [@GS-2018 Lemma 5.15] and Lemma \[lem:cal\_TCh\], and the fact that ${\alpha_h}>1/2$, $$\begin{aligned} \int_0^t {\rm I}_0\,ds {\leqslant}c \,{\vartheta}(t) \big({\rho_{0}^{\beta_1}}+{\rho_{\gamma_1}^{\beta_1-\gamma_1}}+{\rho_{\beta_1+\gamma_1-\gamma_2}^{0}}\big)(t,x-y)\,.\end{aligned}$$ \[e:L-on-phi-y-crit1\] Assume $\Qa$ or $\Qb$. We have for all $t >0$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\LL_x^{\mathfrak{K}_x} \phi_y(t,x)= \int_0^t \int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_z}(t-s,x,z) q(s,z,y)\, dzds\,.$$ By and in the first equality, and Lemma \[ineq:I\_0\_oszagorne-crit1\] and in the second (allowing us to change the order of integration twice), the proof is as follows: $$\begin{aligned} \LL_x^{\mathfrak{K}_x} \phi_y(t,x) &=\int_{{{{\mathbb{R}}^{d}}}} \left( \int_0^t \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w)q(s,z,y)\,dzds\right) \kappa(x,w)J(w)dw\\ &= \int_0^t \int_{{{{\mathbb{R}}^{d}}}}\left( \int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w)\, \kappa(x,w)J(w)dw\right) q(s,z,y)\,dzds\,.\end{aligned}$$ \[lem:some-est\_gen\_phi\_xy-crit1-impr\] Assume $\Qzero$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$.
For all $T>0$, $\gamma \in(0,\beta_1]$ there exist constants $c_1=c_1(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ and $c_2=c_2(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma)$ such that for all $0<s<t{\leqslant}T$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \left| \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s)\right| & {\leqslant}c_1\int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{0}}(t-s, x-z) \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,z-y)\,dz\,, \label{e:Fubini1-crit1-impr} \\ \nonumber \\ \left| \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s)\right| &{\leqslant}c_2 \big( {\rm I}_1+{\rm I}_2+{\rm I}_3 \big), \label{ineq:some-est_gen_phi_xy-crit1-impr}\end{aligned}$$ where $$\begin{aligned} {\rm I}_1+{\rm I}_2+{\rm I}_3:= & \int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{\beta_1-\gamma}}(t-s,x-z) \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,z-y) \,dz \\ & + (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{\beta_1-\gamma} \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,x-y) \\ & + \, (t-s)^{-1}\left[h^{-1}(1/(t-s))\right]^{\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,x-y)\,.\end{aligned}$$ The first inequality follows from and . 
By , , , , Lemma \[l:some-estimates-3b-crit1-impr\] and \[l:convolution\](a), $$\begin{aligned} \left| \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s)\right|&{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left| \LL_x^{\mathfrak{K}_x}p^{\mathfrak{K}_z}(t-s,x,z)\right| |q(s,z,y)-q(s,x,y)|\,dz \\ &\quad + \left| \int_{{{{\mathbb{R}}^{d}}}}\LL_x^{\mathfrak{K}_x}p^{\mathfrak{K}_z}(t-s,x,z) \,dz\right| \, |q(s,x,y)|\\ &{\leqslant}c \int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{0}}(t-s,x-z) \left(|x-z|^{\beta_1-\gamma}\land 1\right) \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,z-y) \,dz \\ &\quad + c \int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{0}}(t-s,x-z) \left(|x-z|^{\beta_1-\gamma}\land 1\right) dz\, \big({\rho_{\gamma}^{0}}+{\rho_{\gamma-\beta_1}^{\beta_1}}\big)(s,x-y) \\ &\quad + c \, (t-s)^{-1}\left[h^{-1}(1/(t-s))\right]^{\beta_1} \big({\rho_{0}^{\beta_1}}+{\rho_{\beta_1}^{0}}\big)(s,x-y) {\leqslant}c ({\rm I}_1+{\rm I}_2+{\rm I}_3)\,.\end{aligned}$$ Here is a consequence of , Lemma \[l:convolution\](a) and [@GS-2018 Lemma 5.15]. \[cor:int\_Lphi\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. Let $\beta_1 \in (0,\beta]\cap (0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta_1)$ such that for all $t\in(0,T]$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \int_0^t \left| \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s) \right| ds \,dy {\leqslant}c t^{-1} \left[h^{-1}(1/t)\right]^{\beta_1} \,.\end{aligned}$$ Assume $\Qzero$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$. 
For all $T>0$, $0<\gamma_1{\leqslant}\gamma_2{\leqslant}\beta_1$ satisfying $$0<\beta_1-\gamma_1\,,\quad \qquad 2\beta_1-\gamma_2<{\alpha_h}\,,$$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,\gamma_1,\gamma_2)$ such that for all $t\in(0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \int_0^t \left| \LL_x^{\mathfrak{K}_x}\phi_y(t,x,s)\right|ds {\leqslant}c \big({\rho_{0}^{\beta_1}}+{\rho_{\gamma_1}^{\beta_1-\gamma_1}}+{\rho_{\beta_1+\gamma_1-\gamma_2}^{0}}\big)(t,x-y)\,. \label{ineq:I_0_oszagorne-crit1-impr}\end{aligned}$$ The proof goes along the same lines as the proof of Lemma \[ineq:I\_0\_oszagorne-crit1\] but with ${\vartheta}$ replaced by $1$, and Lemma \[lem:some-est\_gen\_phi\_xy-crit1-impr\] in place of Lemma \[lem:some-est\_gen\_phi\_xy-crit1\]. \[lem:Lphi\_cont-crit\] Assume $\Qa$ or $\Qb$. The function $\LL_x^{\mathfrak{K}_x} \phi_y(t,x)$ is jointly continuous in $(t,x,y)\in (0,\infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$. The proof is the same as in [@GS-2018 Lemma 3.13] and requires Lemma \[e:L-on-phi-y-crit1\], , , , , , [@GS-2018 (94), Lemma 5.15], Lemma \[l:convolution\] and \[lem:cont\_frcoef\]. \[l:phi-y-abs-cont-crit1\] Assume $\Qzero$. For all $t>0$, $x,y\in {{{\mathbb{R}}^{d}}}$, $x\neq y$, we have $$\begin{aligned} \phi_y(t,x) =\int_0^t \left(q(r,x,y)+ \int_0^r \int_{{{{\mathbb{R}}^{d}}}} \LL_x^{\mathfrak{K}_z} p^{\mathfrak{K}_z}(r-s,x,z) q(s,z,y)\, dzds\right) dr\,.\end{aligned}$$ Similarly to [@GS-2018 Lemma 3.14] we use , , , , , , , Lemma \[l:convolution\], [@GS-2018 (92), (93), (94)], , , Proposition \[prop:gen\_est\_crit\], [@GS-2018 Lemma 5.6], [@MR924157 Theorem 7.21]. \[e:phi-y-partial\_1-crit\] Assume $\Qa$ or $\Qb$.
For all $x,y\in{{{\mathbb{R}}^{d}}}$, $x\neq y$, the function $\phi_y(t,x)$ is differentiable in $t>0$ and $$\begin{aligned} \partial_t \phi_y(t,x) = q_0(t,x,y)+ \LL_x^{\mathfrak{K}_x} \phi_y (t,x)\,.\end{aligned}$$ Properties of $p^\kappa(t, x, y)$ {#sec:p_kappa} --------------------------------- Now we define and study the function $$\begin{aligned} \label{e:p-kappa} p^{\kappa}(t,x,y):=p^{\mathfrak{K}_y}(t,x,y)+\phi_y(t,x)=p^{\mathfrak{K}_y}(t,x,y)+\int_0^t \int_{{{{\mathbb{R}}^{d}}}}p^{\mathfrak{K}_z}(t-s,x,z)q(s,z,y)\, dzds\,.\end{aligned}$$ \[lem:some-est\_p\_kappa-crit1\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$. For every $T>0$ the inequalities $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} | \delta^{\kappa}(t,x,y;z) | \,\kappa(x,z)J(z)dz &{\leqslant}c_1\, {\vartheta}(t) {\rho_{0}^{0}}(t,x-y)\,, \label{ineq:some-est_p_kappa-crit1} \\ \int_{{{{\mathbb{R}}^{d}}}} \left|\int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa}(t,x,y;z)\, dy \right|\kappa(x,z)&J(z)dz {\leqslant}c_2\, {\vartheta}(t) t^{-1} \left[h^{-1}(1/t)\right]^{\beta_1}\,, \label{ineq:some-est_p_kappa_1-crit1}\end{aligned}$$ hold for all $t\in(0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$ with 1. ${\vartheta}(t)={\Theta}(t)$ and $c_1=c_1(d,T,{\sigma},\kappa_2,{\kappa_4},\beta)$, $c_2=c_2(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ if ${\alpha_h}=1$, 2. ${\vartheta}(t)=t \,[h^{-1}(1/t)]^{-1}$ and $c_1=c_1(d,T,{\sigma},\kappa_2,{\kappa_4},\beta,{\beta_h},c_h)$, $c_2=c_2(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1,{\beta_h},c_h)$ if holds for $0<{\alpha_h}{\leqslant}{\beta_h}<1$. By and , $$\begin{aligned} \delta^{\kappa}(t,x,y;w)=\delta^{\mathfrak{K}_y}(t,x,y;w)+\int_0^t\int_{{{{\mathbb{R}}^{d}}}} \delta^{\mathfrak{K}_z}(t-s,x,z;w)q(s,z,y)\,dzds\,.\end{aligned}$$ We deduce from , Lemma \[ineq:I\_0\_oszagorne-crit1\], [@GS-2018 (92), (93)]. 
The inequality results from , [@GS-2018 Lemma 5.15], Lemma \[l:some-estimates-3b-crit1\], \[l:convolution\](a) and \[lem:cal\_TCh\]. \[e:fract-der-p-kappa-2b-crit1\] Assume $\Qzero$ and $1-{\alpha_h}<\beta\land {\alpha_h}$. Let $\beta_1\in (0,\beta]\cap (0,{\alpha_h})$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta_1)$ such that for all $t\in (0,T]$, $x\in{{{\mathbb{R}}^{d}}}$, $$\left| \int_{{{{\mathbb{R}}^{d}}}}\nabla_x p^{\kappa}(t,x,y)\,dy\right|{\leqslant}c \left[h^{-1}(1/t)\right]^{-1+\beta_1} \,.$$ We get the inequality from Lemma \[e:some-estimates-2bb-crit1\], , and Lemma \[lem:int\_grad\_phi\]. \[l:p-kappa-difference-crit-1\] Assume $\Qa$ or $\Qb$.\ (a) The function $p^{\kappa}(t,x,y)$ is jointly continuous on $(0, \infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$. \(b) For every $T> 0$ there is a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta)$ such that for all $t\in (0,T]$ and $x,y\in {{{\mathbb{R}}^{d}}}$, $$|p^{\kappa}(t,x,y)|{\leqslant}c t {\rho_{0}^{0}}(t,x-y).$$ \(c) For all $t>0$, $x,y\in{{{\mathbb{R}}^{d}}}$, $x\neq y$, $$\partial_t p^{\kappa}(t,x,y)= \LL_x^{\kappa}\, p^{\kappa}(t,x,y)\,.$$ \(d) For every $T>0$ there is a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4},\beta)$ such that for all $t\in (0,T]$, $x,y\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{e:fract-der-p-kappa-1b-crit1} |\LL_x^{\kappa} p^{\kappa}(t, x, y)|{\leqslant}c {\rho_{0}^{0}}(t,x-y)\,,\end{aligned}$$ and if $1-{\alpha_h}<\beta\land {\alpha_h}$, then $$\begin{aligned} \label{e:fract-der-p-kappa-2-crit1} \left|\nabla_x p^{\kappa}(t,x,y)\right|{\leqslant}c\! \left[h^{-1}(1/t)\right]^{-1} t {\rho_{0}^{0}}(t,x-y)\,.
\end{aligned}$$ \(e) For all $T>0$, $\gamma \in [0,1]\cap [0,{\alpha_h})$, there is a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta,\gamma)$ such that for all $t\in (0,T]$ and $x,x',y\in {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \left|p^{\kappa}(t,x,y)-p^{\kappa}(t,x',y)\right| {\leqslant}c (|x-x'|^{\gamma}\land 1) \,t \left( {\rho_{-\gamma}^{0}} (t,x-y)+ {\rho_{-\gamma}^{0}}(t,x'-y) \right).\end{aligned}$$ For all $T>0$, $\gamma \in [0,\beta)\cap [0,{\alpha_h})$, there is a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta,\gamma)$ such that for all $t\in (0,T]$ and $x,y,y'\in {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \left|p^{\kappa}(t,x,y)-p^{\kappa}(t,x,y')\right| {\leqslant}c (|y-y'|^{\gamma}\land 1)\, t \left( {\rho_{-\gamma}^{0}}(t,x-y)+ {\rho_{-\gamma}^{0}}(t,x-y') \right).\end{aligned}$$ \(f) The function $\LL_x^{\kappa}p^{\kappa}(t,x,y)$ is jointly continuous on $(0,\infty)\times {{{\mathbb{R}}^{d}}}\times {{{\mathbb{R}}^{d}}}$. The statement of (a) follows from Lemma \[lem:cont\_frcoef\] and \[lem:phi\_cont\_joint-crit1\]. Part (b) is a result of Proposition \[prop:gen\_est\_crit\] and Lemma \[lem:phi\_cont\_xy-crit1\]. The equation in (c) is a consequence of , and Corollary \[e:phi-y-partial\_1-crit\]: $\partial_t p^{\kappa}(t,x,y)=\LL_x^{\mathfrak{K}_x} p^{\mathfrak{K}_y}(t,x,y)+ \LL_x^{\mathfrak{K}_x} \phi_y(t,x)=\LL_x^{\mathfrak{K}_x} p^{\kappa}(t,x,y)$. We get by , , , [@GS-2018 (92), (93)] (see also Lemma \[e:L-on-phi-y-crit1\] and ). For the proof of we use Proposition \[prop:gen\_est\_crit\] and . The first inequality of part (e) follows from Lemma \[lem:pkw\_holder\] and \[lem:phi\_cont\_xy-crit1\], and [@GS-2018 (92), (93)]. 
The same argument suffices for the second inequality of part (e) when supported by $$|p^{\mathfrak{K}_y}(t,x,y)-p^{\mathfrak{K}_{y'}}(t,x,y')| {\leqslant}|p^{\mathfrak{K}_y}(t,-y,-x)-p^{\mathfrak{K}_{y}}(t,-y',-x)| + |p^{\mathfrak{K}_y}(t,x,y')-p^{\mathfrak{K}_{y'}}(t,x,y')|$$ and Proposition \[prop:Hcont\_kappa\_crit1\]. Part (f) follows from Lemma \[lem:cont\_frcoef\] and \[lem:Lphi\_cont-crit\]. Main Results and Proofs {#sec:Main} ======================= Throughout this section we assume that either $\Qa$ or $\Qb$ holds. A nonlocal maximum principle ---------------------------- Recall that $\LL^{\kappa,0^+}f:=\lim_{\varepsilon \to 0^+}\LL^{\kappa,\varepsilon}f$ is an extension of $\LL^{\kappa}f:=\LL^{\kappa,0}f$. Moreover, the well-posedness of those operators requires the existence of the gradient $\nabla f$. For proofs of the following see [@GS-2018 Theorem 4.1]. \[t:nonlocal-max-principle-crit\] Let $T>0$ and $u\in C([0,T]\times {{{\mathbb{R}}^{d}}})$ be such that $$\begin{aligned} \label{e:nonlocal-max-principle-1-crit} \| u(t,\cdot)-u(0,\cdot) \|_{\infty} \xrightarrow {t\to 0^+} 0\,, \qquad \qquad \sup_{t\in [0,T]} \| u(t,\cdot){\mathds{1}}_{|\cdot|{\geqslant}r} \|_{\infty} \xrightarrow {r\to \infty}0\,.\end{aligned}$$ Assume that $u(t,x)$ satisfies the following equation: for all $(t,x)\in (0,T]\times {{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{e:nonlocal-max-principle-4-crit} \partial_t u(t,x)=\LL_x^{\kappa,0^+}u(t,x)\, .\end{aligned}$$ If $\sup_{x\in{{{\mathbb{R}}^{d}}}} u(0,x){\geqslant}0$, then for every $t\in (0,T]$, $$\begin{aligned} \label{e:nonlocal-max-principle-5-crit} \sup_{x\in {\mathbb{R}}^d}u(t,x){\leqslant}\sup_{x\in {{{\mathbb{R}}^{d}}}}u(0,x)\, .\end{aligned}$$ \[cor:jedn\_max-crit\] If $u_1, u_2 \in C([0,T]\times {{{\mathbb{R}}^{d}}})$ satisfy , and $u_1(0,x)=u_2(0,x)$, then $u_1\equiv u_2$ on $[0,T]\times {{{\mathbb{R}}^{d}}}$.
Properties of the semigroup $(P^{\kappa}_t)_{t\ge 0}$ ----------------------------------------------------- Define $$P_t^{\kappa}f(x)=\int_{{\mathbb{R}}^d}p^\kappa(t,x, y)f(y)dy.$$ We first collect some properties of ${\Upsilon}_t*f$. \[rem:conv\_Lp-crit\] We have ${\Upsilon}_t*f \in C_b({{{\mathbb{R}}^{d}}})$ for any $f\in L^p({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty]$. Moreover, ${\Upsilon}_t*f\in C_0({{{\mathbb{R}}^{d}}})$ for any $f\in L^p({{{\mathbb{R}}^{d}}})\cup C_0({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty)$. Further, there is $c=c(d)$ such that $\|{\Upsilon}_t*f \|_p{\leqslant}c \|f\|_p$ for all $t>0$, $p\in [1,\infty]$. The above follows from ${\Upsilon}_t\in L^1({{{\mathbb{R}}^{d}}})\cap L^{\infty}({{{\mathbb{R}}^{d}}})\subseteq L^q({{{\mathbb{R}}^{d}}})$ for every $q\in [1,\infty]$ (see [@GS-2018 Lemma 5.6]), and from properties of the convolution. \[lem:bdd\_cont-crit1\] (a) We have $P_t^{\kappa} f \in C_b({{{\mathbb{R}}^{d}}})$ for any $f\in L^p({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty]$. Moreover, $P_t^{\kappa} f \in C_0({{{\mathbb{R}}^{d}}})$ for any $f\in L^p({{{\mathbb{R}}^{d}}})\cup C_0({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty)$. For every $T>0$ there exists a constant $c=c(d,T,{\sigma},\kappa_2,{\kappa_4}, \beta)$ such that for all $t\in(0,T]$ we get $$\|P^{\kappa}_t f\|_p{\leqslant}c \|f\|_p\,.$$ (b) $P^{\kappa}_t\colon C_0({{{\mathbb{R}}^{d}}})\to C_0({{{\mathbb{R}}^{d}}})$, $t>0$, and for any bounded uniformly continuous function $f$, $$\lim_{t\to 0^+} \|P^{\kappa}_t f -f \|_{\infty}=0\,.$$ (c) $P^{\kappa}_t\colon L^p({{{\mathbb{R}}^{d}}})\to L^p({{{\mathbb{R}}^{d}}})$, $t>0$, $p\in [1,\infty)$, and for any $f\in L^p({{{\mathbb{R}}^{d}}})$, $$\lim_{t\to 0^+} \|P_t^{\kappa}f -f \|_p=0\,.$$ See [@GS-2018 Lemma 4.4] and Remark \[rem:conv\_Lp-crit\], Lemma \[l:p-kappa-difference-crit-1\], \[lem:phi\_cont\_xy-crit1\], \[l:convolution\](a), , [@GS-2018 Lemma 5.6] and Proposition \[prop:gen\_est\_crit\]. 
\[lem:grad\_Pt-crit1\] For any $f\in L^p({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty]$, we have for all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{eq:grad_Pt-crit1} \nabla_x \,P_t^{\kappa} f(x)= \int_{{{{\mathbb{R}}^{d}}}} \nabla_x\, p^{\kappa}(t,x,y) f(y)dy\,.\end{aligned}$$ For any bounded (uniformly) Hölder continuous function $f \in C^\eta_b({{{\mathbb{R}}^{d}}})$, $1-{\alpha_h}<\eta$, and all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{eq:grad_Pt_1-crit1} \nabla_x \left( \int_0^t P^{\kappa}_s f(x)\,ds \right)= \int_0^t \nabla_x P^{\kappa}_s f(x)\,ds\,.\end{aligned}$$ By and [@GS-2018 Corollary 5.10] for $|\varepsilon|<h^{-1}(1/t)$, $$\begin{aligned} \left| \frac1{\varepsilon}( p^{\kappa}(t,x+\varepsilon e_i,y)-p^{\kappa}(t,x,y)) \right| |f(y)| {\leqslant}c \left[h^{-1}(1/t)\right]^{-1} {\Upsilon}_t (x-y) |f(y)|\,.\end{aligned}$$ The right hand side is integrable by Remark \[rem:conv\_Lp-crit\]. We can use the dominated convergence theorem, which gives . 
For $f \in C^\eta_b({{{\mathbb{R}}^{d}}})$ (we can assume that $\eta<{\alpha_h}$) we let $\widetilde{x}=x+\varepsilon\theta e_i$ and by , Lemma \[e:fract-der-p-kappa-2b-crit1\] and \[l:convolution\](a) we have $$\begin{aligned} &\left| \int_{{{{\mathbb{R}}^{d}}}} \frac1{\varepsilon}( p^{\kappa}(s,x+\varepsilon e_i,y)-p^{\kappa}(s,x,y)) f(y)\,dy \right| {\leqslant}\left| \int_{{{{\mathbb{R}}^{d}}}} \int_0^1 \partial_{x_i} p^{\kappa}(s,\widetilde{x},y) \, d\theta\, f(y)\,dy\right|\\ & {\leqslant}\left| \int_{{{{\mathbb{R}}^{d}}}} \int_0^1 \partial_{x_i} p^{\kappa}(s,\widetilde{x},y) \big[ f(y)-f(\widetilde{x})\big] \, d\theta \,dy\right| + \left| \int_{{{{\mathbb{R}}^{d}}}} \int_0^1 \partial_{x_i} p^{\kappa}(s,\widetilde{x},y)f(\widetilde{x})\, d\theta \,dy\right|\\ &{\leqslant}c \left[h^{-1}(1/s)\right]^{-1} \int_0^1 \int_{{{{\mathbb{R}}^{d}}}} s{\rho_{0}^{\eta}}(s, \widetilde{x}-y) \,dy\, d\theta + c\left[h^{-1}(1/s)\right]^{-1+\beta_1}\\ &{\leqslant}c \left[h^{-1}(1/s)\right]^{-1+\eta}+ c \left[h^{-1}(1/s)\right]^{-1+\beta_1}\,.\end{aligned}$$ The right hand side is integrable over $(0,t)$ by [@GS-2018 Lemma 5.15]. Finally, follows by the dominated convergence theorem.
\[l:L-int-commute0-crit1\] For any function $f\in L^p({{{\mathbb{R}}^{d}}})$, $p\in [1,\infty]$, and all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{e:L-int-commute-2-crit1} \LL_x^{\kappa}P_t^{\kappa} f(x)=\int_{{{{\mathbb{R}}^{d}}}}\LL_x^{\kappa} \,p^{\kappa}(t,x, y)f(y)dy\, .\end{aligned}$$ Further, for every $T>0$ there exists a constant $c>0$ such that for all $f\in L^p({{{\mathbb{R}}^{d}}})$, $t\in (0,T]$, $$\begin{aligned} \label{e:LP-p-estimate-crit1} \| \LL^{\kappa}P_t^{\kappa} f\|_p{\leqslant}c t^{-1} \|f\|_p\,.\end{aligned}$$ By the definition and , $$\begin{aligned} \label{eq:LPf-crit1} \LL_x^{\kappa} P_t^{\kappa} f(x) =\int_{{{{\mathbb{R}}^{d}}}} \left( \int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa}(t,x,y;z) f(y)dy \right) \kappa(x,z)J(z)dz\,.\end{aligned}$$ The equality follows from Fubini’s theorem justified by and Remark \[rem:conv\_Lp-crit\]. The inequality then follows from , , Remark \[rem:conv\_Lp-crit\]. \[lem:for\_max-crit1\] Let $f\in C_0({{{\mathbb{R}}^{d}}})$. For $t>0$, $x\in{{{\mathbb{R}}^{d}}}$ we define $u(t,x)=P^{\kappa}_t f(x)$ and $u(0,x)=f(x)$. Then $u\in C([0,T]\times {{{\mathbb{R}}^{d}}})$, holds and $\partial_t u(t,x)=\LL_x^{\kappa}u(t,x)$ for all $t,T>0$, $x\in{{{\mathbb{R}}^{d}}}$. See [@GS-2018 Lemma 4.7].
\[l:L-int-commute-crit1\] For any bounded (uniformly) Hölder continuous function $f \in C^\eta_b({{{\mathbb{R}}^{d}}})$, $1-{\alpha_h}<\eta$, and all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, we have $\int_0^t | \LL_x^{\kappa} P_s^{\kappa}f(x)|ds <\infty$ and $$\begin{aligned} \label{e:L-int-commute-crit1} \LL_x^{\kappa}\left( \int_0^t P_s^{\kappa}f(x)\,ds\right) =\int_0^t \LL_x^{\kappa} P_s^{\kappa}f(x)\,ds\,.\end{aligned}$$ By the definition and Lemma \[lem:grad\_Pt-crit1\], $$\begin{aligned} \LL_x^{\kappa} \int_0^t P_s^{\kappa}f(x)\,ds &=\int_{{{{\mathbb{R}}^{d}}}} \left( \int_0^t \int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa} (s,x,y;z) f(y)dy ds \right) \kappa(x,z)J(z)dz\,.\end{aligned}$$ Note that the proof will be finished once we can change the order of integration from $dsdz$ to $dzds$. To this end we use Fubini’s theorem justified by the following. We have $|f(y)-f(x)|{\leqslant}c (|y-x|^{\eta} \land 1)$ and we can assume that $\eta<{\alpha_h}$. Then $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} \int_0^t &\left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa} (s,x,y;z) f(y)dy \right| ds \, \kappa(x,z)J(z)dz\\ &{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \int_0^t \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa} (s,x,y;z) \big[f(y)-f(x)\big] dy\right| ds \,\kappa(x,z)J(z)dz\\ &\quad+\int_{{{{\mathbb{R}}^{d}}}} \int_0^t \left| \int_{{{{\mathbb{R}}^{d}}}} \delta^{\kappa} (s,x,y;z) f(x) dy\right| ds\, \kappa(x,z)J(z)dz=: {\rm I}_1+{\rm I}_2\,.\end{aligned}$$ By we get ${\rm I}_1{\leqslant}c \int_0^t \int_{{{{\mathbb{R}}^{d}}}} {\vartheta}(s) {\rho_{0}^{\eta}}(s,y-x) dyds$, while by ${\rm I}_2{\leqslant}c \int_0^t {\vartheta}(s) s^{-1} \left[h^{-1}(1/s)\right]^{\beta_1}ds$. The integrals are finite by [@GS-2018 Lemma 5.15], Lemma \[l:convolution\](a) and \[lem:cal\_TCh\].
\[lem:gen\_sem\_step1-crit1\] For any $f\in C_b^{2}({{{\mathbb{R}}^{d}}})$ and all $t>0$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \label{eq:gen_sem_step1-crit1} P_t^{\kappa}f(x)-f(x)=\int_0^t P_s^{\kappa}\LL^{\kappa} f(x)\,ds\,.\end{aligned}$$ \(i) Note that $\LL^{\kappa}f \in C_0({{{\mathbb{R}}^{d}}})$ for any $f\in C_0^2({{{\mathbb{R}}^{d}}})$. \(ii) We will show that if $f\in C_0^{2,\eta}({{{\mathbb{R}}^{d}}})$, $1-{\alpha_h}<\eta{\leqslant}\beta$, then $\LL^{\kappa}f \in C^{\eta}({{{\mathbb{R}}^{d}}})$ is (uniformly) Hölder continuous. To this end we use [@MR2555009 Theorem 5.1]. For $x,z\in{{{\mathbb{R}}^{d}}}$ define $$E_zf(x)=f(x+z)-f(x)\,,\qquad F_zf(x)=f(x+z)-f(x)-\left<z,\nabla f(x)\right>.$$ Then $\LL^{\mathfrak{K}_y}f(x)=\int_{|z|<1}F_z f(x)\kappa(y,z)J(z)dz+\int_{|z|{\geqslant}1}E_zf(x)\kappa(y,z)J(z)dz$. Using , , and [@MR2555009 Theorem 5.1(b) and (e)], $$\begin{aligned} &|\LL^{\kappa}f(x) - \LL^{\kappa}f(y)| {\leqslant}|\LL^{\mathfrak{K}_x}f(x)-\LL^{\mathfrak{K}_y}f(x)|+|\LL^{\mathfrak{K}_y}f(x)-\LL^{\mathfrak{K}_y}f(y)|\\ & {\leqslant}c |x-y|^{\beta}+\int_{|z|<1}|F_zf(x)-F_zf(y)|\,\kappa(y,z)J(z)dz+ \int_{|z|{\geqslant}1}|E_zf(x)-E_zf(y)| \,\kappa(y,z)J(z)dz\\ & {\leqslant}c |x-y|^{\beta}+c |x-y|^{\eta} \int_{|z|<1}|z|^2\nu(|z|)dz+c |x-y| \int_{|z|{\geqslant}1}\nu(|z|)dz\,.\end{aligned}$$ \(iii) We have that holds if $f\in C_0^{2,\eta}({{{\mathbb{R}}^{d}}})$, $1-{\alpha_h}<\eta{\leqslant}\beta$. See [@GS-2018] and Lemma \[lem:for\_max-crit1\], \[l:L-int-commute-crit1\], [@MR924157 Theorem 7.21], Corollary \[cor:jedn\_max-crit\]. \(iv) We extend to $f\in C_b^2({{{\mathbb{R}}^{d}}})$ by approximation (see [@GS-2018]). \[lem:p-kappa-final-prop-crit1\] The function $p^{\kappa}(t,x,y)$ is non-negative, $\int_{{{{\mathbb{R}}^{d}}}} p^{\kappa}(t,x,y)dy= 1$ and $p^{\kappa}(t+s,x,y)=\int_{{{{\mathbb{R}}^{d}}}}p^{\kappa}(t,x,z)p^{\kappa}(s,z,y)dz$ for all $s,t>0$, $x,y\in{{{\mathbb{R}}^{d}}}$. 
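For orientation, we note that in terms of the semigroup $(P^{\kappa}_t)$ defined above, the convolution property just stated is equivalent, by Fubini’s theorem, to the semigroup identity: $$\begin{aligned} P^{\kappa}_{t+s}\varphi(x)=\int_{{{{\mathbb{R}}^{d}}}}\int_{{{{\mathbb{R}}^{d}}}} p^{\kappa}(t,x,z)\,p^{\kappa}(s,z,y)\,dz\,\varphi(y)\,dy =P^{\kappa}_t\big(P^{\kappa}_s \varphi\big)(x)\,.\end{aligned}$$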
Like in [@GS-2018 Lemma 4.10] we use Lemma \[lem:for\_max-crit1\], Theorem \[t:nonlocal-max-principle-crit\], Lemma \[l:p-kappa-difference-crit-1\], Corollary \[cor:jedn\_max-crit\], Proposition \[lem:gen\_sem\_step1-crit1\]. However, the proof of the convolution property in [@GS-2018] contains a gap: at that stage it is not clear why the function $p^{\kappa}(t+s,x,y)$ should satisfy the equation for all $x\in{{{\mathbb{R}}^{d}}}$. Here we present a correct proof that is valid for both papers. Let $T,s>0$ and $\varphi\in C_c^{\infty}({{{\mathbb{R}}^{d}}})$. For $t>0$, $x\in{{{\mathbb{R}}^{d}}}$ we define $$u_1(t,x)=P_t^{\kappa}f(x)\,,\qquad u_1(0,x)=f(x)=P_s^{\kappa}\varphi(x)\,,$$ and $$u_2(t,x)=P_{t+s}^{\kappa}\varphi(x)\,, \qquad u_2(0,x)=P_s^{\kappa}\varphi(x)\,.$$ By Lemma \[lem:bdd\_cont-crit1\](b) $f\in C_0({{{\mathbb{R}}^{d}}})$ and thus by Lemma \[lem:for\_max-crit1\] $u_1$ satisfies the assumptions of Corollary \[cor:jedn\_max-crit\]. Now, since $\varphi$ has compact support by Lemma \[l:p-kappa-difference-crit-1\](a) we get $u_2\in C([0,T]\times {{{\mathbb{R}}^{d}}})$. We will use [@GS-2018 (94)] several times in what follows. 
By Lemma \[l:p-kappa-difference-crit-1\] (c) and (d), $$\begin{aligned} \|u_2(t,\cdot)-u_2(0,\cdot)\|_{\infty} &{\leqslant}\sup_{x\in{{{\mathbb{R}}^{d}}}}\int_{{{{\mathbb{R}}^{d}}}}\int_0^t |\LL_x^{\kappa}\,p^{\kappa}(u+s,x,y)|du \,|\varphi(y)|dy\\ &{\leqslant}c t {\rho_{0}^{0}}(s,0)\int_{{{{\mathbb{R}}^{d}}}}|\varphi(y)|dy \to 0\,, \quad \mbox{as } t\to 0^+\,.\end{aligned}$$ Further, by Lemma \[l:p-kappa-difference-crit-1\](b) $$\begin{aligned} \sup_{t\in[0,T]}\|u_2(t,\cdot){\mathds{1}}_{|\cdot|{\geqslant}r}\|_{\infty} &{\leqslant}c (T+s) \sup_{x\in{{{\mathbb{R}}^{d}}}} {\mathds{1}}_{|x|{\geqslant}r}\int_{{{{\mathbb{R}}^{d}}}}{\rho_{0}^{0}}(s,x-y)|\varphi(y)|dy\\ &=c (T/s+1)\sup_{x\in{{{\mathbb{R}}^{d}}}} {\mathds{1}}_{|x|{\geqslant}r} \left({\Upsilon}_s*|\varphi|\right)(x)\to 0\,, \quad \mbox{as } r\to \infty\,,\end{aligned}$$ because ${\Upsilon}_s*|\varphi|\in C_0({{{\mathbb{R}}^{d}}})$ (see Remark \[rem:conv\_Lp-crit\]). Finally, by the mean value theorem, Lemma \[l:p-kappa-difference-crit-1\](c), and the dominated convergence theorem $\partial_t u_2(t,x)=\int_{{{{\mathbb{R}}^{d}}}}\partial_t p^{\kappa}(t+s,x,y)\varphi(y) dy$. Then we apply Lemma \[l:p-kappa-difference-crit-1\](c) and Lemma \[l:L-int-commute0-crit1\] to obtain $\partial_t u_2(t,x)=\LL_x^{\kappa}u_2(t,x)$. Therefore, by Corollary \[cor:jedn\_max-crit\] $u_1=P_t^{\kappa}P_s^{\kappa}\varphi = P_{t+s}^{\kappa}\varphi = u_2$. The convolution property now follows by the Fubini theorem, the arbitrariness of $\varphi$ and Lemma \[l:p-kappa-difference-crit-1\](a). Proofs of Theorems \[t:intro-main\]–\[thm:onC0Lp\] -------------------------------------------------- \ \ [**Proof of Theorem \[thm:onC0Lp\]**]{}.
It is the same as in [@GS-2018] and relies on Lemma \[lem:bdd\_cont-crit1\], \[lem:p-kappa-final-prop-crit1\], Proposition \[lem:gen\_sem\_step1-crit1\], , , Remark \[rem:conv\_Lp-crit\], , , , , , Lemma \[l:p-kappa-difference-crit-1\](f) and (d), [@GS-2018 Corollary 5.10], , [@MR710486 Chapter 1, Theorem 2.4(c) and 2.2], [@MR710486 Chapter 2, Theorem 5.2(d)]. [**Proof of Theorem \[t:intro-further-properties\]**]{}. All the properties are collected in Lemma \[l:p-kappa-difference-crit-1\] and \[lem:p-kappa-final-prop-crit1\], except for part (8), which is given in Theorem \[thm:onC0Lp\] part 3(c). [**Proof of Theorem \[t:intro-main\]**]{}. The same as in [@GS-2018]. [**Proof of Theorem \[thm:lower-bound\]**]{}. The proof is the same as in [@GS-2018] with the following modification: we need to demonstrate that $$\sup_{|\xi|{\leqslant}1/r} |q(z,\xi)| {\leqslant}c_2 h(r).$$ Since for $\varphi \in \RR$ we have $\left|e^{i\varphi}-1-i\varphi \right|{\leqslant}|\varphi|^2$, together with , $$\begin{aligned} |q(z,\xi)|&{\leqslant}\int_{{{{\mathbb{R}}^{d}}}} \left| e^{i\left<\xi, w\right>}-1 - i\left<\xi,w\right> {\mathds{1}}_{|w|<1\land 1/|\xi|} \right|\kappa(x,w)J(w)dw\\ &\quad +\left| i \left<\xi,\int_{{{{\mathbb{R}}^{d}}}} w \left({\mathds{1}}_{|w|<1\land 1/|\xi|} - {\mathds{1}}_{|w|<1}\right) \kappa(x,w)J(w)dw \right>\right|\\ &{\leqslant}|\xi|^2 \int_{|w|<1\land 1/|\xi|} |w|^2 \kappa(x,w)J(w)dw + \int_{|w|{\geqslant}1\land 1/|\xi|} 2\, \kappa(x,w)J(w)dw\\ &\quad + |\xi| \left| \int_{{{{\mathbb{R}}^{d}}}} w \left({\mathds{1}}_{|w|<1\land 1/|\xi|} - {\mathds{1}}_{|w|<1}\right) \kappa(x,w)J(w)dw\right| {\leqslant}c h(1\land 1/|\xi|)\,.\end{aligned}$$ Appendix - Unimodal L[é]{}vy processes {#sec:appA} ====================================== Let $d\in\N$ and $\nu:[0,\infty)\to[0,\infty]$ be a non-increasing function satisfying $$\int_{{{{\mathbb{R}}^{d}}}} (1\land |x|^2) \nu(|x|)dx<\infty\,.$$ For any such $\nu$ there exists a unique pure-jump isotropic unimodal L[é]{}vy process
$X$ (see [@MR3165234], [@MR705619]). We define $h(r)$, $K(r)$ and ${\Upsilon}_t(x)$ as in the introduction. At this point we refer the reader to [@GS-2018 Section 5] for various important properties of those functions. Following [@GS-2018 Section 5] in the whole section [**we assume that**]{} $h(0^+)=\infty$. We consider the scaling conditions: there are ${\alpha_h}\in(0,2]$, $C_h\in[1,\infty)$ and $\theta_h\in(0,\infty]$ such that $$\label{eq:wlsc:h} h(r){\leqslant}C_h\lambda^{{\alpha_h}}h(\lambda r),\qquad \lambda{\leqslant}1,\, r< \theta_h;$$ there are ${\beta_h}\in (0,2]$, $c_h\in (0,1]$ and $\theta_h \in (0,\infty]$ such that $$\label{eq:wusc:h} c_h\,\lambda^{{\beta_h}}\,h(\lambda r){\leqslant}h(r)\, ,\quad \lambda{\leqslant}1, \,r< \theta_h.$$ The first and the last inequalities in the next lemma are taken from [@GS-2018 Section 5]. We keep them here for easier reference. \[lem:int\_J\] Let $h$ satisfy with ${\alpha_h}>1$, then $$\begin{aligned} \int_{r {\leqslant}|z|< \theta_h } |z|\nu(|z|)dz {\leqslant}\frac{(d+2) C_h}{{\alpha_h}-1} \, r h(r)\,, \qquad r>0\,.\end{aligned}$$ Let $h$ satisfy with ${\alpha_h}=1$, then $$\begin{aligned} \int_{r {\leqslant}|z|< \theta_h } |z|\nu(|z|)dz {\leqslant}[(d+2)C_h] \, \ln(\theta_h/r)\, r h(r)\,, \qquad r>0\,.\end{aligned}$$ Let $h$ satisfy with ${\beta_h}<1$, then $$\begin{aligned} \int_{|z|< r} |z| \nu(|z|)dz{\leqslant}\frac{d+2}{c_h(1-{\beta_h})}\, r h(r)\,,\qquad r< \theta_h \,.\end{aligned}$$ Under with ${\alpha_h}=1$ we have $$\begin{aligned} \int_{r {\leqslant}|z|< \theta_h } |z|\nu(|z|)dz {\leqslant}(d+2) \int_r^{\theta_h} h(s)ds {\leqslant}(d+2) C_h\int_r^{\theta_h} (r/s)h(r)ds\,,\end{aligned}$$ which ends the proof. \[lem:cal\_TCh\] Assume . Let $k,l {\geqslant}0$ and $\theta, \eta, \beta,\gamma\in{\mathbb{R}}$ satisfy $(\beta/2)\land (\beta/{\alpha_h})+1-\theta>0$, $(\gamma/2)\land(\gamma/{\alpha_h})+1-\eta>0$.
For every $T>0$ there exists a constant $c=c({\alpha_h},C_h,h^{-1}(1/T)\vee 1, \theta, \eta,\beta,\gamma,k,l)$ such that for all $t\in (0,T]$, $$\begin{aligned} \int_0^t [{\Theta}(u)]^{l}\, u^{-\eta}\left[ h^{-1}(1/u)\right]^{\gamma}[{\Theta}(t-u)]^{k}\, (t-u)^{-\theta}\left[ h^{-1}(1/(t-u))\right]^{\beta}du \\ {\leqslant}c\, [{\Theta}(t)]^{l+k}\, t^{1-\eta-\theta}\left[h^{-1}(1/t)\right]^{\gamma+\beta}\,.\end{aligned}$$ Further, ${\Theta}(t/2){\leqslant}c\, {\Theta}(t)$, $t\in (0,T]$. The last part of the statement follows from [@GS-2018 Lemma 5.3 and Remark 5.2]. Note that it suffices to consider the integral over $(0,t/2)$. Again by [@GS-2018 Lemma 5.3 and Remark 5.2] we have for $c_0=C_h [h^{-1}(1/T)\vee 1]^2$ and $s\in (0,1)$, $$\left[ h^{-1}(t^{-1}s^{-1})\right]^{-1} {\leqslant}c_0\, s^{-1/{\alpha_h}}\left[h^{-1}(1/t)\right]^{-1}, \qquad h^{-1}(t^{-1}s^{-1}){\leqslant}s^{1/2}h^{-1}(t^{-1})\,.$$ Thus for $u\in (0,t/2)$ we get $$[{\Theta}(t-u)]^{k}(t-u)^{-\theta}[h^{-1}(1/(t-u))]^{\beta}{\leqslant}c \,[{\Theta}(t)]^{k}\, t^{-\theta}\,[h^{-1}(1/t)]^{\beta}\,,$$ and we concentrate on $$\int_0^{t/2} [{\Theta}(u)]^{l} u^{-\eta} \left[h^{-1}(1/u)\right]^{\gamma}du {\leqslant}c\,t^{1-\eta} \left[h^{-1}(1/t)\right]^{\gamma} \int_0^{1/2} [{\Theta}(ts)]^{l} s^{(\gamma/2)\land (\gamma/{\alpha_h})-\eta} ds \,.$$ Further we have $$\begin{aligned} &\int_0^{1/2} \left[ \ln\left(1\vee\left[ h^{-1}(t^{-1}s^{-1})\right]^{-1}\right)\right]^{l} s^{(\gamma/2)\land (\gamma/{\alpha_h})-\eta}ds\\ &{\leqslant}\int_0^{1/2} 2^{l} \left\{ [\ln (c_0 s^{-1})]^{l} +\left[ \ln \left(1\vee\left[ h^{-1}(t^{-1})\right]^{-1}\right)\right]^{l} \right\} s^{(\gamma/2)\land (\gamma/{\alpha_h})-\eta}ds {\leqslant}c \,[{\Theta}(t)]^{l}\,.\end{aligned}$$ Finally, $$\begin{aligned} \int_0^{1/2} [{\Theta}(ts)]^{l} s^{(\gamma/2)\land (\gamma/{\alpha_h})-\eta} ds {\leqslant}c\, [{\Theta}(t)]^{l}.\end{aligned}$$ Recall that, for $\gamma,\beta\in {\mathbb{R}}$, we consider the following function
$${\rho_{\gamma}^{\beta}}(t,x):= \left[h^{-1}(1/t)\right]^{\gamma} \left(|x|^{\beta}\land 1\right) t^{-1} {\Upsilon}_t(x)\,.$$ The next lemma is taken from [@GS-2018 Section 5] and complemented with part (d). It is one of the most frequently used technical results in the paper. Let $B(a,b)$ be the beta function, i.e., $B(a,b)=\int_0^1 s^{a-1} (1-s)^{b-1}ds$, $a,b>0$. \[l:convolution\] Assume and let $\beta_0\in(0,{\alpha_h}\land 1)$. - For every $T>0$ there exists a constant $c_1=c_1(d,\beta_0,{\alpha_h},C_h,h^{-1}(1/T)\vee 1)$ such that for all $t\in(0,T]$ and $\beta\in [0,\beta_0]$, $$\int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{\beta}}(t,x)\,dx {\leqslant}c_1 t^{-1} \left[h^{-1}(1/t)\right]^{\beta}\,.$$ - For every $T>0$ there exists a constant $c_2=c_2(d,\beta_0,{\alpha_h},C_h,h^{-1}(1/T)\vee 1) \ge 1$ such that for all $\beta_1,\beta_2,n_1,n_2,m_1,m_2 \in[0,\beta_0]$ with $n_1, n_2 {\leqslant}\beta_1+\beta_2$, $m_1{\leqslant}\beta_1$, $m_2{\leqslant}\beta_2$ and all $0<s<t{\leqslant}T$, $x\in{{{\mathbb{R}}^{d}}}$, $$\begin{aligned} \int_{{{{\mathbb{R}}^{d}}}} {\rho_{0}^{\beta_1}}(t-s&,x-z){\rho_{0}^{\beta_2}}(s,z) \,dz\\ {\leqslant}c_2 &\Big[ \left( (t-s)^{-1} \left[h^{-1}(1/(t-s))\right]^{n_1} + s^{-1}\left[h^{-1}(1/s)\right]^{n_2}\right) {\rho_{0}^{0}}(t,x)\\ &+(t-s)^{-1}\left[ h^{-1}(1/(t-s))\right]^{m_1} {\rho_{0}^{\beta_2}}(t,x) + s^{-1}\left[ h^{-1}(1/s)\right]^{m_2} {\rho_{0}^{\beta_1}}(t,x)\Big].\end{aligned}$$ - Let $T>0$.
For all $\gamma_1, \gamma_2\in\RR$, $\beta_1,\beta_2,n_1,n_2,m_1,m_2 \in[0,\beta_0]$ with $n_1, n_2 {\leqslant}\beta_1+\beta_2$, $m_1{\leqslant}\beta_1$, $m_2{\leqslant}\beta_2$ and $\theta,\eta \in [0,1]$, satisfying $$\begin{aligned} (\gamma_1+n_1\land m_1)/2 \land (\gamma_1+n_1\land m_1)/{\alpha_h}+1-\theta>0\,,\\ (\gamma_2+n_2\land m_2)/2\land (\gamma_2+n_2\land m_2)/{\alpha_h}+1-\eta>0\,,\end{aligned}$$ and all $0<s<t{\leqslant}T$, $x\in{{{\mathbb{R}}^{d}}}$, we have $$\begin{aligned} \int_0^t\int_{{{{\mathbb{R}}^{d}}}}& (t-s)^{1-\theta}\, {\rho_{\gamma_1}^{\beta_1}}(t-s,x-z) \,s^{1-\eta}\,{\rho_{\gamma_2}^{\beta_2}}(s,z) \,dzds \nonumber\\ & {\leqslant}c_3 \, t^{2-\eta-\theta}\Big( {\rho_{\gamma_1+\gamma_2+n_1}^{0}} +{\rho_{\gamma_1+\gamma_2+n_2}^{0}} +{\rho_{\gamma_1+\gamma_2+m_1}^{\beta_2}} +{\rho_{\gamma_1+\gamma_2+m_2}^{\beta_1}} \Big)(t,x)\,,\label{e:convolution-3}\end{aligned}$$ where $ c_3= c_2 \, (C_h[h^{-1}(1/T)\vee 1]^2)^{-(\gamma_1\land 0+\gamma_2 \land 0)/{\alpha_h}} B\left( k+1-\theta, \,l+1-\eta\right) $ and $$\begin{aligned} k=\left(\frac{\gamma_1+n_1\land m_1}{2}\right)\land \left(\frac{\gamma_1+n_1\land m_1}{{\alpha_h}}\right), \quad l=\left(\frac{\gamma_2+n_2\land m_2}{2}\right)\land \left(\frac{\gamma_2+n_2\land m_2}{{\alpha_h}}\right).\end{aligned}$$ - Let $T>0$. 
For all $k,l{\geqslant}0$, $\gamma_1, \gamma_2\in\RR$, $\beta_1,\beta_2,n_1,n_2,m_1,m_2 \in[0,\beta_0]$ with $n_1, n_2 {\leqslant}\beta_1+\beta_2$, $m_1{\leqslant}\beta_1$, $m_2{\leqslant}\beta_2$ and $\theta,\eta \in [0,1]$, satisfying $$\begin{aligned} (\gamma_1+n_1\land m_1)/2 \land (\gamma_1+n_1\land m_1)/{\alpha_h}+1-\theta>0\,,\\ (\gamma_2+n_2\land m_2)/2\land (\gamma_2+n_2\land m_2)/{\alpha_h}+1-\eta>0\,,\end{aligned}$$ and for all $0<s<t{\leqslant}T$, $x\in{{{\mathbb{R}}^{d}}}$, we have $$\begin{aligned} \int_0^t\int_{{{{\mathbb{R}}^{d}}}}& [{\Theta}(t-s)]^k (t-s)^{1-\theta}\, {\rho_{\gamma_1}^{\beta_1}}(t-s,x-z) [{\Theta}(s)]^{l}\,s^{1-\eta}\,{\rho_{\gamma_2}^{\beta_2}}(s,z) \,dzds \nonumber\\ & {\leqslant}c_4 \, [{\Theta}(t)]^{k+l}\, t^{2-\eta-\theta}\Big( {\rho_{\gamma_1+\gamma_2+n_1}^{0}} +{\rho_{\gamma_1+\gamma_2+n_2}^{0}} +{\rho_{\gamma_1+\gamma_2+m_1}^{\beta_2}} +{\rho_{\gamma_1+\gamma_2+m_2}^{\beta_1}} \Big)(t,x)\,,\label{e:convolution-3-crit}\end{aligned}$$ where $c_4=c_4(d,\beta_0,{\alpha_h},C_h,h^{-1}(1/T)\vee 1,k,l,\gamma_1,\gamma_2,n_1,n_2,m_1,m_2,\theta,\eta)$. For the proof of part (d) we multiply the result of part (b) by $$[{\Theta}(t-s)]^{k} (t-s)^{1-\theta} \left[h^{-1}(1/(t-s))\right]^{\gamma_1} [{\Theta}(s)]^{l} s^{1-\eta} \left[h^{-1}(1/s)\right]^{\gamma_2}\,,$$ and apply Lemma \[lem:cal\_TCh\]. When using Lemma \[l:convolution\] without specifying the parameters we apply the usual case, i.e., $n_1=n_2=\beta_1+\beta_2$ (${\leqslant}\beta_0$), $m_1=\beta_1$, $m_2=\beta_2$. Similarly, if only $n_1$, $n_2$ are specified, then $m_1=\beta_1$, $m_2=\beta_2$. [10]{} M. T. Barlow, A. Grigor’yan, and T. Kumagai. Heat kernel upper bounds for jump processes and the first exit time. , 626:135–157, 2009. R. F. Bass. Regularity results for stable-like operators. , 257(8):2693–2722, 2009. K. Bogdan, T. Grzywny, and M. Ryznar. Density and tails of unimodal convolution semigroups. , 266(6):3543–3571, 2014. K. Bogdan and T. Jakubowski.
Estimates of heat kernel of fractional [L]{}aplacian perturbed by gradient operators. , 271(1):179–198, 2007. K. Bogdan and S. Sydor. On nonlocal perturbations of integral kernels. In [*Semigroups of operators—theory and applications*]{}, volume 113 of [*Springer Proc. Math. Stat.*]{}, pages 27–42. Springer, Cham, 2015. K. [Bogdan]{}, P. [Sztonyk]{}, and V. [Knopova]{}. , 25:1–54, 2020. B. Böttcher. A parametrix construction for the fundamental solution of the evolution equation associated with a pseudo-differential operator generating a [M]{}arkov process. , 278(11):1235–1241, 2005. B. Böttcher. Construction of time-inhomogeneous [M]{}arkov processes via evolution equations using pseudo-differential operators. , 78(3):605–621, 2008. B. Böttcher, R. Schilling, and J. Wang. , volume 2099 of [*Lecture Notes in Mathematics*]{}. Springer, Cham, 2013. Lévy-type processes: construction, approximation and sample path properties, With a short biography of Paul Lévy by Jean Jacod, Lévy Matters. E. A. Carlen, S. Kusuoka, and D. W. Stroock. Upper bounds for symmetric [M]{}arkov transition functions. , 23(2, suppl.):245–287, 1987. Z.-Q. Chen, P. Kim, and T. Kumagai. Weighted [P]{}oincaré inequality and heat kernel estimates for finite range jump processes. , 342(4):833–883, 2008. Z.-Q. Chen, P. Kim, and T. Kumagai. Global heat kernel estimates for symmetric jump processes. , 363(9):5021–5055, 2011. Z.-Q. Chen and X. Zhang. Heat kernels and analyticity of non-symmetric jump diffusion semigroups. , 165(1-2):267–312, 2016. Z.-Q. Chen and X. Zhang. Heat kernels for non-symmetric non-local operators. In [*Recent developments in nonlocal theory*]{}, pages 24–51. De Gruyter, Berlin, 2018. Z.-Q. Chen and X. Zhang. Heat kernels for time-dependent non-symmetric stable-like operators. , 465(1):1–21, 2018. A. Debussche and N. Fournier. Existence of densities for stable-like driven [SDE]{}’s with [H]{}ölder continuous coefficients. , 264(8):1757–1778, 2013. F. G. Dressel. 
The fundamental solution of the parabolic equation. , 7:186–203, 1940. J. M. Drin. A fundamental solution of the [C]{}auchy problem for a class of parabolic pseudodifferential equations. , (3):198–203, 284, 1977. J. M. Drin and S. D. Eidelman. Construction and investigation of classical fundamental solutions to the [C]{}auchy problem of uniformly parabolic pseudodifferential equations. , (63):18–33, 180–181, 1981. Boundary value problems for partial differential equations. S. D. Eidelman, S. D. Ivasyshen, and A. N. Kochubei. , volume 152 of [*Operator Theory: Advances and Applications*]{}. Birkhäuser Verlag, Basel, 2004. W. [Feller]{}. . , 113:113–160, 1936. A. Friedman. . Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964. M. Fukushima, Y. Oshima, and M. Takeda. , volume 19 of [*De Gruyter Studies in Mathematics*]{}. Walter de Gruyter & Co., Berlin, extended edition, 2011. M. [Gevrey]{}. , 152:428–431, 1911. T. Grzywny and K. Szczypkowski. . preprint 2017, arXiv:1710.07793v1. T. Grzywny and K. Szczypkowski. Heat kernels of non-symmetric [L]{}évy-type operators. , 267(10):6004–6064, 2019. W. Hoh. A symbolic calculus for pseudo-differential operators generating [F]{}eller semigroups. , 35(4):789–820, 1998. C. T. Iwasaki. The fundamental solution for pseudo-differential operators of parabolic type. , 14(3):569–592, 1977. N. Jacob. A class of [F]{}eller semigroups generated by pseudo-differential operators. , 215(1):151–166, 1994. N. Jacob. . Imperial College Press, London, 2001. Fourier analysis and semigroups. N. Jacob. . Imperial College Press, London, 2002. Generators and their potential theory. T. Jakubowski. Fundamental solution of the fractional diffusion equation with a singular drift. , 218(2):137–153, 2016. T. Jakubowski and K. Szczypkowski. Time-dependent gradient perturbations of fractional [L]{}aplacian. , 10(2):319–339, 2010. T. Jakubowski and K. Szczypkowski. Estimates of gradient perturbation series. , 389(1):452–460, 2012. P. Jin. . 
preprint 2017, arXiv:1709.02836. P. Kim and J. Lee. Heat kernels of non-symmetric jump processes with exponentially decaying jumping kernel. , 129(6):2130–2173, 2019. P. Kim, R. Song, and Z. Vondraček. Heat [K]{}ernels of [N]{}on-symmetric [J]{}ump [P]{}rocesses: [B]{}eyond the [S]{}table [C]{}ase. , 49(1):37–90, 2018. V. Knopova and A. Kulik. Parametrix construction for certain [L]{}évy-type processes. , 23(2):111–136, 2015. V. Knopova and A. Kulik. Intrinsic compound kernel estimates for the transition probability density of [L]{}évy-type processes and their applications. , 37(1):53–100, 2017. V. Knopova and A. Kulik. Parametrix construction of the transition probability density of the solution to an [SDE]{} driven by [$\alpha$]{}-stable noise. , 54(1):100–140, 2018. A. N. Kochubei. Parabolic pseudodifferential equations, hypersingular integrals and [M]{}arkov processes. , 52(5):909–934, 1118, 1988. A. Kohatsu-Higa and L. Li. Regularity of the density of a stable-like driven [SDE]{} with [H]{}ölder continuous coefficients. , 34(6):979–1024, 2016. V. Kolokoltsov. Symmetric stable laws and stable-like jump-diffusions. , 80(3):725–768, 2000. F. Kühn. Transition probabilities of [L]{}évy-type processes: parametrix construction. , 292(2):358–376, 2019. T. Kulczycki and M. Ryznar. Transition density estimates for diagonal systems of [SDE]{}s driven by cylindrical [$\alpha$]{}-stable processes. , 15(2):1335–1375, 2018. A. M. Kulik. On weak uniqueness and distributional properties of a solution to an [SDE]{} with [$\alpha$]{}-stable noise. , 129(2):473–506, 2019. H. Kumano-go. . MIT Press, Cambridge, Mass.-London, 1981. Translated from the Japanese by the author, Rémi Vaillancourt and Michihiro Nagase. E. E. [Levi]{}. , 24:275–317, 1907. A. Pazy. , volume 44 of [*Applied Mathematical Sciences*]{}. Springer-Verlag, New York, 1983. S. I. Podolynny and N. I. Portenko. On multidimensional stable processes with locally unbounded drift. , 3(2):113–124, 1995. N. I. Portenko. 
Some perturbations of drift-type for symmetric stable processes. , 2(3):211–224, 1994. E. Rothe. Über die [G]{}rundlösung bei parabolischen [G]{}leichungen. , 33(1):488–504, 1931. W. Rudin. . McGraw-Hill Book Co., New York, third edition, 1987. C. Tsutsumi. The fundamental solution for a degenerate parabolic pseudo-differential operator. , 50:11–15, 1974. T. Watanabe. The isoperimetric inequality for isotropic unimodal [L]{}évy processes. , 63(4):487–499, 1983. L. Xie and X. Zhang. Heat kernel estimates for critical fractional diffusion operators. , 224(3):221–263, 2014.
--- abstract: 'Two graphene monolayers that are oppositely charged and placed close to each other are considered. Taking into account valley and spin degeneracy of electrons we analyze the symmetry of the excitonic insulator states in such a system and build a phase diagram that takes into account the effect of the symmetry breaking due to the external in-plane magnetic field and the carrier density imbalance between the layers.' author: - 'Yevhen F.' - Vadim Cheianov - 'Vladimir I. Fal’ko' title: 'Phases of the excitonic condensate in two-layer graphene' --- \[sec:1 Introduction\]Introduction. =================================== ![ (a) The excitonic condensation due to an electron-hole pairing is studied in the system of two spatially separated graphene monolayers with an excess of electrons on layer 1 and a lack of electrons on layer 2. (b) The schematic phase diagram of the excitonic condensation in the system at different values of a Zeeman splitting and different values of the asymmetry between Fermi energies in layer 1 and 2, $\epsilon_{Z}=\mu_{B}|\mathbf{h}|$ is the Zeeman energy in an in-plane magnetic field $\mathbf{h}.$ []{data-label="fig 1"}](figure_1.eps){width="8cm"} The excitonic insulator [@blatt; @keldysh; @kopaev; @jerome; @rice; @kohn; @PR; @keldysh; @kozlov; @halperin; @rice; @rev; @mod; @phys] was predicted theoretically four decades ago in 3D semiconductors and then in spatially separated layers of electrons and holes.[@lozovik; @yudson; @shevchenko] Since then, an excitonic insulator has been searched for in a variety of systems. The excitonic insulator is a material where the electron-hole excitonic correlations lead to the formation of a gapped state characterized by the order parameter resembling a superfluid condensate of excitons. 
Such a correlated state has been observed in double-quantum well semiconductor structures in quantizing magnetic fields.[@quantum; @well; @start; @butov; @moon; @zhang; @joglekar; @9; @spielman; @eisenstein; @high] After the experimental discovery of graphene [@Novoselov; @1; @Novoselov; @2; @Zhang; @1; @Zhang; @2] it has been discussed as a possible candidate for the experimental realization of the excitonic insulator state, [@Aleiner; @lozovik; @sokolik; @lozovik; @merkulova; @sokolik; @min; @..; @macdonald; @zhang; @joglekar; @lozovik; @willander] sparking the ongoing debate [@kharitonov; @efetov; @KharitonovEfetov0903; @macdonald; @comment] about the critical temperature $T_{c}$ of the excitonic condensate transition in a two-layer graphene system. Various estimates of $T_{c}$ for such a system span a wide range of magnitudes, from millikelvins[@kharitonov; @efetov; @KharitonovEfetov0903] up to kelvins[@lozovik2009; @lozovik2009v2; @lozovik2010; @mink] and further up to room temperature.[@min; @..; @macdonald; @zhang; @joglekar] The considered system [@lozovik; @sokolik; @lozovik; @merkulova; @sokolik; @min; @..; @macdonald; @zhang; @joglekar; @kharitonov; @efetov; @macdonald; @comment; @KharitonovEfetov0903] consists of two parallel, separately controlled graphene monolayers, in which external gates induce a finite density of electrons in layer 1 and of holes in layer 2, Fig. \[fig 1\](a). Recently, the two-layer graphene system has been realized experimentally.[@schmidt; @schmidt2; @schmidt3; @schmidt4; @FalkoGaugeField; @FalkoCheianovTunable] In this paper, we extend the existing theory of the excitonic insulator state in a two-layer graphene system: we analyze the symmetry of the excitonic insulator and classify its phases. As a result, a phase diagram of the excitonic insulator is built that takes into account the effect of the symmetry breaking due to the Zeeman splitting and the asymmetry between electron/hole densities in layers 1 and 2.
The phase diagram, Fig. \[fig 1\](b), contains three phases: ${\rm B,B^{\prime}}$ and ${\rm A_{1}}.$ Transitions between the phases are found to be of the first order. These transitions are driven by an in-plane magnetic field and by a variation of external gate voltages, which control the charge carrier densities in the layers: the density of all electrons $n_{1e}$ in layer 1 (which corresponds to the Fermi energy $E_{F}^{(1)}=\hbar v\sqrt{\pi n_{1e}}/2),$ and the density of holes $n_{2h}$ in layer 2 (which corresponds to the negative Fermi energy in layer 2, $E_{F}^{(2)}=-\hbar v\sqrt{\pi n_{2h}}/2$). The ${\rm B}$ phase, Fig. \[fig 1\](b), exists when there is no magnetic field and the carrier densities are the same in both layers, $n_{1e}=n_{2h}$ (i.e. when $E_{F}^{(1)}=|E_{F}^{(2)}|).$ The ${\rm B^{\prime}}$ phase exists under the same condition $n_{1e}=n_{2h}$ but when an in-plane magnetic field is applied, which causes a Zeeman splitting of the energies of electrons with different spin projections. The ${\rm A_{1}}$ phase exists when the symmetry of the carrier densities is violated, e.g. $n_{1e}>n_{2h},$ and when the corresponding splitting of the Fermi energies $E_{F}^{(1)}-|E_{F}^{(2)}|$ is equal to the Zeeman splitting due to an in-plane magnetic field. The diversity of the obtained phases is due to the high symmetry of the normal ground state, which can be broken in several different ways, leading to a variety of phases possessing different symmetry groups. A well-known example of a system with a diversity of phases arising from different ways of breaking the normal-state symmetry is liquid Helium-3.[@leggettrmp75; @wheatley; @mineevufn; @voloviksymmetryin3-Hechapter; @vollhardt; @wolfle] In liquid Helium-3 the symmetry of the order parameter can be changed by the corresponding external parameters, leading to phase transitions. The analysis in this paper is organised as follows. Section \[sec:2 Two-layer Hamiltonian\] describes the theoretical model of the considered system.
Pairing of electrons and holes within mean field theory is introduced in Section \[sec:3 Excitonic pairing\]. Section \[sec:4 Symmetry\] provides the symmetry analysis and the phase classification of the excitonic correlated state. Section \[sec:5 Phases\] contains a detailed description of the most symmetric phases; their properties are summarized in Table \[tab:phases\]. Results are discussed in Section \[sec:6 Results\]. \[sec:2 Two-layer Hamiltonian\]Two-layer Hamiltonian ==================================================== Graphene [@wallace; @review; @1; @review; @2] is a gapless semiconductor with the Fermi surface consisting of two distinct points, $\mathbf{K}_{+}$ and $\mathbf{K}_{-},$ called valleys. Near these Fermi points electrons have a linear dispersion $E(p)=\pm vp,$ with a velocity $v\approx10^{8}{\rm cm/sec},$[@Novoselov; @2] here $p=|\mathbf{p}|,$ $\mathbf{p} =\mathbf{k}-\mathbf{K}_{\pm}$ is the momentum of an electron relative to the Fermi point. Using external gates, one can independently tune the carrier density in each of the two graphene flakes.[@Novoselov; @2] Neglecting tunneling, the electrons in the two-layer graphene system can initially be described by the Hamiltonian $\hat{H}_{{\rm 2layer}}=\hat{H}_{{\rm s.p.}}+\hat{H}_{11}+\hat{H}_{22}+\hat{H}_{12},$ where the single-particle part of the Hamiltonian is $$\hat{H}_{{\rm s.p.}}= \sum_{l,\zeta,\mathbf{p},s} (s vp-E_{F}^{(l)})\,\, a^{\dagger}_{l,\zeta,\mathbf{p},s}\,\, a_{l,\zeta,\mathbf{p},s}, \label{H kin}$$ the operators $a_{l,\zeta,\mathbf{p},s}^{\dagger}\,(a_{l,\zeta,\mathbf{p},s})$ create (annihilate) an electron in layer $l=1,2$ in the $s=+/-$ conduction or valence band with momentum $\mathbf{p}=p(\cos\phi_{\mathbf{p}},\sin\phi_{\mathbf{p}}),$ $E_{F}^{(1)}$ and $E_{F}^{(2)}$ are the Fermi energies, which correspond to the charge carrier densities in the layers.
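As a quick numerical illustration of how gate-induced carrier densities translate into the Fermi energies entering Eq. (\[H kin\]), the relation $E_{F}=\hbar v\sqrt{\pi n}/2$ quoted in the Introduction can be evaluated directly. The following sketch uses the paper's convention; the function name and the sample densities are ours, not from the paper.

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
V_F = 1.0e6           # Fermi velocity, m/s (v ~ 1e8 cm/sec as quoted in the text)

def fermi_energy_meV(n_per_cm2):
    """Fermi energy E_F = hbar * v * sqrt(pi * n) / 2 (the convention used
    in this paper) for a carrier density n given in cm^-2, returned in meV."""
    n = n_per_cm2 * 1e4  # cm^-2 -> m^-2
    e_joule = HBAR * V_F * math.sqrt(math.pi * n) / 2.0
    return e_joule / 1.602176634e-19 * 1e3  # J -> meV

# Matched densities n_1e = n_2h give E_F^(1) = -E_F^(2); E_F scales as sqrt(n).
for n in (1e11, 1e12, 1e13):
    print(f"n = {n:.0e} cm^-2  ->  E_F = {fermi_energy_meV(n):.1f} meV")
```

Since $E_{F}\propto\sqrt{n}$, quadrupling the density doubles the Fermi energy, a convenient sanity check on the gate-tuning condition $n_{1e}=n_{2h}$.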
The index $\zeta$ denotes 4 different pairs of spin projection $(\uparrow,\downarrow)$ and valleys $(\mathbf{K}_{+},\mathbf{K}_{-}).$ In the Hamiltonian $\hat{H}_{{\rm 2layer}}$ the terms $\hat{H}_{11}$ and $\hat{H}_{22}$ take into account the intra-layer interaction. These terms can be ignored in the following studies, provided that one uses a screened inter-layer interaction in the term $\hat{H}_{12}.$ Hence in $\hat{H}_{12}$ we keep only those terms that contribute to the BCS mean field theory,[@mineevsamokhin] absorbing other contributions into a renormalization of the velocity and the Fermi energy in the single-particle part (\[H kin\]) of the Hamiltonian $$\begin{aligned} \hat{H}_{12}&=&- \sum_{\mathbf{p},\mathbf{p}^{\prime},\,s,s^{\prime}} V(|\mathbf{p}-\mathbf{p}^{\prime}|)\, {1+ss^{\prime} \cos(\phi_{\mathbf{p}}-\phi_{\mathbf{p^{\prime}}}) \over2} \nonumber \\ &&\times \sum_{\zeta,\zeta^{\prime}}\, a^{\dagger}_{1,\zeta,\mathbf{p},s}\,\, a^{\dagger}_{2,\zeta^{\prime},\mathbf{p^{\prime}},-s^{\prime}}\,\, a_{1,\zeta,\mathbf{p^{\prime}},s^{\prime}}\,\, a_{2,\zeta^{\prime},\mathbf{p},-s}. \label{H int}\end{aligned}$$ The scattering process, described by $\hat{H}_{12},$ is shown in Fig. \[fig:Vrpa\]. ![ A typical transition which is described by $\hat{H}_{12}$ in Eq. (\[H int\]). Indices $l=1{\rm~or~}2,\zeta,\mathbf{p},s$ denote a layer, a pair of the spin projection and valley, a momentum of an electron and a conduction $s=+$ or valence $s=-$ band. []{data-label="fig:Vrpa"}](figure_2.eps){width="5cm"} The function $V(q)$ denotes a screened Coulomb interaction in the static limit $V(q)=V(q,\omega\ll q)$.
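The suppression encoded in the angular factor $\left[1+ss^{\prime}\cos(\phi_{\mathbf{p}}-\phi_{\mathbf{p^{\prime}}})\right]/2$ of Eq. (\[H int\]) can be made explicit with a short numerical sketch (the function name is ours, introduced for illustration):

```python
import math

def chirality_factor(s, s_prime, dphi):
    """Angular factor [1 + s*s' * cos(phi_p - phi_p')] / 2 appearing in the
    inter-layer interaction Hamiltonian H_12; s, s_prime are +1 or -1."""
    return (1 + s * s_prime * math.cos(dphi)) / 2.0

# Intra-band scattering (s*s' = +1): backscattering (dphi = pi) is fully
# suppressed, while forward scattering (dphi = 0) is unimpeded.
print(chirality_factor(+1, +1, math.pi))  # 0.0
print(chirality_factor(+1, +1, 0.0))      # 1.0
# Inter-band scattering (s*s' = -1): forward scattering is forbidden instead.
print(chirality_factor(+1, -1, 0.0))      # 0.0
```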
The factor $\left[1+ss^{\prime} \cos(\phi_{\mathbf{p}}-\phi_{\mathbf{p^{\prime}}})\right] /2$ in Eq. (\[H int\]) reflects chiral properties of electrons related to the sublattice composition of electronic Bloch wave functions.[@review; @1; @review; @2] These chiral properties of electrons result in suppressed backscattering if an electron does not change the energy band upon scattering $(ss^{\prime}=+)$; otherwise $(ss^{\prime}=-)$ the electron cannot forward-scatter.[@katsnelson] \[sec:3 Excitonic pairing\]Excitonic pairing, mean field order parameter ======================================================================== The excitonic insulator state of the electron-hole liquid is characterized by the electron-hole correlations on the Fermi surface, Fig. \[fig:phase\]. Mathematically, this means that in the excitonic insulating state there is a non-vanishing ground state average ${\rm F}$ of electron operators ![ The excitonic electron-hole bound state in the two-layer graphene. The left hand side of the figure shows the electron spectrum in graphene layers 1 and 2. An electron on the Fermi surface in layer 1 is shown as a filled circle. The absence of an electron on the Fermi surface in layer 2 is shown as an empty circle. The closed line around both circles represents the excitonic pairing, which develops due to the Coulomb interaction (shown as a wavy line). The right hand side of the figure shows the coinciding Fermi circles in both layers at the Fermi momentum $p_F.$ []{data-label="fig:phase"}](figure_3.eps){width="8.0cm"} $${\rm F}_{\zeta\zeta^{\prime},s}({\bf p}) = \langle {a}^{\dagger}_{2,\zeta^{\prime},{\bf p},-s} {a}_{1,\zeta,{\bf p},+s} \rangle.
\label{anom average}$$ For the existence of the non-zero ground state average ${\rm F}$ it is crucial that the Fermi surfaces for electrons and holes coincide.[@BCS; @mineevsamokhin] Due to the electron-hole symmetry of the energy spectrum in graphene, the electron-hole excitonic correlations (\[anom average\]) in the considered system are most developed when the density of electrons in layer 1 is equal to the density of holes in layer 2, $n_{1e}=n_{2h},$ or, in terms of the Fermi energies, $E_{F}^{(1)}=-E_{F}^{(2)}$, Fig. \[fig:phase\]. However, apart from this condition there are other external conditions under which the excitonic correlations (\[anom average\]) can develop. Thus, although the excitonic insulator state disappears when the symmetry $n_{1e}=n_{2h}$ is violated by external gates, we show below that excitonic correlations can be restored by the in-plane magnetic field. Based on the detailed analysis of excitonic correlations in monolayer graphene in the in-plane magnetic field, carried out by Aleiner and co-authors,[@Aleiner] we show that the excitonic insulator state can exist in various phases in the two-layer graphene system. In order to study phases of the excitonic insulator state under different external conditions, we first apply the standard mean-field approximation.[@mineevsamokhin] We assume that the product of the operators $a^{\dagger}_{2,\zeta^{\prime},\mathbf{p},-s}a_{1,\zeta,\mathbf{p},s}$ weakly deviates from its non-vanishing ground state average. We expand the interacting part $\hat{H}_{12},$ Eq. (\[H int\]), of the Hamiltonian $\hat{H}_{{\rm 2layer}}$ to linear order with respect to these small deviations and neglect constant terms.
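For each fixed $(\zeta,\mathbf{p},s)$ this mean-field decoupling yields a BCS-like $2\times2$ block coupling a layer-1 state to a layer-2 state, with $\xi=svp-E_{F}$ on the diagonal and the order parameter off the diagonal. As a minimal numerical sketch — assuming, purely for illustration, that the order parameter acts as a scalar in the spin$\otimes$valley space — the familiar gapped spectrum $\pm\sqrt{\xi^{2}+|\Delta|^{2}}$ can be verified:

```python
import numpy as np

def mean_field_spectrum(xi, delta):
    """Eigenvalues of the BCS-like 2x2 block
        [[ xi,          delta ],
         [ conj(delta), -xi   ]]
    with xi = s*v*p - E_F and a scalar order parameter delta."""
    h = np.array([[xi, delta], [np.conj(delta), -xi]])
    return np.linalg.eigvalsh(h)  # ascending order: [-gap, +gap]

xi, delta = 0.3, 0.1 + 0.2j
gap = np.sqrt(xi**2 + abs(delta)**2)
assert np.allclose(mean_field_spectrum(xi, delta), [-gap, gap])
```

An excitation gap of size $|\Delta|$ opens symmetrically around the common Fermi surface, in line with the pairing picture of Fig. \[fig:phase\]; the general case with a $4\times4$ matrix order parameter is classified in the following sections.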
The mean field Hamiltonian of the system becomes $$\hat{H}_{{\rm mf}}= \hat{H}_{{\rm s.p.}} + \sum_{\mathbf{p},s,\zeta,\zeta^{\prime}} [ a^{\dagger}_{1,\zeta,\mathbf{p},s} \Delta_{\zeta\zeta^{\prime},s}(\mathbf{p}) a_{2,\zeta^{\prime},\mathbf{p},-s} +{\rm H.c.}], \label{mean-field H}$$ where ${\rm H.c.}$ stands for “Hermitian conjugate”, and $$\begin{aligned} \Delta_{\zeta\zeta^{\prime},s}(\bf{p})&=& -\sum_{\mathbf{p^{\prime}},s^{\prime}} {\rm F}_{\zeta\zeta^{\prime},s^{\prime}}({\bf p}^{\prime}) V(|\mathbf{p}-\mathbf{p^{\prime}}|)\, \nonumber \\ & &\times {1+ss^{\prime}\cos(\phi_{\mathbf{p}}-\phi_{\mathbf{p^{\prime}}}) \over2}. \label{Delta definition}\end{aligned}$$ The quantities $\Delta_{\zeta\zeta^{\prime},s}(\mathbf{p})$ form the matrix ${\Delta}$ of the order parameter. The index $\zeta$ denotes the 4 different pairs of spin projections and valleys $(\uparrow \mathbf{K}_{+},\uparrow \mathbf{K}_{-},\downarrow \mathbf{K}_{+},\downarrow \mathbf{K}_{-}).$ Thus in the spin$\otimes$valley space the order parameter is given by the $4\times4$ matrix $\Delta$ with matrix elements (\[Delta definition\]). For brevity we omit the index $s$ and the momentum $\mathbf{p}$ in the notation for the order parameter $\Delta.$ For further analysis it is convenient to rewrite the Hamiltonian (\[mean-field H\]) as follows: $\hat{H}_{{\rm mf}}=\sum_{\zeta,\bf{p},s} {\Psi}_{\zeta,\bf{p},s}^{\dagger} H_{{\rm mf}}(\mathbf{p},s) {\Psi}_{\zeta,\bf{p},s},$ where $\Psi_{\zeta,\bf{p},s}=\left({a}_{1,\zeta,\bf{p},+s},{a}_{2,\zeta,\bf{p},-s}\right)^{T},$ and $$H_{{\rm mf}}(\mathbf{p},s) = \left(\begin{array}{cc} (svp-E_{F})\openone &\Delta\\ \Delta^{\dagger}&-(svp-E_{F})\openone\\ \end{array}\right).
\label{Hmf 1/N}$$ Here all elements of the matrix $H_{{\rm mf}}$ are $4\times4$ matrices in the spin$\otimes$valley space: the diagonal elements have the structure of the identity matrix $\openone$ in this space, whereas $\Delta$ is some $4\times4$ matrix, whose structure is identified in this paper for each phase of the excitonic correlated state. The matrix of the order parameter ${\Delta}$ describes the correlations between conduction/valence electrons in layers 1 and 2 below a critical temperature $T_{c}$. Nevertheless, the phase classification can be made regardless of the value of the transition temperature $T_{c}.$ Assuming that the excitonic insulator state can be observed in the two-layer graphene system, we analyze the symmetry of the mean field Hamiltonian (\[mean-field H\]) and the order parameter $\Delta.$ As a result, the classification of all phases of the excitonic insulating state of the two-layer graphene system is presented in the next Section, and a detailed discussion of each phase is presented in Section \[sec:5 Phases\].

\[sec:4 Symmetry\]Symmetry analysis of the correlated state
===========================================================

The analysis in this section is based on the idea that the order parameter breaks the initial symmetry of the Hamiltonian. The initial symmetry group $G$ of the Hamiltonian $\hat{H}_{{\rm 2layer}}$ is formed by global unitary transformations of an electronic single-particle state in the 4-component spin$\otimes$valley space independently in layers 1 and 2. These transformations are represented by independent matrices ${\rm U}^{(1)}$ and ${\rm U}^{(2)}$ in layers 1 and 2 respectively. Therefore the group $G$ is given by the direct product of the corresponding unitary groups $U_{4}$ $$G= U_{4}^{(1)} \times U_{4}^{(2)}.
\label{G}$$ The unitary group $U_{4}^{(l)},$ $l=1,2,$ consists of $4\times4$ unitary matrices ${\rm U}^{(l)}$ which transform the electron operators in the $l$-th layer as follows: $$a_{l,\zeta,\mathbf{p},s} \rightarrow \sum_{\zeta^{\prime}} {\rm U}^{(l)}_{\zeta\zeta^{\prime}} a_{l,\zeta^{\prime},\mathbf{p},s}. \label{G transformations}$$ Thus, as seen from Eqs. (\[mean-field H\]) and (\[Hmf 1/N\]), under the symmetry transformations (\[G transformations\]) the order parameter ${\rm \Delta}$ transforms as $${\Delta} \longrightarrow {\rm U}^{(1)\dagger} \, {\Delta} \, {\rm U}^{(2)}. \label{cond on Delta1}$$ This implies that the Hamiltonian of the system is no longer invariant under the action of the group $G.$ However, for any fixed non-zero ${\Delta}$ there is always some subgroup $H$ of the group $G,$ $H\subset G,H\neq G,$ such that all transformations from the group $H$ leave the order parameter $\Delta$ invariant: $${\rm U}^{(1)\dagger}_{H} \, {\Delta} \, {\rm U}^{(2)}_{H} = \Delta. \label{cond on Delta U1U2H}$$ Such transformations ${\rm U}^{(1)}_{H}$ in layer 1 and ${\rm U}^{(2)}_{H}$ in layer 2 form a symmetry group $H$ $$\left(\begin{array}{cc} {\rm U}^{(1)}_{H}&{\rm 0}\\ {\rm 0}&{\rm U}^{(2)}_{H}\\ \end{array}\right) \in H\subset G. \label{U1U2 H}$$ Only transformations from the group $H$ leave the ground state of the excitonic insulator invariant, i.e. only these transformations leave the mean field Hamiltonian (\[mean-field H\]) and (\[Hmf 1/N\]) invariant: $$\left(\begin{array}{cc} {\rm U}^{(1)\dagger}_{H}&{\rm 0}\\ {\rm 0}&{\rm U}^{(2)\dagger}_{H}\\ \end{array}\right) H_{{\rm mf}}(\mathbf{p},s) \left(\begin{array}{cc} {\rm U}^{(1)}_{H}&{\rm 0}\\ {\rm 0}&{\rm U}^{(2)}_{H}\\ \end{array}\right) =H_{{\rm mf}}(\mathbf{p},s).
\label{Hmf H sym}$$ Thus the symmetry group $G$ of the initial uncorrelated normal ground state of the system is broken down to the symmetry group $H$ of the ground state of the excitonic insulator. All transformations from $G$ which are not included in $H$ form the factor space $G/H.$ These transformations change the order parameter $\Delta$; however, they do not change the energy of the corresponding ground state. Therefore the manifold of all matrices $\Delta$ which can be obtained by transformations from $G/H$ forms the degeneracy space of the order parameter. Consequently, the manifold of the corresponding ground states forms a phase of the correlated state. It is important to notice that all these ground states within the same phase are described by the same symmetry group $H,$ which is the symmetry group of the order parameter. Therefore the phases of a correlated state can be classified by the symmetry group $H$ and the degeneracy space of the order parameter. The phase classification presented in this paper is reminiscent of the classification of the various degeneracy spaces of the order parameter in liquid Helium-3.[@leggettrmp75; @wheatley; @vollhardt; @wolfle; @mineevufn] This classification principle was used in the determination of superconducting phases in nontrivial superconductors [@mineevsamokhin] and superfluid phases in liquid Helium-3.
[@brudervollhardt; @voloviksymmetryin3-Hechapter]

\[tab:phases\] Classification of the phases of the excitonic insulator state in the two-layer graphene system (here ${\rm V},{\rm \widetilde{V}}\in U_{4}$).

| External conditions | Symmetry group $G$ of the two-layer Hamiltonian | Phase | Matrix structure of the order parameter ${\Delta}$ | Symmetry group $H$ of the order parameter ${\Delta}$ | ${\rm dim}[G/H]$ | Single particle spectrum |
|---|---|---|---|---|---|---|
| $n_{1e}=n_{2h},$ $\epsilon_{Z}=0$ | $U_{4}^{(1)}\times U_{4}^{(2)}$ | $B$ | ${\rm V}$ | $U_{4}^{(1,2)}$ | 16 | gapped |
| | | $A_{0}^{\prime}$ | ${\rm \widetilde{V}}^{\dagger}\,{\rm Diag}[1,1,1,0]\,{\rm V}$ | $U_{1}^{(1)}\times U_{3}^{(1,2)}\times U_{1}^{(2)}$ | 21 | gapless |
| | | $A_{1}^{\prime}$ | ${\rm \widetilde{V}}^{\dagger}\,{\rm Diag}[1,1,0,0]\,{\rm V}$ | $U_{2}^{(1)}\times U_{2}^{(1,2)}\times U_{2}^{(2)}$ | 20 | gapless |
| | | $A_{2}^{\prime}$ | ${\rm \widetilde{V}}^{\dagger}\,{\rm Diag}[1,0,0,0]\,{\rm V}$ | $U_{3}^{(1)}\times U_{1}^{(1,2)}\times U_{3}^{(2)}$ | 13 | gapless |
| $n_{1e}=n_{2h},$ $\epsilon_{Z}>0$ | $U_{2}^{(1\uparrow)}\times U_{2}^{(1\downarrow)}\times U_{2}^{(2\uparrow)}\times U_{2}^{(2\downarrow)}$ | $B^{\prime}$ | $\left(\begin{smallmatrix} 0 & {\rm v}\\ \widetilde{{\rm v}} & 0 \end{smallmatrix}\right),$ ${\rm v},\widetilde{{\rm v}}\in U_{2}$ | $U_{2}^{(1\uparrow,2\downarrow)}\times U_{2}^{(1\downarrow,2\uparrow)}$ | 8 | gapped |
| $n_{1e}>n_{2h}$ (see note), $\epsilon_{Z}>0$ | $U_{2}^{(1\uparrow)}\times U_{2}^{(1\downarrow)}\times U_{2}^{(2\uparrow)}\times U_{2}^{(2\downarrow)}$ | $A_{1}$ | $\left(\begin{smallmatrix} {\rm v} & 0\\ 0 & 0 \end{smallmatrix}\right),$ ${\rm v}\in U_{2}$ | $U_{2}^{(1\downarrow)}\times U_{2}^{(1\uparrow,2\uparrow)}\times U_{2}^{(2\downarrow)}$ | 4 | gapless |

Note ($A_{1}$ phase): here we assume the parameters to be tuned so that the Fermi surface of electrons with spin up in layer 1 coincides with the Fermi surface of holes with spin down in layer 2; for details see Fig. \[fig:Apr\].

In order to classify the phases of the excitonic insulating state we are going to classify the degeneracy spaces of order parameters with the same symmetry.
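The values of ${\rm dim}[G/H]$ quoted in Table \[tab:phases\] follow from ${\rm dim}\,U_{n}=n^{2}$ and the additivity of dimensions over direct products. The following Python sketch is purely illustrative (the group factors are transcribed from the table; the code is not part of the original analysis) and reproduces the counting:

```python
# dim U(n) = n^2; the dimension of a direct product is the sum of dimensions.
def dim_u(n):
    return n * n

def dim_product(factors):
    return sum(dim_u(n) for n in factors)

dim_G_zero_field = dim_product([4, 4])        # G = U4 x U4 (no magnetic field)
dim_G_in_field = dim_product([2, 2, 2, 2])    # G after Zeeman splitting

# Symmetry groups H, transcribed from Table [tab:phases]:
phases = {
    "B":   (dim_G_zero_field, dim_product([4])),        # combined U4
    "A0'": (dim_G_zero_field, dim_product([1, 3, 1])),
    "A1'": (dim_G_zero_field, dim_product([2, 2, 2])),
    "A2'": (dim_G_zero_field, dim_product([3, 1, 3])),
    "B'":  (dim_G_in_field,   dim_product([2, 2])),
    "A1":  (dim_G_in_field,   dim_product([2, 2, 2])),
}

dims = {name: g - h for name, (g, h) in phases.items()}
# dims reproduces the dim[G/H] column of the table.
```

For instance, for the $B$ phase the sketch gives ${\rm dim}[G/H]=32-16=16,$ in agreement with the table.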
For this we consider the condition (\[cond on Delta U1U2H\]) and use the method of singular value decomposition.[@linearalgebra] It allows us to represent an arbitrary matrix ${\Delta}$ as a product of a unitary matrix ${\rm \widetilde{V}}^{\dagger}$, a diagonal matrix ${\rm D}$ with real non-negative numbers on the diagonal, and another unitary matrix ${\rm V}$. Applying the singular value decomposition to the order parameter at any given values of $s$ and $\mathbf{p}$ we obtain $${\Delta} = {\rm \widetilde{V}}^{\dagger}_{s}(\mathbf{p}) {\rm D}_{s}(\mathbf{p}) {\rm V}_{s}(\mathbf{p}). \label{V Diag V}$$ First of all, however, we notice that in the considered system the lowest ground state energy is realized when the matrices ${\rm V},{\rm \widetilde{V}}$ do not depend on the momentum $\mathbf{p}$ and the index $s=\pm.$ The reason is that in this case the products of the matrices ${\rm V}$ and ${\rm V}^{\dagger}$ (${\rm \widetilde{V}}$ and ${\rm \widetilde{V}}^{\dagger}$) in the expression for the ground state energy cancel to a unit matrix. Such a cancellation gives the maximal negative contribution to the ground state energy, and therefore the ground state energy attains its minimal value. If we instead assume that the unitary matrices in Eq. (\[V Diag V\]) depend on the momentum $\mathbf{p}$ and the index $s=\pm,$ then the unitary matrices at different momenta $\mathbf{p},\mathbf{p}^{\prime}$ and different indices $s,s^{\prime}$ do not cancel each other, which increases the ground state energy compared to the previous case.
Thus we conclude that in order to realize the lowest energy of the ground state, the matrices $\widetilde{{\rm V}}$ and ${\rm V}$ cannot depend on the momentum $\mathbf{p}$ and the index $s.$ Therefore the singular value decomposition of the matrix of the order parameter becomes $${\Delta} = {\rm \widetilde{V}}^{\dagger} {\rm D} {\rm V}, \label{V Diag V 2}$$ where on the right hand side of the equation (\[V Diag V 2\]) only the matrix ${\rm D}$ depends on $\mathbf{p}$ and $s,$ but for brevity we omit these indices. Second, all transformations from the group $G$, including those from the factor space $G/H$, do not change the diagonal elements of the matrix ${\rm D},$ but change the matrices ${\rm \widetilde{V}},$ ${\rm V}$ into other unitary matrices. Thus if we introduce the notations $${\rm \widetilde{V}}^{\prime\dagger} \equiv {\rm U}^{(1)\dagger} \, {\rm \widetilde{V}}^{\dagger}, \qquad {\rm V^{\prime}} \equiv {\rm V} \, {\rm U}^{(2)},$$ then under the transformation (\[G transformations\])-(\[cond on Delta1\]) the order parameter transforms in the following way: $${\Delta} = {\rm \widetilde{V}}^{\dagger} {\rm D} {\rm V} ~~\longrightarrow~~ {\Delta}^{\prime}= {\rm \widetilde{V}}^{\prime\dagger} {\rm D} {\rm V^{\prime}}.$$ Thus under this transformation the diagonal matrix ${\rm D}$ does not change. Recall that the degeneracy space of the order parameter is obtained by acting on the order parameter ${\Delta}$ with all transformations from the group $G$ (transformations from the subgroup $H$ do not change the order parameter, while the remaining transformations from the factor space $G/H$ create the degeneracy space of the order parameter).
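This invariance of the diagonal factor can be illustrated numerically. The sketch below (an illustration with random matrices, not part of the original derivation) applies a transformation from $G$ to a random $4\times4$ matrix $\Delta$ and checks that its singular values, i.e. the diagonal elements of ${\rm D},$ are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR of a complex Gaussian matrix gives a Haar-distributed unitary;
    # the phase fix makes the decomposition unique.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

delta = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u1, u2 = random_unitary(4), random_unitary(4)

# Transformation from the group G: Delta -> U1^dagger Delta U2.
delta_prime = u1.conj().T @ delta @ u2

# The singular values (diagonal of D) are invariant under the transformation.
d = np.linalg.svd(delta, compute_uv=False)
dp = np.linalg.svd(delta_prime, compute_uv=False)
invariant = np.allclose(d, dp)
```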
As long as only the matrices ${\rm V}$ and ${\rm \widetilde{V}}$ are changed by transformations from $G$, we obtain that the degeneracy space of the order parameter and the phase of the correlated state are determined only by the diagonal elements of the matrix ${\rm D}.$ Finally, from the condition (\[cond on Delta U1U2H\]) we have found that all possible degeneracy spaces of the order parameter are classified by the numbers of equal and different diagonal elements of the matrix ${\rm D}.$ In the case of physically relevant phases there are additional restrictions on the diagonal elements of the matrix ${\rm D}$: among all possible matrices ${\Delta}$ only physically relevant order parameters satisfy the self-consistency equation. For the phase classification it is sufficient to consider the BCS self-consistency equation for the order parameter. Diagonalizing the self-consistency equation by the unitary matrices from Eq. (\[V Diag V 2\]), one obtains 4 equations for the diagonal elements of the matrix ${\rm D}$, each equation corresponding to some value of the index $\zeta.$ These equations have the same structure and depend on the Fermi momentum $p_{F}$. If the Fermi momentum $p_{F}$ is the same for all types of electrons (for all indices $\zeta$), then these 4 self-consistency equations are identical, and apart from a trivial zero solution they have the same non-zero solution. Hence in such a situation, in physically relevant phases an arbitrary diagonal element of the matrix ${\rm D}$ can be equal either to the other non-zero diagonal elements or to zero. The application of an in-plane magnetic field in principle changes this description because of Zeeman splitting. However, in the case where the Fermi energy is much greater than the Zeeman energy, $E_{F}\gg\epsilon_{Z}$, the magnetic field does not change the situation essentially, as long as it is possible to neglect the difference between the Fermi momenta of electrons with opposite spin projections in the self-consistency equations.
Therefore the four self-consistency equations for the four diagonal elements of the matrix ${\rm D}$ become approximately identical also when a relatively small in-plane magnetic field $(\epsilon_{Z}\ll E_{F})$ is applied. The non-zero solution of these equations is given by the gap function $g_{s}(\mathbf{p})$, which at the Fermi surface $(s=+,|\mathbf{p}|=p_{F})$ determines the gap in the single-particle excitation spectrum. Thus we conclude that in all physically relevant phases the diagonal elements of the matrix ${\rm D}$ in the singular value decomposition (\[V Diag V 2\]) of the order parameter $\Delta$ are either zeros or approximately equal to the gap function $g_{s}(\mathbf{p}).$ Substituting this result into Eq. (\[V Diag V 2\]), we extract the gap function as a multiplier. Thus we conclude that the order parameter ${\Delta}$ in all physically relevant phases has the form $${\Delta} \cong g_{s}(\mathbf{p}) {\rm \widetilde{V}}^{\dagger} {\rm D} {\rm V}. \label{W matrix in Delta}$$ Here the matrix ${\rm D}$ is a diagonal matrix with 0 or 1 on the diagonal. The representation (\[W matrix in Delta\]) becomes approximate in the case of an applied in-plane magnetic field with $\epsilon_{Z}\ll E_{F}$. The dependence of the order parameter $\Delta$ on the variables $s$ and $\mathbf{p}$ is completely contained in the function $g_{s}(\mathbf{p}).$ Once the matrix of the order parameter is given, the symmetry group $H$ is found from the equation (\[cond on Delta U1U2H\]). For this the matrix of the order parameter is represented through its singular value decomposition (\[V Diag V 2\]).
The constant matrices ${\rm V}$ and ${\rm \widetilde{V}}$ are absorbed into the matrices ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$ of the global symmetry transformations from the symmetry group $H.$ Then the equation (\[cond on Delta U1U2H\]) connects the two unitary matrices ${\rm VU}^{(1)}_{H}{\rm V^{\dagger}}\,$ and ${\widetilde{{\rm V}}{\rm U}}^{(2)}_{H}{\widetilde{{\rm V}}^{\dagger}}$ and the diagonal matrix ${\rm D}$ with $0$ or $1$ on the diagonal. Thus the matrices ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$ of the transformations from the symmetry group $H$ are obtained.

\[sec:5 Phases\]Phases
======================

In this section we provide a detailed description of the phases of the excitonic insulator state in the two-layer graphene system. The results of this section are summarized in Table \[tab:phases\].

The $B$ phase.
--------------

First we consider the situation when there is no external magnetic field and the charge carrier densities in the layers are the same, $n_{1e}=n_{2h}.$ In this case the symmetry group $G$ of the two-layer Hamiltonian of the system in the normal state is given in Eq. (\[G\]). Under the mentioned conditions the Fermi circle in the conduction band in layer 1 coincides with the Fermi circle in the valence band in layer 2 due to the electron-hole symmetry in graphene. Hence the non-vanishing ground state average ${\rm F}$, Eq. (\[anom average\]), can be formed by all species of electrons. Taking into account that the ground state with the lower energy is more stable, we consider the phase in which excitonic correlations are developed among all species of electrons. In this case the order parameter matrix $\Delta$ and the matrix ${\rm D},$ Eq. (\[W matrix in Delta\]), are non-degenerate matrices. Moreover, because there is only one Fermi circle for all species of electrons, the most stable ground state is characterized by a matrix ${\rm D}$ in Eq. (\[W matrix in Delta\]) with equal non-zero diagonal elements, i.e. ${\rm D}$ is an identity matrix.
As discussed in the previous section, this conclusion follows from the consideration of the self-consistency equation for the order parameter. Thus, substituting ${\rm D}= \openone$ into the equation (\[W matrix in Delta\]), we obtain the following structure of the order parameter in the spin$\otimes$valley space: $${\Delta} = g_{s}(\mathbf{p}) {\rm V}, \qquad {\rm V}\in U_{4}. \label{Delta B phase}$$ Such a structure of the order parameter determines the symmetry group $H$ of the ground state and the degeneracy space of the order parameter, and consequently it determines the phase of the excitonic insulator.

![ In the $B$ phase within the excitonic paired state the electron on the Fermi surface in layer 1 (grey circle) is characterized by the index $\zeta$ in the spin$\otimes$valley basis $\Phi,$ and the absent electron (white circle) in layer 2 is characterized by the same index $\zeta$ but in the spin$\otimes$valley basis ${\rm V}\Phi,$ which is transformed by the matrix of the order parameter ${\rm V}$. []{data-label="fig:Bphase"}](figure_4.eps){width="8.0cm"}

The symmetry group $H$ of the ground state in the considered phase can be found as the group of all unitary transformations ${\rm U}^{(1)}_{H}$, ${\rm U}^{(2)}_{H}$ in layers 1 and 2 which leave the order parameter invariant, Eq. (\[cond on Delta U1U2H\]). Solving the condition (\[cond on Delta U1U2H\]) with the order parameter (\[Delta B phase\]) we obtain the matrices ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$ of symmetry transformations in layers 1 and 2, respectively. Thus, having ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$, $${\rm U}_{H}^{(1)}={\rm U},\qquad {\rm U}_{H}^{(2)}={\rm V}^{\dagger}{\rm UV},$$ we can express an arbitrary element of the group $H,$ Eq. (\[U1U2 H\]), which transforms the electron operators in layers 1 and 2 according to Eq. (\[G transformations\]).
Omitting the indices $\zeta,\mathbf{p},s$ we have: $$\left(\begin{array}{l} a_{1}\\ a_{2}\\ \end{array}\right) \rightarrow \left(\begin{array}{cc} {\rm U}&{\rm 0}\\ {\rm 0}&{\rm V^{\dagger}UV}\\ \end{array}\right) \left(\begin{array}{l} a_{1}\\ a_{2}\\ \end{array}\right). \label{transform of a}$$ Here the matrix ${\rm V}$ is the fixed matrix of the order parameter (\[Delta B phase\]). The unitary matrix ${\rm U}$ is present in the transformations in both layers. This means that the symmetry group $H$ of the ground state in the considered phase (\[Delta B phase\]) consists of combined transformations in layers 1 and 2. The unitary group of the combined transformations in layers 1 and 2 is denoted as $U^{(1,2)}_{4},$ $$\left(\begin{array}{cc} {\rm U}&{\rm 0}\\ {\rm 0}&{\rm V^{\dagger}UV}\\ \end{array}\right) \in U^{(1,2)}_{4}\equiv H. \label{transform of a element H}$$ The combined transformations from the group $U_{4}^{(1,2)}$ can also be described in terms of the generators of these transformations. For this, each element of the group is written as the exponential of an element of the group’s algebra $$\left(\begin{array}{cc} {\rm U}&{\rm 0}\\ {\rm 0}&{\rm V^{\dagger}UV}\\ \end{array}\right) = \exp\left[i\vec{\theta}\vec{\Gamma}_{H}\right] \in H, \label{def of generators}$$ where $\vec{\theta}$ is a vector of real variables and the vector $\vec{\Gamma}_{H}$ consists of the generators of the group $H.$ For the considered phase these generators are: $$\left(\Gamma_{H}\right)_{m}\equiv \left(\begin{array}{cc} \lambda_{m}&0\\ 0&{\rm V^{\dagger}}\lambda_{m}{\rm V}\\ \end{array} \right),\quad m=0,1,...,15. \label{B phase generators}$$ In contrast to Eq. (\[B phase generators\]), the transformations which change the order parameter and create the degeneracy space $G/H$ are described by the following generators: $$\left(\Gamma_{G/H}\right)_{m}\equiv \left(\begin{array}{cc} \lambda_{m}&0\\ 0&-{\rm V^{\dagger}}\lambda_{m}{\rm V}\\ \end{array} \right),\quad m=0,1,...,15.
\label{B phase generators G/H}$$ Here the $2\times2$ block matrices $\left(\Gamma_{H}\right)_{m}$ and $\left(\Gamma_{G/H}\right)_{m}$ act in the space of layers 1 and 2, the matrix $\lambda_{m}$ acts on the spin$\otimes$valley basis in layer 1, and the matrices $\pm{\rm V^{\dagger}}\lambda_{m}{\rm V}$ act on the basis $\Phi$ in layer 2. The spin$\otimes$valley basis $\Phi$ is the same in both layers. The matrices $\lambda_{m}$ are $4\times4$ Hermitian matrices of generators of transformations from the unitary group $U_{4}.$ The total number of generators $\Gamma_{G/H},$ Eq. (\[B phase generators G/H\]), equals the dimension of the degeneracy space $G/H,$ ${\rm dim}[G/H]$. In the $B$ phase ${\rm dim}[G/H]={\rm dim}[G]- {\rm dim}[H]=32-16=16.$ The electron operators in the second layer can be transformed by the matrix of the order parameter: ${\rm V}a_{2}\rightarrow a_{2}^{\prime}$, see Eqs. (\[Delta B phase\]) and (\[mean-field H\]), or, equivalently, the spin$\otimes$valley basis in layer 2 can be transformed by the matrix of the order parameter. In such a case it follows from Eqs. (\[transform of a\]) and (\[B phase generators\]) that a transformation from the group $H,$ in contrast to a transformation from $G/H,$ can be represented by identical transformations in both layers. These identical transformations act by the same matrix ${\rm U}$ on the spin$\otimes$valley basis $\Phi$ in layer 1 and on the transformed spin$\otimes$valley basis ${\rm V}\Phi$ in layer 2, Fig. \[fig:Bphase\]. Hence the matrix of the order parameter (\[Delta B phase\]) defines the relative unitary rotation of the spin$\otimes$valley basis $\Phi$ in layer 2 with respect to layer 1. It signifies the relative symmetry breaking: the ground state is not invariant under unitary transformations of the basis $\Phi$ in one layer relative to the basis $\Phi$ in the other layer.
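The invariance of the $B$-phase order parameter under the combined transformations ${\rm U}_{H}^{(1)}={\rm U},$ ${\rm U}_{H}^{(2)}={\rm V}^{\dagger}{\rm U}{\rm V}$ can be checked directly with random unitary matrices; the following sketch is a numerical illustration only (the value of $g$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR of a complex Gaussian matrix gives a Haar-distributed unitary.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

g = 0.3                     # value of the gap function at fixed s, p (illustrative)
v = random_unitary(4)       # matrix of the B-phase order parameter, Delta = g V
delta = g * v

u = random_unitary(4)       # arbitrary U acting in layer 1
u1 = u                      # U_H^(1) = U
u2 = v.conj().T @ u @ v     # U_H^(2) = V^dagger U V

# Combined transformation leaves the order parameter invariant:
# U1^dagger Delta U2 = g U^dagger V V^dagger U V = g V = Delta.
delta_transformed = u1.conj().T @ delta @ u2
invariant = np.allclose(delta_transformed, delta)
```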
The basis $\Phi$ in layer 1 is “locked” relative to the basis $\Phi$ in layer 2 by the matrix of the order parameter, which defines the relative unitary rotation of one basis with respect to the other. Because of the presence of the relative symmetry breaking by the order parameter (\[Delta B phase\]), the phase discussed here resembles the superfluid $B$ phase of liquid Helium-3.[@mineevsamokhin; @voloviksymmetryin3-Hechapter; @leggettrmp75; @wheatley; @mineevufn] In the $B$ phase the matrix of the order parameter is non-degenerate. This means that all species of charge carriers develop excitonic correlations, and therefore the single particle excitation spectrum is gapped. The external conditions ($\epsilon_{Z}=0,$ $n_{1e}=n_{2h}$) for the $B$ phase can be violated by an in-plane magnetic field or by external gates. However, the excitonic correlations continue to exist in the $B$ phase until the difference of the radii of the Fermi circles exceeds $2g_{+}(p_{F})/v,$ where $g_{+}(p_{F})$ is the gap in the single-particle excitation spectrum in the $B$ phase. Indeed, such behavior can be seen if one creates an asymmetry between the charge carrier densities in the layers, which can be expressed in terms of a shift $\delta E_{F}>0$ of the Fermi energies: $E_{F}^{(1)}=E_{F}+\delta E_{F},$ $E_{F}^{(2)}=-E_{F}+\delta E_{F}$. Substituting these values into the mean field Hamiltonian (\[mean-field H\]) and finding its eigenvalues, one obtains[@mineevsamokhin] two branches of the excitation spectrum, $ \varepsilon_{s}^{(\pm)}(p)=\sqrt{(svp-E_{F})^2+g_{s}^{2}(p)}\pm \delta E_{F}. $ At $\delta E_{F}=g_{+}(p_{F})$ one of the branches of the excitation spectrum becomes zero at $s=+,p=p_{F}.$ In this situation the excitonic pairing stops being energetically favorable and the system goes into the normal state via a first order phase transition.
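The closing of the lower excitation branch at $\delta E_{F}=g_{+}(p_{F})$ can be illustrated numerically. In the sketch below the parameter values are purely illustrative (arbitrary units, not taken from the paper):

```python
import numpy as np

# Illustrative parameters: Fermi velocity, Fermi energy, gap g_+(p_F).
v_f, e_f, gap = 1.0, 1.0, 0.05

p = np.linspace(0.5, 1.5, 200001)   # momenta around p_F = E_F / v_F

def lower_branch(delta_ef):
    # epsilon^(-)_+(p) = sqrt((v p - E_F)^2 + g^2) - delta E_F   (s = +)
    return np.sqrt((v_f * p - e_f) ** 2 + gap ** 2) - delta_ef

# For delta E_F < g_+(p_F) the branch stays positive everywhere;
# at delta E_F = g_+(p_F) it touches zero at p = p_F.
min_below = lower_branch(0.5 * gap).min()
min_at = lower_branch(gap).min()
```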
In a similar way, when the Fermi circles for charge carriers with opposite spin projections are separated by $2g_{+}(p_{F})/v$ due to an in-plane magnetic field, the excitonic correlations between charge carriers on these Fermi circles vanish. This fact is schematically shown in the phase diagram Fig. \[fig 1\]b: at the borders of the $B$ phase in the phase diagram the excitonic correlations are no longer energetically stable and the excitonic insulator state transforms either into the normal state or into another phase via a first order phase transition.

The $A_{0}^{\prime},A_{1}^{\prime},A_{2}^{\prime}$ phases.
----------------------------------------------------------

In this section we consider phases under the same external conditions as in the $B$ phase, thus the symmetry group $G$ is again given by Eq. (\[G\]). We consider phases whose order parameters are characterized by degenerate matrices of rank $r<4.$ In such phases only a part of the electron species develop excitonic correlations, and therefore the single particle excitation spectrum is gapless for certain species of electrons. The matrix of the order parameter can be chosen as follows, compare with Eq. (\[W matrix in Delta\]): $${\Delta} = g_{s}(\mathbf{p}) \widetilde{{\rm V}}^{\dagger} {\rm Diag}[a,b,c,0] {\rm V}, \qquad \widetilde{{\rm V}}^{\dagger},{\rm V}\in U_{4}. \label{Delta A1pr phase}$$ Here the diagonal matrix ${\rm Diag}$ determines the order parameter in the phases denoted as $A_{0}^{\prime}$, $A_{1}^{\prime}$, $A_{2}^{\prime}$: the numbers $(a,b,c)$ are given by $(1,1,1)$ in the $A_{0}^{\prime}$ phase, $(1,1,0)$ in the $A_{1}^{\prime}$ phase and $(1,0,0)$ in the $A_{2}^{\prime}$ phase. Using the transformed electron operators ${\rm\widetilde{V}}a_{1}$ in layer 1 and ${\rm V}a_{2}$ in layer 2, see Eqs. (\[mean-field H\]) and (\[Delta A1pr phase\]), the self-consistency equation for the order parameter becomes diagonal, and only the first $r$ of the four equations for the diagonal elements have non-zero solutions.
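That only the paired species acquire a gap can be seen by diagonalizing the mean field Hamiltonian (\[Hmf 1/N\]) with a degenerate order parameter; the sketch below uses the rank-2 ${\rm Diag}[1,1,0,0]$ form of the $A_{1}^{\prime}$ phase and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n):
    # QR of a complex Gaussian matrix gives a Haar-distributed unitary.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

g, xi = 0.1, 0.25      # gap function and xi = s v p - E_F (illustrative values)
v, vt = random_unitary(4), random_unitary(4)

# A_1' order parameter: Delta = g Vt^dagger Diag[1,1,0,0] V, rank 2.
delta = g * vt.conj().T @ np.diag([1.0, 1.0, 0.0, 0.0]) @ v

# Mean field Hamiltonian in the layer space, Eq. (Hmf 1/N).
h_mf = np.block([[xi * np.eye(4), delta],
                 [delta.conj().T, -xi * np.eye(4)]])

eig = np.sort(np.linalg.eigvalsh(h_mf))
# Expected: +-sqrt(xi^2 + g^2) for the 2 paired species, +-xi for the 2 unpaired.
expected = np.sort([np.sqrt(xi**2 + g**2)] * 2 + [-np.sqrt(xi**2 + g**2)] * 2
                   + [xi] * 2 + [-xi] * 2)
```

The two gapless branches $\pm\xi$ correspond to the zeros on the diagonal of ${\rm Diag}.$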
We assume that in the self-consistency equations we can use the screened interaction among charge carriers in the system in the normal state.[@KharitonovEfetov0903] In this case the self-consistency equations in the $A_{0}^{\prime},A_{1}^{\prime},A_{2}^{\prime}$ phases are identical to the self-consistency equations in the $B$ phase, therefore their non-zero solutions are given by the same gap function $g_{s}(\mathbf{p}).$ Substituting the order parameter into the symmetry condition (\[cond on Delta U1U2H\]) one obtains the matrices of symmetry transformations in layers 1 and 2; for example, for the $A_{1}^{\prime}$ phase one gets $${\rm U}^{(1)}_{H}= {\rm \widetilde{V}}^{\dagger} \left(\begin{array}{cc} {\rm u}&0\\ 0&{\rm u}^{\prime}\\ \end{array} \right){\rm \widetilde{V}}, \quad {\rm U}^{(2)}_{H}= {\rm V}^{\dagger} \left(\begin{array}{cc} {\rm u}&0\\ 0&{\rm u}^{\prime\prime}\\ \end{array} \right){\rm V}, \label{U1U2 A1phase}$$ where $${\rm u}\in U_{2}^{(1,2)},\quad {\rm u}^{\prime}\in U_{2}^{(1)},\quad {\rm u}^{\prime\prime}\in U_{2}^{(2)}. \label{U1U2 A1phase2}$$ Here, similarly to the $B$ phase, the $2\times2$ unitary matrix ${\rm u}$ determines the combined unitary rotations of the first two components of the spin$\otimes$valley basis ${\rm \widetilde{V}}\Phi$ in layer 1 and the first two components of the spin$\otimes$valley basis ${\rm V}\Phi$ in layer 2. Therefore such a phase is characterized by a partial relative symmetry breaking. The remaining matrices ${\rm u}^{\prime},{\rm u}^{\prime\prime}\in U_{2}$ determine independent unitary rotations of the other 2 components of the corresponding spin$\otimes$valley basis in each layer. These other 2 components correspond to quasiparticle states which do not contribute to the excitonic condensation; their single particle excitation spectrum is gapless. Therefore only 2 out of 4 electron species are involved in the excitonic condensation.
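The symmetry transformations (\[U1U2 A1phase\]) can be verified to leave the $A_{1}^{\prime}$ order parameter invariant. The following sketch with random unitary matrices is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unitary(n):
    # QR of a complex Gaussian matrix gives a Haar-distributed unitary.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def block_diag2(a, b):
    # 4x4 block-diagonal matrix built from two 2x2 blocks.
    z = np.zeros((2, 2), dtype=complex)
    return np.block([[a, z], [z, b]])

g = 0.2                    # illustrative gap value
v, vt = random_unitary(4), random_unitary(4)
delta = g * vt.conj().T @ np.diag([1.0, 1.0, 0.0, 0.0]) @ v   # A_1' order parameter

u, u_p, u_pp = random_unitary(2), random_unitary(2), random_unitary(2)

# Eq. (U1U2 A1phase): the combined rotation u acts in both layers,
# u' and u'' act independently on the unpaired components.
u1 = vt.conj().T @ block_diag2(u, u_p) @ vt
u2 = v.conj().T @ block_diag2(u, u_pp) @ v

delta_transformed = u1.conj().T @ delta @ u2
invariant = np.allclose(delta_transformed, delta)
```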
Because of this, such a phase is similar to the superfluid $A_{1}$ phase of liquid Helium-3, where paired states with only one spin projection $S_{z}=+1$ are present in the condensate.[@mineevsamokhin] The $A_{1}$ phase of Helium-3 exists only in a magnetic field; in order to underline the stability of the phase discussed here in the absence of a magnetic field, we denote it with an additional prime, i.e. as the $A_{1}^{\prime}$ phase of the excitonic insulator. The other phases with degenerate matrices of order parameters are denoted as $A_{0}^{\prime}$ and $A_{2}^{\prime}.$ In the phases $A_{0}^{\prime},$ $A_{1}^{\prime},$ $A_{2}^{\prime}$ the number of non-zero diagonal elements of the diagonal matrix ${\rm Diag}$ determines the rank $r$ of the symmetry group of combined unitary rotations, denoted as $U_{r}^{(1,2)}.$ Zeros on the diagonal of the matrix ${\rm Diag}$ correspond to electron states which do not develop excitonic correlations, and, therefore, these states can be unitarily transformed independently in each layer. Consequently the symmetry group $H$ for the $A_{0}^{\prime},$ $A_{1}^{\prime},$ $A_{2}^{\prime}$ phases is easily identified. For example, the symmetry group $H$ for the $A_{1}^{\prime}$ phase is $$H=U_{2}^{(1)}\times U_{2}^{(1,2)}\times U_{2}^{(2)}.$$ The dimension of the degeneracy space is calculated as follows: for the $A_{0}^{\prime}$ phase ${\rm dim}[G/H]=32-1-9-1=21;$ for the $A_{1}^{\prime}$ phase ${\rm dim}[G/H]=32-3\times4=20;$ for the $A_{2}^{\prime}$ phase ${\rm dim}[G/H]=32-9-1-9=13.$

The $B^{\prime}$ phase.
-----------------------

In this section we consider the two-layer graphene system in an in-plane magnetic field. Our analysis is based on the comprehensive study by Aleiner and co-authors[@Aleiner] of the spontaneous symmetry breaking in graphene subjected to an in-plane magnetic field.
When an in-plane magnetic field is applied, the Fermi circles for quasiparticles with different spin projections become separated due to the Zeeman splitting. Such splitting changes the symmetry group $G,$ Eq. (\[G\]), of the initial Hamiltonian $\hat{H}_{{\rm 2layer}}$ to a direct product of 4 unitary groups $U_{2},$ $$G=U_{2}^{(1\uparrow)}\times U_{2}^{(1\downarrow)}\times U_{2}^{(2\uparrow)}\times U_{2}^{(2\downarrow)}.$$ Each of these $U_{2}$ groups transforms the valley space of electrons with a given spin projection in one layer, e.g. $U_{2}^{(1\uparrow)}$ transforms electrons with spin up in layer 1. The $B^{\prime}$ phase can be obtained from the $B$ phase by the application of an in-plane magnetic field. Such a magnetic field should be strong enough to break the excitonic correlations in the $B$ phase and to split the Fermi circles; therefore the Zeeman energy $\epsilon_{Z}$ should exceed the gap in the excitation spectrum of the $B$ phase, $\epsilon_{Z}>g_{+}(p_{F}).$ In such a case, due to the initial equality of charge carrier densities $n_{1e}=n_{2h}$ in the $B$ phase, the two Fermi circles in layer 1 coincide with the two Fermi circles in layer 2. This leads to the appearance of two distinct Fermi circles in the system, Fig. \[fig:Bpr phase\]. Consequently, the electron-hole pairs which appear on the different Fermi circles have different properties: their total spin projection is $+1$ or $-1,$ Fig. \[fig:Bpr phase\]. Also, due to slightly different Fermi momenta, electron-hole pairs on different Fermi circles are characterized by slightly different gap functions.
Hence in the spin$\otimes$valley basis $\Phi,$ ![ Excitonic correlations in the two-layer graphene system with an in-plane magnetic field $h$ in the case of equal charge densities in layers, $n_{1e}=n_{2h}.$ Because of the Zeeman splitting $2\epsilon_{Z}$ there are two Fermi circles with radii $p_{F}\pm\epsilon_{Z}/v.$ A missing electron with a particular spin projection is treated as a quasiparticle (hole) with the opposite spin projection.[]{data-label="fig:Bpr phase"}](figure_5.eps){width="8.0cm"} $$\Phi= \left( \uparrow \mathbf{K}_{+}, \uparrow \mathbf{K}_{-}, \downarrow \mathbf{K}_{+}, \downarrow \mathbf{K}_{-} \right), \label{Phi}$$ the order parameter has the following structure (compare with Eq. (\[Delta B phase\])) $${\Delta}= \left(\begin{array}{cc} {\rm 0}&g^{\prime}_{s}(\mathbf{p}){\rm v}\\ g^{\prime\prime}_{s}(\mathbf{p}){\rm \widetilde{v}}&{\rm 0}\\ \end{array}\right). \label{Delta Bpr exact}$$ Here ${\rm v}$ and ${\rm \widetilde{v}}$ are unitary $2\times2$ matrices which, by analogy with the $B$ phase, determine the relative unitary rotation of the valley space of electron states with one spin projection in layer 2 with respect to electron states with the other spin projection in layer 1. The functions $g^{\prime}_{s}(\mathbf{p}), g^{\prime\prime}_{s}(\mathbf{p})$ are gap functions which differ from each other only because of the presence of the Zeeman splitting. However, when the Fermi energy is much larger than the Zeeman energy, $E_{F}\gg\epsilon_{Z},$ the difference between these functions is negligible and they are approximately equal to the gap function in the $B$ phase, $g^{\prime}_{s}(\mathbf{p})\approx g^{\prime\prime}_{s}(\mathbf{p})\approx g_{s}(\mathbf{p}).$ Thus, $$\Delta\approx g_{s}({\mathbf p}) \left(\begin{array}{cc} {\rm 0}&{\rm v}\\ {\rm \widetilde{v}}&{\rm 0}\\ \end{array}\right).$$ Using this approximation, the symmetry group $H$ of the order parameter can be found from the condition (\[cond on Delta U1U2H\]).
As a result one obtains the matrices ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$ of the transformations (\[U1U2 H\]) from the group $H$ in layers 1 and 2 respectively (matrices are written in the basis (\[Phi\]) in each layer): $${\rm U}^{(1)}_{H}= \left(\begin{array}{cc} {\rm u}&0\\ 0&\widetilde{{\rm u}}\\ \end{array}\right), \quad {\rm U}^{(2)}_{H}= \left(\begin{array}{cc} {\rm \widetilde{v}}^{\dagger}{\rm \widetilde{u}\widetilde{v}}&0\\ 0&{\rm v}^{\dagger}{\rm uv}\\ \end{array}\right). \label{U1U2 Bpr phase}$$ The corresponding electron operators are transformed as follows: $$a_{1,\uparrow}\rightarrow{\rm u}a_{1,\uparrow},\quad a_{2,\downarrow}\rightarrow{\rm v}^{\dagger}{\rm uv}a_{2,\downarrow},\quad {\rm u}\in U_{2}^{(1\uparrow,2\downarrow)}, \label{Bpr u}$$ $$a_{1,\downarrow}\rightarrow{\rm \widetilde{u}}a_{1,\downarrow},\quad a_{2,\uparrow}\rightarrow{\rm \widetilde{v}}^{\dagger}{\rm \widetilde{u}\widetilde{v}} a_{2,\uparrow},\quad {\rm \widetilde{u}}\in U_{2}^{(1\downarrow,2\uparrow)}. \label{Bpr u tilde}$$ The unitary $2\times2$ matrix ${\rm u},$ Eq. (\[Bpr u\]), determines a subgroup of the group $H$ which consists of combined unitary rotations of the valley space of electrons with spin up in layer 1 and electrons with spin down in layer 2: ${\rm u}\in U_{2}^{(1\uparrow,2\downarrow)}.$ The unitary $2\times2$ matrix ${\rm \widetilde{u}},$ Eq. (\[Bpr u tilde\]), defines another corresponding subgroup of the group $H,$ ${\rm \widetilde{u}}\in U_{2}^{(1\downarrow,2\uparrow)}\subset H.$ Hence in the phase considered here the symmetry group $H$ of the order parameter is given by the direct product of two subgroups, $$H=U_{2}^{(1\uparrow,2\downarrow)}\times U_{2}^{(1\downarrow,2\uparrow)}.$$ Using the expressions for the groups $G$ and $H$ in the $B^{\prime}$ phase, we find that the degeneracy space $G/H$ is 8-dimensional, ${\rm dim}[G/H]=4\times4-4-4=8.$ It is also defined by the structure of the order parameter (\[Delta Bpr exact\]), i.e.
here the degeneracy space is determined as the space of all possible unitary $2\times2$ matrices ${\rm v}$ and ${\rm \widetilde{v}}.$ Because the matrix of the order parameter is non-degenerate, the single-particle excitation spectrum in this phase is gapped. Similarly to the $B$ phase, the excitonic correlations in the $B^{\prime}$ phase cease to exist when the external conditions ($\epsilon_{Z}>g_{+}(p_{F}),$ $n_{1e}=n_{2h}$) are perturbed, i.e. when the Fermi circles in different layers are separated by an energy interval larger than twice the gap in the single-particle excitation spectrum. Thus, in particular, in the schematic phase diagram Fig. \[fig 1\](b) at the border of the $B^{\prime}$ phase (when the symmetry $n_{1e}=n_{2h}$ of the charge carrier densities is violated) the ground state of the system transforms into an uncorrelated normal ground state via a first-order phase transition.

The $A_{1}$ phase.\[sec:5 A1 phase\]
------------------------------------

In contrast to the $B$ and $B^{\prime}$ phases, where all species of charge carriers develop excitonic correlations, in this subsection we discuss another possible realization of the excitonic insulator state in the two-layer graphene system. We show that the excitonic correlated state can exist in the presence of an in-plane magnetic field and a specially chosen asymmetry of the charge carrier densities in the layers. ![ Excitonic correlations in the $A_{1}$ phase. Starting from zero magnetic field, an asymmetry between the charge carrier densities, $n_{1e}>n_{2h},$ is created.
In terms of Fermi energies this means $E_{F}^{(1)}>|E_{F}^{(2)}|.$ The magnitude of the in-plane magnetic field is chosen such that the Zeeman energy $\epsilon_{Z}$ satisfies the condition $E_{F}^{(1)}-\epsilon_{Z}=|E_{F}^{(2)}|+\epsilon_{Z},$ where $(E_{F}^{(1)}-\epsilon_{Z})/v$ is the radius of the Fermi circle for electrons with spin up in layer 1, and $(|E_{F}^{(2)}|+\epsilon_{Z})/v$ is the radius of the Fermi circle for electrons with spin up in layer 2. Both of these Fermi circles are situated at the same Fermi momentum $p_{F}$. Therefore two out of four Fermi circles coincide, leading to excitonic correlations between only half of the electron species. Using the expression for the Fermi energies $E_{F}^{(1)/(2)}=\pm vp_{F}+v\delta p_{F},$ $p_{F}>\delta p_{F},$ where $\delta p_{F}=(\sqrt{n_{1e}}-\sqrt{n_{2h}})\sqrt{\pi}/2,$ the condition on the Zeeman energy reads $\epsilon_{Z}=v\delta p_{F}.$ []{data-label="fig:Apr"}](figure_6.eps){width="8.0cm"} In order to achieve the necessary external conditions, we first consider the two-layer system without a magnetic field and with equal charge carrier densities in the layers. Under such conditions the spectrum of electrons in both layers has only one Fermi circle at the Fermi momentum $p_{F}.$ By changing the external gate voltages we create an asymmetry between the charge carrier densities in the layers: $n_{1e}>n_{2h}.$ Thus the Fermi circle in layer 1 is situated at the momentum $p_{F}+\delta p_{F}$, and the Fermi circle in layer 2 is situated at the momentum $p_{F}-\delta p_{F},$ where $\delta p_{F}>0,$ and $p_{F}$ is the Fermi momentum in the case $n_{1e}=n_{2h}$ (i.e. in the $B$ and $B^{\prime}$ phases). It is assumed that the separation between the Fermi circles is large enough to prevent the development of excitonic correlations.
Keeping the chosen values of the densities, we switch on an in-plane magnetic field of such a magnitude that the Zeeman energy $\epsilon_{Z}$ is equal to the energy shift of each Fermi surface, $\epsilon_{Z}=v\delta p_{F},$ Fig. \[fig:Apr\]. The presence of an in-plane magnetic field signifies that the symmetry group $G$ in this case is the same as in the $B^{\prime}$ phase. The external conditions mentioned above lead to the situation when only two out of four Fermi circles coincide: both the Fermi circle of electrons with spin up in layer 1 (the Fermi circle with the smaller radius in layer 1) and the Fermi circle of electrons with spin up in layer 2 (the Fermi circle with the bigger radius in layer 2) are situated at the same Fermi momentum $p_{F}.$ Thus electron-hole pairs are formed on these two coinciding Fermi circles. Notice that the total spin projection of such an electron-hole pair is equal to zero, in contrast to the electron-hole pairs with spin projections $+1$ or $-1$ in the $B^{\prime}$ phase. The Fermi circles for electrons with spin down in both layers do not coincide with any other Fermi surfaces. Therefore the corresponding electrons and holes are in a normal state (i.e. they do not participate in excitonic correlations), and their single-particle excitation spectrum is gapless. Thus for the phase discussed here only half of all electron species in the system develop excitonic correlations. This is reflected in the order parameter, whose structure in the basis (\[Phi\]) in both layers is given by the following expression: $$\Delta= g_{s}(\mathbf{p}) \left(\begin{array}{cc} {\rm v}&0\\ 0&0\\ \end{array}\right). \label{Delta A1}$$ Here the gap function $g_{s}({\mathbf{p}})$ is the same as in the other phases due to the same Fermi momentum $p_{F}$ in the self-consistency equation and due to the approximation of the interaction among charge carriers in all phases by the screened interaction among charge carriers in the system in the normal state.
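The tuning of the external conditions can be verified numerically. The sketch below uses purely illustrative parameter values (units $\hbar=v=1$; `pF` and `delta_pF` are arbitrary choices, not measured quantities) and checks that with $\epsilon_{Z}=v\,\delta p_{F}$ the two spin-up Fermi circles coincide at $p_{F}$ while the spin-down circles remain separated:

```python
# Units: hbar = v = 1; pF and delta_pF are illustrative values only.
pF, delta_pF = 1.0, 0.05   # Fermi momentum and density-asymmetry shift
v = 1.0

# Fermi energies for n1e > n2h: E_F^(1,2) = +/- v*pF + v*delta_pF.
E_F1 = v * pF + v * delta_pF     # electrons in layer 1
E_F2 = -v * pF + v * delta_pF    # layer 2; |E_F2| = v*(pF - delta_pF)

# Tuning condition on the Zeeman energy: eps_Z = v * delta_pF.
eps_Z = v * delta_pF

# Spin-up Fermi radii after the Zeeman shift:
r1_up = (E_F1 - eps_Z) / v         # layer 1, the smaller circle
r2_up = (abs(E_F2) + eps_Z) / v    # layer 2, the bigger circle
assert abs(r1_up - pF) < 1e-12 and abs(r2_up - pF) < 1e-12  # both at pF

# The spin-down circles stay apart, so those species remain normal:
r1_dn = (E_F1 + eps_Z) / v   # = pF + 2*delta_pF
r2_dn = (abs(E_F2) - eps_Z) / v  # = pF - 2*delta_pF
assert abs(r1_dn - r2_dn) > 0.0
```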
Similarly to the phases discussed previously, the unitary $2\times2$ matrix ${\rm v}$ in the order parameter (\[Delta A1\]) determines a relative unitary rotation of the valley space of electrons with spin up in layer 2 relative to electrons with spin up in layer 1. Solving the condition (\[cond on Delta U1U2H\]) with the order parameter (\[Delta A1\]) we find that transformations from the group $H$ are represented in layers 1 and 2 by the following matrices ${\rm U}^{(1)}_{H}$ and ${\rm U}^{(2)}_{H}$ respectively (both matrices are written in the basis (\[Phi\]) in each layer): $${\rm U}^{(1)}_{H}= \left(\begin{array}{cc} {\rm u}&0\\ 0&{\rm u}^{\prime}\\ \end{array} \right), \quad {\rm U}^{(2)}_{H}= \left(\begin{array}{cc} {\rm v^{\dagger}uv}&0\\ 0&{\rm u}^{\prime\prime}\\ \end{array} \right). \label{U1U2 A1pr phase}$$ Here ${\rm u},{\rm u}^{\prime},{\rm u}^{\prime\prime}$ are unitary $2\times2$ matrices. The matrix ${\rm u}$ performs a combined unitary transformation of the valley space of electrons with spin up in both layers; in addition, the valley space of electrons in layer 2 is rotated by the order parameter (\[Delta A1\]), compare with Eq. (\[Bpr u\]). The valley space of electrons with spin down is transformed by the unitary matrix ${\rm u}^{\prime}$ in layer 1 and by the unitary matrix ${\rm u}^{\prime\prime}$ in layer 2 respectively: $$\begin{aligned} a_{1,\uparrow}\rightarrow{\rm u}a_{1,\uparrow},\quad &a_{2,\uparrow}\rightarrow{\rm v}^{\dagger}{\rm uv}a_{2,\uparrow},\quad &&{\rm u}\in U_{2}^{(1\uparrow,2\uparrow)}, \label{A1 u} \\\nonumber\\ &a_{1,\downarrow}\rightarrow{\rm u^{\prime}}a_{1,\downarrow},\quad &&{\rm u^{\prime}}\in U_{2}^{(1\downarrow)}, \label{A1 ut} \\\nonumber\\ &a_{2,\downarrow}\rightarrow{\rm u^{\prime\prime}}a_{2,\downarrow},\quad &&{\rm u^{\prime\prime}}\in U_{2}^{(2\downarrow)}.
\label{A1 utt}\end{aligned}$$ Thus the group $H$ consists of the direct product of 3 unitary groups, $$H=U_{2}^{(1\downarrow)}\times U_{2}^{(1\uparrow,2\uparrow)}\times U_{2}^{(2\downarrow)}. \label{A1 H}$$ Using the expressions for the initial symmetry group $G$ and Eq. (\[A1 H\]), we find that the degeneracy space $G/H$ in this phase is 4-dimensional, ${\rm dim}[G/H]=16-3\times4=4.$ It is determined by the manifold of all possible matrices ${\rm v}\in U_2$ in the structure of the order parameter (\[Delta A1\]). The subgroups $U_{2}^{(1\downarrow)}$ and $U_{2}^{(2\downarrow)}$ are present in both groups $G$ and $H;$ the appearance of the correlated state does not change them. Therefore for the phase discussed here the initial symmetry is broken only partially. In view of its similarities with the superfluid $A_{1}$ phase of liquid Helium-3 (namely, that the phase described here exists in a magnetic field and exhibits a partial relative symmetry breaking[@mineevsamokhin; @voloviksymmetryin3-Hechapter; @leggettrmp75; @wheatley; @mineevufn]), the phase discussed in this subsection is denoted as the $A_{1}$ phase.

\[sec:6 Results\]Results and Discussions
========================================

In the present paper we consider a two-layer graphene system where an external gate voltage induces a finite density of electrons in one layer and of holes in the other. Assuming that the transition temperature $T_{c}$ towards the excitonic insulator state is high enough for it to be observed, we classify the phases of such a correlated state. In order to obtain different excitonic correlations and therefore different phases we propose to use a magnetic field parallel to the graphene layers and an electric field perpendicular to them. Firstly we consider the Hamiltonian of the two-layer graphene system. We recognize that the ground state is characterized by a high symmetry group: the group of unitary rotations of the spin$\otimes$valley space of electrons in each layer independently.
Below the transition temperature $T_{c}$ this symmetry is reduced by a non-zero order parameter towards the symmetry of the excitonic insulating ground state, which consists of electron-hole pairs with electrons in one layer and holes in the other. Following the BCS theory of superconductivity, we identify the condition for such electron-hole pairing, determine the order parameter and build a BCS-like mean-field theory of the excitonic insulator. Analyzing the symmetry breaking of the initial ground state by the order parameter, we consider a condition that mutually determines the order parameter and the corresponding symmetry group of the excitonic insulator ground state. Using a singular value decomposition of the matrix of the order parameter, for each phase of the excitonic insulator we obtain the corresponding symmetry group of the ground state, the structure of the order parameter and its degeneracy space. The results of the phase classification of the excitonic insulator are shown in Table \[tab:phases\]; the most energetically stable phases are shown in the phase diagram, Fig. \[fig 1\](b), and in Figs. \[fig:Bphase\], \[fig:Bpr phase\], \[fig:Apr\]. It is important to notice that the excitonic correlations in all phases discussed in this paper originate from Fermi surfaces that coincide at approximately the same Fermi momentum $p_{F}$ (we use the assumption $E_{F}\gg \epsilon_{Z}$, where the Fermi energy $E_{F}$ is determined in the system without a magnetic field and with equal densities of charge carriers in the layers, $n_{1e}=n_{2h}$). Thus, assuming that the interaction among charge carriers is the same in all phases (i.e. that the effect of excitonic correlations on the screening of the interaction can be neglected[@KharitonovEfetov0903]), we obtain that the energy gap in the single-particle excitation spectrum in all phases is determined by the same self-consistency equation.
Therefore the transition temperature $T_c$ estimated from the self-consistency equation[@mineevsamokhin] should be the same for all phases. At temperatures below the transition temperature $T_{c}$, transitions between phases in the phase diagram are found to be of the first order. The phases of the excitonic insulator have different properties: the electron-hole pairs in the $B^{\prime}$ phase have total spin projection $+1$ or $-1,$ Fig. \[fig:Bpr phase\], whereas in the $A_{1}$ phase the total spin projection of an electron-hole pair is equal to zero, Fig. \[fig:Apr\]. Among a number of estimates of the critical temperature in the considered system, the most optimistic one gives values of $T_{c}$ close to room temperature.[@min; @..; @macdonald] However this estimate[@min; @..; @macdonald] does not take screening of the Coulomb interaction into account, justifying this by the assumption of a first-order phase transition in the system. Some other estimates[@kharitonov; @efetov; @KharitonovEfetov0903] point to the improbability of observing the excitonic condensation due to an extremely low transition temperature $\lesssim 1{\rm mK},$ $(T_{c}\approx 10^{-7}E_{F})$. According to Refs. [@kharitonov; @efetov; @KharitonovEfetov0903] the reason for the low transition temperature lies in the effective screening of the Coulomb interaction by the large number $N$ of electron species.[@kharitonov; @efetov; @foster; @aleiner] In the considered system $N=8$, given by the product of 2 valleys, 2 spin projections and 2 layers.
Such a large $N$ enhances screening and makes the excitonic condensation less effective than in monolayer graphene [@khveshchenko], and especially than in monolayer graphene in a magnetic field,[@gorbar; @Aleiner] where $T_{c}$ can reach values up to $10^{-4}~E_{F}.$ However, recent investigations, based on a detailed treatment of the screened Coulomb interaction [@lozovik2009; @lozovik2009v2; @lozovik2010; @lozovik2012; @sodemann; @macdonald] and on a consideration of multi-band pairing [@mink; @lozovik2012] or pairing with nonzero momentum,[@efimkin; @nonzer; @mom] suggest that the transition temperature $T_{c}$ can be sufficiently high for the experimental observation of the excitonic insulator in the considered system. Together with the recent experimental realization of the two-layer graphene system [@schmidt; @schmidt2; @schmidt3; @schmidt4; @FalkoGaugeField; @FalkoCheianovTunable] it provides hope that the phase diagram of the excitonic insulator, Fig. \[fig 1\](b), under favorable conditions[@su; @macdonald; @bistritzer; @macdonald; @basu; @efimkin; @disoder] will be observed experimentally.

Y.F.S. would like to thank L. Glazman, E. Burovski and Y. Sherkunov for useful discussions. The authors thank EPSRC and the Physics Department at Lancaster University for financial support.

J. M. Blatt *et al.*, Phys. Rev. **126**, 1691 (1962). L. V. Keldysh, Y. V. Kopaev, Sov. Phys. Solid State **6**, 2219 (1965). D. Jerome, T. M. Rice, W. Kohn, Phys. Rev. **158**, 462 (1967). L. V. Keldysh and A. N. Kozlov, Sov. Phys. JETP **27**, 521 (1968). B. I. Halperin, T. M. Rice, Rev. Mod. Phys. **40**, 755-766 (1968). Yu. E. Lozovik and V. I. Yudson, JETP Lett. **22**, 274 (1975). S. I. Shevchenko, Fiz. Nizk. Temp. **2**, 505 (1976) \[Sov. J. Low Temp. Phys. **2**, 251 (1976)\]. U. Sivan, P. M. Solomon, and H. Shtrikman, Phys. Rev. Lett. **68**, 1196 (1992). L. V. Butov, A. Zrenner, G. Abstreiter, G. Bohm, and G. Weimann, Phys. Rev. Lett. **73**, 304 (1994). K. Moon, H. Mori, K. Yang, S. M.
Girvin, A. H. MacDonald, L. Zheng, D. Yoshioka, and S.-C. Zhang, Phys. Rev. B **51**, 5138 (1995). X. Zhu, P. B. Littlewood, M. S. Hybertsen, and T. M. Rice, Phys. Rev. Lett. **74**, 1633 (1995). I. B. Spielman, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. **84**, 5808 (2000); **87**, 036803 (2001). J. P. Eisenstein and A. H. MacDonald, Nature **432**, 691 (2004). A. A. High, J. R. Leonard, A. T. Hammack, M. M. Fogler, L. V. Butov, A. V. Kavokin, K. L. Campman and A. C. Gossard, Nature **483**, 584 (2012). K. S. Novoselov, A. K. Geim, S. V. Morozov, *et al.*, Science **306**, 666 (2004). K. S. Novoselov, A. K. Geim, S. V. Morozov, *et al.*, Nature **438**, 197 (2005). Y. Zhang, J. P. Small, M. E. S. Amori, and P. Kim, Phys. Rev. Lett. **94**, 176803 (2005). Y. Zhang, *et al.*, Nature **438**, 201 (2005). I. L. Aleiner, D. E. Kharzeev, and A. M. Tsvelik, Phys. Rev. B **76**, 195415 (2007). Yu. E. Lozovik and A. A. Sokolik, JETP Letters **87**, 55-59 (2008). Yu. E. Lozovik, S. P. Merkulova, and A. A. Sokolik, Usp. Fiz. Nauk **178**, 757 (2008) \[Phys. Usp. **51**, 727 (2008)\] (in Russian). H. Min, R. Bistritzer, J. J. Su, A. H. MacDonald, Phys. Rev. B **78**, 121401(R) (2008). C.-H. Zhang and Yogesh N. Joglekar, Phys. Rev. B **77**, 233405 (2008). Yu. E. Lozovik, A. A. Sokolik, and M. Willander, Phys. Status Solidi A **206**, 927-930 (2009). M. Y. Kharitonov, K. B. Efetov, Phys. Rev. B **78**, 241401(R) (2008). R. Bistritzer, H. Min, J.-J. Su, and A. H. MacDonald, arXiv:0810.0331 \[cond-mat\]. M. Y. Kharitonov, K. B. Efetov, Semicond. Sci. Technol. **25**, 034004 (2010). Yu. E. Lozovik and A. A. Sokolik, Physics Letters A **374**, 326-330 (2009). Yu. E. Lozovik and A. A. Sokolik, European Physical Journal B **73**, 195-206 (2010). Yu. E. Lozovik, S. L. Ogarkov, and A. A. Sokolik, Phil. Trans. R. Soc. A **368**, 5417-5429 (2010). M. P. Mink, H. T. C. Stoof, R. A. Duine and A. H. MacDonald, Phys. Rev. B **84**, 155409 (2011). H. Schmidt, T. Ludtke, P.
Barthold, E. McCann, V. I. Fal’ko, R. J. Haug, Applied Phys. Lett. **93**, 172108 (2008). H. Schmidt, T. Ludtke, P. Barthold, and R. J. Haug, Phys. Rev. B **81**, 121403(R) (2010). T. Ludtke, H. Schmidt, P. Barthold, and R. J. Haug, Physica E **42**, 695-698 (2010). H. Schmidt, T. Ludtke, P. Barthold, and R. J. Haug, Physica E **42**, 699-702 (2010). D. Rainis, F. Taddei, M. Polini, G. Leon, F. Guinea and V. I. Fal’ko, Phys. Rev. B **83**, 165403 (2011). L. A. Ponomarenko, A. A. Zhukov, R. Jalil, S. V. Morozov, K. S. Novoselov, V. V. Cheianov, V. I. Fal’ko, K. Watanabe, T. Taniguchi, A. K. Geim and R. V. Gorbachev, Nat. Phys. **7**, 958 (2011). A. J. Leggett, Rev. Mod. Phys. **47**, 331 (1975). J. C. Wheatley, Rev. Mod. Phys. **47**, 415 (1975). V. P. Mineev, Usp. Fiz. Nauk **139**, 303 (1983) \[Sov. Phys.-Uspekhi **26**, 160 (1983)\]. G. E. Volovik, “Symmetry in Superfluid 3He”, in *Modern Problems in Condensed Matter Sciences (Helium Three)*, edited by W. P. Halperin and L. P. Pitaevskii (North-Holland, 1990), Chapter 2. D. Vollhardt and P. Wolfle, *The Superfluid Phases of Helium 3* (Taylor and Francis, New York, 1990). P. R. Wallace, Phys. Rev. **71**, 622 (1947). A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009); arXiv:0709.1163v2. N. M. R. Peres, J. Phys.: Condens. Matter **21**, 323201 (2009). V. P. Mineev, K. V. Samokhin, *Introduction to Unconventional Superconductivity* (Gordon and Breach Science Publishers, 1999). M. Katsnelson, K. Novoselov, and A. Geim, Nat. Phys. **2**, 620 (2006). J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **108**, 1175 (1957). C. Bruder and D. Vollhardt, Phys. Rev. B **34**, 131 (1986). H. Anton and C. Rorres, *Elementary Linear Algebra: Applications Version*, 6th ed. (Wiley, New York, 1991). M. S. Foster and I. L. Aleiner, Phys. Rev. B **77**, 195413 (2008). D. V. Khveshchenko, Phys. Rev. Lett.
**87**, 206401 (2001); **87**, 246802 (2001). E. V. Gorbar, V. P. Gusynin, V. A. Miransky, I. A. Shovkovy, Phys. Rev. B **66**, 045108 (2002). Yu. E. Lozovik, S. L. Ogarkov, A. A. Sokolik, Phys. Rev. B **86**, 045429 (2012). I. Sodemann, D. A. Pesin, and A. H. MacDonald, Phys. Rev. B **85**, 195136 (2012). D. K. Efimkin and Yu. E. Lozovik, JETP **113**, 880-886 (2011). R. Bistritzer and A. H. MacDonald, Phys. Rev. Lett. **101**, 256406 (2008). D. Basu, L. F. Register, A. H. MacDonald, and S. K. Banerjee, Phys. Rev. B **84**, 035449 (2011). D. K. Efimkin, V. A. Kulbachinskii, and Yu. E. Lozovik, JETP Lett. **93**, 219-222 (2011). J.-J. Su and A. H. MacDonald, Nat. Phys. **4**, 799 (2008).
--- abstract: 'A method is presented for the identification of high-energy neutrinos from gamma ray bursts by means of a large-scale neutrino telescope. The procedure makes use of a time profile stacking technique of observed neutrino induced signals in correlation with satellite observations. By selecting a rather wide time window, a possible difference between the arrival times of the gamma and neutrino signals may also be identified. This might provide insight into the particle production processes at the source. By means of a toy model it will be demonstrated that a statistically significant signal can be obtained with a km$^{3}$ scale neutrino telescope on a sample of 500 gamma ray bursts for a signal rate as low as 1 detectable neutrino for 3% of the bursts.' address: | Department of Physics and Astronomy, Utrecht University\ Princetonplein 5, NL-3584 CC Utrecht, The Netherlands\ Email: nickve.nl@gmail.com author: - Nick van Eijndhoven title: | On the observability of high-energy neutrinos\ from gamma ray bursts --- Neutrino astronomy, gamma ray bursts, neutrino telescopes.

Introduction
============

Cosmic radiation is a valuable source of information about various energetic astrophysical processes. However, the existence of very energetic cosmic rays also raises questions such as: how are they accelerated and where do they originate?\ A variety of possible accelerator mechanisms exists, ranging from shock waves produced by exploding stars (supernovae) or Gamma Ray Bursts (GRBs) to supermassive black holes with strong magnetic fields (Active Galactic Nuclei). The current understanding is that protons and electrons are the primary particles that are accelerated by electromagnetic fields at a cosmic accelerator site.\ In the case of a supernova event, a shock is formed by the expanding matter envelope when it sweeps through the interstellar medium surrounding the exploding star.
In such an environment stochastic processes occur which can accelerate particles to very high energies. A detailed treatment [@shock] shows that acceleration by shock waves automatically results in a power spectrum, which is in qualitative agreement with the observations up to the ‘knee’ region of the cosmic ray spectrum [@pdg].\ However, this leaves us with the question of which events can produce the cosmic rays above the ‘knee’ region. Candidates for the production of the most energetic cosmic rays are Active Galactic Nuclei (AGN) and Gamma Ray Bursts. The current perception is that the majority of these objects have a similar inner engine, in which infalling matter and the likely presence of a strong magnetic field give rise to relativistic shock wave acceleration in two back-to-back jets. Interactions of accelerated protons and electrons with the ambient photons at the acceleration site give rise to very energetic secondary particles, as shown in Fig. \[fig:jet\]. In particular the $p\gamma$ interactions yield a flux of very energetic neutrinos, as depicted in more detail in Fig. \[fig:nuprod\]. ![Particle production in jets (courtesy C. Spiering).[]{data-label="fig:jet"}](jet){width="5cm"} ![Neutrino production processes.[]{data-label="fig:nuprod"}](grb-engine){width="6cm"} In the case of a proton energy of $10^{16}$ eV, i.e. the region of the ‘knee’ of the cosmic ray spectrum, the photon energy threshold for $\Delta$ production is about 10 eV, which lies in the ultraviolet part of the spectrum. Since there are many of these UV photons present, the depicted hadronic processes will take place at high rates, yielding substantial neutrino fluxes comparable to those of ultrahigh-energy photons.\ In the decay of the $\Delta$ resonance into a nucleon and a $\pi$ meson, the meson obtains on average 20% of the primary proton energy. This yields an average neutrino energy of about 400 TeV for a primary proton energy of $10^{16}$ eV.
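The quoted numbers can be cross-checked with elementary kinematics: for a head-on $p\gamma$ collision the threshold for $\Delta(1232)$ production is $E_{\gamma}^{th}=(m_{\Delta}^{2}-m_{p}^{2})/(4E_{p})$. A small order-of-magnitude sketch using standard particle masses (this check is ours, not part of the original analysis):

```python
# Head-on p + gamma -> Delta(1232): s ≈ m_p^2 + 4 E_p E_gamma for an
# ultrarelativistic proton, so the threshold is at s = m_Delta^2.
m_p = 0.938272       # GeV, proton mass
m_delta = 1.232      # GeV, Delta(1232) mass
E_p = 1.0e7          # GeV, i.e. a 10^16 eV 'knee' proton

E_gamma_th = (m_delta**2 - m_p**2) / (4.0 * E_p)   # GeV
print("photon threshold: %.0f eV" % (E_gamma_th * 1e9))  # ~16 eV, UV band

# Rough neutrino energy: the pion carries ~20% of E_p and each lepton in
# the pi -> mu -> e decay chain takes roughly a quarter of the pion energy.
E_nu = 0.2 * E_p / 4.0   # GeV, a few hundred TeV
```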
Detailed model calculations [@nuflux] predict an $E^{-2}$ power-law spectrum for the produced neutrino flux. Taking into account the fact that the atmospheric spectrum is softer [@pdg] and that the neutrino cross section increases with energy [@pdg; @nucross], we observe that optimal detection conditions are obtained for neutrino telescopes in an energy range of about 10-100 TeV [@i3sens]. Various attempts [@amagrb] have been made to identify a high-energy neutrino flux in correlation with satellite observations of GRBs. The performed searches for a statistical excess above the background comprise both photon-neutrino coincidence studies and investigations of so-called “rolling time windows”.\ However, the former will obviously fail in case there exists a significant time difference between the arrival times of the photon and neutrino fluxes, whereas the latter can only be successful in case some GRBs produce multiple neutrino detections within the corresponding time windows. So far, no positive identifications have been reported. From the above it is seen that it would be preferable to use an analysis procedure that does not require the simultaneous arrival of photons and neutrinos and which also provides a high sensitivity in the case of low signal rates. Such a method, based on a time profile stacking technique, is presented here.

The time profile stacking procedure
===================================

In order to obtain a statistically significant result even in the case of low signal rates, a cumulative procedure as outlined below has been devised. It is based on the generic GRB engine described in the previous section, which implies that the arrival times of the photons and neutrinos are correlated but not necessarily simultaneous. When a GRB is observed by a satellite, the trigger time $t_{grb}$ and the burst location on the sky are recorded.
Afterwards, the data of a neutrino telescope are inspected for a time interval $[t_{grb}-\Delta t, t_{grb}+\Delta t]$ and all arrival times of upgoing muons are recorded relative to $t_{grb}$. Here $\Delta t$ is some predefined time margin, which is identical for all observed bursts. An upgoing muon is a long reconstructed track in a neutrino telescope pointing backwards to a location in the hemisphere opposite to the detector location. The usage of upgoing $\mu$ tracks allows reduction of the (atmospheric) background signals in our analysis procedure, as outlined later on.\ For a sample of different GRB observations, the above will result in a set of identical time windows with upgoing $\mu$ arrival time recordings relative to the corresponding GRB trigger time. Stacking of all these time profiles will exhibit a uniform distribution for background events. However, in case the data contain upgoing $\mu$ signals correlated with the GRBs, a clustering of data bins is expected. Consequently, comparison of the stacked time profile contents with a uniform background allows the identification of correlated signals. Due to the cumulative character of the procedure, large statistics can be obtained, resulting in a good sensitivity even in the case of low signal rates.\ Any arrival time difference between a photon and neutrino signal poses no problem as long as this time difference is smaller than $\Delta t$. As such, $\Delta t$ should be taken as large as the background signals allow. However, it is obvious that a spread in this photon-neutrino arrival time difference will reduce the significance of the signal. To address the feasibility of the procedure and to investigate the effects of the various parameters, a toy model [^1] which mimics GRB-induced signals as well as (atmospheric) background has been devised.
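The stacking step itself can be sketched in a few lines; the function and variable names below are ours (a toy illustration, not the actual analysis software), and the comparison with the uniform background is a naive per-bin Gaussian estimate:

```python
import numpy as np

def stacked_excess(windows, dt, nbins):
    """Stack per-burst muon arrival times (seconds relative to t_grb)
    into one histogram over [-dt, dt] and return the per-bin deviation
    from a uniform background in units of sqrt(expectation)."""
    times = np.concatenate(windows)
    counts, edges = np.histogram(times, bins=nbins, range=(-dt, dt))
    mu = counts.sum() / nbins            # uniform expectation per bin
    return (counts - mu) / np.sqrt(mu), edges

# Toy usage: 200 bursts with ~25 uniform background muons each, plus a
# common signal delayed by tau = 150 s in a quarter of the bursts.
rng = np.random.default_rng(1)
windows = []
for _ in range(200):
    w = list(rng.uniform(-3600, 3600, rng.poisson(25)))
    if rng.random() < 0.25:
        w.append(rng.normal(150.0, 10.0))
    windows.append(np.array(w))

excess, edges = stacked_excess(windows, dt=3600, nbins=72)
print("largest excess %.1f sigma near t = %.0f s"
      % (excess.max(), edges[np.argmax(excess)]))
```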
A description of this toy model and the results of the above analysis procedure performed on the simulated data for a km$^{3}$-scale detector are presented hereafter. Signal and background generation ================================ Satellite observations of GRBs have shown [@satgrbs] that the burst locations are homogeneously distributed over the sky. Since our analysis procedure is based on the detection of upgoing $\mu$ tracks, our toy model only generates GRB positions homogeneously distributed over the hemisphere opposite to the detector location. For each generated burst location we define the satellite trigger time to be $t_{grb} \equiv 0$ and create a time window $[-\Delta t, \Delta t]$ around it.\ Observations with the AMANDA neutrino telescope [@amaupmu] show that a km$^{3}$-scale detector will observe on average 300 upgoing muons per 24 hours due to (atmospheric) background, homogeneously distributed over the hemisphere. Therefore, each of the above time windows will be filled with a number of background upgoing muon signals taken from a Poissonian distribution with an average number of $(300/24)\cdot(2\Delta t/1 \text{ hour})$.\ The arrival directions of these background upgoing muons are taken to be homogeneously distributed over the hemisphere, whereas their arrival times are taken to be uniformly distributed in the corresponding time window. To take the detector time resolution $\sigma_{t}$ into account, a Gaussian spread with a standard deviation $\sigma_{t}$ is introduced to the arrival times. Finally the resulting arrival times are recorded as (background) entries in the various corresponding time windows. Also the angular positions of the arrival directions are recorded, after introducing a Gaussian spread corresponding to the angular resolution $\sigma_{a}$ of the detector. Recording of these angular positions will later allow reduction of the background by correlation with the actual GRB locations.
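As an illustration of the background generation just described, one window might be filled as follows (a sketch assuming the quoted km$^{3}$-scale rate of 300 upgoing muons per 24 hours and a detector time resolution $\sigma_{t}$; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def fill_background_window(dt, rate_per_hour=300.0 / 24.0, sigma_t=10e-6):
    """Fill one GRB time window [-dt, dt] (dt in seconds) with background
    upgoing-muon arrival times: a Poissonian number of entries with mean
    rate_per_hour * (2*dt / 1 hour), uniformly distributed in the window,
    smeared by the detector time resolution sigma_t (seconds)."""
    n = rng.poisson(rate_per_hour * (2.0 * dt / 3600.0))
    times = rng.uniform(-dt, dt, size=n)
    return times + rng.normal(0.0, sigma_t, size=n)
```

For $\Delta t = 1$ hour this gives on average 25 background entries per window, as used in the benchmark studies below.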
By means of a uniform random number generator only a fraction $f$ of the generated burst locations is selected to yield a single upgoing $\mu$ signal. To mimic a time difference $\tau$ between the photon and neutrino burst arrival times, the upgoing $\mu$ signal arrival time of each signal burst is taken from a Gaussian distribution with a mean value $\tau$ and a standard deviation $\sigma_{\tau}$. Before these signal muon arrival times are added to the corresponding time windows, a Gaussian spread corresponding to the detector time resolution $\sigma_{t}$ is introduced. The arrival directions of these signal upgoing muons are recorded as the locations of the corresponding bursts, after introducing a Gaussian spread corresponding to the detector angular resolution $\sigma_{a}$. Introduction of realistic values for the various toy model parameters outlined above will allow us to investigate the feasibility of detecting neutrino induced signals in a large scale neutrino telescope. In an actual experimental data analysis effort one obviously has to account for several additional (systematic) effects like detector stability, track reconstruction efficiency and so on. However, these are detector-specific effects and fall beyond the scope of the present studies. Analysis of simulated data ========================== The only large scale neutrino telescope currently in operation is IceCube [@i3perform] and as such we use the parameters of this detector [@i3sens] as benchmark values for our present studies.\ The expected data rates for the full km$^{3}$ scale detector allow a time margin $\Delta t$ of 1 hour. This implies an average number of background signals of about 25 upgoing muons for each individual GRB time window. The time resolution for the reconstructed muon tracks will be of the order of the time it takes for a muon to cross the detector volume. As such we take $\sigma_{t}=10~\mu$s as a conservative estimate for the detector time resolution.
Experience with the analysis of the AMANDA data [@amaupmu; @mutrack] together with detector simulation studies [@i3sens] show that a realistic estimate for the angular resolution is obtained by taking $\sigma_{a}=1^{\circ}$.\ The remaining parameters of our toy model are related to the characteristics of the various bursts. Based on the processes sketched in Fig. \[fig:jet\], a reasonable estimate for the possible photon-neutrino arrival time difference and its spread can be obtained from the actual burst duration. Satellite observations [@satgrbs] exhibit a mean burst duration of about 30 seconds. As such we take $\tau=30$ s and $\sigma_{\tau}=30$ s. As mentioned before, for evaluation of the currently presented procedure the value of $\tau$ is actually irrelevant as long as it is smaller than the time margin $\Delta t$.\ This leaves us with only two free parameters: the fraction $f$ of GRBs that actually induces an upgoing muon signal and the bin size to be used for the time profiles. In order to optimise the time bin clustering of the signals, the bin size should be taken to be of the order of the temporal signal spread $\sigma_{\tau}$. However, since the observed redshifts of GRBs [@satgrbs] exhibit a median value of $z=1.9$ with a spread of 1.3, cosmological time dilation effects have to be taken into account. It should be noted, however, that in case both the photon and neutrino production processes are taking place continuously throughout the jet existence, the cosmological time dilation is already included in the observed gamma burst duration and consequently also in the temporal signal spread $\sigma_{\tau}$.
Nevertheless, we always account for a possible additional cosmological time dilation and take for the time profile bin size a conservative value of $5\sigma_{\tau}$, corresponding to 150 s.\ It should be noted here that restricting the analysis to short duration bursts allows for smaller time bins and consequently more detailed time profile studies.\ The fraction $f$ we keep as a free parameter in order to determine the sensitivity of our analysis procedure for different sizes of the GRB sample. For a first investigation of the performance of the procedure we generated 100 GRBs in one hemisphere. This corresponds to about 2 years of operation of the Swift satellite [@swift], which currently is the main source of GRB triggers. All parameters were set to the values mentioned above and for the fraction $f$ we used a value of 10% [@grbfrac]. The resulting stacked time profile is shown in Fig. \[fig:tott1\]. ![Stacked time profile for 100 GRBs with $f=0.1$. Further details can be found in the text.[]{data-label="fig:tott1"}](tott-100-010-noang){width="8.5cm"} Since in our toy model we have access to all information, we are also able to construct the corresponding stacked time profile from the background signals only. This background stacked time profile is shown in Fig. \[fig:bkgt1\]. ![Stacked time profile corresponding to the background data of Fig. \[fig:tott1\].[]{data-label="fig:bkgt1"}](bkgt-100-010-noang){width="8.5cm"} Comparison of the number of entries from Fig. \[fig:tott1\] and Fig. \[fig:bkgt1\] shows that 8 of our generated GRBs induced a signal in the stacked time window. However, due to the presence of a large background we are not able to identify the GRB signals on the basis of our observations of Fig. \[fig:tott1\] alone. Reduction of the background without significant signal loss can be achieved by only investigating a certain angular region centered around the actual GRB position. 
Since the detector angular resolution is $\sigma_{a}=1^{\circ}$, restricting ourselves to an angular region of $5^{\circ}$ around the GRB location will significantly reduce the background while preserving essentially all signal muons.\ The stacked time profile of our previous generation, but now restricted to an angular region of $5^{\circ}$ around the burst location, is shown in Fig. \[fig:tott2\]. ![Stacked time profile for 100 GRBs with $f=0.1$ and restricted to an angular region of $5^{\circ}$ around the actual burst location.[]{data-label="fig:tott2"}](tott-100-010-ang){width="9cm"} Visual inspection of Fig. \[fig:tott2\] casts doubt on the conclusion that the observed time profile results from a uniform background distribution. This is confirmed if we investigate the corresponding background distribution as shown in Fig. \[fig:bkgt2\]. ![Stacked time profile corresponding to the background data of Fig. \[fig:tott2\].[]{data-label="fig:bkgt2"}](bkgt-100-010-ang){width="9cm"} Comparison of Fig. \[fig:tott2\] and Fig. \[fig:bkgt2\] allows the identification of the GRB signals in the central bin. In the analysis of experimental data, however, we do not have access to the actual corresponding background distribution. As such, we need to quantify our degree of (dis)belief in a background observation solely based on the actually recorded signals like in Fig. \[fig:tott2\]. Bayesian assessment of the significance ======================================= Consider two propositions $A$ and $B$ and some prior information $I$. We introduce the notation $p(A|BI)$ to represent the probability that $A$ is true under the condition that both $B$ and $I$ are true. Following the arguments of extended logic [@jaynes] we automatically arrive at the so-called theorem of Bayes $$p(B|AI)=p(B|I)\,\frac{p(A|BI)}{p(A|I)} \quad .
\label{eq:bayes}$$ The above theorem is extremely powerful in the process of hypothesis testing, as will be shown here.\ Consider a hypothesis $H$ in the light of some observed data $D$ and prior information $I$. By $H_{\ast}$ we denote an unspecified alternative to $H$. This implies that $H_{\ast}$ is just the proposition that $H$ is false. From eq.  we immediately obtain $$\frac{p(H|DI)}{p(H_{\ast}|DI)}=\frac{p(H|I)}{p(H_{\ast}|I)}\,\frac{p(D|HI)}{p(D|H_{\ast}I)} \quad . \label{eq:bayes2}$$ Introducing an intuitive decibel scale, we can express the evidence $e(H|DI)$ for $H$ relative to any alternative based on the data $D$ and prior information $I$ as: $$e(H|DI) \equiv 10\log_{10} \left[\frac{p(H|DI)}{p(H_{\ast}|DI)} \right] \quad . \label{eq:evidence}$$ Combined with eq.  this yields $$e(H|DI)=e(H|I)+10\log_{10} \left[\frac{p(D|HI)}{p(D|H_{\ast}I)} \right] \quad . \label{eq:evidence2}$$ To quantify the degree to which the data support a certain hypothesis $H$, we introduce the Bayesian observables $\psi \equiv -10\log_{10} p(D|HI)$ and $\psi_{\ast} \equiv -10\log_{10} p(D|H_{\ast}I)$. Since the value of a probability always lies between 0 and 1, we have $\psi \geqq 0$ and $\psi_{\ast} \geqq 0$. Together with eq.  we obtain $$e(H_{\ast}|DI)=e(H_{\ast}|I)+\psi-\psi_{\ast} \leqq e(H_{\ast}|I)+\psi \quad . \label{eq:evidence3}$$ In other words: there is no alternative to a certain hypothesis $H$ which can be supported by the data $D$ by more than $\psi$ decibels, relative to $H$.\ So, the value $\psi=-10\log_{10} p(D|HI)$ provides the reference to quantify our degree of belief in $H$. In our evaluation of the stacked time profile the main question is to which degree we believe our observed distribution to be inconsistent with respect to a uniform background. This question can be answered unambiguously if we are able to determine the $\psi$ value corresponding to the uniform background hypothesis based on our observed stacked time profile.
The process of recording background signals is identical to performing an experiment with $m$ different possible outcomes $\{A_{1},...,A_{m}\}$ at each trial. Obviously, $m$ is in our case just the number of bins in the time profile and the number of trials $n$ is the number of entries.\ In case all the probabilities $p_{k}$ corresponding to the various outcomes $A_{k}$ on successive trials are independent and stationary, the experiment is said to belong to the Bernoulli class $B_{m}$ [@jaynes]. It is clear that our data recordings according to a uniform background hypothesis satisfy the requirements of $B_{m}$. The probability $p(n_{1} \dots n_{m}|B_{m}I)$ of observing $n_{k}$ occurrences of each outcome $A_{k}$ after $n$ trials is therefore given by the multinomial distribution [@jaynes]. Consequently, the probability for observing a specific set of background data $D$ consisting of $n$ entries is given by $$p(D|B_{m}I)=\frac{n!}{n_{1}! \cdots n_{m}!} \, p_{1}^{n_{1}} \cdots p_{m}^{n_{m}} \quad . \label{eq:pbm}$$ This immediately yields the following expression for the $\psi$ value according to a uniform background hypothesis $$\psi=-10 \left[ \log_{10}n! + \sum_{k=1}^{m}(n_{k}\log_{10}p_{k}-\log_{10}n_{k}!) \right]~. \label{eq:psi}$$ When a signal from a uniform background is being recorded in our time window, there is no preference for any specific time bin. This implies that in our case all $p_{k}$ values are identical and equal to $m^{-1}$. As such we can evaluate the $\psi$ value of eq.  for any set of observed data $D$. Relation to a frequentist approach ---------------------------------- In the case of large statistics we can use Stirling’s approximation $\ln x!=x \ln x - x$ for $x \gg 1$ in eq. . Together with the fact that $\sum n_{k}=n$ this yields the frequentist approximation $$\psi=10 \sum_{k=1}^{m} n_{k}\log_{10}\left(\frac{n_{k}}{np_{k}} \right) \quad . \label{eq:psi2}$$ Furthermore, for a “near match” scenario we have $n_{k} \approx np_{k}$.
In such a case we can use the series expansion $\ln x=(x-1)-\frac{1}{2}(x-1)^{2}+\dots$, which yields $$\left| \sum_{k=1}^{m} n_{k}\ln\left(\frac{n_{k}}{np_{k}} \right) \right| \approx \frac{1}{2} \sum_{k=1}^{m} \frac{(n_{k}-np_{k})^{2}}{np_{k}} \quad . \label{eq:match}$$ This yields the correspondence with the $\chi^{2}$ statistic $$\chi^{2}=\sum_{k=1}^{m} \frac{(n_{k}-np_{k})^{2}}{np_{k}} \quad . \label{eq:chi2}$$ Equation  allows a frequentist $\chi^{2}$ evaluation of the statistical significance of our observations. However, this will only provide meaningful results in case the conditions mentioned above are satisfied. In case a rather unlikely event happens to be observed within a small number of trials, a $\chi^{2}$ analysis may lead to completely wrong conclusions whereas the Bayesian approach outlined above will provide the correct results [@jaynes]. As such, the present studies will be based on the exact Bayesian expression of eq. . Discovery potential =================== Evaluation of the expression of eq.  for the data displayed in Fig. \[fig:tott1\] yields $\psi=713.38$ dB. Since these data do not allow the identification of a GRB signal, this rather high $\psi$ value must be due to background fluctuations. This is indeed confirmed by investigation of the corresponding background data shown in Fig. \[fig:bkgt1\], which yield $\psi_{bkg}=709.43$ dB. Consequently, it is required to determine the $\psi$ value of the corresponding background before the statistical significance of an observed time profile can be evaluated. In our toy model studies we have direct access to the corresponding background time profile, but this will in general not be the case in an actual experimental data analysis effort. One way to investigate background signals is to record data as outlined above, but with fictitious GRB trigger times not coinciding with the actual $t_{grb}$. This method we call “on source off time”.
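For illustration, the exact Bayesian $\psi$ of a stacked profile under the uniform background hypothesis ($p_{k}=m^{-1}$), and its comparison with a set of background stacked profiles, might be computed as follows (a sketch; function names are ours):

```python
import numpy as np
from math import lgamma, log

LOG10 = log(10.0)

def psi(counts):
    """Exact Bayesian psi = -10 [log10 n! + sum_k (n_k log10 p_k - log10 n_k!)]
    in decibels, for a uniform background hypothesis p_k = 1/m."""
    counts = np.asarray(counts, dtype=int)
    n, m = counts.sum(), len(counts)
    # log10(x!) via the log-gamma function, valid also for large counts.
    log10_fact = lambda x: lgamma(x + 1.0) / LOG10
    return -10.0 * (log10_fact(n)
                    + sum(k * (-log(m) / LOG10) - log10_fact(k) for k in counts))

def significance(observed, bkg_samples):
    """Compare psi of the observed stacked profile with the mean and rms
    of psi over a set of background stacked profiles (e.g. obtained with
    the 'on source off time' method)."""
    psis = np.array([psi(b) for b in bkg_samples])
    return (psi(observed) - psis.mean()) / psis.std(ddof=1)
```

Clustered data give a larger $\psi$ than evenly spread data: for two bins, $\psi([2,0])=-10\log_{10}0.25\approx 6.0$ dB, whereas $\psi([1,1])=-10\log_{10}0.5\approx 3.0$ dB.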
In order to have similar detector conditions for both the signal and background studies, the fictitious trigger times should be chosen not too distinct from the actual $t_{grb}$. Recording background data in a time span covering 1 day before and 1 day after the GRB observation will allow the investigation of at least 25 different background time profiles per burst. These in turn will yield the corresponding different stacked background time profiles which allow the determination of an average value $\bar{\psi}_{bkg}$ and the corresponding root mean square deviation $s_{bkg}$. The processing of extra background data as described above might turn out to be impractical, due to e.g. data volume. In such a case one might consider using the remaining data of the off source locations of the actual time windows. Such a method we call “off source on time”.\ The performance of the “off source on time” method obviously depends on various detector conditions, which may limit the feasibility of such a background determination. To overcome these possible limitations, one could also envisage using the actual observed time profile and randomise the entries in time. By performing several randomisations, a representation of the corresponding background is obtained. This method we call “time shuffling”. It should be noted, however, that in the case of a large signal contribution the time shuffling method will underestimate the significance of the signal. In view of the above, we will use the “on source off time” method in our toy model studies by generating 25 different background samples and performing our analysis procedure for each of them.\ In the case of the situation reflected by Fig. \[fig:tott1\] this yields $\bar{\psi}_{bkg}=692.04$ dB and $s_{bkg}=21.19$ dB, which is seen to be in excellent agreement with the actual background value corresponding to Fig.
\[fig:bkgt1\].\ Comparison of the actually observed $\psi$ value of 713.38 dB with the reconstructed background values immediately shows that no significant signal is observed.\ However, evaluation of the data corresponding to Fig. \[fig:tott2\] yields $\psi=218.78$ dB with background values $\bar{\psi}_{bkg}=99.62$ dB and $s_{bkg}=23.98$ dB. Here a statistically significant signal is obtained. For a uniform background and large statistics, the Bayesian $\psi$ observable can be approximated by the frequentist $\chi^{2}$ statistic, as indicated in eqs. -. This implies that the statistical significance for deviation from a uniform background distribution can be expressed in terms of a standard deviation $\sigma$ by comparison of the actually observed $\psi$ value of the stacked time profile with the corresponding $\bar{\psi}_{bkg}$ and $s_{bkg}$ background values. This is illustrated in Fig. \[fig:psidist\] for a sample of 250 different background samples according to the situation reflected in Fig. \[fig:tott2\]. The distribution of the obtained $\psi_{bkg}$ values as shown in Fig. \[fig:psidist\] exhibits a Gaussian profile with a mean value and standard deviation which are indeed consistent with the above $\bar{\psi}_{bkg}$ and $s_{bkg}$ values, respectively. ![Distribution of $\psi_{bkg}$ values for a large sample of background distributions according to the situation depicted in Fig. \[fig:tott2\].[]{data-label="fig:psidist"}](psidist){width="9cm"} Variation of the number of GRBs allows a determination of the minimal value of the fraction $f$ for which a statistically significant signal can be obtained. Common practice is to claim a discovery in the case a significance in excess of $5\sigma$ is obtained. Following the procedure outlined above this leads to the discovery sensitivities as shown in Fig. \[fig:disc\]. 
![Sensitivities corresponding to a $5\sigma$ signal significance.[]{data-label="fig:disc"}](disc){width="9cm"} It should be noted that the actually achievable sensitivities depend on various detector-specific parameters and the quality of the available data. As such, all parameters as well as the number of background samples will have to be optimised for each specific experimental data analysis scenario. In case no significant signal can be identified from an experimentally observed stacked time profile, values like the ones presented in Fig. \[fig:disc\] provide the basis for a fluence limit determination. Summary ======= The method introduced in this report allows identification of high-energy neutrinos from gamma ray bursts with large scale neutrino telescopes. The procedure is based on a time profile stacking technique, which provides statistically significant results even in the case of low signal rates. The performance of the method has been investigated by means of toy model studies based on realistic parameters for the future IceCube km$^{3}$ neutrino telescope and a variety of burst samples. From these investigations it is seen that a $5\sigma$ significance is obtained on a sample of 500 bursts with a signal rate as low as 1 detectable neutrino for 3% of the bursts.\ Finally, it should be realised that the actually achievable sensitivities depend on various detector-specific parameters and the quality of the available data. These aspects, however, fall beyond the scope of the present report. The author would like to thank Bram Achterberg, Martijn Duvoort, John Heise and Garmt de Vries for the very fruitful discussions on the subject. [99]{} R. Blandford, D. Eichler, Phys. Rep. [**154**]{} (1987) 1.\ A. Achterberg [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**328**]{} (2001) 393. Particle Data Group, J. Phys. [**G33**]{} (2006) 1. E. Waxman, Astrophys. J. [**452**]{} (1995) L1.\ E. Waxman, J. Bahcall, Phys. Rev.
[**D59**]{} (1999) 023002. R. Gandhi [*et al.*]{}, Phys. Rev. [**D58**]{} (1998) 093009.\ L. Anchordoqui [*et al.*]{}, Phys. Rev. [**D74**]{} (2006) 043008. IceCube collab., Astropart. Phys. [**20**]{} (2004) 507. IceCube collab., ICRC 2005 proceedings\ (astro-ph/0509330). C. Meegan [*et al*]{}, Nature [**355**]{} (1992) 143.\ W. Paciesas [*et al*]{}, Astrophys. J. S. [**122**]{} (1999) 465.\ http://swift.gsfc.nasa.gov/docs/swift/archive/grb\_table. Amanda collab., Phys. Rev. [**D66**]{} (2002) 012005.\ Amanda collab., Astrophys. J. [**583**]{} (2003) 1040. IceCube collab., Astropart. Phys. [**26**]{} (2006) 155. http://swift.gsfc.nasa.gov/docs/swift. Amanda collab., Nucl. Instr. and Meth. [**A524**]{} (2004) 169. F. Halzen, D. Hooper, Astrophys. J. [**527**]{} (1999) L93. E.T. Jaynes, Probability Theory, Cambridge Univ. Press 2003. [^1]: See http://www.phys.uu.nl/$\sim$nick/grbmodel
--- abstract: 'Tidal Downsizing is the modern version of the Kuiper (1951) scenario of planet formation. Detailed simulations of self-gravitating discs, gas fragments, dust grain dynamics, and planet evolutionary calculations are summarised here and used to build a predictive planet formation model and population synthesis. A new interpretation of exoplanetary and debris disc data, the Solar System’s origins, and the links between planets and brown dwarfs is offered. This interpretation is contrasted with the current observations and the predictions of the Core Accretion theory. Observations that can distinguish the two scenarios are pointed out. In particular, Tidal Downsizing predicts that the presence of debris discs, sub-Neptune mass planets, planets more massive than $\sim 5$ Jupiter masses, and brown dwarfs should not correlate strongly with the metallicity of the host. For gas giants of $\sim$ Saturn to a few Jupiter masses, a strong host star metallicity correlation is predicted only at separations less than a few AU from the host. The composition of massive cores is predicted to be dominated by rock rather than ices. Planet formation in surprisingly young or very dynamic systems such as HL Tau and Kepler-444 is interpreted as a signature of Tidal Downsizing. Open questions and potential weaknesses of the hypothesis are pointed out.' author: - 'Sergei Nayakshin [^1]\' title: 'Dawes Review. The tidal downsizing hypothesis of planet formation.' --- The Dawes Reviews are substantial reviews of topical areas in astronomy, published by authors of international standing at the invitation of the PASA Editorial Board. The reviews recognise William Dawes (1762-1836), second lieutenant in the Royal Marines and the astronomer on the First Fleet.
Dawes was not only an accomplished astronomer, but spoke five languages, had a keen interest in botany, mineralogy, engineering, cartography and music, compiled the first Aboriginal-English dictionary, and was an outspoken opponent of slavery. Introduction ============ A planet is a celestial body moving in an elliptic orbit around a star. Although there does not appear to be a sharp boundary in terms of properties, objects more massive than $\approx 13{{\,{\rm M}_{\rm J}}}$ are called brown dwarfs (BDs) since they can fuse deuterium while planets are never sufficiently hot for that [@BurrowsEtal01]. Formation of a star begins when a large cloud dominated by molecular hydrogen collapses due to its self-gravity. The first hydrostatic object that forms in the centre of the collapsing cloud is a gaseous sphere of 1 to a few Jupiter masses; it grows rapidly by accretion of more gas from the cloud [@Larson69]. Due to excess angular momentum, material accreting onto the protostar forms a disc of gas and dust. Planets form out of this (protoplanetary) disc, explaining the flat architecture of both the Solar System and the extra-solar planetary systems [@FabryckyEtal14; @WF14]. The most widely accepted theory of planet formation is the Core Accretion (CA) scenario, pioneered by [@Safronov72]. In this scenario, microscopic grains in the protoplanetary disc combine to yield asteroid-sized bodies [e.g., @GoldreichWard73], which then coalesce to form rocky and/or icy planetary cores [@Wetherill90; @KL99]. These solid cores accrete gas from the disc when they become sufficiently massive [@Mizuno80; @Stevenson82; @IkomaEtal00; @Rafikov06], becoming gas giant planets [@PollackEtal96; @AlibertEtal05; @MordasiniEtal14]. [@Kuiper51b] envisaged that a planet’s life begins like that of a star, by gravitational instability, with the formation of a gas clump of a few Jupiter masses in a massive protoplanetary disc. Unlike stars, young planets do not accrete more gas in this picture.
They may actually lose most of their primordial gas if tidal forces from the host stars are stronger than the self-gravity of the clumps. However, before the clumps are destroyed, solid planetary cores are formed inside them when grains grow and sediment to the centre [@McCreaWilliams65]. In this scenario, the inner four planets in the Solar System are the remnant cores of such massive gas condensations. Jupiter, on the other hand, is an example of a gas clump that was not destroyed by the stellar tides because it was sufficiently far from the Sun. The other three giants in the Solar System are partially disrupted due to a strong negative feedback from their massive cores [@HW75 and §\[sec:SS\_basic\]]. It was later realised that gas clumps dense and yet cool enough for dust grain growth and sedimentation could not actually exist at the location of the Earth for more than a year, so Kuiper’s suggestion lost popularity [@DW75]. However, recent simulations show that gas fragments migrate inward rapidly from their birth place at $\sim 100$ AU, potentially all the way into the star [@BoleyEtal10 more references in §\[sec:rapid\]]. Simulations also show that grain sedimentation and core formation can occur inside the clumps while they are at separations of tens of AU, where the stellar tides are weaker. The clumps may eventually migrate to a few AU and could then be tidally disrupted. Kuiper’s top-down scenario of planet formation is therefore made plausible by planet migration; it was recently re-invented [@BoleyEtal10] and re-branded as the “Tidal Downsizing” hypothesis [@Nayakshin10c]. This review is structured as follows. §\[sec:TD\_scenario\] lists important physical processes underpinning the scenario and points out how they could combine to account for the Solar System’s structure. §§4-7 present detailed calculations that constrain these processes, whereas §\[sec:dp\_code\] overviews a population synthesis approach for making statistical model predictions.
§§\[sec:Z\]-\[sec:kepler444\] are devoted to the comparison of Tidal Downsizing’s predictions with those of Core Accretion and the current observations. §\[sec:SS\] is a brief summary of the same for the Solar System. The Discussion (§\[sec:discussion\]) presents a summary of how Tidal Downsizing might relate to the exoplanetary data, observations that could distinguish between the Tidal Downsizing and the Core Accretion scenarios, open questions, and potential weaknesses of Tidal Downsizing. Observational characteristics of planetary systems {#sec:key_obs} ================================================== In terms of numbers, $\sim 90$% of planets are those less massive than $\sim 20 {{\,{\rm M}_{\oplus}}}$ [@MayorEtal11; @HowardEtal12]. These smaller planets tend to be dominated by massive solid cores, with gas envelopes accounting for only a small fraction of their mass budget, from tiny (as on Earth) to $\sim 10$%. There is a very sharp rollover in the planet mass function above the mass of $\sim 20{{\,{\rm M}_{\oplus}}}$. On the other end of the mass scale, there are gas giant planets that are usually more massive than $\sim 100 {{\,{\rm M}_{\oplus}}}$ and consist mainly of a H/He gas mixture enveloping a solid core. In terms of environment, planets should be able to form as close as $\lesssim 0.05$ AU from the host star [@MQ95] to as far away as tens and perhaps even hundreds of AU [@MaroisEtal08; @BroganEtal15]. Both small and large planets are not just smaller pieces of their host stars: their bulk compositions are over-abundant in metals compared to their host stars [@Guillot05; @MillerFortney11]. The planet formation process should also provide a route to forming smaller $\sim 1 - 1000$ km sized solid bodies, called planetesimals, such as those in the asteroid and the Kuiper belt in the Solar System and the debris discs around nearby stars [@Wyatt08].
While gas giant planet detection frequency is a strongly increasing function of the host star’s metallicity [@FischerValenti05], the yield of observed smaller members of the planetary system – massive solid cores [@BuchhaveEtal12; @WangFischer14] and debris discs [@Moro-MartinEtal15] – does not correlate with metallicity. One of the observational surprises of the last decade has been the robustness of the planet formation process. Planets must form in under 3 [@HaischEtal01] and perhaps even 1 Myr [@BroganEtal15 and §\[sec:HLT\]], and also in very dynamic environments around eccentric stellar binaries [e.g., @WelshEtal12] and orbiting the primary in eccentric binary systems such as Kepler-444 [@DupuyEtal16 §\[sec:kepler444\]]. It was argued in the past that formation pathways of brown dwarfs (BDs) and of more massive stellar companions to stars should be distinct from those of planets [e.g., @WF14] because of their different metallicity correlations and other properties. However, observations now show a continuous transition from gas giant planets to brown dwarfs on small orbits in terms of their metal content, host star metallicity correlations, and the frequency of appearance (see §\[sec:BD\_vs\_planets\]). Also, observations show that planets and stellar companions are often members of the same systems. There are stellar multiple systems whose orbital structure is very much like that of planetary systems [e.g., @TokovininEtal15]. This suggests that we need a theory that can address formation of both planetary and stellar mass companions in one framework [as believed by @Kuiper51b]. Tidal Downsizing hypothesis {#sec:TD_scenario} =========================== Basic steps {#sec:TD_basic} ----------- ![Tidal Downsizing hypothesis is a sequence of four steps: (1) gas clump birth; (2) migration; (3) grain sedimentation and core formation; (4) disruption.
Not all of these steps may occur for a given clump (see §\[sec:TD\_basic\] for detail).[]{data-label="fig:sketch"}](Figs/TD_sketch1-eps-converted-to.pdf){width="0.9\columnwidth"} The Tidal Downsizing hypothesis is a sequence of four steps, illustrated in Fig. \[fig:sketch\]: \(1) A gas clump of Jovian mass is born at a separation of $\sim 100$ AU from the star in a gravitationally unstable gas disc (see §\[sec:disc\_fragm\]). \(2) The clump migrates inward rapidly due to torques from the disc, as shown by simulations (§\[sec:rapid\]). \(3) A core and solid debris (planetesimals) form in the centre of the clump by grain sedimentation and gravitational instability of the solid component (§§\[sec:dust\_inside\], \[sec:planetesimals\], \[sec:cores\]). (4A) If the fragment did not contract sufficiently from its initial extended state, it is disrupted by tides from the star [@BoleyEtal10 and §\[sec:term\]]. The core and the debris are released back into the disc, forming debris rings (shown as a brown oval filled with a pattern in Fig. \[fig:sketch\]). The core continues to migrate in, although at a slower rate. (4B) If the fragment contracts faster than it migrates then it is not disrupted and becomes a gas giant planet with a core. Note that the latter does not have to be massive. The planet formation process ends when the gas disc is dissipated away [@AlexanderREtal14a].
Key concepts and physical constraints {#sec:tscales}
-------------------------------------

#### Pre-collapse gas fragments, formed by gravitational instability in the disc (see §\[sec:inside\] and §\[sec:term\]) are initially cool, with central temperatures $T_{\rm c}\sim$ a hundred K, and extended, with the radius of the clump (planet) estimated as [@Nayakshin15a] $$R_{\rm p} \approx 0.7 {G M_{\rm p}\mu\over k_b T_{\rm c}} \approx 2 \hbox{AU} \left({M_{\rm p}\over 1 {{\,{\rm M}_{\rm J}}}}\right) T_2^{-1}\;, \label{rp1}$$ where $T_2 = T_{\rm c}/100$ K, and $\mu \approx 2.43 m_p$ is the mean molecular weight for Solar composition molecular gas. Clump effective temperatures are typically of order tens of K [e.g., @VazanHelled12]. The fragments are expected to contract rapidly and heat up initially; upon reaching $T_{\rm c}\sim 1000$ K their contraction becomes much slower [e.g., Fig. 1 in @Nayakshin15a].

#### Second collapse. If the planet contracts to the central temperature $T_{\rm c}\sim 2,000$ K, it collapses rapidly due to H$_2$ dissociation [@Bodenheimer74] into the “second core” [@Larson69], which has $T_{\rm c} \gtrsim 20,000$ K and a radius of only $R_{\rm p} \sim 1{{\,{\rm R}_{\odot}}}\approx 0.005$ AU (see §\[sec:term\]).

#### Super-migration. Numerical simulations (§\[sec:rapid\]) show that gas clumps born by gravitational instability “super-migrate” in, that is, their separation from the star may shrink from $a \sim 100$ AU to arbitrarily close to the star, unless the disc dissipates earlier. The migration time $t_{\rm mig}$ ranges from a few thousand years to a few $\times 10^5$ years at late times when the disc mass is low.
#### Tidal disruption of the planet takes place if its radius is larger than the Hill radius of the planet, $$R_{\rm H} = a \left({M_{\rm p}\over 3 M_*}\right)^{1/3} \approx 0.07 a \left({M_{\rm p}\over 1 {{\,{\rm M}_{\rm J}}}}\right)^{1/3}\;, \label{RH1}$$ where $a$ is the planet-star separation and $M_*$ was set to $1{{\,{\rm M}_{\odot}}}$. Pre-collapse fragments can be disrupted at $a\sim$ a few to tens of AU, whereas post-collapse planets are safe from tidal disruptions except perhaps at very small separations, $a\lesssim 0.1$ AU.

#### Exclusion zone. The smallest separation that a migrating gas fragment can reach is found by equating equations \[RH1\] and \[rp1\] at $T_{\rm c} = 2000$ K: $$a_{\rm exc} = 1.33 \hbox{ AU } \left({ M_{\rm p}\over 1 {{\,{\rm M}_{\rm J}}}}\right)^{2/3}\;. \label{aex1}$$ This implies that there should be a drop in the number of gas giant planets inwards of $a_{\rm exc}$. Inside the exclusion zone, only the planets that managed to collapse before they were pushed to $a_{\rm exc}$ remain gas giants.

#### Grain sedimentation is possible inside pre-collapse fragments [see §\[sec:cores\], @McCreaWilliams65] as long as the fragments are cooler than $\sim 1500$ K. Grain growth and sedimentation time scales are a few thousand years (eq. \[tsed1\]). Assembly of a massive core ($M_{\rm core} \ge 1 {{\,{\rm M}_{\oplus}}}$) may however require from $10^4$ to a few $\times 10^5$ years.

#### Planetesimals are debris of disrupted planets in this model, and are born only when and where these disruptions take place (§\[sec:planetesimals\] and \[sec:hier\]). The relation between planets and planetesimals is thus the inverse of that in the Core Accretion picture.

#### Pebble accretion. Grains of 10 cm or larger accreting onto the planet may accelerate its collapse by increasing the planet’s weight (§\[sec:pebbles\]). This process leads to distinct, testable metallicity correlation signatures.
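The three characteristic scales above can be checked with simple arithmetic. The sketch below (our own illustration in plain Python, not from any published code; function names are ours) evaluates eqs. \[rp1\], \[RH1\] and \[aex1\] using the rounded coefficients quoted in the text:

```python
# Back-of-the-envelope evaluation of eqs. (rp1), (RH1) and (aex1).
# Masses in Jupiter masses (1 M_J ~ 1e-3 M_sun), lengths in AU.

def r_planet_au(m_p_mj, t_c_k):
    """Pre-collapse fragment radius, eq. (rp1): R_p ~ 2 AU (M_p/M_J)(100 K / T_c)."""
    return 2.0 * m_p_mj * (100.0 / t_c_k)

def r_hill_au(a_au, m_p_mj, m_star_msun=1.0):
    """Hill radius, eq. (RH1): R_H = a (M_p / 3 M_*)^(1/3)."""
    return a_au * (1.0e-3 * m_p_mj / (3.0 * m_star_msun)) ** (1.0 / 3.0)

def a_exclusion_au(m_p_mj):
    """Separation where R_p(T_c = 2000 K) equals R_H, i.e. the exclusion
    zone boundary of eq. (aex1); scales as M_p^(2/3)."""
    return r_planet_au(m_p_mj, 2000.0) / (1.0e-3 * m_p_mj / 3.0) ** (1.0 / 3.0)
```

For a Jupiter-mass clump this gives $R_{\rm p} \approx 2$ AU at $T_{\rm c} = 100$ K, $R_{\rm H} \approx 7$ AU at $a = 100$ AU, and $a_{\rm exc} \approx 1.4$ AU, consistent with the 1.33 AU of eq. \[aex1\] to within the rounding of the coefficients.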
#### Negative feedback by cores more massive than a few ${{\,{\rm M}_{\oplus}}}$. These cores release so much heat that their host gas clumps expand and may lose their gas completely, somewhat analogously to how red giant stars lose their envelopes. Core feedback can destroy gas clumps at separations as large as tens of AU (§\[sec:feedback\]).

![image](Figs/SS_sketch.pdf){width="1.9\columnwidth"}

A zeroth order Solar System model {#sec:SS_basic}
---------------------------------

Figure \[fig:SS\_sketch\] shows a schematic Tidal Downsizing model for the formation of the Solar System. In this picture, the inner four terrestrial planets are the remnants of gas fragments that migrated in most rapidly and lost their gaseous envelopes due to the tides from the Sun at separations $a\gtrsim a_{\rm exc}$, i.e., a few AU (cf. eq. \[aex1\]), potentially explaining the origin and the location of the Asteroid belt. Since these fragments were made earlier when the disc was likely more massive, they migrated in very rapidly and had little time for core assembly. This may explain qualitatively why the terrestrial planet masses are so low compared to the much more massive cores of the four giants. Continuing this logic, we should expect that, in general, the mass of a core in a planet increases with the distance from the Sun. If Jupiter’s core mass is $\lesssim 5 {{\,{\rm M}_{\oplus}}}$, that is, in between the terrestrial planet masses and those of the more distant “ice giants” [such a core mass is allowed by Jupiter’s interior models, e.g., @Guillot05], then Jupiter was not strongly affected by the feedback from its core. It is therefore reasonable that Jupiter kept all or a major fraction of its primordial H/He content at its current location of 5.2 AU. Pebble accretion onto Jupiter, and/or partial H/He mass loss, made its bulk composition metal-rich compared with the Sun.
Even further from the Sun, Saturn, Uranus and Neptune are expected to have even larger cores, which is consistent with Saturn’s core [constrained to weigh $5-20{{\,{\rm M}_{\oplus}}}$, see @HelledG13] most likely being heavier than Jupiter’s, and with Uranus and Neptune consisting mainly of their cores, so having $M_{\rm core}\gtrsim 10 {{\,{\rm M}_{\oplus}}}$. At these high core masses, the three outer giants of the Solar System evolved differently from Jupiter. In this model, they would have had their envelopes puffed up to much larger sizes than Jupiter had. Saturn has then lost much more of its primordial H/He than Jupiter, with some of the gas envelope still remaining bound to its massive core. Uranus’s and Neptune’s envelopes were almost completely lost. As with the Asteroid belt, the Kuiper belt is the record of the tidal disruptions that made Saturn, Uranus and Neptune. A more detailed interpretation of the Solar System in the Tidal Downsizing scenario is given in §\[sec:SS\]. The Solar System is not very special in this scenario, being just one of thousands of possible realisations of Tidal Downsizing (see Fig. \[fig:sketch2\]). The main difference between the Solar System and a typical observed exoplanetary system [e.g., @WF14] may be that the proto-Solar Nebula was removed relatively early on, before the planets managed to migrate much closer in to the Sun. The spectrum of Tidal Downsizing realisations depends on many variables, such as the disc metallicity, the timing of the gas disc removal, the number and the masses of the gas clumps and the planetary remnants, and the presence of more massive stellar companions. There is also a very strong stochastic component due to the clump-clump and the clump-spiral arm interactions [@ChaNayakshin11a].
Multidimensional gas disc simulations {#sec:3D}
=====================================

Disc fragmentation {#sec:disc_fragm}
------------------

To produce Jupiter at its current separation of $a\approx 5$ AU via disc fragmentation [@Kuiper51b], the protoplanetary disc needs to be very massive and unrealistically hot [e.g., @GoldreichWard73; @CassenEtal81; @LaughlinBodenheimer94]. Analytical arguments and 2D simulations with a locally fixed cooling time by [@Gammie01] showed that self-gravitating discs fragment only when (1) the [@Toomre64] $Q$-parameter is smaller than $\sim 1.5$, and (2) the disc cooling time is $t_{\rm cool} = \beta \Omega_K^{-1} \lesssim $ a few times the local dynamical time, which is defined as $1/\Omega_K = (R^3/GM_*)^{1/2}$, where $M_*$ is the protostar’s mass. The current consensus in the community is that formation of planets any closer than tens of AU via gravitational instability of the protoplanetary disc [*in situ*]{} is very unlikely [e.g., see @Rafikov05; @Rice05; @DurisenEtal07; @RogersWadsley12; @HelledEtal13a; @YoungClarke16], although some authors find their discs to be fragmenting for $\beta$ as large as 30 in their simulations [@MeruBate11a; @MeruBate12; @Paardekooper12a]. The [@Toomre64] $Q$-parameter must satisfy $$Q = {c_s \Omega\over \pi G\Sigma} \approx {H\over R} { M_* \over M_{\rm d}}\lesssim 1.5\;, \label{Q1}$$ where $c_s$ and $\Sigma$ are the disc sound speed and surface density, respectively. The second equality in eq. \[Q1\] assumes hydrostatic balance, in which case $c_s/H = \Omega$ [@Shakura73], where $H$ is the disc vertical height scale. The disc mass at radius $R$ was defined as $M_{\rm d}(R) = \Sigma \pi R^2$. Finally, $\Omega^2 \approx G M_*/R^3$, neglecting the mass of the disc compared to that of the star, $M_*$. Since $H/R\propto T_{\rm d}^{1/2}$, where $T_{\rm d}$ is the disc mid plane temperature, we see that to fragment, the disc needs to be (a) relatively cold and (b) massive.
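The fragmentation condition of eq. \[Q1\] can be illustrated numerically. The sketch below is our own (not from any cited code), using Kepler units $G = 4\pi^2$ in AU, yr and ${{\,{\rm M}_{\odot}}}$ and assuming the hydrostatic relation $c_s = H\Omega$:

```python
import math

G = 4.0 * math.pi ** 2  # gravitational constant in AU^3 M_sun^-1 yr^-2

def toomre_q(c_s, sigma, r_au, m_star=1.0):
    """Toomre parameter Q = c_s * Omega / (pi G Sigma), eq. (Q1)."""
    omega = math.sqrt(G * m_star / r_au ** 3)
    return c_s * omega / (math.pi * G * sigma)

def sigma_marginal(h_over_r, r_au, q=1.5, m_star=1.0):
    """Surface density at marginal stability, inverting eq. (Q1)
    with the hydrostatic relation c_s = H * Omega."""
    omega = math.sqrt(G * m_star / r_au ** 3)
    c_s = h_over_r * r_au * omega
    return c_s * omega / (math.pi * G * q)

# A disc with H/R = 0.2 at R = 100 AU around a 1 M_sun star is
# marginally unstable (Q = 1.5) when its mass M_d = Sigma * pi * R^2
# is ~0.13 M_sun, i.e. M_d/M_* ~ (H/R)(1.5/Q).
sigma = sigma_marginal(0.2, 100.0)
m_disc = sigma * math.pi * 100.0 ** 2
```

The resulting $M_{\rm d}/M_* \approx 0.13$ agrees with the $\approx 0.15$ of eq. \[md1\] to within the accuracy of the approximate second equality in eq. \[Q1\].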
In particular, assuming $H/R \sim 0.2$ [@TsukamotoEtal14] at $R\sim 50-100$ AU, the disc mass at fragmentation is estimated as $${M_{\rm d}\over M_*} \approx 0.15 \left( {1.5\over Q} \right) \left({H \over 0.2 \; R}\right) \;. \label{md1}$$ [@Lin87] argued that the effective $\alpha_{\rm sg}$ generated by spiral density waves should saturate at around unity when the Toomre $Q$-parameter approaches unity from above. Simulations [@Gammie01; @LodatoRice04; @LodatoRice05] show that $\alpha_{\rm sg}$ for [*non-fragmenting*]{} discs does not exceed $\sim 0.1$. This constrains the disc viscous time scale as $$t_{\rm visc} = {1\over \alpha} {R^2 \over H^2} {1\over \Omega_K} \approx 4\times 10^4 \; {\rm years} \; \alpha_{0.1}^{-1} R_2^{3/2}\;, \label{tvisc1}$$ where $\alpha_{0.1} = \alpha/0.1$, $R_2 = R/100$ AU and $H/R$ was set to 0.2. Thus, gravitationally unstable discs may evolve very rapidly, much faster than the disc dispersal time [$\sim 3$ Myr, @HaischEtal01]. However, once the disc loses most of its mass via accretion onto the star, $\alpha_{\rm sg}$ may drop well below $\sim 0.1$ and the disc may then persist for much longer in a non self-gravitating state.

Rapid fragment migration {#sec:rapid}
------------------------

[@Kuiper51b] [*postulated*]{} that Solar System planets did not migrate. The importance of planet migration for Core Accretion theory was realised when the first hot Jupiter was discovered [@MQ95; @Lin96], but gravitational instability planets remained “immune” to this physics for much longer. [@VB05; @VB06] performed numerical simulations of molecular cloud core collapse and protostar growth. As expected from the previous fixed cooling time studies (§\[sec:disc\_fragm\]), their discs fragmented only beyond $\sim 100$ AU. However, their fragments migrated inward towards the protostar very rapidly, on time scales of a few to ten orbits ($\sim O(10^4)$ yrs). The clumps were “accreted” by their inner boundary condition at $10$ AU.
This could be relevant to the well known “luminosity problem” of young protostars [@HartmannEtal98]: the observed accretion rates of protostars are too small to actually make $\sim 1$ Solar mass stars within the typical disc lifetime. The missing mass is believed to be accreted onto the stars during episodes of very high accretion rate bursts, $\dot M \gtrsim 10^{-4}{{\,{\rm M}_{\odot}}}$ yr$^{-1}$, which are rare. The high accretion rate protostars are called “FU Ori” sources [e.g., @HK96]; statistical arguments suggest that a typical protostar goes through a dozen such episodes. Although other possibilities exist [@Bell94; @ArmitageEtal01], massive migrating clumps driven into the inner disc and rapidly disrupted there provide a very natural mechanism to solve the luminosity problem [@DunhamVorobyov12] and to explain the origin of the FU Ori sources [@NayakshinLodato12]. Future observations of FU Ori outburst sources may give away the presence of close-in planets via quasi-periodic variability in the accretion flow [e.g., @PowellEtal12]. Recent coronagraphic Subaru 8.2 m Telescope imaging in polarised infrared light of several of the brightest young stellar objects (YSO), including FU Ori, has shown evidence for large scale spiral arms on scales larger than 100 AU in all of these sources [@LiuHB16]. The authors suggest that such spiral arms may indeed be widespread amongst FU Ori sources. This would support the association of FU Ori sources with migrating gas clumps. In the planet formation literature gas fragment migration was rediscovered by [@BoleyEtal10], who modelled massive and large protoplanetary discs [although the earliest mention of gas fragment migration may have been made by @MayerEtal04]. They found that gravitational instability fragments are usually tidally disrupted in the inner disc. Similar rapid migration of fragments was seen by [@InutsukaEtal09; @MachidaEtal11; @ChaNayakshin11a; @ZhuEtal12a].
[@BaruteauEtal11] (see figure \[fig:Bar11\]) and [@MichaelEtal11] found that gas giants migrate inward so rapidly because they do not open gaps in [*self-gravitating*]{} discs. This is known as the type I migration regime [see the review by @BaruteauEtal14a]. For a laminar disc, the type I migration time scale, defined via $da/dt = - a/t_{\rm I}$ where $a$ is the planet separation from the star, is $$t_{\rm I} = (\Gamma \Omega)^{-1} Q \frac{M_*}{M_{\rm p}}\frac{H}{a} = 3\times 10^4 \;\hbox{yrs}\; a_2^{3/2} \frac{H}{0.2 a} {Q\over \Gamma} q_{-3}^{-1}\;, \label{tmig1}$$ where $q_{-3} = 1000 M_{\rm p}/M_*$ is the planet to star mass ratio scaled to 0.001, $a_2 = a/100$ AU, and $\Gamma$ is a dimensionless factor that depends on the disc surface density profile and thermodynamical properties [$\Gamma$ is the modulus of eq. 6 in @BaruteauEtal11]. Simulations show that $\Gamma\sim$ a few to ten for self-gravitating discs, typically. Due to the chaotic nature of the gravitational torques that the planet receives from the self-gravitating disc, planet migration is not a smooth monotonic process. This can be seen from the migration tracks in Fig. \[fig:Bar11\], which are for the same disc with cooling parameter $\beta = 15$ and the same $M_{\rm p} = 1{{\,{\rm M}_{\rm J}}}$ planet, all placed at $a=100$ AU initially, but with varying azimuthal angles $\phi$ in the disc. The extremely rapid inward migration slows down only when deep gaps are opened in the disc, which typically occurs when $q > 0.01-0.03$ at separations of tens of AU. This is appropriate for brown dwarf mass companions. ![Numerical simulations of a Jupiter mass planet migrating in a self-gravitating protoplanetary disc [@BaruteauEtal11]. The planets are inserted in the disc at a separation of $100$ AU, and migrate inward in a few thousand years. Different curves are for the same initial disc model but for the planet starting at 8 different azimuthal locations.
The inset shows the disc surface density map.[]{data-label="fig:Bar11"}](Figs/bmp11_c.png){width="1.\columnwidth"}

Fragment mass evolution {#sec:AorM}
-----------------------

Most authors find analytically that the initial fragment mass, $M_{\rm in}$, is at the very [*minimum*]{} $3{{\,{\rm M}_{\rm J}}}$ [e.g., @Rafikov05; @KratterEtal10; @ForganRice11; @ForganRice13; @TsukamotoEtal14], suggesting that disc fragmentation should yield objects in the brown dwarf rather than the planetary mass regime [e.g., @SW08]. One exception is [@BoleyEtal10], who found analytically $M_{\rm in} \sim 1-3{{\,{\rm M}_{\rm J}}}$. Their 3D simulations formed clumps with initial masses from $M_{\rm in} \approx 0.8{{\,{\rm M}_{\rm J}}}$ to $\sim 3{{\,{\rm M}_{\rm J}}}$. [@ZhuEtal12a] found initial masses larger than $10 {{\,{\rm M}_{\rm J}}}$ in their 2D fixed grid simulations, commenting that they assumed a far more strongly irradiated outer disc than [@BoleyEtal10]. [@Boss11] finds initial fragment masses from $\sim 1 {{\,{\rm M}_{\rm J}}}$ to $\sim 5{{\,{\rm M}_{\rm J}}}$. However, $M_{\rm in}$ remains highly uncertain. In the standard accretion disc theory, the disc mid plane density is $\rho_{\rm d} = \Sigma/(2 H)$. Using eq. \[Q1\], the initial fragment mass can be estimated as $$M_{\rm in} = {4\pi\over 3} \rho_{\rm d} H^3 \approx {1 \over 2} M_* \left( {H\over R} \right)^{3} {1.5 \over Q}\;. \label{m_in1}$$ For $H/R = 0.2$ and $M_* = 1{{\,{\rm M}_{\odot}}}$, this yields $M_{\rm in} = 4{{\,{\rm M}_{\rm J}}}$, but for $H/R=0.1$ we get an approximately ten times smaller value. While the disc mass at fragmentation depends on $H/R$ only linearly, $M_{\rm in} \propto (H/R)^3$, so the fragment mass is much more sensitive to the properties of the disc at fragmentation. If the clump accretes more gas from the disc then it may move into the brown dwarf or even low stellar mass regime.
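The scalings of eqs. \[tmig1\] and \[m\_in1\] can be put together in a short numerical sketch (ours, with illustrative parameter values; not code from any of the cited works):

```python
def t_type1_yr(a_au, h_over_a=0.2, q_ratio=1.0e-3, gamma=1.0, q_toomre=1.0):
    """Type I migration time, eq. (tmig1), in years.
    q_ratio = M_p/M_*; gamma and q_toomre are the Gamma and Q factors."""
    return (3.0e4 * (a_au / 100.0) ** 1.5 * (h_over_a / 0.2)
            * (q_toomre / gamma) / (q_ratio / 1.0e-3))

def m_initial_mj(h_over_r, q_toomre=1.5, m_star_msun=1.0):
    """Initial fragment mass, eq. (m_in1), in Jupiter masses."""
    m_in_msun = 0.5 * m_star_msun * h_over_r ** 3 * (1.5 / q_toomre)
    return m_in_msun / 1.0e-3  # 1 M_J ~ 1e-3 M_sun

# A 1 M_J planet at 100 AU migrates in ~3e4 yr, comparable to the
# viscous time of eq. (tvisc1) and far shorter than the disc lifetime.
# M_in is ~4 M_J for H/R = 0.2 but only ~0.5 M_J for H/R = 0.1,
# illustrating the steep (H/R)^3 dependence of the fragment mass.
```

The eight-fold drop of $M_{\rm in}$ between $H/R = 0.2$ and $H/R = 0.1$ is the "approximately ten times smaller value" quoted after eq. \[m\_in1\].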
To become bound to the planet, gas entering the Hill sphere of the planet, $R_{\rm H}$, must lose its excess energy and do so quickly, while it is still inside the Hill sphere, or else it will simply exit the Hill sphere on the other side [cf. @OrmelEtal15 for a similar Core Accretion issue]. [@ZhuEtal12a] used 2D fixed grid hydrodynamical simulations to follow the assembly of a massive protoplanetary disc by axisymmetric gas deposition from larger scales. They find that the results depend on the mass deposition rate into the disc, $\dot M_{\rm dep}$, and may also be chaotic for any given clump. Of the 13 gas fragments formed in their simulations, six migrate all the way to the inner boundary of their grid, four are tidally disrupted, and three become massive enough (brown dwarfs) to open gaps in the disc. Even when the gas is captured inside the Hill radius it still needs to cool further. [@NayakshinCha13] pointed out that the accretion rates onto gas fragments in most current hydrodynamical disc simulations may be over-estimated due to the neglect of planet feedback onto the disc. It was found that fragments more massive than $\sim 6 {{\,{\rm M}_{\rm J}}}$ (for a protoplanet luminosity of $0.01 L_\odot$) have atmospheres comparable in mass to that of the protoplanet. These massive atmospheres should collapse under their own weight. Thus, fragments less massive than a few ${{\,{\rm M}_{\rm J}}}$ do not accrete gas [*rapidly*]{} whereas fragments more massive than $\sim 10{{\,{\rm M}_{\rm J}}}$ do. [@Stamatellos15] considered [*accretion luminosity*]{} feedback for planets after the second collapse. Figure \[fig:ForNot\] shows the time evolution of the fragment separation, mass, and eccentricity for two simulations that are identical except that one of them includes the radiative pre-heating of gas around the planet (red curves), and the other neglects it (black curves).
Preheating of the gas around the fragment drastically reduces the accretion rate onto it, and also encourages it to migrate inward more rapidly, similarly to what is found by [@NayakshinCha13]. In addition, Nayakshin (2016, in preparation), finds that gas accretion onto jovian mass gas clumps depends strongly on the dust opacity of the protoplanetary disc (which depends on grain growth amongst other things); the lower the opacity, the higher the accretion rate onto the planet.

![From Stamatellos (2015). The evolution of a fragment in two identical simulations which differ only by the inclusion of radiative feedback from accretion onto the planet. Panels (a), (b), (c) show the fragment separation, mass and orbital eccentricity, respectively.[]{data-label="fig:ForNot"}](Figs/Stam-eps-converted-to.pdf){width="0.9\columnwidth"}

The desert of wide-orbit gas giants {#sec:desert}
-----------------------------------

Direct imaging observations show that the fraction of stars orbited by gas giant planets at separations greater than about 10 AU is only $\sim 1$% [see @GalicherEtal16 and also §\[sec:wide\] for more references]. This is widely interpreted to imply that massive protoplanetary discs rarely fragment into planetary mass objects. However, this is only the simplest interpretation of the data, and one that neglects at least three very important effects that remove gas giant planet mass objects from their birth-place at $a\gtrsim 50$ AU. A few Jupiter mass gas clump can (1) migrate inward on a time scale of just a few thousand years, as shown in §\[sec:rapid\]; (2) get tidally disrupted, that is, downsized to a solid core if one was formed inside the clump [@BoleyEtal10]; (3) accrete gas and become a brown dwarf or even a low mass secondary star (§\[sec:AorM\]). In Nayakshin (2017, in preparation), it is shown that which of these three routes the clump takes depends most strongly on the cooling rate of the gas that enters the Hill sphere of the planet.
The time scale for the gas to cross the Hill sphere is about the local dynamical time, $t_{\rm cr} \sim 1/\Omega_K$, where $\Omega_K$ is the local Keplerian frequency at the planet’s location. The gas gets compressed and heated as it enters the sphere. If the cooling time is shorter than $t_{\rm cr}$, then the gas should be able to radiate its excess energy away and become bound to the planet, eventually being accreted by it. In the opposite case the gas is unable to cool; its total energy with respect to the planet is positive and thus it leaves the Hill sphere on the other side, never accreting onto the planet. Both pre-collapse and post-collapse planets (see §\[sec:term\] for terminology) were investigated. Simulations are started with a gas clump placed in a massive gas disc at a separation of 100 AU. A range of initial clump masses was investigated, from $M_{\rm p} = 0.5 {{\,{\rm M}_{\rm J}}}$ to $M_{\rm p} =16{{\,{\rm M}_{\rm J}}}$, in steps of a factor of 2. The gas radiative cooling was treated with a prescription similar to the one in [@NayakshinCha13] but without including radiative feedback[^2]. To take into account modelling uncertainties in the dust opacities of protoplanetary discs [see, e.g., @SemenovEtal03; @DD05], the interstellar dust opacity of [@ZhuEtal09] was multiplied by an arbitrary factor $f_{\rm op} = 0.01$, $0.1$, $1$, or 10. The results of these simulations are presented in Fig. \[fig:MoneyP\]. For each simulation, only two symbols are shown: the initial planet mass versus the separation, and then the final object mass and separation. These two points are connected by straight lines although the planets of course do not evolve along those lines. For each starting point there are four lines corresponding to the simulations with the four values of $f_{\rm op}$ as detailed above. As expected, short cooling time simulations (small $f_{\rm op}$) lead to planets accreting gas rapidly.
These objects quickly move into the massive brown dwarf regime and stall at wide separations, opening wide gaps in the parent disc. In the opposite, long cooling time (large values of $f_{\rm op}$) case, the planets evolve at almost constant mass, migrating inward rapidly. The final outcome then depends on how dense the planet is. If the planet is in the pre-collapse, low density, configuration, which corresponds to the left panel in Fig. \[fig:MoneyP\], then it is eventually tidally disrupted. It is then arbitrarily assumed that the mass of the surviving remnant is $0.1{{\,{\rm M}_{\rm J}}}$ (this mass is mainly the mass of a core assembled inside the fragment, and will usually be smaller than this). Such remnants migrate slowly and may or may not remain at their wide separations depending on how long the parent disc lasts. Post-collapse planets, on the other hand, are not tidally disrupted and can be seen on nearly horizontal tracks in the right panel of Fig. \[fig:MoneyP\]. These objects manage to open deep gaps in their parent discs because the discs are less vertically extended and are not massive enough to be self-gravitating at $\lesssim 20$ AU. They migrate in the slower type II regime. For all of the objects in Fig. \[fig:MoneyP\], further evolution depends on the mass budget of the remaining disc and the rate of its removal by, e.g., photo-evaporation. Since objects of a few ${{\,{\rm M}_{\rm J}}}$ migrate most rapidly, it is likely that the objects of that mass that survived in the right panel of the figure will migrate into the inner disc. ![image](Figs/ACC_Mpl_vs_a_end2-eps-converted-to.pdf){width="0.99\columnwidth"} ![image](Figs/ACC_Mpl_vs_a_end_POST_COLLAPSE-eps-converted-to.pdf){width="0.99\columnwidth"} The most important point from the figure is this.
The numerical experiments with a single clump embedded in a massive disc show that it is entirely impossible for the clump to remain in the rectangular box termed a desert in the figure. The observed $\sim 1$% population of gas giant planets at wide separations [@GalicherEtal16] must have evolved in an unusual way to survive where they are observed. Either the parent disc was removed unusually rapidly, by, e.g., vigorous photo-evaporation from an external source [@Clarke07], or the rapid inward migration of the planet was upset by N-body effects. The latter may be relevant to the HR 8799 system [@MaroisEtal10].

Simulations including solids {#sec:sim_solids}
============================

Dynamics of solids in a massive gas disc {#sec:gi_solids}
----------------------------------------

Dust particles in the protoplanetary disc are influenced by aerodynamical friction with the gas [@Weiden77], which concentrates solid particles in dense structures such as spiral arms [@RiceEtal04; @RiceEtal06; @ClarkeLodato09] and gas clumps. [@BoleyDurisen10] performed hydrodynamics simulations of massive self-gravitating discs with embedded 10 cm radius particles. Figure \[fig:BD10\] shows some of their results. The top panel shows a time sequence of gas disc surface density maps with the grain positions super-imposed. Spiral arms and gas clumps become over-abundant in 10 cm particles compared to the initial disc composition. This is seen in the bottom panel of the figure, which presents azimuthally averaged surface densities of the gas and the solid phase, with the latter multiplied by 100. We see that solids tend to be much more strongly concentrated than the gas in the peaks of the gas surface density. [@BoleyEtal11a] emphasised that the composition of planets formed by gravitational instability may be more metal-rich than that of the parent protoplanetary disc. ![Simulations of Boley & Durisen (2010).
[**Top**]{}: the gas disc surface density (colours) and the locations of 10 cm dust grains (black dots) in a simulation of a $0.4{{\,{\rm M}_{\odot}}}$ disc orbiting a $1.5{{\,{\rm M}_{\odot}}}$ star. The snapshots’ time increases from left to right and from top to bottom. [**Bottom:**]{} Azimuthally averaged gas and dust particles surface densities versus radius in a self-gravitating disc. The peaks in the gas surface density correspond to the locations of gas fragments. Note that solids are strongly concentrated in the fragments and are somewhat deficient in between the fragments. []{data-label="fig:BD10"}](Figs/BolDur10_f10.pdf){width="1.05\columnwidth"} Core formation inside the fragments {#sec:dust_inside} ----------------------------------- [@ChaNayakshin11a] performed 3D Smoothed Particle Hydrodynamics [e.g., @Price12] simulations of a massive self-gravitating gas disc with dust. Dust particles were allowed to grow in size by sticking collisions with the dominant background population of small grains tightly bound to the gas. In addition, self-gravity of dust grains was included as well. The disc of $0.4{{\,{\rm M}_{\odot}}}$ in orbit around a star with mass of $0.6{{\,{\rm M}_{\odot}}}$ became violently gravitationally unstable and hatched numerous gas fragments, most of which migrated in and were tidally disrupted. Grains in the disc did not have enough time to grow in size significantly from their initial size $a_g = 0.1$ cm during the simulations, but grains inside the gas fragments grew much faster. One of the fragments formed in the outer disc lived sufficiently long so that its grains sedimented and got locked into a [*self-gravitating bound*]{} condensation of mass $\sim 7.5{{\,{\rm M}_{\oplus}}}$. Figure \[fig:ChaN11\] shows the gas density (black) and the dust density profiles (colours) within this fragment as a function of distance from its centre. 
There is a very clear segregation of grain particles by their size, as larger grains sink more rapidly. The dense dust core is composed of particles with $a_g \gtrsim$ 50 cm. The linear extent of the dusty core is $\sim 0.05$ AU, which is the gravitational softening length of the dust particles in the simulation. This means that the gravitational force between the dust particles is artificially reduced if their separation is less than the softening length. The gas fragment shown in Fig. \[fig:ChaN11\] migrated in rapidly (although not monotonically) and was tidally destroyed at a separation of $\sim 15$ AU. The self-gravitating condensation of solids (the core) however survived this disruption and remained on a nearly circular orbit at a separation of $\sim 8$ AU. This simulation presents a proof of concept for Tidal Downsizing. Gas fragments formed in the simulation showed a range of behaviours. More than half migrated in rapidly and were destroyed. Some fragments merged with others. Others did not merge but exchanged angular momentum with their neighbours and evolved onto more eccentric orbits, with either smaller or larger semi-major axes than their original orbits. This indicates that Tidal Downsizing may produce a range of outcomes, from planets to even more massive companions. ![Gas (black) and dust grain (colour) density as a function of distance from the centre of a gas fragment [from @ChaNayakshin11a]. The colour of the grain particles reflects their size. The coloured points show the grain density at the positions of individual grain particles. The colours are: red for $a < 1$ cm grain particles, green for $1 < a <10$ cm, cyan for $10< a < 100 $ cm and blue for $a > 1$ m.
When the gas is tidally disrupted, the blue and the cyan grains remain self-bound in a core of mass $7.5{{\,{\rm M}_{\oplus}}}$.[]{data-label="fig:ChaN11"}](Figs/fig7.png){width="0.9\columnwidth"} Birth of planetesimals in the fragments {#sec:planetesimals} --------------------------------------- [@BoleyEtal10] concluded that fragments made by gravitational instability and that are tidally disrupted “... will have very different environments from the typical conditions in the outer disk, and they represent factories for processing dust and building [*large solid bodies*]{}. Clump disruption therefore represents a mechanism for processing dust, modifying grain growth, and building large, possibly Earth-mass, objects during the first stages of disk formation and evolution.” In [@Nayakshin10b], §7, it was argued that making large solids by grain sedimentation is much more straightforward in Tidal Downsizing than it is in Core Accretion since there is no Keplerian shear that may pump turbulence in the case of the planetesimal assembly in the protoplanetary disc [@Weiden80], the grains are not lost into the star [the famous 1 metre barrier, @Weiden77], and the expected grain sedimentation velocities are below grain material break-up speeds. [@NayakshinCha12] argued that not only massive cores but also smaller, $\sim 1-1000$ km size bodies can be made inside the fragments. Analytical arguments supporting these ideas will be detailed in §\[sec:hier\]. Here we focus on the orbits of these bodies after a fragment is disrupted. Simulations show that self-gravitating gas fragments formed in protoplanetary discs always rotate [e.g., @MayerEtal04; @BoleyEtal10; @GalvagniEtal12], so that not all solids are likely to condense into a single central core due to the excess angular momentum in the cloud [@Nayakshin11a]. 
At gas densities characteristic of pre-collapse gas fragments, solids larger than $\sim 1-10$ km in radius decouple from the gas aerodynamically in the sense that the timescale for in-spiral of these bodies into the core is $\gtrsim 10^5$ years, which is longer than the expected lifetime of the host fragments [see Fig. 1 in @NayakshinCha12]. Neglecting aerodynamical friction for these large bodies, and assuming that they are supported against falling into the core by rotation, we may ask what happens to them once the gas envelope is disrupted. Approximating the fragment density profile as constant in the region of interest, and labelling it $\rho_0$, the mass enclosed within radius $R$ of the centre of the core is $M_{\rm enc} = M_{\rm core} + (4\pi/3)\rho_0 R^3$. The circular speed of bodies at $R$ is $v_{\rm circ}^2 = G M_{\rm enc}/R$. Bodies circling the core at distances such that $M_{\rm enc} \gg M_{\rm core}$ will be unbound when the gas leaves, whereas bodies very near the core remain strongly bound to it. It is thus convenient to define the core influence radius, $$R_{\rm i} = \left[ {3 M_{\rm core}\over 4 \pi \rho_0}\right]^{1/3}\;. \label{ri}$$ For a central fragment density an order of magnitude larger than the mean density, eq. (10) of [@NayakshinCha12] shows that $R_{\rm i} \sim 0.1 R_{\rm f}$, where $R_{\rm f}$ is the fragment radius. Since the fragment is denser than the tidal density $\rho_{\rm t} = M_*/(2\pi a^3)$, where $a$ is the fragment separation from the host star, $R_{\rm i}$ is also considerably smaller than the Hill radius [*of the core*]{}, $R_{\rm i}/R_{\rm H, core} \approx (\rho_{\rm t}/\rho_0)^{1/3} \ll 1 $, hence the bodies inside $R_{\rm i}$ are not stripped from the core by stellar tides. [@NayakshinCha12] used the 3D dust-SPH code of [@ChaNayakshin11a] to simulate the disruption of a gas fragment in orbit around the star.
It was assumed for simplicity that planetesimals orbit the central core on circular orbits in a disc inside the gas fragment. No protoplanetary disc was included in the simulation. Figure \[fig:NCha12\] shows the gas and the solids shortly after the fragment of mass $5{{\,{\rm M}_{\rm J}}}$ is tidally disrupted [this figure was not shown in the paper but is made using the simulation data from @NayakshinCha12]. The core mass in the simulation is set to $10{{\,{\rm M}_{\oplus}}}$, and its position is marked with the green cross at the bottom of the figure at $(x,y)\approx (0,-40)$. The gas (all originating from the clump) is shown by the diffuse colours. The position of the central star is shown with the red asterisk in the centre. The black dots show the planetesimal particles. Solid bodies closest to the core remain bound to it even after the gas envelope is disrupted. These may contribute to the formation of satellites of the massive core, as needed for Neptune and Uranus. Bodies farther out, however, are unbound from the core when the gas is removed and are then sheared into debris rings with kinematic properties (e.g., mild eccentricities and inclinations) resembling the Kuiper and the Asteroid belts in the Solar System. The debris ring widens to $\Delta R\sim 20$ AU at later times in the simulation [see Fig. 3 in @NayakshinCha12]. This shows that if planetesimals are formed inside pre-collapse fragments, then debris rings made after their disruptions may look very much the same as the “bona fide” planetesimal discs postulated by [@Safronov72], implying that we should look for observational tests that could distinguish between the two scenarios for planet debris formation (see §\[sec:Z\_debris\]). ![Gas (colour) surface density map after a tidal disruption of a gas fragment at $a\sim 40$ AU from the host star [from @NayakshinCha12].
Black dots show positions of large solid bodies (“planetesimals”) that initially orbited the central core of mass $M_{\rm core} = 10{{\,{\rm M}_{\oplus}}}$, marked with the green cross at the bottom of the figure. []{data-label="fig:NCha12"}](Figs/post_disrupted-eps-converted-to.pdf){width="0.99\columnwidth"} Igneous materials inside fragments {#sec:igneous} ---------------------------------- ![Snapshots from 2D simulations by Vorobyov (2011). Formation of crystalline silicates in fragments formed by gravitational collapse of a young and massive protoplanetary disc. Note the migration and disruption of the fragments along with their high gas temperatures (middle panel). This naturally creates igneous materials [*in situ*]{} in the disc at $\sim 100$ AU, where the background disc has a temperature of only $\sim 10-20$ K, and may explain why comets represent a mix of materials made at tens of K and at $\sim 1000-2000$ K.[]{data-label="fig:Vor2011"}](Figs/Vor2011-eps-converted-to.pdf){width="1\columnwidth"} Solar System mineralogy shows the importance of high-temperature ($T \ge 1000 - 2000$ K) processes even for very small solids called chondrules and crystalline silicates. Chondrules are 0.1 to a few mm igneous spherules found in abundance in most unmelted stony meteorites (for example, chondrites). Roughly 85% of meteorite falls are ordinary chondrites, which can be up to 80% chondrules by volume. Chondrules are therefore a major component of the solid material in the inner Solar System [@MorrisDesch10]. Chondrules are likely to form individually from precursors that were melted and then rapidly cooled and crystallised. The puzzle here is that the high temperatures needed to form chondrules directly in the disc are not available beyond $a\sim 1$ AU. A similar composition problem exists for comets. They are icy bodies a few km across that leave vaporised tails of material when they approach the inner Solar System. The composition of comets is bewilderingly diverse.
Some of the materials in cometary nuclei have never experienced temperatures greater than $\sim 30 - 150$ K [@KawakitaEtal04]. Crystalline silicates, e.g. olivine, require temperatures of at least 1000 K to make [@WoodenEtal07]. It was thus suggested [e.g., @Gail01] that igneous materials were made inside the 1 AU region and then were transported to tens of AU regions. However, crystalline silicates in comets may account for as much as $\sim 60$% by weight, requiring a surprisingly high efficiency for such large-scale outward transport of solids [@WestphalEtal09]. [@NayakshinEtal11a], [@Vorobyov11a] and [@BridgesEtal12a] noted that high-temperature processed materials could be made inside pre-collapse gas fragments because these are appropriately hot, $500 \lesssim T_{\rm c} \le 2000$ K. Grains of less than $\sim 1$ cm in size sediment towards the centre of the fragment slowly, being impeded by convective gas motions [@HB11; @Nayakshin14b]. When the fragment is disrupted, the grains are released back into the surrounding gas disc and are then mixed with amorphous materials made in the main body of the disc, requiring no global outward grain transport. Fig. \[fig:Vor2011\] shows [@Vorobyov11a]’s calculations that employ a model for the formation of crystalline silicates as a function of the surrounding gas density and temperature. The top, the middle and the bottom rows of the snapshots show maps of the gas projected density, temperature and the crystalline silicate fraction, respectively, for three consecutive snapshots from the same simulation. Note that the gas temperature is high only inside the gas fragments, and thus all high-T solid processing occurs inside these fragments at large distances from the star. Repeated fragment disruption events like the one shown in the figure may be able to build up a significant reservoir of annealed igneous materials in both the outer and the inner disc.
![image](Figs/Jupiter_iso-eps-converted-to.pdf){width="0.7\columnwidth"} ![image](Figs/sn_m4_fixedZ-eps-converted-to.pdf){width="0.7\columnwidth"} ![image](Figs/sn_m4_Zdot-eps-converted-to.pdf){width="0.7\columnwidth"} Survival of fragments {#sec:inside} ===================== Terminology: pre-collapse and hot start {#sec:term} --------------------------------------- Contraction of an isolated gas clump of mass $M_{\rm p} = 1{{\,{\rm M}_{\rm J}}}$ to the present day Jupiter proceeds in two stages [@Bodenheimer74]. In the first, the pre-collapse stage, the fragment is dominated by molecular H, its central temperature ranges from hundreds of K to 2000 K, the radius $R_{\rm p}$ is from a fraction of an AU to $\sim 10$ AU, and its density is between $10^{-12}$ and $\sim 10^{-7}$ g cm$^{-3}$ [@Nayakshin10a]. This stage is analogous to the first core stage in star formation [@Larson69]. First cores [*of stars*]{} accrete gas rapidly and so contract and heat up almost adiabatically [@Masunaga00], reaching the second core stage in some $\sim 10^3-10^4$ years, depending on the core gas accretion rate. For the problem at hand, however, we assume that gas accretion is not important (cf. §\[sec:AorM\]). The left panel of Fig. \[fig:exp1\] shows the radius $R_{\rm p}$ and central temperature $T_{\rm c}$ of an isolated $M_{\rm p} = 1 {{\,{\rm M}_{\rm J}}}$ planet, cooling radiatively at the interstellar dust opacity, versus time. It takes about 1 Myr for the fragment to contract to temperature $T_{\rm c} = 2000$ K, at which point H$_2$ molecules dissociate. The process requires $\approx 4.5$ eV of energy per molecule to break the bonds, presenting a huge energy sink for the fragment. Robbed of its thermal energy, the fragment then collapses dynamically to much higher densities. When densities of order $\rho\sim 10^{-3}$ g cm$^{-3}$ are reached in the centre, the collapse stops. The post-collapse stage is called the second core in star formation; it is analogous to the “hot start” models [e.g., @MarleyEtal07].
The initial radius of the planet in the hot start configuration is as large as a few $R_\odot$, but the planet is very luminous and contracts quickly to smaller radii [e.g., @SpiegelBurrows12]. In Fig. \[fig:exp1\], the beginning of the second core stage is marked by the blue open circle in the bottom left panel. The red horizontal lines in the top left panel show the Hill radii (eq. \[RH1\]) for several values of the planet-star separation $a$, assuming $M_*=1{{\,{\rm M}_{\odot}}}$. When $R_{\rm p}$ approaches $R_{\rm H}$ from below, mass loss from the planet commences. [@NayakshinLodato12] showed that the planet mass loss can be stable or unstable depending on the planet mass-radius relationship. For a molecular hydrogen planet with polytropic index $n=5/2$, $\zeta_p = -3$ in equation 26 of the quoted paper, and the mass transfer is unstable. Physically, the planet expands rapidly ($R_{\rm p} \propto M_{\rm p}^{-3}$ for this $n$) as it loses mass. This expansion and mass loss is a runaway process until the core starts to dominate the mass of the planet, at which point the planet radius-mass relation changes. The mass loss then slows down and eventually stops. In the coupled disc-planet models below (§\[sec:dp\_code\]), the simplifying assumption is made that mass transfer begins when $R_{\rm p}$ exceeds $R_{\rm H}$ and then instantaneously unbinds the planet. The top left panel of Fig. \[fig:exp1\] shows that pre-collapse planets can be disrupted at separations from $a\sim 1$ to tens of AU from the host star. Survival of a gas fragment as a giant planet at separations of $\lesssim$ a few AU requires the fragment to undergo second collapse [*before*]{} it migrates into the inner disc. Radiative contraction {#sec:rad\_collapse} --------------------- Given that migration times of gas fragments can be as short as $t_{\rm mig} \sim 10^4 $ years (§\[sec:rapid\]), survival of a Jupiter-mass gas clump that cools radiatively, as in Fig.
\[fig:exp1\], in the inner few AU disc appears very unlikely. This is confirmed by [@ForganRice13b]; see §\[sec:FR13\]. Furthermore, [@VazanHelled12] considered a more realistic setup in which pre-collapse planets are embedded in a protoplanetary disc at selected distances from the star. They found that disc irradiation of the planet further slows down the contraction and may even reverse it, heating the planet up, puffing it up and eventually unbinding it [see also @CameronEtal82]. This “thermal bath” effect makes it nearly impossible for [*any*]{} moderately massive gas fragment, $M_{\rm p}\lesssim $ a few ${{\,{\rm M}_{\rm J}}}$, to collapse in the inner $\sim 10$ AU via radiative cooling. Finally, [@HB11] pointed out that, [without grain growth and sedimentation]{}, gas giant planets formed by gravitational instability and cooling radiatively would [*anti-correlate*]{} with the metallicity of the parent star, \[M/H\], which contradicts the observed positive correlation [@FischerValenti05]. Assuming that dust opacity is proportional to the metal mass in the planet, they found that pre-collapse fragments with higher dust opacity naturally take longer to cool radiatively. However, the full picture may be more complex if grain opacity is significantly reduced by grain growth; see [@HB11]. Pebble accretion {#sec:pebbles} ---------------- As already discussed in §\[sec:gi\_solids\], grains that are only moderately coupled to the gas via aerodynamical friction (a few mm to a few cm in size) are captured by a dense body or fragment embedded in the disc [@RiceEtal06; @JohansenLacerda10; @OrmelKlahr10; @BoleyDurisen10]. [@Nayakshin15a] studied the contraction of [*coreless*]{} gas fragments of different metallicities, i.e., the limit in which grains do not get locked into the core because the fragment is too hot or because sedimentation takes too long.
It was found that if $Z =$ const within the fragment, then fragments of higher metallicity collapse more slowly, confirming the results of [@HB11]. However, if the fragment metallicity was increased gradually, by adding grains to the fragment, then the larger the pebble accretion rate, the faster the fragment was found to contract. Panels (a) and (c) of Fig. \[fig:exp1\] show the central temperature of gas fragments of initial mass $M_{\rm p0} = 4{{\,{\rm M}_{\rm J}}}$, with an initial $T_{\rm c} = 100$ K and the dust opacity reduced by a factor of 10 from the interstellar values [@ZhuEtal09]. Panels (b) and (d) show the metallicity evolution of the fragments. In the figure, the constant $Z$ cases are presented in panels (a,b), whereas panels (c,d) show the cases where metals are added to the planet at a constant rate, parameterised by the timescale $t_{\rm z}$: $\dot M_{\rm Z} = dM_{\rm Z}/dt = Z_\odot M_{\rm p0}/t_{\rm z}$, where $M_{\rm Z}$ is the mass of metals inside the planet, and $M_{\rm p0}$ is the mass of the planet at time $t=0$. The initial metallicity for all the cases in panels (c,d) is Solar, $Z = Z_\odot$. Grain growth and settling into the core are turned off, so that the fragments keep a uniform composition. The full problem with grain growth and settling into the core is non-linear and is considered in §\[sec:feedback\]. Physically, addition of pebbles to the fragment may be likened to addition of “dark” mass to the planet. The total energy of the fragment, $E_{\rm tot}$, evolves in time according to $${d E_{\rm tot}\over dt} = - L_{\rm rad} - L_{\rm peb} \;, \label{etot1}$$ where $L_{\rm rad}$ and $L_{\rm peb}$ are, respectively, the radiative luminosity of the planet and the potential energy gain due to pebble accretion, defined as a luminosity: $$L_{\rm peb} = {G M_{\rm p}\dot M_{\rm Z} \over R_{\rm p}}\;.$$ This term enters eq. (\[etot1\]) with a negative sign since the potential energy change of the fragment as pebbles are added is negative.
For moderately massive fragments, $M_{\rm p} \lesssim$ a few ${{\,{\rm M}_{\rm J}}}$, the radiative luminosity is small, as we have seen, and so pebble accretion is the dominant [*effective*]{} cooling mechanism [@Nayakshin15a]. In reality the fragment does not cool: it just becomes more massive without a gain in kinetic or thermal energy, and hence must contract. Assuming the planet to be a polytropic sphere of gas with polytropic index $n$ with an admixture of grains treated as dark mass not contributing to pressure or entropy, it is possible to obtain an analytic solution for how the central temperature of the sphere evolves when its metallicity is increased [@Nayakshin15a]: $$T_{\rm c} = T_0 \left( {M_{\rm p}\over M_{\rm p0}}\right)^{6\over 3-n} = T_0 \left[{1-Z_0\over 1-Z}\right]^{6\over 3-n}\;, \label{tc1}$$ where $Z_0$ and $T_0$ are the initial metallicity and central temperature of the planet. In the limit $Z_0 < Z \ll 1$ this can be simplified further: $(1-Z_0)/(1-Z) \approx 1 + (Z-Z_0)$, and using the identity $(1+x)^b \approx \exp(bx)$, valid for $x\ll 1$: $$T_{\rm c} = T_0 \exp\left[ {6\Delta Z\over 3-n}\right]\;, \label{tc_vs_z3}$$ where $\Delta Z = Z - Z_0$. Clearly, if $6/(3-n) \gg 1$ then the planet heats up (contracts) very rapidly with the addition of grains. In particular, for di-atomic molecules of H$_2$, $\gamma=7/5$, or $n= 5/2$, so $$T_{\rm c} = T_0 \exp\left[ {12 \Delta Z}\right] = T_0 \exp\left[ {0.18 {\Delta Z\over Z_\odot}}\right] \;. \label{tc_vs_z}$$ This predicts that increasing the metallicity of the fragment by $\Delta Z \approx 5.6\, Z_\odot$ increases its central temperature by a factor of $e$, taking the pre-collapse fragment much closer to second collapse.
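A quick numerical sketch (an illustration, not published code) of eq. (\[tc\_vs\_z\]) shows how strongly pebble loading heats the fragment:

```python
# Sketch of eq. (tc_vs_z): the central temperature response of a
# pre-collapse fragment to gradual metal (pebble) loading,
# T_c = T_0 exp[6 dZ/(3-n)]; for molecular H2, n = 5/2 and the
# exponent becomes 12 dZ = 0.18 (dZ/Z_sun) with Z_sun = 0.015.
import math

Z_SUN = 0.015   # Solar metal mass fraction assumed in the text
n = 2.5         # polytropic index for molecular hydrogen

def tc_ratio(dZ):
    """T_c / T_0 after adding metal mass fraction dZ (valid for dZ << 1)."""
    return math.exp(6.0 * dZ / (3.0 - n))

# Metal enrichment needed to raise T_c by a factor of e:
dZ_e = (3.0 - n) / 6.0
print(dZ_e / Z_SUN)     # ~5.6 Solar metallicities
print(tc_ratio(dZ_e))   # e = 2.718...
```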
Metallicity correlations as a function of $M_{\rm p}$ {#sec:transition} --------------------------------------------------- The time it takes for an isolated gas fragment of mass $M_{\rm p}$ to reach a central temperature of $T_{\rm c} \gtrsim 2000$ K and collapse via H$_2$ dissociation is (very approximately) $$t_{\rm rad} \sim 1 \hbox { Myr } \left({1{{\,{\rm M}_{\rm J}}}\over M_{\rm p}}\right)^2 \left( {Z\over Z_\odot}\right)\;, \label{trad}$$ where the interstellar grain opacity is assumed [e.g., see Fig. 1 in @Nayakshin15a]. This equation neglects energy release by the core, which is justifiable as long as the core is less massive than a few Earth masses (§\[sec:feedback\]). The migration time in the type I regime is as short as $\sim 10^4$ years (cf. eq. \[tmig1\]). When the planets reach the inner $\sim 10$ AU disc, where the disc is usually not self-gravitating, with Toomre’s $Q \gg 1$, more massive planets tend to open gaps and migrate in the slower type II regime. The migration time in that regime is typically $\gtrsim 10^5$ years. Thus, for gas fragments of moderate mass, $M_{\rm p}\lesssim 3{{\,{\rm M}_{\rm J}}}$, radiative collapse is too slow to beat migration, and pebble accretion is needed to speed it up. Since more pebbles are bound to be present in higher metallicity discs, the moderately massive gas giants are expected to correlate positively with \[M/H\] of the host. For planets more massive than $\sim 5{{\,{\rm M}_{\rm J}}}$, the radiative cooling time is comparable to or shorter than the migration time. This suggests that massive gas giant planets may collapse radiatively even at low \[M/H\], before they migrate in and are tidally disrupted. At even higher masses, $M_{\rm p} \gtrsim10 {{\,{\rm M}_{\rm J}}}$, including the brown dwarf regime, fragments always collapse via radiation more rapidly than they migrate in, whatever the metallicity of the host disc.
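As a rough illustration of this competition, the sketch below compares eq. (\[trad\]) with a representative type II migration time; the adopted value of $10^5$ yr is an assumption taken from the text, and the transition near $\sim 5{{\,{\rm M}_{\rm J}}}$ emerges for Solar metallicity:

```python
# Sketch: compare the radiative collapse time of eq. (trad),
# t_rad ~ 1 Myr (1 M_J / M_p)^2 (Z / Z_sun), with a representative
# type II migration time of 1e5 yr (an assumption taken from the text).
def t_rad_yr(Mp_MJ, Z_rel=1.0):
    """Radiative collapse time in years for mass Mp_MJ (Jupiter masses)."""
    return 1.0e6 * Z_rel / Mp_MJ ** 2

T_MIG_II = 1.0e5   # yr, assumed type II migration time

for Mp in (1.0, 3.0, 5.0, 10.0):
    verdict = "collapses first" if t_rad_yr(Mp) < T_MIG_II else "needs pebbles"
    print(f"M_p = {Mp:4.1f} M_J: t_rad = {t_rad_yr(Mp):9.3g} yr -> {verdict}")
```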
These timescale comparisons predict that metallicity correlations of giant planets should undergo a fundamental change around a mass of $\sim 5{{\,{\rm M}_{\rm J}}}$. Second disruptions at $a\lesssim 0.1$ AU {#sec:2nd\_dis} ---------------------------------------- Post-collapse (second core stage) planets are denser than pre-collapse planets by a few orders of magnitude, so they are much less likely to be tidally compromised. However, as seen from the left panel of Fig. \[fig:exp1\], there is a brief period of time when a contracting post-collapse gas giant planet may be disrupted at separation $a\lesssim 0.1$ AU. In [@Nayakshin11a], a toy model for both the disc and the planet was used to argue that many massive cores found by the [*Kepler*]{} satellite in abundance at separations of $\sim 0.1$ AU from their host stars could be made via such “second” disruptions. Based on the toy model, it was shown that post-collapse planets migrating early on, when the disc accretion rate is large, $\dot M \gtrsim 10^{-7} {{\,{\rm M}_{\odot}}}$ yr$^{-1}$, may be disrupted at a characteristic distance of $a\lesssim 0.1$ AU, whereas planets migrating later, when the disc accretion rate is much smaller, are more likely to be sufficiently compact to avoid the disruption. [@NayakshinLodato12] improved on this calculation by using a realistic 1D time-dependent disc model, although still using a very simple (constant effective temperature) cooling model for the planet. A rich set of disc-planet interaction behaviour was found, which is not entirely surprising since the disc can exchange with the planet not only angular momentum but also mass. The disc may also switch between the cold molecular H and the hot ionised H stable branches of the disc [@MM81; @MM84; @Bell94], resulting in large increases or decreases in the accretion rate. This may lead to the planet’s migration type changing from type II to type I or vice versa.
Importantly, if the planet mass loss proceeds mainly via the Lagrangian L1 point and the migration type is II, then the planet migrates outward during the intense mass loss phases. ![A coupled evolution of the disc and the migrating planet from Nayakshin & Lodato 2012. [**Top panel**]{}: planet separation from the star (solid) and planet’s mass in units of $10 {{\,{\rm M}_{\rm J}}}$ (dashed). [**Middle:**]{} Planet radius ($R_{\rm p}$, solid) and planet Hill radius (dashed). [**Bottom**]{}: Accretion rate onto the star (solid) and the mass loss rate of the planet (dotted).[]{data-label="fig:NL12"}](Figs/NL12-eps-converted-to.pdf){width="0.99\columnwidth"} Figure \[fig:NL12\] shows an example calculation from [@NayakshinLodato12] in which a second collapse fragment of mass $M_0 = 10 {{\,{\rm M}_{\rm J}}}$ is inserted into a protoplanetary disc at $a_0 =1$ AU. Initially the planet is much smaller than its Hill radius, so the mass loss rate is zero. The planet opens a very deep gap in the disc, cutting off the mass supply to the inner disc, which empties onto the star. This creates a gas-free hole inside the planet orbit. As the planet migrates inward, both $R_{\rm p}$ and $R_{\rm H}$ shrink with time, but the planet contraction time is far longer than its migration time of $\sim 10^3$ years (this is the case of a very massive disc). Therefore, the Hill radius catches up with $R_{\rm p}$ when the planet-star separation is $a\sim 0.1$ AU. When $R_{\rm H} - R_{\rm p}$ becomes comparable with the planetary atmosphere height-scale, the planet starts to lose mass rapidly via the L1 point. This fills the disc inward of the planet with material lost by the planet, and accretion onto the star resumes at a very high rate. Since the viscous time is short at such small distances from the star, the accretion rate onto the star matches the mass loss rate by the planet (except for very brief periods of time).
An FU Ori-like outburst commences, powered by the star devouring the material shaved off the planet. At the beginning of the outburst, a quasi-equilibrium is established: the star accretes the planet material at exactly the rate at which it is lost by the planet. The mass of the planet starts to decrease rapidly (see the dashed curve in the top panel of the figure). The equilibrium is, however, soon destabilised as rapid transitions between the low and the high temperature states in the disc occur [*in the gap region of the disc*]{}, and hence the disc switches between the two states much more rapidly than could be expected, leading to the complex quasi-periodic behaviour seen in the lower panel of Fig. \[fig:NL12\]. Such rapid transitions may be related to the less violent and shorter-duration outbursting sources known as EXors [@Herbig89; @SAEtal08; @LorenzettiEtal09]. The long-duration outbursts seen in other examples in [@NayakshinLodato12] may correspond to the high luminosity, long duration classical FU Ori events, as suggested earlier by [@VB05; @VB06; @BoleyEtal10]. The planet eventually loses so much mass that the gap closes; this triggers an even faster mass loss rate, producing the large spike in the accretion rate at $t\approx 2600$ years in the bottom panel of Fig. \[fig:NL12\]. The second disruptions also leave behind solid cores assembled within the planets during the pre-collapse stage. This may lead to a metallicity signature in the period distribution of small planets (see §\[sec:MM\_valley\]). Cores in Tidal Downsizing scenario {#sec:cores} ================================== Grain sedimentation inside the fragments {#sec:grandest} ---------------------------------------- Grain sedimentation time scales can be estimated by assuming, for simplicity, a constant density within the gas fragment [@Boss98]. Combining the Epstein and the Stokes drag regimes, it is possible to derive [eq.
41 in @Nayakshin10a] the sedimentation velocity for a spherical grain of radius $a_g$ and material density $\rho_a$: $$v_{\rm sed} = \frac{4 \pi G \rho_a a_g R}{3 c_s} {\lambda + a_g \over \lambda} \left(1 + f_{\rm g}\right)\;, \label{u_epstein}$$ where $\lambda = 1/(n \sigma_{\rm H2})$ is the mean free path for hydrogen molecules, $n$ and $\sigma_{\rm H2}\approx 10^{-15}$ cm$^{2}$ are the gas number density and collision cross section, $R$ is the distance from the centre of the fragment, and $c_s$ is the sound speed. The dimensionless factor $f_{\rm g}$ is the mass fraction of grains in the fragment interior to radius $R$; it is initially small, $f_{\rm g}\sim 0.01$, but may become greater than unity when grains sediment to the fragment centre. For reference, at $a_g = 1$ cm, $v_{\rm sed} \approx 1.2$ m/s in the Epstein regime ($a_g \ll \lambda$) for $R=1$ AU and a fragment temperature of 300 K. Note that $v_{\rm sed} \propto a_g$, so that large grains fall to the centre faster. Sweeping up smaller grains in their path as they fall, larger grains grow by accretion of the smaller ones [see, e.g., @DD05]. The time to reach the centre from radius $R$ is independent of $R$: $$t_{\rm sed}= \frac{R}{v_{\rm sed}} \approx 5 \times 10^3 \;\hbox{yrs}\; \left({3\; {\rm g \; cm}^{-2}\over \rho_a a_g}\right) {\lambda \over \lambda + a_g} \label{tsed1}$$ for $f_{\rm g} \ll 1$. We observe that this time scale is shorter than the planet migration time for grains with size $a_g \gtrsim 1$ cm. This opens up the possibility of making solid cores within the fragment prior to its tidal disruption [@McCreaWilliams65; @DecampliCameron79; @Boss98; @BoleyEtal10]. Numerical modelling shows that convection presents a significant hurdle to grain sedimentation [@HelledEtal08; @HS08 and §\[sec:g\_and\_c\]].
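The numbers quoted above can be reproduced with a short sketch of the Epstein-regime limit of eq. (\[u\_epstein\]). The mean molecular weight $\mu = 2.3$, the isothermal sound speed, and the grain material density $\rho_a = 3$ g cm$^{-3}$ are assumptions made here for illustration:

```python
# Sketch of the Epstein-regime limit of eq. (u_epstein), with
# (lambda + a_g)/lambda ~ 1 and f_g << 1:
#   v_sed = 4 pi G rho_a a_g R / (3 c_s).
# Assumptions: isothermal sound speed with mean molecular weight mu = 2.3
# and grain material density rho_a = 3 g cm^-3; these choices reproduce
# the ~1.2 m/s quoted in the text.
import math

G = 6.674e-8      # cgs gravitational constant
K_B = 1.381e-16   # erg/K
M_H = 1.673e-24   # g
AU = 1.496e13     # cm
YR = 3.156e7      # s

def v_sed_epstein(a_g, R, T, rho_a=3.0, mu=2.3):
    """Epstein-regime sedimentation velocity (cm/s)."""
    c_s = math.sqrt(K_B * T / (mu * M_H))
    return 4.0 * math.pi * G * rho_a * a_g * R / (3.0 * c_s)

v = v_sed_epstein(a_g=1.0, R=AU, T=300.0)
print(f"v_sed ~ {v / 100.0:.1f} m/s")    # ~1.2 m/s
print(f"t_sed ~ {AU / v / YR:.0f} yr")   # a few thousand years
```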
Gravitational collapse of the “grain cluster” {#sec:core\_collapse} --------------------------------------------- The main difficulty in forming planets by a direct gravitational collapse of the solid component [*in the protoplanetary disc*]{} is the differential shear [@GoldreichWard73] and turbulence in the disc [@Weiden80]. Just a tiny fraction of the circular motion of the protoplanetary disc, $v_{\rm K} = 30$ km/s at 1 AU, transferred into turbulent gas motions is sufficient to render the maximum mass made by gravitational collapse negligibly small compared to a planet mass [see §7.3 in @Nayakshin10b]. In Tidal Downsizing, making planetary-mass cores by direct collapse of the grain component inside a gas fragment may be simpler. Once a significant fraction of the fragment grains sediment into the central region of the fragment, grains start to dominate the mass density there, so that $f_{\rm g} \gg 1$ in the central region [see Fig. \[fig:ChaN11\] here, and also figs. 2 or 4 in @Nayakshin10a]. Gas fragments found in simulations of self-gravitating discs usually rotate approximately as solid bodies, making rotational velocities in their centres rather small [@Nayakshin11a]; thus rotation is not likely to prevent gravitational collapse of the grain cluster (the region where $f_{\rm g}\gg 1$) entirely. In [@Nayakshin10a], §3.6.2, the evolution of a single-size grain population within a constant-density gas background was considered. It was shown that when the fragment grains sediment within the radius $$R_{\rm gc} \approx 0.1 R_{\rm p} \left({f_{\rm g}\over 0.01}\right)^{1/2}\;, \label{Rgc1}$$ where $f_{\rm g}$ is the [*initial*]{} grain mass fraction in the fragment, and $R_{\rm p}$ is the planet radius, the gas pressure gradient is no longer able to counteract the collapse. The grain cluster may then collapse into a dense core.
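The scaling of eq. (\[Rgc1\]) can be sketched directly; the fragment radius of 1 AU and the enriched $f_{\rm g} = 0.04$ case below are illustrative assumptions:

```python
# Sketch of eq. (Rgc1): the "grain cluster" radius inside which sedimented
# grains dominate over gas pressure support,
#   R_gc ~ 0.1 R_p (f_g / 0.01)^(1/2),
# with f_g the *initial* grain mass fraction of the fragment.
def r_gc(R_p_au, f_g=0.01):
    """Grain-cluster radius in AU for a planet radius R_p_au (AU)."""
    return 0.1 * R_p_au * (f_g / 0.01) ** 0.5

# For an assumed fragment radius of 1 AU:
print(r_gc(1.0))        # 0.1 AU at the Solar-like f_g = 0.01
print(r_gc(1.0, 0.04))  # 0.2 AU for a fragment enriched to f_g = 0.04
```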
Hierarchical formation of smaller bodies {#sec:hier} ---------------------------------------- Many astrophysical systems follow the hierarchical fragmentation scenario first suggested for galaxies by [@Hoyle53]. In his model, as a very massive gas cloud contracts under its own weight, smaller and smaller regions of the cloud become self-gravitating. The Jeans mass in the cloud is $M_{\rm Jeans} \sim c_s^3/(G^3 \rho)^{1/2}$, where $c_s$ and $\rho$ are the gas sound speed and density, respectively. Initially, the Jeans mass is equal to that of the cloud (galaxy). Provided that $c_s$ remains roughly constant due to cooling, the increasing $\rho$ during the collapse implies that smaller sub-parts of the cloud start to satisfy the condition $M < M_{\rm Jeans}$, where $M$ is the mass of the sub-part. These regions can then collapse independently from the larger system. This process continues, eventually making star clusters, groups of stars, individual stars, and perhaps even gas giant planets on the smallest scales, where the hierarchical collapse stops because gas can no longer cool effectively below the opacity fragmentation limit [@Rees76]. Is there a similar hierarchy of collapse scales for the grains sedimenting down inside the gas fragments? Consider an off-centre spherical region with radius $\Delta R$ and gas density $\rho$ somewhat higher than the background density. Grains inside the region will sediment towards the centre of that region on a time scale $\Delta t$ [*independent*]{} of $\Delta R$: $$\Delta t \approx \frac{3 c_s \mu }{4 \pi G \rho_a a_g^2 \sigma_{\rm H2}} \; {1 \over \rho (1 + f_{\rm g})}\;, \label{tsed3}$$ where $f_{\rm g} > 1 $ is the local grain concentration and it is assumed that $\lambda \ll a_g$. From this we see that if the total density in the perturbed region, $\rho(1+f_{\rm g})$, is greater than that of the surroundings, it will collapse more rapidly than the whole grain cluster considered in §\[sec:core\_collapse\].
The collapse accelerates with time: $\Delta t$ is inversely proportional to the density, and the density increases as the perturbation collapses. Thus the grains in this region are able to collapse into an [*independent*]{} solid body before the whole grain cluster collapses. This argument suggests that perturbations of [*all*]{} sizes can collapse. A very small $\Delta R$ region collapses slowly since the collapse velocity, proportional to $\Delta R$, is quite small. However, the collapse time is as short as that of a much more extended perturbation. Taken at face value, this would imply that even tiny solid bodies, with final post-collapse radius $a_{\rm fin}$ as small as $\lesssim 1$ m, could form via this process. However, in practice there is another limit to consider. A small body born by collapse of a small perturbation is very likely to be inside a larger perturbation (which may itself be part of a yet bigger one). Therefore, the small body will be incorporated into a larger collapsing system unless the body can decouple dynamically from the larger system. Consider now a post-collapse body of radius $a_b$ and material density $\rho_b \sim 1$ g cm$^{-3}$. Since the body is inside the region where $f_{\rm g} > 1$, we can neglect aerodynamical friction with the gas and consider only the interaction of the body with grains in the region. The body may be able to decouple from the bulk of the grains collapsing into the core if the stopping distance of the body is larger than $R_{\rm gc}$. This requires that the column depth of the body $$\Sigma_b = \rho_b a_b > \rho_{\rm gc} R_{\rm gc} = \rho_0 R_{\rm f} \approx {M_{\rm f}\over \pi R_{\rm f}^2} \label{min_ast1}$$ is larger than the column depth of the parent gas fragment.
Introducing a mean temperature of the fragment as $T_{\rm p} \approx GM_{\rm p}\mu/(3 k_B R_{\rm p})$, we obtain the minimum size of an object that can separate itself out of the core: $$a_{\rm min} = 3.7 \hbox{ km } T_3^2 {1{{\,{\rm M}_{\rm J}}}\over M_{\rm p}} \rho_b^{-1}\;, \label{min_ast2}$$ where $T_3 = T_{\rm p}/10^3$ K and $\rho_b$ is measured in g cm$^{-3}$. This is in the asteroid size range. Finally, we should demand that the body is able to resist gas drag for a long enough time after the core is formed (when the grains in the collapsing grain cluster are mainly incorporated into the core). This problem has been examined in [@NayakshinCha12], also leading to a minimum size in the range of $1-10$ km. Fig. \[fig:num3D\] shows two snapshots from a simulation (Nayakshin 2016, in preparation) of a grain-loaded polytropic clump. The figure shows the gas surface density (colours) for a slice between -0.1 AU $< y < $ 0.1 AU and $(x,z)$ as shown. The blue squares on top of the gas mark positions of individual grains. The simulation is started with a relaxed polytropic gas clump of mass $3{{\,{\rm M}_{\rm J}}}$, adiabatic index $\gamma=7/5$, and central temperature $T_c = 500$ K. The clump is instantaneously loaded with grains of size $a_{\rm g} = 10$ cm with a total mass of $10$% of the clump mass, uniformly spread inside a spherically symmetric shell between radii of $0.8 R_{\rm p}$ and $R_{\rm p}$, where $R_{\rm p}$ is the planet radius. The initial configuration is displayed in the left panel of Fig. \[fig:num3D\]. The right panel of the figure shows what happens to the planet and grains at time $t=7$ years (which corresponds to about 3 dynamical times for the initial clump). Importantly, the grain sedimentation process is not spherically symmetric, with “fingers” of higher grain concentration protruding inwards. Undoubtedly, the development of the infalling filaments is driven by the Rayleigh-Taylor instability.
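The coefficient in eq. (\[min\_ast2\]) can be checked with a short sketch (an illustration, not published code); the mean molecular weight $\mu = 2.5\, m_{\rm H}$ is an assumption made here, chosen because it reproduces the quoted 3.7 km:

```python
# Sketch of eq. (min_ast2): the minimum size of a solid body able to
# decouple from the collapsing grain cluster. Using Sigma_b = rho_b a_b >
# M_p/(pi R_p^2) and T_p = G M_p mu/(3 k_B R_p) gives
#   a_min = 9 (k_B T_p)^2 / (pi rho_b G^2 M_p mu^2).
# The mean molecular weight mu = 2.5 m_H is an assumed value.
import math

G = 6.674e-8      # cgs
K_B = 1.381e-16   # erg/K
M_H = 1.673e-24   # g
M_JUP = 1.898e30  # g

def a_min_cm(T_p, M_p=M_JUP, rho_b=1.0, mu=2.5 * M_H):
    """Minimum decoupling size (cm) for fragment mean temperature T_p (K)."""
    return 9.0 * (K_B * T_p) ** 2 / (math.pi * rho_b * G ** 2 * M_p * mu ** 2)

print(f"a_min ~ {a_min_cm(1.0e3) / 1.0e5:.1f} km")   # ~3.7 km for T_3 = 1
```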
These preliminary results indicate that there may be additional physical reasons for the development of many grain concentration centres rather than one, lending support to the hypothesis that pre-collapse gas fragments may be sites of both core and planetesimal formation. Also note that the fragment is contracting as predicted by the spherically symmetric model [@Nayakshin15a], although its later evolution strongly depends on whether a massive core is formed in the centre. ![image](Figs/a01_Z01_snap0-eps-converted-to.pdf){width="47.00000%"} ![image](Figs/a10_Z01_snap05-eps-converted-to.pdf){width="47.00000%"} Core composition {#sec:composition} ---------------- A gas fragment of Solar composition [@Lodders03] contains $$M_{\rm Z} = 0.015 M_{\rm f} \approx 4.5 {{\,{\rm M}_{\oplus}}}\; \frac{Z}{0.015} \; {M_{\rm p}\over {{\,{\rm M}_{\rm J}}}} \label{MZ1}$$ of total mass in astrophysical metals. A third of this mass is in water, which is very volatile – vaporisation temperature $T_{\rm vap}\sim 150-200$ K for the relevant range in gas pressure. Furthermore, another third of the grain mass is in volatile organics, commonly referred to as “CHON”, which is a mnemonic acronym for the four most common elements in living organisms: carbon, hydrogen, oxygen, and nitrogen. For this review, CHON is organic material other than water. CHON is a frequently used component in planet formation models [e.g., @PollackEtal96; @HelledEtal08]. The composition of CHON is set to be similar to that of the grains in Comet Halley’s coma [@Oberc04]. The CHON vaporisation temperature is higher than that of water but is still rather low, $T_{\rm vap}\sim 350 - 450$ K for the range of gas pressures appropriate for the interiors of pre-collapse fragments.[^3] Given that fragments migrate in on time scales as short as $\sim 10^4$ years, the core must form similarly quickly, or else the fragment will either collapse and become a second core or be disrupted, at which point core growth terminates.
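Eq. \[MZ1\] and the approximate thirds split quoted above can be tabulated directly; a minimal sketch, assuming the Solar metal fraction $Z_\odot = 0.015$ used in the equation (the helper name is illustrative):

```python
# Total metal mass of a fragment, eq. (MZ1):
#   M_Z = 4.5 M_earth * (Z / 0.015) * (M_p / 1 M_J),
# split into the approximate thirds quoted in the text:
# water ice, volatile organics (CHON), and refractory rocks.

def metal_budget_mearth(Mp_in_MJ=1.0, Z=0.015):
    M_Z = 4.5 * (Z / 0.015) * Mp_in_MJ
    return {"total": M_Z, "water": M_Z / 3, "CHON": M_Z / 3, "rocks": M_Z / 3}

budget = metal_budget_mearth()
print(budget["total"])   # 4.5 M_earth for a Solar composition 1 M_J fragment
print(budget["rocks"])   # 1.5 M_earth of refractories available for a core
```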
In practice, rapid core formation requires that gas fragments are compact and dense, but this also means that water ice and CHON are unlikely to be able to sediment into the centre because the fragments are too hot [@HS08]. [*Cores made by Tidal Downsizing are hence likely to be rock-dominated*]{}[^4]. This is significantly different from the classical Core Accretion where massive cores are most naturally assembled beyond the ice line and are thus ice-dominated [@PollackEtal96; @ColemanNelson16]. In §\[sec:core\_comp\] we shall discuss current observations of core compositions in light of these differences between the two theories. A Solar composition Jupiter mass fragment could only make a rocky core of mass $M_{\rm core} \sim 1.5 {{\,{\rm M}_{\oplus}}}$ if all refractory grains sediment to its centre. More massive gas fragments could be considered [as done by @Nayakshin10a; @Nayakshin10b] but such fragments contract radiatively very rapidly, making sedimentation of even refractory grains difficult. Thus, to make a massive solid core, $M_{\rm core}\gtrsim 10 {{\,{\rm M}_{\oplus}}}$, metal enrichment of fragments, e.g., via pebble accretion or enrichment at birth [@BoleyDurisen10; @BoleyEtal11a], is necessary. Core feedback and maximum mass {#sec:feedback} ------------------------------ As the core is assembled, some of its gravitational potential energy is radiated into the surrounding gas envelope. Exactly how much is difficult to say, since the opacity, equation of state, and even the dominant means of energy transport for hot massive planetary cores are not well understood yet [@StamenovicEtal12]. The problem is also highly non-linear since the overlying gas envelope structure may modify the energy loss rate of the core, and the temperature of the surrounding gas in turn depends on the luminosity of the core [@HoriIkoma11; @NayakshinEtal14a].
### Analytical estimates Nevertheless, assuming that a fraction $0 < \xi_c \lesssim 1$ of core accretion energy, $E_{\rm core} \sim G M_{\rm core}^2/R_{\rm core}$, is released into the fragment and that the latter cannot radiate it away quickly, the core mass is limited by the following order of magnitude estimate: $$\xi_c {G M_{\rm core}^2\over R_{\rm core}} \lesssim {G M_{\rm p}^2 \over R_{\rm p}}\;. \label{fb1}$$ Defining the escape velocities as $v_{\rm esc, p} = \sqrt{G M_{\rm p}/R_{\rm p}}$ and $v_{\rm esc, c} = \sqrt{G M_{\rm core}/R_{\rm core}}$, $${M_{\rm core}\over M_{\rm p}} \lesssim {v_{\rm esc, p}^2 \over \xi_c v_{\rm esc, c}^2}\;.$$ Since $v_{\rm esc, p}\sim $ 1 km/s and $v_{\rm esc, c} \gtrsim 10$ km/s, this yields $M_{\rm core}/M_{\rm p} \lesssim 0.01 \xi_c^{-1} $. A more careful calculation, in which the fragment is treated as a polytropic sphere with index $n = 5/2$, yields the following maximum “feedback” core mass [@Nayakshin16a]: $$M_{\rm core} \le M_{\rm fb} = 5.8 {{\,{\rm M}_{\oplus}}}\left({T_3 M_{\rm p} \over 1 {{\,{\rm M}_{\rm J}}}}\right)^{3/5} \rho_{\rm c}^{-1/5} \xi_c^{-1}\;, \label{Mcrit}$$ where $T_3 = T_c/(1000$ K) is the central temperature of the fragment and $\rho_{\rm c}$ is the core mean density in units of g/cm$^3$. $T_3$ cannot exceed $\approx 1.5$ because at higher temperatures grains vaporise and the core stops growing via their sedimentation anyway. Also, although not necessarily clear from the analytic argument, fragments with masses higher than a few ${{\,{\rm M}_{\rm J}}}$ are not normally able to hatch massive cores because they contract quickly radiatively [cf. Fig. 18 in @NayakshinFletcher15 and also §\[sec:rad\_collapse\]]. Therefore, the factor in the brackets in eq. \[Mcrit\] cannot actually exceed a few, leading to the maximum core mass of $\sim 10-20{{\,{\rm M}_{\oplus}}}$. ### Radiative hydrodynamics calculation {#sec:rhd} Numerical calculations are desirable to improve on these estimates.
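Before turning to the numerics, the analytic estimate of eq. \[Mcrit\] can be checked with a few lines (a minimal sketch; the function name and default values are illustrative):

```python
# Maximum "feedback" core mass, eq. (Mcrit):
#   M_fb = 5.8 M_earth * (T_3 * M_p / 1 M_J)^(3/5) * rho_c^(-1/5) / xi_c,
# where T_3 = T_c / 1000 K, rho_c is the core mean density in g/cm^3,
# and 0 < xi_c <= 1 is the fraction of core accretion energy retained.

def M_fb_mearth(T3=1.0, Mp_in_MJ=1.0, rho_c=1.0, xi_c=1.0):
    return 5.8 * (T3 * Mp_in_MJ) ** 0.6 * rho_c ** -0.2 / xi_c

print(M_fb_mearth())                      # 5.8 M_earth, the fiducial value
print(M_fb_mearth(T3=1.5, Mp_in_MJ=3.0))  # ~14 M_earth, near the upper limit
```

With $T_3 \le 1.5$ and $M_{\rm p}$ of a few ${{\,{\rm M}_{\rm J}}}$ at most, the bracketed factor saturates at a few, reproducing the $\sim 10-20{{\,{\rm M}_{\oplus}}}$ maximum quoted above.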
In [@Nayakshin16a], a 1D radiative hydrodynamics (RHD) code of [@Nayakshin14b] is employed to study the evolution of a fragment accreting pebbles. Unlike the earlier study of core-less fragments in [@Nayakshin15a], grain growth and sedimentation onto the core are allowed. The energy equation for the fragment (see eq. \[etot1\]), now taking into account the energy release by the core, reads $${d E_{\rm tot}\over dt} = - L_{\rm rad} + L_{\rm core} - L_{\rm peb} \;, \label{etot2}$$ where the new term on the right hand side, $L_{\rm core}$, is the core luminosity. This term is positive because energy release by the core injects energy into the gas envelope (the fragment). ![Panel (a) shows the gas fragment central temperature $T_3 = T_{\rm c}/10^3 K$, and planet radius, $R_{\rm p}$, versus time for simulations with (solid curves) and without (dotted) core formation, as described in §\[sec:rhd\]. Panel (b) shows core luminosity, $L_{\rm core}$, pebble luminosity, $L_{\rm peb}$, and the radiative luminosity of the fragment as labelled. Panel (c): The core mass, $M_{\rm core}$, and the total metal content of the fragment.[]{data-label="fig:rhd"}](Figs/DIS_RHD_m1_Nocore-eps-converted-to.pdf){width="0.99\columnwidth"} In the experiments shown in this section, the initial cloud mass, metallicity and central temperature are $M_{\rm p}=1{{\,{\rm M}_{\rm J}}}$, $Z= 1 Z_\odot$ and $150$ K, respectively. The metal loading time scale is set to $t_{\rm z} = 2000$ years. Figure \[fig:rhd\] compares two runs, one without grain growth and without core formation [so identical in setup to @Nayakshin15a], and the other with grain growth and core formation allowed. Panel (a) of the figure shows in black colour the evolution of $T_3$, the central fragment temperature measured in $10^3$ K, and the planet radius, $R_{\rm p}$ \[AU\], shown with blue curves. The solid curves show the case of the fragment with the core, whereas the dotted ones correspond to the core-less fragment. 
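The sign structure of eq. \[etot2\] drives the qualitative behaviour seen in Fig. \[fig:rhd\]: a toy transcription (the luminosity values are arbitrary, chosen only to illustrate the two regimes) is

```python
# Fragment energy equation (etot2): dE_tot/dt = -L_rad + L_core - L_peb.
# Net energy loss -> the fragment contracts; net gain -> it expands.
# The luminosities below are arbitrary illustrative numbers.

def dE_dt(L_rad, L_core, L_peb):
    return -L_rad + L_core - L_peb

# Core-less, pebble-dominated phase: energy is lost, the fragment contracts.
print(dE_dt(L_rad=0.1, L_core=0.0, L_peb=1.0) < 0)   # True

# Massive-core phase: L_core > L_peb + L_rad, contraction reverses.
print(dE_dt(L_rad=0.1, L_core=2.0, L_peb=1.0) > 0)   # True
```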
Panel (b) of Fig. \[fig:rhd\] presents the three luminosities in eq. \[etot2\]. The dust opacity for this calculation is set to 0.1 times the interstellar opacity[^5] at $Z=Z_\odot$ [@ZhuEtal09]. This increases the importance of the $L_{\rm rad}$ term by a factor of $\sim 10$; for the nominal grain opacity, $L_{\rm rad}$ would be completely negligible. Finally, panel (c) of Fig. \[fig:rhd\] shows the total metal mass in the planet and the core mass with the black and red curves, respectively. Consider first the core-less fragment. As the fragment contracts, $L_{\rm rad}$ quickly becomes negligible compared to $L_{\rm peb}$. This is the pebble accretion dominated no-core regime studied in §\[sec:pebbles\] and in [@Nayakshin15a]. The fragment contracts as it accretes pebbles. In the case with the core, panel (a) shows that the fragment’s contraction reverses when $L_{\rm core}$ exceeds $L_{\rm peb}+L_{\rm rad}$. By the end of the calculation the gas envelope is completely unbound, with the final $M_{\rm core} = 15.2 {{\,{\rm M}_{\oplus}}}$, consistent with equation \[Mcrit\]. It is worth emphasising that the appropriate fragment disruption condition is not the luminosity of the core, which first exceeds the sum $L_{\rm peb}+L_{\rm rad}$ when $M_{\rm core}\approx 10{{\,{\rm M}_{\oplus}}}$, but the total energy released by the core. On the other hand, for a migrating planet, the fact that the fragment stopped contracting when the core reached $\approx 8{{\,{\rm M}_{\oplus}}}$ may be sufficient to change the fate of the fragment, as it is more likely to be disrupted once it stops contracting. ### Comparison to Core Accretion {#sec:fb_comp} In Core Accretion theory, the core is more massive and much more compact than the envelope at the interesting stage, that is, before the atmosphere collapses [@Mizuno80; @PollackEtal96; @PT99]. Therefore, in this theory $L_{\rm core} \gg L_{\rm peb}$ always, and so one can neglect $L_{\rm peb}$ in equation \[etot2\].
The luminosity of the core is an obstacle that needs to be overcome in Core Accretion before the atmosphere collapses. It is thought that grain growth reduces the opacity in the atmosphere by factors of $\sim 100$, so that the atmosphere can re-radiate the heat output of the core and eventually collapse [@PollackEtal96; @Mordasini13]. In Tidal Downsizing, there are two regimes in which the pre-collapse gas clump (planet) reacts to pebble accretion onto it differently. While the mass of the core is lower than a few ${{\,{\rm M}_{\oplus}}}$, the gas clump contracts because $L_{\rm core} \ll L_{\rm peb}$. The latter is large because the gas envelope mass is very much larger than that of a pre-collapse Core Accretion planet. This is the regime studied in [@Nayakshin15a; @Nayakshin15b], where pebble accretion was shown to be the dominant effective contraction mechanism for moderately massive gas giants. The second regime, when the core mass exceeds $\sim 5{{\,{\rm M}_{\oplus}}}$, is analogous to Core Accretion. Here the core luminosity is large and cannot be neglected. This effect was studied recently in [@Nayakshin16a] and is equally key to Tidal Downsizing. Due to this, massive cores are not simply passive passengers of their migrating gas clumps (§\[sec:feedback\]). The roles of massive cores in Tidal Downsizing and Core Accretion are diametrically opposite. Gas atmospheres of cores {#sec:atm} ------------------------ [@NayakshinEtal14a] studied formation of a dense gas envelope around the core. This effect is analogous to that of Core Accretion, although the envelope (called the atmosphere here) is drawn not from the disc but from the surrounding gas fragment.
Assuming hydrostatic and thermal equilibrium for the envelope of the core, the atmospheric structure was calculated inward starting from $r_i = G M_{\rm core}/c_\infty^2$, where $c_\infty$ is the sound speed in the first core sufficiently far away from the core, so that its influence on the gas inside the fragment may be approximately neglected. It was then shown that for given inner boundary conditions (gas pressure and temperature at $r_i$), there exists a maximum core mass, $M_{\rm crit}$, for which the hydrostatic solution exists. For core masses greater than $M_{\rm crit}$, the atmosphere weight becomes comparable to $M_{\rm core}$, and the iterative procedure with which one finds the atmosphere mass within radius $r_i$ runs away to infinite masses. $M_{\rm crit}$ was found to vary from a few ${{\,{\rm M}_{\oplus}}}$ to tens of ${{\,{\rm M}_{\oplus}}}$. In [@NayakshinEtal14a], it was suggested that the fragments in which the mass of the core reached $M_{\rm crit}$ will go through the second collapse quickly and hence become young gas giant planets. However, the steady state assumptions in [@NayakshinEtal14a] may not be justified during collapse. Experiments (unpublished) with the [*hydrodynamic*]{} code of [@Nayakshin14b] showed that when the atmospheric collapse happens, there is a surge in the luminosity entering $r_i$ from the inner hotter regions. This surge heats the gas up and drives its outward expansion. This reduces the gas density at $r_i$, causing the pressure at $r_i$ to drop as well, halting the collapse. If the fragment is sufficiently hot even without the core, e.g., $2000\hbox{ K} - T_{\rm c} \ll T_{\rm c}$, then the presence of a massive core may be able to accelerate the collapse by compressing the gas and increasing the temperature in the central regions above 2000 K.
However, if the fragment managed to reach the near collapse state [*without the core being important*]{}, then it would seem rather fine-tuned that the fragment would then need the core to proceed all the way into the second collapse. The fragment is already close to collapse, so presumably it can collapse without the help from the core. Therefore, at present it seems prudent to discount the atmospheric collapse instability as an important channel for gas fragment collapse. While this conclusion on the importance of bound gas atmospheres near the solid cores differs from that of [@NayakshinEtal14a], their calculation of the atmosphere structure and the mass of the gas bound to the core is still relevant. If and when the fragment is disrupted, the atmosphere remains bound to the core. This is how Tidal Downsizing may produce planets with atmospheres composed of volatiles and H/He (cf. §\[sec:atmo\]). Population synthesis {#sec:dp_code} ==================== Detailed numerical experiments such as those presented in the previous sections are very computationally expensive and can be performed for only a limited number of cases. This is unsatisfactory given the huge parameter space and uncertainties in the initial conditions and microphysics, and the fact that observations have now moved on from one planetary system to $\sim $ a thousand. A more promising tool to confront a theory with statistics of observed planets is population synthesis modelling [PSM; see @IdaLin04a]. The widely held opinion that “with enough free parameters everything can be fit” was perhaps justifiable a decade ago. Now, with $\sim O(100)$ observational constraints from the Solar System and exoplanets, population synthesis is becoming more and more challenging. A balanced view of population synthesis is that it cannot ever prove that a model is right, but experience shows that it can challenge theories strongly.
It can also highlight differences between planet formation theories and point out areas where more observations and/or theory work is needed. There is much to borrow from Core Accretion population synthesis [@IdaLin04b; @MordasiniEtal09a; @MordasiniEtal09b]. It is quite logical to follow the established approaches to modelling the protoplanetary disc, but to differ in the planet formation physics. A planet formation module of the population synthesis should evolve the planet-forming elements of the model, integrating their internal physics, and interaction with the disc via grains/gas mass exchange and migration. The outcome of a calculation is the mass, composition, location, and orbit of one or more planets resulting from such a calculation. By performing calculations for different initial conditions (e.g., disc mass or radial extent) one obtains distributions of observables that can then be compared to the observations. Galvagni & Mayer model {#sec:GM} ---------------------- The [@GalvagniMayer14] study focused on whether hot Jupiters could be accounted for by gas fragments rapidly migrating from the outer self-gravitating disc. This (pre-pebble accretion) study was based on 3D SPH simulations of pre-collapse gas fragment contraction and collapse by [@GalvagniEtal12], who used a prescription for radiative cooling of the fragments, and found that gas fragments may collapse up to two orders of magnitude sooner than found in 1D [e.g., @Bodenheimer74; @HelledEtal08]. [@GalvagniMayer14] concluded that many of the observed hot Jupiters could actually be formed via Tidal Downsizing. The model did not include grain growth and sedimentation physics, thus not addressing core-dominated planets. Forgan & Rice model {#sec:FR13} ------------------- [@ForganRice13b] solved the 1D viscous time-dependent equation for the disc, and introduced the disc photo-evaporation term.
Their protoplanetary disc model is hence on par in complexity with some of the best Core Accretion population synthesis studies [e.g., @MordasiniEtal09a; @MordasiniEtal12]. Both icy and rocky grains were considered to constrain the composition of the cores assembled inside the fragments. Fragments were allowed to accrete gas from the protoplanetary disc. For the radiative cooling of gas fragments, analytical formulae from [@Nayakshin10c] were employed, which have two solutions for dust opacity scaling either as $\kappa(T) \propto T$ or as $\kappa(T) \propto T^2$, where $T$ is the gas temperature. [@ForganRice13b] also allowed multiple gas fragments per disc to be followed simultaneously. [@ForganRice13b] made four different population synthesis calculations, varying the opacity law, the disc migration rate and the assumptions about what happens to the disc beyond 50 AU after it produces fragments. Results of one such population synthesis experiment are presented in Fig. \[fig:FR13\], showing the fragment mass at time $t=1$ Million years versus its separation from the star. The colour of the circles shows the core mass within the fragments. The authors conclude that the model falls well short of explaining the data. Gas fragments are either disrupted well before they are able to enter the central few AU region, producing hardly any hot Jupiters, or accrete gas rapidly, becoming brown dwarfs (BDs) and even more massive stellar companions to the host star. No massive cores are released into the disc because the fragments that are disrupted do not manage to make massive cores, and the fragments that do make massive cores are in the brown dwarf regime and are not disrupted. ![Population synthesis results from Forgan & Rice (2013b; the right panel of their figure 10), showing the mass of the fragment versus its separation from the host star.
Colours show the mass of the cores assembled inside the fragments.[]{data-label="fig:FR13"}](Figs/FR13_fig10b-eps-converted-to.pdf){width="0.99\columnwidth"} Nayakshin (2015) model {#sec:1disc} ---------------------- In [@Nayakshin15c; @Nayakshin15d], pebble accretion onto precollapse gas fragments was added to population synthesis for the first time. The disc model is similar to that of [@ForganRice13b], but also includes the interaction of the planet with the disc as in [@NayakshinLodato12]. The disc not only influences the planet but also receives the back torques from the planet, so that a gap and even a large inner hole can be self-consistently opened. If the planet is disrupted, its gas is deposited in the disc around the planet location. The disc photo-evaporation rate is a Monte-Carlo variable and the limits are adjusted such as to ensure that the disc fraction decays with the age of the system as observed, e.g., $\propto \exp(-t/t_l)$, where $t_l = 3$ Myr [@HaischEtal01]. ### Grains and cores in the model {#sec:g_and_c} The internal physics of the fragments is modelled numerically rather than analytically. The fragments are strongly convective [@HelledEtal08; @HS08], which implies that a good approximation to the gaseous part of the fragment is obtained by assuming that it is in a hydrostatic balance and has a constant entropy. The entropy however evolves with time as the fragment cools or heats up. This is known as “follow the adiabats” approach [e.g., @HenyeyEtal64; @MarleauCumming14]. The irradiation of the planet by the surrounding disc [the thermal “bath effect”, see @VazanHelled12] is also included. The gas density and temperature profiles within the fragment are solved for numerically. The dust evolution module of the code considers three grain species: rocks (combined with Fe), organics (CHON) and water. Grain growth, sedimentation and convective grain mixing are included. 
Grains are shattered in fragmenting collisions when the sedimentation velocity is too high [e.g., @BlumMunch93; @BlumWurm08; @BeitzEtal11]. Finally, grains are vaporised if the gas temperature exceeds the vaporisation temperature for the given grain species. Grains reaching the centre accrete onto the solid core. The initial core mass is set to a “small” value ($10^{-4} {{\,{\rm M}_{\oplus}}}$). The growing core radiates some of its gravitational potential energy away, but a self-consistent model for energy transfer within the core is not yet possible due to a number of physical uncertainties [e.g., @StamenovicEtal12]. For this reason the energy release by the core is parameterised via the Kelvin-Helmholtz contraction time of the core, $t_{\rm kh}$, which is set to be of order $t_{\rm kh}\sim 10^5 - 10^6$ years. The luminosity released by the core is injected into the fragment. Figure \[fig:structure\] shows an example calculation of the internal structure of a gas fragment from population synthesis modelling by [@Nayakshin15c]. Since the gas is hot in the inner part and cool in the outer parts, volatile grains (ice and CHON) are able to settle down only in the outer parts of the fragment. In contrast, rocky grains can sediment all the way into the core. This is best seen in the bottom panel (c) of the figure: water ice grains are only large in the outermost $\sim 5$% of the fragment. Interior to this region, the planet is too hot and water ice vaporises. Strong convective mixing then ensures that the ratio of the water volume density to the gas density is constant to a good degree (compare the blue dotted and the black solid curves in panel b of the figure) in most of the cloud. Similarly, CHON grains can grow and sediment only in the outer $\sim$ half of the fragment. Note that in the region where CHON grains are large and can sediment, their density shows a significant concentration towards the central parts of the fragment.
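The per-species vaporisation check described above can be sketched as follows. The representative $T_{\rm vap}$ values follow the ranges quoted in §\[sec:composition\] (water $\sim 150-200$ K, CHON $\sim 350-450$ K) plus an assumed $\sim 1500$ K limit for rocks; in reality they are pressure dependent, so the numbers here are illustrative only:

```python
# A grain species can grow and sediment only where the local gas
# temperature is below its vaporisation temperature.  Representative
# (assumed, pressure-dependent) vaporisation temperatures in K:
T_VAP = {"water": 180.0, "CHON": 400.0, "rocks": 1500.0}

def can_sediment(species, T_gas):
    return T_gas < T_VAP[species]

# Hot fragment interior: only rocks survive and can reach the core.
print([s for s in T_VAP if can_sediment(s, 800.0)])   # ['rocks']
# Cool outermost layers: all three species condense.
print([s for s in T_VAP if can_sediment(s, 120.0)])   # ['water', 'CHON', 'rocks']
```

This simple cut is why the cores are rock-dominated: the volatile species condense only in the cool outer layers and never reach the hot centre.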
The density of rocky grains is very strongly peaked in Fig. \[fig:structure\], cf. the red dash-dotted curve in panel (b). In fact, most of the silicates are locked into the central core, and only the continuing supply of them from the protoplanetary disc via pebble accretion keeps rock grain densities at non-negligible levels. ![Internal structure of a planet [at time $t=24450$ years in simulation M1Peb3 from @Nayakshin15c] as a function of total (gas plus metals, including the core) enclosed mass. Panel (a) shows the temperature, Lagrangian radius (in units of AU), and local metallicity, $z(M)$. Panel (b) shows gas (solid) and the three grain metal species density profiles, while panel (c) shows the species’ grain size, $a_{\rm gr}$. []{data-label="fig:structure"}](Figs/M1_hjup_structure-eps-converted-to.pdf){width="0.99\columnwidth"} Also note that the relative abundance of the three grain species varies strongly in the fragment due to the differences in the sedimentation properties of these species, as explained above. The outer region is very poor in rocks and very rich in water ice. The innermost region is dominated by rocks. The results of the population synthesis planet evolution module are in very good qualitative agreement with the earlier, more detailed stand-alone pre-collapse planet evolution calculations [@Nayakshin10b; @Nayakshin14b].
### The combined disc-planet code {#sec:combi} ![image](Figs/disc_planet_giant-eps-converted-to.pdf){width="0.99\columnwidth"} ![image](Figs/disc_planet_SE-eps-converted-to.pdf){width="0.99\columnwidth"} The disc and the fragment evolutionary codes are combined in one, with interactions between them occurring via (a) gravitational torques that dictate the planet migration type and rate, and the structure of the disc near the planet and downstream of it; (b) pebble accretion that transfers the solids from the disc into the fragment; (c) energy exchange via the disc irradiating the outer layers of the planet, and the planet heating the disc up due to the migration torques close to its location [e.g., @LodatoEtal09]. One significant shortcoming of the present population synthesis [@Nayakshin15c] is limiting the numerical experiments to one fragment per disc, unlike [@ForganRice13b] who were able to treat multiple fragments per disc. Numerical simulations show that fragments rarely form in isolation [e.g., @VB06; @BoleyEtal11a; @ChaNayakshin11a] and so this limitation should be addressed in the future. Since planet migration is stochastic in nature in self-gravitating discs [e.g., @BaruteauEtal11 see Fig. \[fig:Bar11\]], a migration time scale multiplier, $f_{\rm migr}> 0$, is introduced. This parameter is fixed for any particular run but is one of the Monte Carlo variables [for example, in @Nayakshin16a $f_{\rm migr}$ is varied between 1 and 4]. Further details on the population synthesis code are found in [@Nayakshin15c; @Nayakshin15d; @NayakshinFletcher15]. ### Two example calculations {#sec:two_ex} Figure \[fig:disc\_planet\] presents two example calculations from [@Nayakshin15d] which show how Tidal Downsizing can produce a warm Jupiter and a hot super-Earth. The two calculations have the same initial fragment mass, $M_{\rm p0} = 1{{\,{\rm M}_{\rm J}}}$.
The main distinction is the migration factor $f_{\rm migr} = 8 $ and $1.3$ for the left and the right panels in Fig. \[fig:disc\_planet\], respectively. The top panels show the disc surface density evolution sampled at several different times as indicated in the legend. The initial disc mass is similar in both runs, $M_{\rm d0}\sim 0.07{{\,{\rm M}_{\odot}}}$. The crosses on the bottom of the panels depict the planet position at the same times as the respectively coloured surface density curves. The initial surface density of the discs is shown with the solid curve. The red dotted curves show the disc surface density at the time when a deep gap in the disc is first opened. Since the planet on the right migrates in more rapidly, the surrounding disc is hotter when it arrives in the inner ten AU, so that the gap is opened when the planet is closer in to the host star than in the case on the left. The contraction of both fragments is dominated by pebble accretion from the disc (§\[sec:pebbles\]). The major difference between the two calculations is the amount of time that the two planets have before they arrive in the inner disc. The slowly migrating fragment on the left has a much longer time to contract, so that it manages to collapse at time $t=1.32$ Million years. The other fragment, however, is disrupted at time $\sim 0.2$ Million years. On detailed inspection, it turns out that the fragment would also collapse if it continued to accrete pebbles. However, when the gap is opened, pebble accretion shuts down. The fragment in fact expands (note the upturn in the blue dashed curve in the middle panel on the right) due to the luminosity of the massive $M_{\rm core} \approx 6.4{{\,{\rm M}_{\oplus}}}$ core assembled inside. The fragment continues to migrate after opening the gap, a little more slowly now, in the type II regime.
Nevertheless, this continued migration and the puffing up of the fragment by the internal luminosity of the core are sufficient to disrupt it tidally just a little later. After the disruption, the core continues to migrate and arrives in the inner disc at $a=0.23$ AU by the time the disc is dissipated. Overview of population synthesis results {#sec:bird} ---------------------------------------- ![image](Figs/Exoplanets_ORG.pdf){width="0.95\columnwidth"} ![image](Figs/DIS_scatter_Vbr_All-eps-converted-to.pdf){width="1.05\columnwidth"} The left panel of figure \[fig:scatter\] shows planetary mass versus separation from the host star taken from the “exoplanets.org” catalogue [@HanEtal14]. The colours of the points depict which of the four exoplanet detection techniques was used to discover the planet, as described in the caption. The lower right-hand corner of the figure is almost certainly empty of planets only due to observational selection biases. This region is difficult to observe because the planets are too dim or too low mass and also have very long periods. It may well be teeming with planets. In addition to this bias, there is also a strong tendency towards detecting massive planets while missing lower mass ones at a given orbital period or separation. Due to these selection biases, the figure seems to indicate that massive gas giants at small separation are quite abundant. In reality, however, hot Jupiters – gas giants at $a\lesssim 0.1$ AU – are over 10 times less frequent than gas giants at $a \gtrsim 1$ AU [@SanterneEtal15]. Gas giants at any separation are about an order of magnitude less frequent than planets with size/mass smaller than that of Neptune [@MayorEtal11; @HowardEtal12]. The right panel of the figure shows population synthesis from [@Nayakshin16a]. Only 10% of the 30,000 population synthesis runs are shown in this figure to improve visibility. The colours on this plot refer to four metallicity bins as explained in the legend.
The vertical dashed line at 0.09 AU is set close to the inner boundary of the protoplanetary disc in the population synthesis, $R_{\rm in} = 0.08$ AU. Since population synthesis is not modelling the region inside $R_{\rm in}$, it is not quite clear what would actually happen to the planets that migrated all the way to $R_{\rm in}$. It may be expected that the radius of the inner boundary of real protoplanetary discs spans a range of values from very close to the stellar surface to many times that, and that some of the planets inside our $R_{\rm in}$ will actually survive to present day[^6]. Without further modelling it is not possible to say which planets will survive inside $R_{\rm in}$ and which would not. Therefore, we simply show only 1% of the planets that went inside 0.09 AU in the right panel of Fig. \[fig:scatter\]. They are randomly selected from the total pool of planets that arrived in the region. Their position in the figure is a random Monte Carlo variable uniformly spread in the $\log$ space between $a=0.03$ AU and $a=0.09$ AU. The red line in the right column of Fig. \[fig:scatter\] shows the “exclusion zone” created by the Tidal Downsizing process (equation \[aex1\]), which is the region forbidden for pre-collapse gas fragments. Such fragments are tidally disrupted when reaching the exclusion zone (see further discussion in §\[sec:Pvalley\]). Migration of post-collapse fragments dilutes the sharpness of the exclusion boundary somewhat. Also, the exclusion zone arguments of course do not apply to low mass planets (cores) that were already disrupted. For this reason the red line in the figure is not continued to lower planet masses. There are some similarities and some differences between the observed (left panel in Fig. \[fig:scatter\]) and the simulated (right panel) planets. 
On the positive side, (a) both population synthesis and observations are dominated by the smaller, core-dominated planets; (b) simulated planets cover the whole planet-star separation parameter space, without a need to invoke different models for close-in and far out planets; (c) there is a sharp drop in the planet abundance for planets more massive than $\sim 0.1{{\,{\rm M}_{\rm J}}}$ in both simulations and observations; (d) gas giants at separations $0.1 < a < 1$ AU are relatively rare in both observations and population synthesis. Further analysis (§\[sec:Z\]) will show that correlations between planet presence and host star metallicity in the model and observations are similar. However, (a) there is an over-abundance of massive planets at tens of AU in the models compared to observations; (b) the mass function of hot Jupiters is centred on $\sim 1{{\,{\rm M}_{\rm J}}}$ in the observations but is dominated by more massive planets in population synthesis; (c) there are no small planets in the population synthesis at $a\lesssim 0.1$ AU. Metallicity correlations {#sec:Z} ======================== Moderately massive gas giants {#sec:giants_Z} ----------------------------- A strong positive correlation of giant planet frequency of detection versus host star metallicity, \[M/H\], is well known [@Gonzalez99; @FischerValenti05; @MayorEtal11; @WangFischer14]. [@IdaLin04b] found in their population synthesis that if massive cores, $M_{\rm core} \sim 10{{\,{\rm M}_{\oplus}}}$, appear in the disc only after $\sim 3$ Million years for a typical Solar metallicity protoplanetary disc, then metal-poor systems will tend to make massive cores only after the gas disc is dissipated. Metal-rich systems make cores earlier, before the gas disc is dissipated. Therefore, Core Accretion predicts a strong preference for gas giant planet presence around \[M/H\] $> 0$ hosts.
This argument is based on the assumption that planetesimals are more abundant around high \[M/H\] hosts (§\[sec:Z\_debris\]). Since gas fragments collapse more rapidly when accreting pebbles at higher rates (§\[sec:pebbles\]), a positive correlation with host star metallicity is also expected in Tidal Downsizing. Figure \[fig:Z\_giants\] shows the host star metallicity distribution for gas giants with mass $0.3{{\,{\rm M}_{\rm J}}}< M_{\rm p} < 5 {{\,{\rm M}_{\rm J}}}$ from the population synthesis of [@NayakshinFletcher15] with the blue filled histogram. Only planets that end up at separations less than 5 AU are shown in the figure. The red histogram is for massive cores (see §\[sec:sub-giants\]). The continuous curves show the corresponding cumulative distributions. The initial metallicity distribution of fragments in this calculation is a gaussian centred on \[M/H\]=0 with dispersion $\sigma = 0.22$. Surviving gas giants are strongly skewed toward metal-rich hosts, as expected, and qualitatively as observed. Luckily, the similarity in predictions of Core Accretion and Tidal Downsizing essentially ends with the $\sim 1$ Jupiter mass planets inside the inner few AU. ![Distribution of host star metallicity for planets that survived in the inner 5 AU region from Nayakshin & Fletcher (2015). Gas giant planets correlate strongly with \[M/H\], whereas sub-giant planets do not. See text in §\[sec:giants\_Z\] and §\[sec:sub-giants\] for detail.[]{data-label="fig:Z_giants"}](Figs/ST_Zdist_SupE-eps-converted-to.pdf "fig:"){width="0.95\columnwidth"} Sub-giant planets {#sec:sub-giants} ----------------- Observations show that massive core-dominated planets are abundant at all metallicities [e.g. @MayorEtal11; @BuchhaveEtal14; @BuchhaveLatham15], in contrast to the results for the gas giant planets.
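The survivor skew can be reproduced with a toy model: draw initial host metallicities from the stated gaussian and apply a survival probability that rises with \[M/H\]. The logistic survival law below is an assumption for illustration only; the population synthesis itself uses the full physical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# initial fragment host metallicities: gaussian centred on [M/H] = 0, sigma = 0.22
mh = rng.normal(loc=0.0, scale=0.22, size=100_000)

# toy survival law (assumed, for illustration): fragments collapse, and hence
# survive migration to small separations, more often at higher [M/H]
p_survive = 1.0 / (1.0 + np.exp(-mh / 0.1))
survived = rng.random(mh.size) < p_survive

print(f"mean [M/H], all fragments: {mh.mean():+.3f}")
print(f"mean [M/H], survivors    : {mh[survived].mean():+.3f}")
```

The survivors are shifted to positive \[M/H\] even though the input distribution is symmetric about zero, mirroring the skew of the blue histogram in Fig. \[fig:Z\_giants\].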
More quantitatively, the recent analysis of data by [@WangFischer14] shows that gas giants are $\sim 9$ times more likely to form around \[M/H\]$ >0$ hosts than they are around \[M/H\]$<0$ hosts. For sub-Neptune planets the ratio is only around 2. The red histogram in Fig. \[fig:Z\_giants\] shows the metallicity distribution from [@NayakshinFletcher15] of hosts of “super-Earth” planets defined here as planets with mass in the range $2 {{\,{\rm M}_{\oplus}}}< M_{\rm p} < 15 {{\,{\rm M}_{\oplus}}}$. This distribution is much more centrally peaked than it is for gas giants, in qualitative consistency with the observations. As already explained in §\[sec:giants\_Z\], at low \[M/H\], most gas fragments migrating inward from their birth place at tens of AU are tidally disrupted. This would in fact yield an anti-correlation between the number of cores created per initial fragment and the \[M/H\] of the star in the context of Tidal Downsizing. However, low metallicity gas fragments contain less massive cores on average (many of which are less massive than $2 {{\,{\rm M}_{\oplus}}}$). Thus, while there are more cores in low \[M/H\] environments, the more massive cores are found at higher metallicity. The net result is an absence of a clear correlation in Tidal Downsizing between the core-dominated planets and the metallicity of their hosts, unlike for gas giants. This result is not due to “cherry picking” parameters for population synthesis and is very robust, at least qualitatively. The same physics, namely that gas fragments are disrupted more frequently in low \[M/H\] environments, explains [*simultaneously*]{} why gas giants correlate and sub-giants do not correlate with metallicity. A weak correlation of massive cores with \[M/H\] of the host star in Core Accretion was explained as follows. Cores grow in a gas-free environment in discs of low metallicity stars [e.g., @IdaLin04b; @MordasiniEtal09b].
These cores are then not converted into gas giants because they had no gas to accrete to make gas-dominated planets. However, this scenario does not tally well with the fact that many of the close-in sub-giant planets reside in multi-planet systems, and these are by and large very flat [have mutual inclinations $i \lesssim 2^\circ$, see @FabryckyEtal14] and have low eccentricities ($e\sim 0.03$). Such systems are best explained by assembly via migration of planets (made at larger distances) in a [*gaseous*]{} protoplanetary disc, which naturally damps eccentricities and inclinations away [@Paardekooper13; @HandsEtal14]. Gas giants beyond a few AU {#sec:cold_giants_Z} -------------------------- The exclusion zone shown with the red line in the right panel of Fig. \[fig:scatter\] divides the Tidal Downsizing gas giant population in two. Inwards of the line, gas giants must have collapsed into the second cores before they entered this region. Since this is more likely at high metallicities of the host disc, there is a positive \[M/H\] correlation for the inner gas giants, as explored in §\[sec:giants\_Z\]. Outside the exclusion zone, however, gas giants may remain in the pre-collapse configuration and still survive when the disc is dispersed. Thus, higher pebble accretion rates do not offer survival advantages at such relatively large distances from the star. This predicts that gas planets beyond the exclusion zone may not correlate with the metallicity of the host [see Fig. 11 in @NayakshinFletcher15]. Core Accretion is likely to make an opposite prediction. Observations show that protoplanetary discs are dispersed almost equally quickly at small and large distances [see the review by @AlexanderREtal14a]. Since classical core assembly takes longer at larger distances, one would expect gas giants at larger distances to require even higher metallicities to make a core in time before the gas disc goes away.
The exact separation at which this effect shows up may, however, be model dependent. While the statistics of gas giant planets at distances exceeding a few AU is far less complete than that for planets at $a < 1$ AU, [@AdibekyanEtal13] reports that planets orbiting metal-poor stars tend to have longer periods than planets orbiting metal-rich stars (see Fig. \[fig:Vardan\]). ![image](Figs/ST_Zdist_giants_highM-eps-converted-to.pdf){width="0.95\columnwidth"} ![image](Figs/ST_Zdist_Mord12_hist-eps-converted-to.pdf){width="1.05\columnwidth"} ![image](Figs/Zdist_comp_ORG.pdf){width="1.02\columnwidth"} ![image](Figs/Troup16.png){width="0.97\columnwidth"} Very massive gas giants {#sec:massive_giants_Z} ----------------------- As explained in §\[sec:transition\], Tidal Downsizing makes a robust prediction for how the planet-host metallicity correlation should change for more massive planets. For planets $M_{\rm p}\gtrsim 5{{\,{\rm M}_{\rm J}}}$, the radiative cooling time is comparable to $10^4$ years, implying that fragments of such a mass may collapse before they reach the exclusion zone. High mass planets may therefore avoid tidal disruption simply by radiative cooling. Accordingly, we should expect that high mass planets and brown dwarfs should be found with roughly equal frequency around metal rich and metal poor stars, in stark contrast to Jupiter-mass planets. Fig. \[fig:Z\_massive\], the top left panel, shows the host metallicity distribution for planets ending up at $a< 15$ AU in simulation ST from [@Nayakshin15d]. The figure shows two mass bins, $0.75 {{\,{\rm M}_{\rm J}}}\le M_{\rm p} \le 3 {{\,{\rm M}_{\rm J}}}$ (black) and $M_{\rm p} \ge 5{{\,{\rm M}_{\rm J}}}$ (cyan). The red curve shows the initial (gaussian, centred on \[M/H\] = 0 and with dispersion $\sigma = 0.22$) distribution of host disc \[M/H\]. It is seen that moderately massive giants are shifted towards significantly higher metallicities, as previously found (Fig. \[fig:Z\_giants\]).
In particular, only 20% of the planets in the black distribution have \[M/H\] $< 0$. Planets more massive than $5{{\,{\rm M}_{\rm J}}}$ are distributed more broadly, with 45% of the planets having negative \[M/H\]. The Core Accretion model makes an opposite prediction. The inset in the top right panel of Fig. \[fig:Z\_massive\] reproduces[^7] Fig. 4 from [@MordasiniEtal12], whereas the black and the cyan histograms show the host metallicity distribution for planets in the same mass ranges as for the top left panel. The blue histogram shows the metallicity distribution for brown dwarfs. It is easy to see from the figure that the more massive a gas giant planet is, the more metal rich the parent star should be to make that planet by Core Accretion. This result is probably quite robust since it relies on the key physics of the model. It takes a long time to make massive cores and planets in the Core Accretion scenario [@PollackEtal96; @IdaLin04b]. The more massive the planet is to be, the earlier it must start to accrete gas to arrive at its final mass before the gas disc dissipates. More metal rich hosts make massive cores more rapidly, so the most massive planets should be made in the most metal rich discs. These predictions can be contrasted with the data. The bottom left panel of Fig. \[fig:Z\_massive\] shows the observed metallicity distributions for hosts of gas giant planets that are currently on the “exoplanets.org” database. Planets more massive than $M_{\rm p} = 5 {{\,{\rm M}_{\rm J}}}$ are shown with the filled cyan histogram, whereas the moderately massive giants correspond to the black histogram, selected by $0.75 {{\,{\rm M}_{\rm J}}}\le M_{\rm p} \le 3 {{\,{\rm M}_{\rm J}}}$. The mass cut is the only selection criterion applied to the data. Both histograms are normalised to unit area.
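The mass-bin comparison just described amounts to a simple catalogue selection. A sketch with an invented mini-catalogue (the arrays and column names below are illustrative only, not the exoplanets.org data):

```python
import numpy as np

# hypothetical mini-catalogue: planet mass in Jupiter masses, host star [M/H]
mass_mj = np.array([0.8, 1.2, 2.5, 0.9, 6.0, 8.5, 12.0, 1.5, 5.5, 2.0])
feh = np.array([0.10, 0.05, 0.15, 0.20, -0.10, -0.05, -0.20, 0.08, 0.02, 0.12])

# the same mass cuts as in the text: moderate giants vs very massive giants
moderate = (mass_mj >= 0.75) & (mass_mj <= 3.0)
massive = mass_mj >= 5.0

print(f"moderate giants: n={moderate.sum()}, mean [M/H]={feh[moderate].mean():+.3f}")
print(f"massive giants : n={massive.sum()}, mean [M/H]={feh[massive].mean():+.3f}")
```

Applied to the real catalogue, this selection yields the group sizes and mean metallicities quoted below.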
The massive group of planets comprises 96 objects and has a mean metallicity of $-0.014$, whereas the less massive group is more populous, with 324 objects and a mean metallicity of $0.066$. While the statistics of exoplanetary data remains limited, we can see that the trend towards lower \[M/H\] hosts at higher $M_{\rm p}$ is definitely present in the data. We can also confidently conclude that there is no shift towards [*higher*]{} \[M/H\] for the more massive planets. No fine tuning was done to the population synthesis parameters to achieve this agreement. The bottom right panel of the figure shows \[M/H\] correlations for brown dwarf mass companions to stars from [@TroupEtal16], which are discussed in the next section. One physical caveat here is that gas accretion onto the planet is entirely neglected for both pre-collapse and post-collapse configurations. If some post-collapse fragments do accrete gas, then some of the massive planets, $M_{\rm p} \gtrsim 5 {{\,{\rm M}_{\rm J}}}$, could have started off as less massive planets. These planets would then remain sensitive to the metallicity of the host. Therefore, if the observed massive $M_{\rm p} > 5 {{\,{\rm M}_{\rm J}}}$ planets are a mix of accreting and non-accreting populations, then there would remain some preference for these planets to reside in metal rich systems, but this preference should be weaker than that for the moderately massive gas giants. Close brown dwarf companions to stars {#sec:Z_BD} ------------------------------------- ### Are brown dwarfs related to planets?
{#sec:BD_vs_planets} It is often argued [e.g., @WF14] that brown dwarfs and low mass stellar companions must form in a physically different way from that of planets because (a) the frequency of brown dwarf occurrence around Solar type stars is an order of magnitude lower than that for gas giant planets at periods less than a few years [e.g., @SahlmannEtal11; @SanterneEtal15]; (b) gas giant planets correlate strongly with the metallicity of the host star, whereas for brown dwarfs the metallicity distribution is very broad with no evidence for a positive correlation [@RaghavanEtal10]; (c) gas giant planets are over-abundant in metals compared to their host stars [@Guillot05; @MillerFortney11] whereas brown dwarfs have compositions consistent with those of their host stars [@LeconteEtal09]. These arguments are not watertight, however. The occurrence rate of gas giants drops with planet mass towards the brown dwarf regime monotonically [e.g., Fig. 13 in @CummingEtal08]; the host metallicity correlation of very massive gas giants becomes weak towards masses of $\sim 10{{\,{\rm M}_{\rm J}}}$, before hitting the brown dwarf regime, as discussed in §\[sec:massive\_giants\_Z\]; and the metallicity of gas giants also continuously drops with $M_{\rm p}$ increasing towards brown dwarfs [e.g., @MillerFortney11 and also §\[sec:Zpl\_giants\] and Fig. \[fig:Zpl\] below]. Based on the continuity of the transition in all of these properties, it is natural to consider gas giant planets and brown dwarfs as one continuous population that forms in the same way. [@ReggianiEtal16] argue that the observed companion mass function at wide orbits around solar-type stars can be understood by considering giant planets and brown dwarfs a part of the same population as long as a cutoff in the planet separation distribution is introduced around $\sim 100$ AU. A physically similar origin for planets and brown dwarfs is allowed by both planet formation scenarios.
In Tidal Downsizing, brown dwarfs were either born big or managed to gain more gas. In Core Accretion, brown dwarfs are over-achieving gas giant planets [@MordasiniEtal12]. ### Metallicity correlations of brown dwarfs {#sec:BD_z_corr} [@RaghavanEtal10] showed that brown dwarf companions to solar mass stars are very broadly distributed over host \[M/H\]. For low mass [*stellar*]{} companions, it is the low metallicity hosts that are more likely to host the companion. Very recent observations of [@TroupEtal16] detail the picture further. These authors presented a sample of 382 close-in stellar and sub-stellar companions, about a quarter of which are brown dwarfs at separations between $\sim 0.1$ and $\sim 1$ AU. Out of these brown dwarfs, 14 have \[M/H\]$< -0.5$. To put this in perspective, out of many hundreds of planets with mass $0.5 {{\,{\rm M}_{\rm J}}}< M_{\rm p} < 5{{\,{\rm M}_{\rm J}}}$ on “exoplanets.org” [@HanEtal14], only 4 have \[M/H\]$<-0.5$. The bottom right panel of Fig. \[fig:Z\_massive\] shows the host star metallicity distribution for brown dwarfs (yellow) and for all companions more massive than $0.013 {{\,{\rm M}_{\odot}}}$ (green) from the [@TroupEtal16] data. As the authors note, their observations strongly challenge the Core Accretion model as an origin for the brown dwarfs in their sample. Indeed, [@MordasiniEtal12] in their §4.3 state: “While we have indicated in Sect. 4.1 that metallicity does not significantly change the distribution of the mass for the bulk of the population, we see here that the metallicity determines the maximum mass a planet can grow to in a given disk, in particular for subsolar metallicities. There is an absence of very massive planets around low-metallicity stars”. To emphasise the point, the authors look at the maximum planet mass in their models at metallicity \[M/H\]$< -0.4$. For their nominal model, the resulting maximum planet mass of the low \[M/H\] tail of the population is only $7{{\,{\rm M}_{\rm J}}}$.
This is at odds with the observations [@RaghavanEtal10; @TroupEtal16]. Debris discs {#sec:Z_debris} ------------ There is another checkpoint we can use to compare theoretical models of host metallicity correlations with observations: the debris discs [@Wyatt08]. Detailed calculations of planetesimal formation [e.g., @JohansenEtal07; @JohansenEtal09] suggest that planetesimal formation efficiency is a strong function of the metallicity of the parent disc. It is therefore assumed that higher \[M/H\] discs have a more abundant supply of planetesimals. This is in fact required if Core Accretion is to explain the positive gas giant correlation with the host star metallicity [e.g., @IdaLin04b; @MordasiniEtal09b]. As detailed in §\[sec:planetesimals\], the Tidal Downsizing scenario offers a different perspective on the formation of minor solid bodies. The very central parts of the self-gravitating gas fragments may be producing solid bodies greater than a few km in size by self-gravitational collapse mediated by gas drag (§\[sec:hier\]). Observable planetesimals are however created only when the parent gas fragment is disrupted; in the opposite case the planetesimal material is locked inside the collapsed gas giant planet. Debris discs are detected around nearby stars [@Wyatt08] via thermal grain emission in the infra-red [@OudmaijerEtal92; @ManningsBarlow98]. Interestingly, the debris disc detection frequency does not correlate with \[M/H\] of their host stars [@MaldonadoEtal12; @MarshallEtal14; @Moro-MartinEtal15]. Observed debris discs also do not correlate with the presence of gas giants [e.g., @MMEtal07; @BrydenEtal09; @KospalEtal09]. It is not that debris discs do “not know” about planets: stars with an observed gas giant are half [*as likely*]{} to host a detected debris disc as stars orbited by planets less massive than $30 M_\oplus$ [@Moro-MartinEtal15].
The suggestion that debris discs are destroyed by interactions with gas giants [@RaymondEtal11] could potentially explain why debris discs do not correlate with \[M/H\] or gas giant presence. However, the observed gas giants (for which the correlations were sought) are orbiting their hosts at separations $\lesssim 1$ AU, whereas the observed debris discs can be as large as tens and even hundreds of AU, making their dynamical interaction (in the context of Core Accretion) unlikely. Further, radial velocity, microlensing and direct imaging results all show that there is of order $\sim 0.1$ gas giant planets per star [@SanterneEtal15; @ShvartzvaldEtal15; @BillerEtal13; @BowlerEtal15; @WittenmyerEtal16] [*at both small and large separations from the host star*]{}, whereas [@RaymondEtal11]’s scenario needs several giants in a debris disc-containing system to work. In Tidal Downsizing, higher \[M/H\] discs provide higher pebble accretion rates, so that fewer gas fragments are destroyed. Debris disc formation is hence infrequent at high metallicities compared to low \[M/H\] hosts. However, each disrupted fragment in a higher \[M/H\] system contains more metals than its analogs in low metallicity systems. [@FletcherNayakshin16a] found that the debris disc – host metallicity correlation from Tidal Downsizing would depend on the sensitivity of the synthetic survey. A high sensitivity survey picks up low \[M/H\] hosts of debris discs most frequently because they are more frequent. Such surveys would therefore find an anti-correlation between debris disc presence and host metallicity. A medium sensitivity survey, however, would find no correlation, and a low sensitivity survey shows a preference for debris around high metallicity hosts. These results appear qualitatively consistent with observations of the debris disc – host star metallicity correlation. [@FletcherNayakshin16a] also considered planet – debris disc correlations in Tidal Downsizing.
A detected gas giant planet implies that the parent fragment [*did not*]{} go through a tidal disruption – hence not producing a debris disc at all. A detected sub-Saturn mass planet, on the other hand, means that there was an instance of debris disc formation. In a single migrating fragment scenario, that is when there is only one fragment produced by the parent disc, this would imply that gas giants and debris discs are mutually exclusive, but sub-Saturn planets and debris discs are uniquely linked. However, in a multi-fragment scenario, which is far more realistic based on numerical simulations of self-gravitating discs (§\[sec:3D\]), other fragments could undergo tidal disruptions and leave debris behind. Survival of a [*detectable*]{} debris disc to the present day also depends on where the disruption occurred, and the debris discs – migrating gas fragment interactions, which are much more likely in Tidal Downsizing scenario than in Core Accretion because pre-collapse gas giants are widespread in Tidal Downsizing and traverse distances from $\sim 100$ AU to the host star surface. Therefore, we expect a significant wash-out of the single fragment picture, but some anti-correlation between debris discs and gas giants and the correlation between debris discs and sub-giants may remain. Cores closest to their hosts {#sec:MM_valley} ---------------------------- [@AdibekyanEtal13] shows that planets around low metallicity hosts tend to have larger orbits than their metal rich analogs. The trend is found for all planet masses where there is sufficient data, from $\sim 10 {{\,{\rm M}_{\oplus}}}$ to $4{{\,{\rm M}_{\rm J}}}$. Their Fig. 1, right panel, reproduced here in Fig. \[fig:Vardan\], shows this very striking result. The dividing metallicity for the metal poor vs metal rich hosts was set at \[M/H\]$=-0.1$ in the figure. The figure was modified (see below) with permission. The blue crosses show metal rich systems whereas the red circles show metal-poor systems. 
[@AdibekyanEtal15] extended this result to lower mass/radius cores, showing that metal-rich [*systems of cores*]{} tend to be more compact than systems of planets around metal poor stars (see the bottom panels in their Fig. 1). ![The right panel of figure 1 from Adibekyan et al (2013), showing the planet period versus its mass. The sample is separated into the metal poor and metal rich sub-samples. The green, blue and red lines are added on the plot with permission from the authors. The green line is the exclusion zone boundary (eq. \[aex1\]), which shows approximately how far a pre-collapse gas fragment of mass $M_{\rm p}$ can approach an $M_* = 1{{\,{\rm M}_{\odot}}}$ star without being tidally disrupted. The blue and red lines contrast how gas fragments evolve in a metal-rich and a metal poor disc, respectively. See text in §\[sec:MM\_valley\] for more detail.[]{data-label="fig:Vardan"}](Figs/Adibekyan_SN2.pdf){width="0.95\columnwidth"} As noted by [@AdibekyanEtal13], in the Core Accretion context, massive cores in metal poor discs are expected to appear later than they do in metal rich ones. At these later times, the protoplanetary discs may be less massive on average. Cores formed in metal poor systems should therefore migrate more slowly (cf. eq. \[tmig1\]). They also have less time before their parent gas discs are dissipated. Hence one may expect that cores made in metal-deficient environments migrate inward less than similar cores in metal-rich environments. However, planet masses span a range of $\sim 1000$ in Fig. \[fig:Vardan\]. This means that planet migration timescales may vary by a similar factor, from much longer than the disc lifetime to as short as $\sim 10^4$ years. It is therefore not clear how a difference in the timing of the birth of the core by a factor of a few would leave any significant imprint in the final distribution of planets [*across such a broad planet mass range*]{}.
In Tidal Downsizing, there is no significant offset in when the cores are born in metal rich or metal poor discs. All cores are born very early on. However, as described in §\[sec:pebbles\] and \[sec:giants\_Z\], gas fragments in metal-poor discs tend to be disrupted by stellar tides when they migrate to separations of a few AU. This forms an exclusion zone barrier (cf. the red line in the right panel of Fig. \[fig:scatter\] and the green line in Fig. \[fig:Vardan\]), so that, as already explained (§\[sec:cold\_giants\_Z\]), moderately massive gas giants around metal poor hosts are to be found mainly above the green line in Fig. \[fig:Vardan\]. Fragments in metal-rich systems, however, are more likely to contract and collapse due to pebble accretion [*before*]{} they reach the exclusion zone, so they can continue to migrate into the sub-AU regions. The exclusion zone hence forms a host metallicity dependent filter, letting gas giants pass in metal rich systems but destroying them in metal poor ones. Further, as explained in §\[sec:massive\_giants\_Z\], planets more massive than $\sim 5{{\,{\rm M}_{\rm J}}}$ cool rapidly radiatively, and thus they are able to collapse and pass the barrier without accreting pebbles. These high $M_{\rm p}$ planets are not expected to correlate with \[M/H\] strongly at any separation (§\[sec:massive\_giants\_Z\]). This is consistent with Fig. \[fig:Vardan\]: note that a larger fraction of gas giants are metal-poor at high planet masses. Let us now consider in some more detail what happens with $M_{\rm p}\sim 1{{\,{\rm M}_{\rm J}}}$ fragments after they reach the exclusion zone. The blue lines with arrows show what may happen to such a planet in the metal rich case. Since the planet is in the second, dense configuration, it may continue to migrate in as long as the gas disc is massive enough. The fragments will eventually enter the hot Jupiter regime (periods $P\lesssim 10$ days).
Some just remain there when the gas disc dissipates; others are pushed all the way into the star. Yet others can be disrupted at about $a\sim 0.1$ AU by a combination of over-heating due to the very hot disc environment and disruption by stellar tides (this was called the “second disruption” in §\[sec:2nd\_dis\]). The disrupted fragments then travel approximately horizontally in the diagram, as indicated by the blue horizontal arrow, becoming hot sub-Saturn or hot super-Earth planets [@Nayakshin11b]. In contrast, tidal disruption of gas fragments in metal-poor systems occurs at around the exclusion zone boundary. The planet also travels horizontally to the lower planet mass regime, as shown with the horizontal red line in Fig. \[fig:Vardan\]. After the disruption these low mass planets (usually dominated by massive cores) continue to migrate inward, now evolving vertically downward as shown in Fig. \[fig:Vardan\] with the vertical red line. Planet migration in the type I regime is relatively slow for core-dominated planets, so one can expect that the “red” cores will in general not migrate as far in as the “blue” ones did. Focusing on the lowest mass cores, $M_{\rm core} \le 0.03 {{\,{\rm M}_{\rm J}}}\sim 10 {{\,{\rm M}_{\oplus}}}$ in Fig. \[fig:Vardan\], we note quite clearly a dearth of metal rich (blue crosses) cores beyond the period of $\sim 10-20$ days, which corresponds to $a \approx 0.1 - 0.15$ AU. In principle, this could be a detectability threshold effect – planets are progressively more difficult to detect at longer periods. However, the approximate (empirical) detection threshold is shown in the figure with the dotted line, which is a factor of several longer than the 10 day period; these observational results are therefore unlikely to be due to detection biases. Second disruptions have not yet been included in sufficiently rigorous detail in the population synthesis.
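The quoted period-to-separation conversion is just Kepler's third law for a solar-mass star; a quick check of the $\sim 10-20$ day range:

```python
# Kepler's third law around a star of mass m_star (in solar masses):
# a^3 [AU^3] = m_star * P^2 [yr^2]
def period_to_au(p_days, m_star_msun=1.0):
    """Semi-major axis in AU for an orbital period given in days."""
    p_yr = p_days / 365.25
    return (m_star_msun * p_yr ** 2) ** (1.0 / 3.0)

for p in (10.0, 20.0):
    print(f"P = {p:4.1f} d  ->  a = {period_to_au(p):.3f} AU")
```

For 10 and 20 days this gives roughly 0.09 and 0.14 AU, matching the $a \approx 0.1 - 0.15$ AU quoted above.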
Planet compositions {#sec:comp} =================== Metal over-abundance in gas giants {#sec:Zpl_giants} ---------------------------------- The heavy element content of a giant planet can be found with some certainty by knowing just the planet mass and radius [@Guillot05], provided it is not too strongly illuminated [@MillerFortney11; @ThorngrenEtal15]. Heavy elements contribute to the total mass of the planet, but provide much less pressure support per unit weight. Metal over-abundance of gas giant planets is expected in Tidal Downsizing thanks to partial stripping of outer metal-poor layers [@Nayakshin10c] and pebble accretion. In [@Nayakshin15a], it was estimated that accreted pebbles need to account for at least $\sim10$% of the planet’s mass for it to collapse via pebble accretion as opposed to the radiative channel. This number, however, depends on the mass of the fragment. As explained in §\[sec:transition\] and §\[sec:massive\_giants\_Z\], more massive gas giants cool more rapidly at the same dust opacity. For this reason they are predicted to not correlate as strongly with the host star metallicity (see §\[sec:massive\_giants\_Z\]) [*and*]{} require fewer pebbles to accrete in order to collapse. Fig. \[fig:Zpl\] shows the relative over-abundance of gas giant planets, that is, the ratio of the planet metal content, $Z_{\rm pl}$, to that of the star, $Z_*$, as a function of the planet mass from the population synthesis by [@NayakshinFletcher15], compared with the results of [@MillerFortney11], who deduced the metal content for a number of exoplanets using observations and their planet evolution code. No parameter of the population synthesis was adjusted to reproduce the [@MillerFortney11] results. ![Metal over-abundance of gas giant planets versus their mass. Blue squares with error bars show the results of Miller & Fortney (2011).
The other symbols are results from population synthesis, binned into four host star metallicity bins as detailed in the legend.[]{data-label="fig:Zpl"}](Figs/ST_planet_composition-eps-converted-to.pdf){width="1\columnwidth"} Fig. \[fig:Zpl\] also shows that there is a continuous metallicity trend with $M_{\rm p}$ from $\sim 0.1 {{\,{\rm M}_{\rm J}}}$ all the way into the brown dwarf regime. The [*continuous*]{} transition in metal over-abundance from gas giants to brown dwarfs argues that brown dwarf formation may be linked to the formation of planets (§\[sec:BD\_vs\_planets\]). Core compositions {#sec:core_comp} ----------------- Tidal Downsizing predicts a rock-dominated composition for cores [@NayakshinFletcher15 and §\[sec:composition\]]. The Core Accretion scenario suggests that massive core formation is enhanced beyond the snow line, since the fraction of protoplanetary disc mass in condensible solids increases there by a factor of up to $\sim 3$ [e.g., see Table I in @PollackEtal96]. The most massive cores are hence likely to contain a lot of ice in the Core Accretion model. Reflecting this, Neptune and Uranus in the Solar System are often referred to as “icy giants” even though there is no direct observational support for their cores actually being composed of ice [see §5.1.2 in @HelledEtal13a]. For example, for Uranus, the gravity and rotation data can be fit with models containing rock or ice as the condensible material [@HelledEtal10]. When SiO$_2$ is used to represent the rocks, the interior of Uranus is found to consist of 18% hydrogen, 6% helium, and 76% rock. Alternatively, when H$_2$O is used, the composition of Uranus is found to be 8.5% and 3% of H and He, respectively, and 88.5% of ice. The composition of extrasolar cores is obviously even harder to determine. [@Rogers15] shows that most [*Kepler*]{} planets with periods shorter than 50 days are not rocky for planet radii greater than $1.6 R_\oplus$, as their density is lower than an Earth-like core would have at this size.
Unfortunately, just like for the outer giants in the Solar System, the interpretation of this result is degenerate. It could be that these planets contain icy cores instead of rocky ones, but it is also possible that the data can be fit by rocky cores with small atmospheres of volatiles on top. To avoid these uncertainties, we should focus on cores that are unlikely to have any atmospheres. Close-in ($a \lesssim 0.1$ AU) moderately massive cores [$M_{\rm core}\lesssim 7{{\,{\rm M}_{\oplus}}}$, see @OwenWu13] are expected to lose their atmosphere due to photo-evaporation. The observed close-in planets in this mass range all appear to be very dense, requiring Venus/Earth rock-dominated compositions [e.g., Fig. 4 in @DressingEtal15]. [@EspinozaEtal16] present observations of a Neptune mass planet of radius $R_{\rm p}\approx 2.2 R_\oplus$, making it the most massive planet with composition that is most consistent with pure rock. [@WeissEtal16] re-analyse the densities of planets in the Kepler-10 system and find that planet c has mass of $\approx 14{{\,{\rm M}_{\oplus}}}$ and its composition is consistent with either rock/Fe plus $0.2$% hydrogen envelope by mass or Fe/rock plus (only) 28% water. There thus appears to be no evidence so far for ice-dominated massive cores in exoplanetary systems. Another interesting way to probe the role of different elements in making planets is to look at the abundance difference between stars with and without planets. Observations show little difference in differential element abundances between “twin stars” [*except*]{} for refractory elements [@MaldonadoV16], again suggesting that ices are not a major planet building material, whereas silicates could be. These results may be disputed, however, because the effects of Galactic stellar evolution [@GHEtal13] drive extra variations in abundance of metals. These effects are hard to deconvolve from the possible planet/debris disc formation signatures. 
Cleaner, although very rare, laboratories are nearly identical “twin” binaries, which certainly suffer identical Galactic influences. [@SaffeEtal16] study the $\zeta$ Ret binary, which contains nearly identical stars separated by $\sim 4000$ AU in projection. One of the twins has a resolved debris disc of size $\sim 100$ AU [@EiroaEtal10], whereas the other star has no planet or debris disc signatures. Refractory elements in the debris disc hosting star are deficient [@SaffeEtal16] compared to its twin by at least $3 {{\,{\rm M}_{\oplus}}}$, which the authors suggest is comparable to the mass of solids expected to be present in a debris disc of this spatial size. The results of [@SaffeEtal16] are therefore consistent with those of [@MaldonadoV16] and could not be driven by the Galactic chemical evolution. This twin binary observation is especially significant since the debris disc size is $\sim 100$ AU, well beyond the snow line, so ices should be easily condensible into planets/debris. If ices were the dominant reservoir from which debris discs and planets are made, then ices, rather than refractories, should be deficient in the star with the observed debris. Planet Mass Function {#sec:PMF} ==================== Mass function {#sec:CMF} ------------- Small planets, with radius less than that of Neptune ($\sim 4 R_\oplus$), are ubiquitous [@HowardEtal12]. This planet size translates very roughly to a mass of $\sim 20{{\,{\rm M}_{\oplus}}}$ [@DressingEtal15]. Observations of close-in exoplanets show that the planet mass function (PMF) plummets above this size/mass [@HowardEtal12; @MayorEtal11]. These observations add to the long held belief, based on the Solar System planets’ observations, that planetary cores of mass $M_{\rm core}\sim 10-20 {{\,{\rm M}_{\oplus}}}$ have a very special role to play in planet formation. In the Core Accretion scenario, this special role is in building gas giant planets by accretion of protoplanetary disc gas onto the cores [@Mizuno80; @PT99; @Rafikov06].
In Tidal Downsizing, the role of massive cores in building gas giant planets is negative, due to the feedback that the core releases (§\[sec:feedback\]). The observed dearth of gas giants and abundance of small planets means, in the context of Tidal Downsizing, that most of the gas fragments originally created in the outer disc must be disrupted or consumed by the star for the model to be consistent with the data. ![[**Top panel:**]{} Planet mass function (PMF) from HARPS spectrograph observations of Mayor et al (2011). The black histogram gives the observed number of planets, whereas the red corrects for observational bias against less massive planets. [**Bottom panel:**]{} PMF from the Tidal Downsizing population synthesis calculations, exploring the role of core feedback. The histograms are for runs without core formation (NC), with core formation but feedback off (DC) and standard (ST), which includes core feedback. Without feedback, the PMF of the Tidal Downsizing scenario looks nothing like the observed mass function.[]{data-label="fig:pmf_fb"}](Figs/Mayor11.pdf "fig:"){width="0.84\columnwidth"} ![[**Top panel:**]{} Planet mass function (PMF) from HARPS spectrograph observations of Mayor et al (2011). The black histogram gives the observed number of planets, whereas the red corrects for observational bias against less massive planets. [**Bottom panel:**]{} PMF from the Tidal Downsizing population synthesis calculations, exploring the role of core feedback. The histograms are for runs without core formation (NC), with core formation but feedback off (DC) and standard (ST), which includes core feedback. Without feedback, the PMF of the Tidal Downsizing scenario looks nothing like the observed mass function.[]{data-label="fig:pmf_fb"}](Figs/DIS_PMF_compare-eps-converted-to.pdf "fig:"){width="1\columnwidth"} The top panel of Fig. \[fig:pmf\_fb\] shows the observed PMF from [@MayorEtal11].
The black histogram shows the actual number of planets, whereas the red shows the PMF corrected for observational bias. The bottom panel of Fig. \[fig:pmf\_fb\] shows the PMF from three population synthesis calculations performed with three contrasting assumptions about the physics of the cores [@Nayakshin16a] in the model, to emphasise the importance of core feedback in Tidal Downsizing. Simulation ST (standard) includes core feedback, and is shown with the blue histogram. This PMF is reasonably similar to the observed one in the top panel. In simulation NC (no cores), shown with the yellow histogram, core formation is artificially turned off. In this case tidal disruptions of gas fragments leave behind no cores. Thus, only gas giant planets are formed in this simulation. In simulation DC (dim cores), shown with the red histogram in the bottom panel, core formation is allowed but the core luminosity is arbitrarily reduced by a factor of $10^5$ compared with simulation ST. By comparing simulations ST and DC we see that the core luminosity is absolutely crucial in controlling the kind of planets assembled in the Tidal Downsizing scenario. A strong core feedback leads to much more frequent gas fragment disruption, reducing the number of surviving gas fragments at all separations, small or large. This also establishes the maximum core mass ($10-20{{\,{\rm M}_{\oplus}}}$, eq. \[Mcrit\] and §\[sec:feedback\]), above which the cores do not grow because the parent clumps cannot survive so much feedback. In simulation DC (dim cores), cores grow unconstrained by their feedback and so become much more massive [see also Fig. 5 in @Nayakshin16a on this] than in simulation ST, with [*most*]{} exceeding the mass of $10 {{\,{\rm M}_{\oplus}}}$. Given that they are also dim, these cores are always covered by a massive gas atmosphere even when the gas fragment is disrupted (cf. the next section). This is why there are no “naked cores” in simulation DC.
One potentially testable prediction is this. As the core mass approaches $\sim 10 {{\,{\rm M}_{\oplus}}}$, feedback by the core puffs up the fragment and thus $dM_{\rm core}/dt$ actually drops. Therefore, growing cores spend more time in the vicinity of this mass. Since core growth is eventually terminated by the fragment disruption or by the second collapse, whichever is sooner, the masses of cores should cluster around this characteristic mass. In other words, the core mass function should show a peak at around $\sim 10{{\,{\rm M}_{\oplus}}}$ before it nose-dives at higher masses. There may be some tentative evidence for this in the data. [@SilburtEtal15] looked at the entire [*Kepler*]{} sample of small planets over all 16 quarters of data, and built probably the most detailed planet radius function to date at $R_{\rm p} \le 4 R_\oplus$. They find that there is in fact a peak in the planet radius distribution function at $R_{\rm p} \approx 2.5 R_\oplus$, which corresponds to $M_{\rm core} \approx 15 {{\,{\rm M}_{\oplus}}}$. Atmospheres of cores: the bimodality of planets {#sec:atmo} ----------------------------------------------- One of the most famous results of Core Accretion theory is the critical mass of the core, $M_{\rm crit} \sim$ a few to $\sim 10-20 {{\,{\rm M}_{\oplus}}}$, at which it starts accreting gas from the protoplanetary disc [@Mizuno80; @Stevenson82; @IkomaEtal00; @Rafikov06; @HoriIkoma11]. For core masses less than $M_{\rm crit}$, the cores are surrounded by usually tiny atmospheres. In §\[sec:atm\] it was shown that a massive core forming inside a self-gravitating gas fragment in the context of Tidal Downsizing also surrounds itself with a dense gas atmosphere for exactly the same reasons, except that the origin of the gas is not the surrounding protoplanetary disc but the parent fragment.
[@NayakshinEtal14a] calculated the atmosphere structure for given central properties of the gas fragment (gas density, temperature, composition), core mass and luminosity. The population synthesis model of [@Nayakshin15c; @NayakshinFletcher15] uses the same procedure with a small modification. To determine the mass of the atmosphere actually bound to the core, I consider the total energy of the atmosphere shells. Only the innermost layers with a negative total energy are considered bound to the core. These layers are assumed to survive tidal disruption of the fragment. Figure \[fig:atmo\] is reproduced from [@NayakshinFletcher15], and shows the mass of all of the cores in the inner 5 AU from the host at the end of the simulations (green shaded), while the red histogram shows the mass distribution of [*gas*]{} in the same planets. Gas fragments that were not disrupted remain in the Jovian mass domain, within the bump at $\log (M_{\rm gas}/{{\,{\rm M}_{\oplus}}}) > 2$. These planets are dominated by the gas but do have cores. The second, much more populous peak in the red histogram in Fig. \[fig:atmo\] is at tiny, $\sim 10^{-3}{{\,{\rm M}_{\oplus}}}$, masses. This peak corresponds to the gas fragments that were disrupted and became a few Earth mass cores with small atmospheres. The Tidal Downsizing scenario thus also naturally reproduces the observed bi-modality of planets – planets are either dominated by cores with low mass (up to $\sim 10$% of core mass, generally) atmospheres, or are totally swamped by the gas. The conclusion following from this is that the special role of $\sim 10{{\,{\rm M}_{\oplus}}}$ cores in planet formation may depend only weakly on how the planets are made.
It is likely that the ability of massive ($M_{\rm core} \gtrsim 10{{\,{\rm M}_{\oplus}}}$) cores to attract gas atmospheres of comparable mass is a fundamental property of matter (hydrogen equation of state, opacities) and [*does not tell us much about the formation route of these planets*]{}, at least not without more model-dependent analysis. ![The distribution of core and gas masses for planets in the inner 5 AU from population synthesis calculations of Nayakshin & Fletcher (2015). Note that the planets are either core-dominated with tiny atmospheres or gas giants. See §\[sec:atmo\] for more detail.[]{data-label="fig:atmo"}](Figs/mass_function_atmo-eps-converted-to.pdf){width="1\columnwidth"} Distribution of planets in the separation space {#sec:radial} =============================================== Period Valley of gas giants {#sec:Pvalley} --------------------------- The radial distribution of gas giant planets has a “period valley” at $0.1 < a < 1$ AU [@CummingEtal08], which was interpreted as a signature of protoplanetary disc dispersal by [@Alexander2012]. In their model, photo-evaporation removes disc gas most effectively from radii of $\sim 1-2$ AU for a Solar type star, hence creating a dip there in the surface density profile. Therefore, planets migrating from the outer disc into the sub-AU region may stall at $a\sim 1-2$ AU and thus pile up there. The period valley issue has not yet been studied in Tidal Downsizing, but preliminary conclusions are possible. The photo-evaporation driven process of stalling gas giant planets behind $\sim 1$ AU should operate for both planet formation scenarios because it has to do with the disc physics. However, the timing of gas giant planet formation is different in the two models. Core Accretion planets are born late in the disc life, when the disc has lost most of its mass through accretion onto the star. Tidal Downsizing fragments are hatched much earlier, when the disc is more massive.
Most Tidal Downsizing fragments hence migrate through the disc early on, well before the photo-evaporative mass loss becomes important for the disc. During these early phases the disc surface density profile does not have a noticeable depression at $\sim 1-2$ AU [see @Alexander2012]. Therefore, the photo-evaporative gap is probably not as efficient at imprinting itself onto the gas giant period or separation distribution in Tidal Downsizing as it is in Core Accretion. However, the exclusion zone boundary at $\sim 1$ to a few AU is a hot, metallicity-dependent filter for the gas giant planets (§\[sec:giants\_Z\] & \[sec:MM\_valley\]). Current population synthesis calculations in the Tidal Downsizing scenario show that the surface density of planets decreases somewhat at $\sim 1$ AU for all masses $M_{\rm p}\gtrsim 1{{\,{\rm M}_{\rm J}}}$ (cf. Fig. \[fig:scatter\]), and this effect is dominated by the tidal disruptions. The period valley should thus be stronger for metal poor hosts than for metal-rich ones in the Tidal Downsizing scenario. On the rarity of wide separation gas giants {#sec:wide} ------------------------------------------- Although there are some very well known examples of giant planets orbiting Solar type stars at separations of tens to $\sim 100$ AU, statistically there is a strong lack of gas giant planets observed at wide separations [e.g., @ViganEtal12; @ChauvinEtal15; @BowlerEtal15]. For example, [@BillerEtal13] find that no more than a few % of stars host $1-20{{\,{\rm M}_{\rm J}}}$ companions with separations in the range $10 - 150$ AU.
[[@GalicherEtal16] make the most definitive statement, finding that the fraction of gas giants beyond 10 AU is $\approx 1$%.]{} The HL Tau challenge {#sec:HLT} ==================== HL Tau is a young ($\sim 0.5-2$ Myr old) protostar that remains invisible in the optical due to obscuration on the line of sight, but is one of the brightest protoplanetary discs in terms of its millimetre radio emission [@AndrewsWilliams05; @KwonEtal11]. For this reason, the Atacama Large Millimetre/Submillimetre Array (ALMA) observed HL Tau as one of its first targets, in the science verification phase, with baselines as long as 15 km [@BroganEtal15]. This yielded a resolution as small as 3.5 AU at the distance of the source, and resulted in the first ever [*image*]{} of a planet-forming disc. The image of HL Tau shows a number of circular dark and bright rings in the dust emissivity of the disc. Such rings can be opened by embedded massive planets [e.g., @LinPap86; @RiceEtal06; @CridaEtal06]. Note that it is the dust emission that is observable in the radio continuum; the gas of the disc can only be traced by its CO and HCO$^+$ line emission. [@PinteEtal16] performed a detailed modelling of the dust component of the HL Tau disc assuming circular orbits for the gas. The well-defined circular gaps observed at all azimuthal angles (the HL Tau disc is inclined to the line of sight) imply that $\sim$ millimetre sized dust has settled into a geometrically thin, $H_{\rm dust}/R \sim 0.02$, disc. This is much thinner than the gas disc, which has $H/R \sim 0.1$ at these radii. The strong degree of grain settling sets an upper limit on the viscosity coefficient of the disc, requiring $\alpha \sim 3\times 10^{-4}$. The observed CO and HCO$^+$ line profiles constrain the protostar mass, $M_* =1.7 {{\,{\rm M}_{\odot}}}$.
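As a rough consistency check on the quoted resolution, one can evaluate the diffraction limit of the 15 km baseline. The observing wavelength ($\approx 1.3$ mm, ALMA Band 6) and the distance to HL Tau ($\approx 140$ pc) are assumed literature values, not taken from the text:

```python
# Diffraction-limited resolution of a 15 km baseline at ~1.3 mm
# (ALMA Band 6) and the physical scale at HL Tau. The wavelength
# and the distance (~140 pc) are assumed values, not from the text.
lam = 1.3e-3      # observing wavelength [m]
baseline = 15e3   # longest baseline [m], as quoted in the text
d_pc = 140.0      # assumed distance to HL Tau [pc]

theta_rad = lam / baseline              # ~ lambda / D
theta_arcsec = theta_rad * 206265.0     # rad -> arcsec

# By definition of the parsec, 1 arcsec at d pc subtends d AU:
scale_au = theta_arcsec * d_pc

print(f"~{theta_arcsec*1e3:.0f} mas, i.e. ~{scale_au:.1f} AU at HL Tau")
```

The simple $\lambda/D$ estimate gives $\sim 2.5$ AU; the conventional $1.22\,\lambda/D$ criterion, or a slightly longer observing wavelength, brings this to the $\sim 3.5$ AU figure quoted above.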
[@PinteEtal16] find a hotter gas disc than assumed by [@ZBB15], who argued that the observed rings are formed by grain condensation at the ice lines of abundant molecular species; at the higher temperatures the condensation fronts do not coincide with the positions of the gaps. The small but non-zero eccentricity of the rings and the surprisingly small magnitude of the disc viscosity, coupled with the irregular spacings of the rings, probably rule out Rossby wave instabilities or zonal flows [@PinillaEtal12] as possible origins of the rings, leaving planets as the most likely origin of the gaps [@BroganEtal15]. A number of authors performed detailed coupled gas-dust hydrodynamical simulations to try to determine the properties of planets that are able to open gaps similar to those observed in HL Tau [@DipierroEtal15; @JinEtal16; @PicognaK15; @DipierroEtal16a; @RosottiEtal16]. The main conclusion from this work is that the minimum planet mass to produce the observed signatures is $\sim 15 {{\,{\rm M}_{\oplus}}}$, while the maximum appears to be around $0.5 {{\,{\rm M}_{\rm J}}}$. [@DipierroEtal16a] find that the best match to the data is provided by planets of mass $M_{\rm p} \approx 20 {{\,{\rm M}_{\oplus}}}$, $30 {{\,{\rm M}_{\oplus}}}$ and $0.5{{\,{\rm M}_{\rm J}}}$ orbiting the star on orbits with semi-major axes of $a \approx 13$, 32 and 69 AU, respectively. These results challenge classical ideas of planet formation. It should take $\sim 100$ Myr to grow massive cores at tens of AU distances from the star via planetesimal accretion [e.g., @KobayashiEtal11; @KB15]. The presence of massive cores in a $\sim 1$ Myr old disc at $\sim 70$ AU is unexpected and also contradicts the metallicity correlation scenarios presented by [@IdaLin04a; @IdaLin04b; @MordasiniEtal09b; @MordasiniEtal12]. In those scenarios, core growth already takes $\sim$ 3-10 Million years at separations $a\lesssim 10$ AU, and should be much slower still at 70 AU.
Therefore, in the Core Accretion framework, the HL Tau observations strongly favour assembly of cores via pebble accretion [e.g., @OrmelKlahr10; @LambrechtsJ12; @JohansenEtal15b] rather than via the standard planetesimal accretion [@Safronov72]. Further, planets with masses greater than $10-15{{\,{\rm M}_{\oplus}}}$ should be accreting gas rapidly [e.g., @PollackEtal96]. The largest problem here is for the outermost planet, whose mass is estimated at $M_{\rm p} \sim 0.5{{\,{\rm M}_{\rm J}}}$. Such planets should be in the runaway accretion phase, where gas accretion is limited by the supply of gas from the disc [e.g., @HubickyjEtal05]. Using equation (34) of [@GoodmanTan04] to estimate the planet accretion rate, $\dot M_{\rm p} \sim \Sigma \Omega_K R_{\rm H}^2$, we find that $$\dot M_{\rm p} \sim 2\times 10^{-4} {{{\,{\rm M}_{\rm J}}}\over \text{yr}}\; {M_{\rm d} \over 0.03 {{\,{\rm M}_{\odot}}}} \left({M_{\rm p}\over 0.5 {{\,{\rm M}_{\rm J}}}}\right)^{2/3}. \label{mdotp}$$ On the other hand, the accretion rate onto the planet should not be much larger than $\sim M_{\rm p}/(1\,\text{Myr}) = 5 \times 10^{-7} {{\,{\rm M}_{\rm J}}}$/yr, where 1 Myr is the likely age of the planet. Thus the accretion rate onto the $a\sim 70$ AU planet must be much smaller than the classical planet assembly picture predicts [@PollackEtal96]. The classical Gravitational disc Instability model of planet formation also may not explain formation of the observed HL Tau planets, because the innermost planets are too close in and their masses are much too low to form by direct gravitational collapse. Tidal Downsizing predicts planets with the properties needed to understand the observations of HL Tau [@Nayakshin16a]. In §\[sec:feedback\] it was shown that massive cores, $M_{\rm core}\sim 10{{\,{\rm M}_{\oplus}}}$, release enough accretion energy to puff up the gas envelopes of $M_{\rm p}\sim 1 {{\,{\rm M}_{\rm J}}}$ pre-collapse gas fragments, and eventually destroy them.
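The tension in the accretion rates above can be made explicit by evaluating the quoted scaling for the fiducial parameters and comparing it with the age-limited rate; a minimal numerical sketch:

```python
# Evaluate the quoted runaway accretion scaling for the outermost
# HL Tau planet and compare with the rate allowed by its ~1 Myr age.
# Fiducial numbers (Md = 0.03 Msun, Mp = 0.5 MJ, age = 1 Myr) follow
# the text.
Md_msun = 0.03      # disc mass in solar masses
Mp_mj = 0.5         # planet mass in Jupiter masses
age_yr = 1.0e6      # assumed planet age in years

# dMp/dt ~ 2e-4 MJ/yr * (Md / 0.03 Msun) * (Mp / 0.5 MJ)^(2/3)
mdot_runaway = 2.0e-4 * (Md_msun / 0.03) * (Mp_mj / 0.5) ** (2.0 / 3.0)

# The time-averaged rate cannot much exceed Mp / age:
mdot_age = Mp_mj / age_yr    # Jupiter masses per year

print(f"runaway: {mdot_runaway:.1e} MJ/yr; age limit: {mdot_age:.1e} MJ/yr; "
      f"discrepancy factor ~ {mdot_runaway / mdot_age:.0f}")
```

For these fiducial values the supply-limited runaway rate over-predicts the age-averaged rate by a factor of several hundred, quantifying why the classical picture struggles with this planet.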
Population synthesis calculations show that massive cores located at distances of tens of AU from the host star are a very frequent outcome (cf. the right panel of Fig. \[fig:scatter\]), made even more frequent in realistic discs if dozens of fragments are born initially in their outskirts. The outermost planet in this picture has not yet (or will not) be disrupted because its core is not massive enough. It does not accrete gas, as explained in §\[sec:AorM\]. Kepler-444 and other highly dynamic systems {#sec:kepler444} =========================================== ![image](Figs/Kepler444_alpha1e2-eps-converted-to.pdf){width="0.95\columnwidth"} ![image](Figs/Kepler444_alpha1e4-eps-converted-to.pdf){width="0.95\columnwidth"} Kepler-444A is a solar type star with a mass of $M_A =(0.76\pm 0.03){{\,{\rm M}_{\odot}}}$, widely separated from a tightly bound pair of M dwarf stars B and C with almost equal masses, $M_B+M_C \approx (0.54\pm 0.05){{\,{\rm M}_{\odot}}}$ [@CampanteEtal15]. The upper limit on the separation of stars B & C is 0.3 AU. The projected current separation of A and the BC pair is $\approx 66$ AU. Star A has a very low metallicity, \[Fe/H\] $\approx -0.69\pm 0.09$, which means that the metal content of the disc around A should have been $ 10^{0.7} \approx 5$ times lower than in a Solar composition disc [@CampanteEtal15]. Kepler-444A is orbited by 5 rather small planetary companions at separations ranging from 0.04 AU to 0.08 AU, with planet radii ranging from $0.4 R_\oplus$ to $0.74 R_\oplus$. [@DupuyEtal16] were able to measure an unexpectedly small astrometric motion for the stellar system A-BC, suggesting that its orbit is very eccentric. They also measure a change in the radial velocity of the A-BC orbit, which allows the authors to constrain the orbit eccentricity as $e = 0.86\pm 0.02$. The pericentre separation of A-BC is only $a_{\rm peri} = 5\pm 1$ AU.
The orbital planes of the planetary system and the stellar components coincide to within a few degrees [@DupuyEtal16]. This high degree of orbital alignment argues against the pair BC being captured in some kind of an N-body encounter after the planetary system formation [@DupuyEtal16], and more likely means that the planets and the M dwarf pair were formed during a phase when a gas disc of some kind connected all the components of this puzzling system. The minimum mass of gas from which the $\approx 1.5 {{\,{\rm M}_{\oplus}}}$ worth of planets in the system were made is approximately 5 Jupiter masses for Kepler-444A. In this estimate it is assumed that the planets’ composition is Earth-like, given that small exoplanets observed within $0.1$ AU appear to be very dense [see @RappaportEtal13; @DressingEtal15 and discussion in §\[sec:core\_comp\]]. Assuming that “only” half of the refractories in the disc get locked into the observed planets, we require a disc of initial mass $M_{\rm min} = 10 {{\,{\rm M}_{\rm J}}}$ around Kepler-444A for the planets to be made. We can now discuss at what separation from the star these planets could have formed. Suppose that the disc size was $R$ at the time of planet formation. This yields the disc surface density, $\Sigma \sim M_{\rm min}/(\pi R^2)$, at that radius. Assuming a value for the disc viscosity coefficient $\alpha < 1$, we can then calculate the disc midplane temperature and other interesting parameters from the [@Shakura73] disc theory. Of particular interest are the disc accretion rate, $\dot M$, and the scale-height $H$. Knowing these two we can calculate the disc viscous timescale, $t_{\rm visc} = M_{\rm min}/{\dot M}$, and the type I migration time for the planets (eq. \[tmig1\]). Figure \[fig:Kep444\] presents two such calculations, for two different values of the viscosity parameter, $\alpha = 10^{-2}$ and $\alpha = 10^{-4}$, for the left and the right panels, respectively.
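The minimum disc mass estimate above can be reproduced with a back-of-the-envelope calculation. The solar refractory (rock/Fe) mass fraction of $\approx 0.5$% is an assumed round number, not taken from the text:

```python
# Minimum disc mass around Kepler-444A needed to build ~1.5 Earth
# masses of rocky planets. Assumed (not from the text): a solar
# composition disc carries ~0.5% of its mass in refractory solids.
m_planets = 1.5           # total planet mass [Earth masses]
z_rock_solar = 0.005      # assumed solar refractory mass fraction
depletion = 10.0 ** 0.69  # [Fe/H] = -0.69 -> ~5x fewer metals
efficiency = 0.5          # fraction of refractories locked into planets

z_rock = z_rock_solar / depletion                 # ~1e-3
m_disc_mearth = m_planets / z_rock / efficiency   # gas mass needed
m_disc_mjup = m_disc_mearth / 318.0               # 1 MJ ~ 318 MEarth

print(f"minimum disc mass ~ {m_disc_mjup:.0f} MJ")
```

Under these assumptions the result is $\approx 9$ Jupiter masses, consistent with the $M_{\rm min} = 10 {{\,{\rm M}_{\rm J}}}$ adopted above.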
The solid blue, the dashed red and the green curves show the disc midplane temperature, the viscous and the (smallest) planet migration time scales, respectively, all as functions of distance $R$ from star A. In situ formation at $a\sim 0.04-0.1$ AU {#sec:Kep444_insitu} ---------------------------------------- The most obvious conclusion is that the Kepler-444 planets could not have formed in situ, as the gas would simply be too hot. $10{{\,{\rm M}_{\rm J}}}$ of gas at radii $R\lesssim 0.1$ AU yields a very large disc surface density, $\gtrsim 5 \times 10^6$ g cm$^{-2}$. This is larger than the disc surface density at which hydrogen in the disc must transition to the fully ionised state, that is, the upper branch of the well known “S-curve” for the disc [@Bell94; @LodatoClarke04 see point A in Fig. 1 of the latter paper], even for $\alpha$ as small as $10^{-4}$. In fact, with opacities from [@ZhuEtal09] that include more grain species than the [@Bell94] opacities did, the disc is even hotter, and so I find the transition to the unstable branch at somewhat lower $\Sigma$ than given by eq. 6 in [@LodatoClarke04]. As is well known from previous work, such values of $\Sigma$ would result in FU Ori like outbursts [see §\[sec:rapid\] and \[sec:2nd\_dis\] and @HK96; @ArmitageEtal01], during which even the surface layers of the disc are [*observed*]{} to be as hot as $\sim (2-5) \times 10^3$ K out to radii of $\sim 0.5-1$ AU [@EisnerH11]. In fact, time-dependent models of discs push the disc onto the very hot branch at an order of magnitude lower values of the disc surface density [see figs. 13-16 in @NayakshinLodato12]. At disc midplane temperatures as high as $10^5$ K or more, not only grains but even km-sized or larger planetesimals will not survive for long[^8]. [@ChiangLaughlin13] propose that super-Earth mass planets orbiting their host stars at separations $a$ as small as $0.1$ AU formed in situ.
However, [@ChiangLaughlin13] assume that the disc midplane temperature is 1000 K. Here, the accretion disc theory was used to evaluate the temperature for the required $\Sigma$, and it is concluded that not only dust but also planetesimals would be vaporised rapidly in the inner sub-AU region of Kepler-444. The nearly isothermal $T\sim 10^3$ K zone to which [@ChiangLaughlin13] appeal, based on the work of [@DAlessioEtal01], only exists for disc surface densities smaller than those needed for in-situ planet assembly inside $0.1$ AU by 2-3 [*orders of magnitude*]{} (see figs. 3-5 in the quoted paper). Forming the planets in a few AU disc {#sec:Kep444_few_au} ------------------------------------ We now assume that the Kepler-444 planets must have migrated in from further out. Let us try to estimate the minimum radius beyond which they could have formed. We have the usual constraint that the disc must be cooler than about 1500 K. In addition, the outer radius of the disc would have been truncated by the tidal torques from the Kepler-444BC pair, so that the outer radius of the disc, $R_{\rm out}$, is likely to be between 1 and 2 AU [@DupuyEtal16]. The vertical dot-dash line in Fig. \[fig:Kep444\] shows the $R_{\rm out} = 2$ AU constraint. This introduces two additional constraints: (1) the disc must be cold enough for dust coagulation [*within*]{} $R_{\rm out}$, and (2) the planet migration time to their final positions should be shorter than the disc viscous time. Since the disc has a finite extent, there is a finite amount of mass, and once that gas accretes onto Kepler-444A there is no more disc to keep pushing the planets in. For the second constraint, it is the least massive planet, Kepler-444b, the innermost one at $a=0.04$ AU with planet radius $R_{\rm p} =0.4 R_\oplus$, that places the tightest constraint, since the type I migration timescale is $\propto M_{\rm p}^{-1}$ (eq. \[tmig1\]).
The planet radius is just $\sim 5$% larger than that of Mercury, whose mass is $M_{\rm p} = 0.055 {{\,{\rm M}_{\oplus}}}$. I therefore estimate the Kepler-444b mass as $M_{\rm p} = 0.07{{\,{\rm M}_{\oplus}}}$. Focusing first on the larger $\alpha$ case, the left panel of Fig. \[fig:Kep444\], we note that the disc is too hot in the inner few AU to allow grains of any composition to get locked into larger objects. Furthermore, even if it were possible to form Kepler-444b in such a disc, the planet migration time is $\gtrsim 10^6$ years whereas the disc viscous time is just thousands of years or less [again, recall that such high values of $\Sigma$ are above those needed to power FU Ori outbursts, which are known to wane rapidly by dumping most of the disc mass onto the star; see @LodatoClarke04]. Therefore, values of $\alpha$ as large as $10^{-2}$ are ruled out for the Kepler-444 planetary system. Shifting the focus to the right panel of Fig. \[fig:Kep444\] now, the situation is somewhat better for $\alpha=10^{-4}$, but $t_{\rm visc}$ is still shorter than the migration time for Kepler-444b by more than an order of magnitude. Continuing the game of lowering $\alpha$, it is found that a value of $\alpha \lesssim 3\times 10^{-5}$ finally satisfies both constraints (1) and (2). Unfortunately, such a low viscosity parameter is not expected for discs hotter than about 800-1000 K, because the ionisation degree of the gas becomes sufficiently high [@Gammie96; @ArmitageEtal01] that the disc becomes MRI-active. Observations of Dwarf Novae systems show that $\alpha \gtrsim 0.1$ in the ionised state; even in quiescence, when H$_2$ molecules dominate the disc, the inferred values are $\alpha\gtrsim 0.01$ [see @KingLLP13]. The corresponding region where the disc could be sufficiently cold to be “dead” is $R \gtrsim 2$ AU, clashing with condition (2).
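The Mercury-based mass estimate at the start of this subsection can be checked with a constant-density scaling. Mercury's radius, $0.383\,R_\oplus$, is an assumed literature value:

```python
# Kepler-444b mass from its radius (0.40 R_Earth, from the text) by
# scaling from Mercury at constant mean density (M ~ R^3).
# Mercury's radius, 0.383 R_Earth, is an assumed literature value.
r_p = 0.40         # Kepler-444b radius [Earth radii]
r_mercury = 0.383  # assumed Mercury radius [Earth radii]
m_mercury = 0.055  # Mercury mass [Earth masses]

m_p = m_mercury * (r_p / r_mercury) ** 3
print(f"Kepler-444b mass ~ {m_p:.3f} Earth masses")
```

The constant-density scaling gives $\approx 0.063\,{{\,{\rm M}_{\oplus}}}$, slightly below the adopted $0.07\,{{\,{\rm M}_{\oplus}}}$; the difference is absorbed by the mild increase of mean density with mass for rocky bodies.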
Therefore, there appears to be no corner of the parameter space of $\alpha$ and $R < R_{\rm out}$ that would satisfy all the observational and physical constraints on the formation of the Kepler-444 planets. A TD model for Kepler-444 system {#sec:Kep444_TD} -------------------------------- Clearly, a detailed 3D simulation is desirable to study any formation scenario of this highly dynamic system. In the absence of such, any preliminary formation scenario that does not appear to contradict basic physics of star and planet formation is still a step in the right direction. Stars grow by gas accretion onto first cores, the first hydrostatic condensations of gas that form when the parent molecular cloud collapses [@Larson69 see also §\[sec:term\]]. First cores start off being as large as $\sim 10$ AU, and contract as they accrete more gas. This large initial size of the first cores suggests that the A – BC system is unlikely to have formed on its present orbit, because the peri-centre of the orbit is just 5 AU. More likely, the parent gas reservoir from which the triple star system formed had a strong $m=2$ perturbation [’bar type’ in terminology of @MH03], which is best described as a filament. Filaments are observed in collapsing molecular clouds, see, e.g., [@HacarT11]. For Kepler-444, the two main self-gravitating centres corresponding to A and BC could have formed on opposing sides of the filament/bar, roughly at the same time. They were probably separated initially by $R_{\rm bin,0}\sim 10^3$ AU or more. With time these two self-gravitating centres coalesce as the filament collapses along its length. Dissipation and accretion of gas onto the growing proto-stars shrinks the binary [e.g., @BateB97] on the timescale of a few free fall times from $R_{\rm bin,0}$, $t_{\rm ff} \sim R_{\rm bin,0}^{3/2}/(GM_{444})^{1/2} \sim 5\times 10^3 (R_{\rm bin,0}/1000\,{\rm AU})^{3/2}$ years, where $M_{444} = 1.3{{\,{\rm M}_{\odot}}}$.
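The free-fall timescale quoted above can be evaluated directly; a minimal sketch in cgs units:

```python
# Free-fall time from R_bin,0 ~ 1000 AU for total mass 1.3 Msun,
# using the t_ff ~ R^(3/2) / (G M)^(1/2) scaling quoted above.
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33     # solar mass [g]
AU = 1.496e13       # astronomical unit [cm]
YR = 3.156e7        # year [s]

M444 = 1.3 * MSUN
R0 = 1000.0 * AU

t_ff = R0 ** 1.5 / math.sqrt(G * M444) / YR
print(f"t_ff ~ {t_ff:.1e} yr")
```

This returns $\approx 4\times 10^3$ yr, confirming the $\sim 5\times 10^3$ yr order-of-magnitude figure.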
This means that for some $10^4$ years the systems A and BC evolve independently, accreting gas mainly from their immediate environment rather than exchanging it. If star A possessed a disc larger than $\sim 30$ AU, the disc may have fragmented into multiple fragments. Migration of gas fragments from those distances would take only $\sim 1000$ years in a strongly self-gravitating disc (§\[sec:rapid\]). The fragments are presumably disrupted in the inner disc and leave behind their low mass cores – ready made planets Kepler-444b through Kepler-444f. When the filament collapses, and the configuration of the A-BC system becomes comparable to the current one, the planets are already in the inner $\sim 1$ AU region from star A. Their eccentricities are pumped up every time BC passes its pericentre, but the gas disc acts to damp their eccentricities and in doing so forces the planets to migrate in faster than the type I rate. The eccentricity damping time scale for type I migrating planets is known to be shorter by as much as a factor of $(H/R)^2$ than the canonical migration time scale for circular orbits [e.g., @BitschKley10]. This mechanism may perhaps bring the planets to their current location faster than the disc would dissipate. Note that the eccentricity pumping migration scenario proposed here would not work for classical Core Accretion cores, because core growth by planetesimal accretion would be too slow on the eccentric orbits. The Solar System {#sec:SS} ================ In §\[sec:SS\_basic\], a schematic model for the formation of the Solar System (SS) was presented. The main difference of the Solar System from many of the exoplanetary systems observed to date, many of which have very close-in planets, is that the Solar System protoplanetary disc should have been removed before the planets had time to migrate closer to the Sun.
Rotation of planets {#sec:SS_rotation} ------------------- \[sec:CA\_spin\] Five out of the eight Solar System planets rotate rapidly in the prograde fashion, that is, in the direction of their revolution around the Sun (the Sun spins in the same direction too). The spins of the two inner planets, Mercury and Venus, are thought to have been strongly affected by tidal interactions with the Sun. Another exception to the prograde rotation is Uranus, with its spin inclined at more than $90^\circ$ to the Sun’s rotational axis. Therefore, of the six major planets not strongly affected by the Solar tides, the only exception to the prograde rotation is Uranus. The planets spin with periods of between about half a day and a day. The origin of these large and coherent planetary spins is difficult to understand [e.g., @LissauerKary91; @DonesTremaine93] in the context of the classical Earth assembly model [e.g., @Wetherill90]. A planet accreting planetesimals should receive similar amounts of positive and negative angular momentum [@Giuli68; @Harris77]. For this reason, the large spins of the Earth and Mars are most naturally explained by one or a few “giant” planetesimal impacts [@DonesTremaine93]. The impacts would have to be very specially oriented to give the Earth and Mars similar spin directions, also consistent with that of the Sun. [@JohansenLacerda10] show that accretion of pebbles from the disc onto bodies larger than $\sim 100$ km tends to spin them up in the prograde direction. Provided that planets accreted $\sim 10-50$% of their mass via pebble accretion, their spin rates and directions are then as observed. In the case of the Earth, a giant impact with the right direction is still needed to explain the Earth-Moon system angular momentum. In Tidal Downsizing, gas clumps formed in 3D simulations of fragmenting discs rotate in the prograde direction [@BoleyEtal10; @Nayakshin11b].
Massive cores formed inside the clumps would inherit the rotational direction of the parent. An exceptional direction of planetary spin, such as that of Uranus, may arise if the host fragment interacted with another fragment and was spun up in a non-prograde direction during the interaction. Such interactions do occur in 3D simulations [e.g., there were a number of such interactions in simulations presented in @ChaNayakshin11a].

The Moon {#sec:Moon}
--------

The Moon is thought to have formed due to a giant impact of a large solid body on the Earth [@HartmannDavis75; @CA01]. However, Earth-Moon compositional constraints present a very tough challenge. In Core Accretion, the composition of planetesimals changes as a function of distance from the Sun, so Theia (the impactor) is expected to have a similar yet somewhat different composition from the proto-Earth. However, the Moon and the Earth have not just similar but indistinguishable isotopic compositions for oxygen [@WiechertEtal01], and very close isotopic ratios for chromium [@LugShu98], silicon [@GeorgEtal07] and tungsten [@TouboulEtal07]. This motivated suggestions of complicated and highly efficient mixing processes during the Earth-Theia collision [@PahlevanStevenson07]. Numerical simulations of giant impacts indicate that the Moon would have been made mainly of the impactor [$\sim$ 80%, see @Canup08]. The situation has not been improved by the use of much more sophisticated numerical simulation methods [see @HosonoEtal16]. In the framework of Tidal Downsizing, (a) assembly of the Earth and the Moon in the centre of the same parent gas clump may account for the nearly identical isotope compositions, and (b) the prograde orientation of the Earth-Moon angular momentum is a record of the prograde rotation of its parent gas clump [@Nayakshin11a].
Satellites of giant planets {#sec:SS_sat_giants}
---------------------------

In the Solar System, giant planets have many satellites, while terrestrial planets, with the exception of the Earth-Moon system, have no significant satellites to speak of. This is usually interpreted as evidence of satellite assembly in a circum-planetary disc that surrounded the giant planets during their formation. Circum-planetary discs also form in Tidal Downsizing, after the second collapse of the rotating parent gas fragment [@GalvagniEtal12]. 3D numerical simulations of these authors show that the central hydrostatic core (accounting for only $\sim 50$% of the total fragment mass) is initially surrounded by a thick gas disc. These circum-planetary discs may form the satellites via collapse of the grain component rather than the H/He phase. The satellites made in this way would be “regular”, i.e., orbiting the planet in the same sense as the planet’s spin. Irregular satellites may be those solid bodies that orbited the solid core before the gas envelope of the parent gas fragment was destroyed. When the envelope is removed, the bodies that are weakly bound to the core acquire much more irregular orbits [@NayakshinCha12].

Bulk composition of planets {#sec:SS_comp1}
---------------------------

As explained in §\[sec:composition\], due to the high temperature ($T\gtrsim 500$ K or so) in the centres of the host gas fragments, water ice and organic grains are not likely to sediment all the way into the centre of gas fragments and get locked into the core [@HelledEtal08; @HS08]. This means that cores made by Tidal Downsizing are dominated by rocks and Fe [@ForganRice13b; @NayakshinFletcher15]. This prediction is consistent with the rock-dominated composition of the inner four planets in the SS. In [@Nayakshin14b] it has been additionally shown that the mechanical strength of grains may also regulate which grains get locked into the core first.
In this model, proposed to explain the observed Fe-dominated composition of Mercury [@PeplowskiEtal11; @SmithEtal12a], Fe grains sediment before the silicates because their mechanical strength is higher, so that their settling velocity is larger. Most of the silicates remain suspended in the gas in the form of small grains, and are removed with the envelope when the parent gas fragment of Mercury is disrupted. The cores of the Solar System giants Neptune and Uranus are often considered to be icy. However, as shown by [@HelledEtal10], current observations and theoretical calculations of the structure of these two planets do not constrain the core composition (or even its mass) uniquely. Models in which the cores contain only rock or only ice both produce reasonable fits to the data with slightly different fractions of mass in hydrogen and helium (cf. §\[sec:core\_comp\]). The fact that the gas giant planets Saturn and Jupiter are over-abundant in metals compared to the Sun, containing $\sim 30-40 {{\,{\rm M}_{\oplus}}}$ of solids, is well known [@Guillot05]. The Tidal Downsizing scenario is consistent with this result (see §\[sec:Zpl\_giants\]), predicting similar amounts of solids inside gas giant planets of Saturn and Jupiter masses (see Fig. \[fig:Zpl\]).

The Asteroid and the Kuiper belts {#sec:SS_belts}
---------------------------------

In the context of Tidal Downsizing, planetesimals are born inside pre-collapse gas fragments [§\[sec:hier\] and \[sec:planetesimals\], and @NayakshinCha12], and are released into the disc when these fragments are disrupted. [@NayakshinCha12] suggested that this model may explain (a) the eccentricity versus semi-major axis correlation for the classical Kuiper Belt objects; (b) the presence of two distinct populations in the belt; (c) the sharp outer edge of the Kuiper belt.
In addition, as is well known, $\sim 99.9$% of the initial planetesimals are required to have been removed from the Kuiper belt [@PfalznerEtal15] in order to reconcile its current small mass with the existence of bodies as large as Pluto. In Tidal Downsizing, however, massive bodies are assembled inside the environment of a gas fragment, not a disc, so this “mass deficit” problem of the Kuiper belt does not apply. For the asteroid belt, Tidal Downsizing correctly predicts its location (see eq. \[aex1\]). Additionally, asteroids are observed to have orbital eccentricities $e\sim 0.1$ and inclinations of 10-20$^\circ$. Tidal disruption of a Jupiter mass gas fragment naturally creates orbits with such properties simply because the size of the Hill radius is $\sim 0.1$ of the orbital separation at the point of the fragment disruption [@NayakshinCha12]. Since the asteroids result from disruptions in the inner few AU of the Solar System, their host fragments must have been rather dense and therefore hot, with gas temperatures likely exceeding $\sim 1000$ K. This predicts a refractory composition for both planetary cores and the asteroids. On the other hand, asteroids on orbits beyond the snow line could have accreted water and other volatiles on their surfaces by sweeping the latter up inside the disc, although the efficiency of this process needs to be clarified. Kuiper belt objects (KBOs) would result from tidal disruption of more extended and therefore cooler parent fragments. Volatiles (CHON) may then be available for contributing material to building large solid bodies, so Kuiper belt objects made by Tidal Downsizing may contain a larger fraction of ices and volatiles than the asteroids. The NICE model for the Solar System architecture [e.g., @GomesEtal05; @TsiganisEtal05] has been very successful, especially in its outer reaches [@Morbidelli10].
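The $\sim 0.1$ Hill-radius fraction invoked above for a Jupiter-mass fragment can be checked with a one-line estimate, $R_{\rm H}/a = (M_{\rm p}/3M_*)^{1/3}$; a minimal sketch, assuming a solar-mass host:

```python
# Sketch: Hill radius of a fragment as a fraction of its orbital separation,
# R_H / a = (M_p / (3 M_*))**(1/3). For a Jupiter-mass fragment around a
# solar-mass star this gives ~0.07, of order the ~0.1 quoted in the text.

def hill_fraction(mass_ratio: float) -> float:
    """R_H/a for a planet-to-star mass ratio q = M_p/M_*."""
    return (mass_ratio / 3.0) ** (1.0 / 3.0)

q_jupiter = 9.55e-4  # Jupiter-to-Sun mass ratio
print(f"R_H/a = {hill_fraction(q_jupiter):.3f}")
```

A fragment a few times more massive than Jupiter pushes this fraction towards the quoted $\sim 0.1$, consistent with the observed asteroid eccentricities.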
The model is based on the Core Accretion ideas, in particular on the presence of a massive Kuiper belt that drives migration of Neptune and Uranus. Without detailed calculations it is difficult to assess whether a similarly successful theory of the Solar System structure could be built starting from the end product of a Tidal Downsizing phase. This is a wide open issue.

Timing of planet and planetesimal formation {#sec:pl_ages}
-------------------------------------------

The inner terrestrial planets are usually believed to have grown in a gas-free environment because their formation ages are found to be tens of millions of years after the formation of the Sun. For example, the age of the Earth is estimated at between $\sim 30$ and $\sim 100$ million years from Hf-W and U-Pb chronometry [e.g., @Patterson56; @KoenigEtal11; @RudgeEtal10]. If this is true, then a Tidal Downsizing origin for the Earth is ruled out, since the Earth is nearly coeval with the Sun in this scenario. However, terrestrial samples provide us with information about only the upper hundreds of km of the Earth. It may well be that the bulk of the planet, that is, $\sim 99\%$ of the mass, is significantly older than the Earth’s surface. In support of this, recent research [e.g., @BallhausEtal13] indicates that the Earth accreted lots of volatiles tens of millions of years after the core formation, suggesting that the U-Pb system of the Earth’s silicate mantle has little chronological significance [e.g., §2.5 in @PfalznerEtal15]. Measured “formation ages” for the other planets and the Moon suffer from similar uncertainties in their interpretation.

Discussion {#sec:discussion}
==========

Tidal Downsizing, summary of outcomes {#sec:exo_basic}
-------------------------------------

![image](Figs/TD_outcomes_sketch5.pdf){width="95.00000%"}

In the top right corner of the figure, the main object of Tidal Downsizing, a pre-collapse gas clump with an ongoing grain sedimentation and core formation, is shown.
The two arrows pointing away from the clump show the first important bifurcation in the fate of the clump.

(2A) [*A gas giant planet*]{} (green arrows in the sketch). If the inward radial migration of the fragment is slower than planet contraction, and if the core feedback is sufficiently weak, the fragment contracts and survives as a gas giant planet. Usually, this requires the core mass to be below a Super Earth mass ($\lesssim 5 {{\,{\rm M}_{\oplus}}}$, §\[sec:feedback\]). Planet migration may bring the planet arbitrarily close to the host star, including plunging it into the star. No debris ring of planetesimals is created from this clump since it is not disrupted.

(2B) [*A low mass solid core planet*]{}, $M_{\rm p}\lesssim$ a few ${{\,{\rm M}_{\oplus}}}$ (red arrows). Similar to the above, but the fragment migrates in more rapidly than it can collapse. In this case it fills its Roche lobe somewhat outside the exclusion zone boundary and is tidally disrupted. This results, simultaneously, in the production of a small rocky planet and an Asteroid belt like debris ring at a few AU distance from the host star.

(2C) [*A high mass solid core planet*]{}. If the fragment is able to make a massive solid core, $M_{\rm core}\gtrsim 5-10{{\,{\rm M}_{\oplus}}}$, its feedback on the fragment may unbind the fragment at separations as large as tens of AU. This process is shown with the blue arrow and leaves behind the massive core, plus a Kuiper-belt like debris ring.

All of the planets and even stars so created may continue to migrate in, as shown by the black open arrow on the bottom right of the sketch, until the disc is finally removed. Note that a much more massive disc is needed to move a brown dwarf or a star into the inner disc region than to move a planet.
Because very massive gas discs cannot be very common, this predicts that brown dwarfs and stellar mass companions are more likely to be found at large (tens of AU or more) separations, whereas gas giant planets are more likely to migrate closer in to the host star.

Observations to test this scenario
----------------------------------

Dozens of independent numerical simulations (§\[sec:rapid\]) show that Jupiter mass planets migrate from $\sim 100$ AU into the inner $\sim 10$ AU or less in about 10,000 years or even less. Therefore, the popular idea [e.g., @Boley09] of dividing the observed population of planets into “made by Core Accretion” (inside the inner tens of AU) and “made by Gravitational Instability” (outside this region) is not physically viable. Based on the rapid migration speeds found in the simulations, a giant planet observed at $\sim 0.1$ AU is as likely to have migrated there from a few AU as from $100$ AU. Likewise, due to tidal disruptions, Tidal Downsizing produces a plentiful supply of core-dominated planets, many of which may end up at the same distances as are normally reserved for the Core Accretion planets. We thus need to be crystal clear on which observables can be used to differentiate between the two scenarios and which are actually less discriminating than previously thought.

### Similarities between the two scenarios

The observed planets naturally divide into two main groups – those dominated by solid cores, usually below a mass of $\sim 20{{\,{\rm M}_{\oplus}}}$, and those dominated by gas, usually more massive than Saturn ($\sim 100 {{\,{\rm M}_{\oplus}}}$). This has been interpreted as evidence for runaway gas accretion [e.g., @MordasiniEtal09b; @MayorEtal11] above the critical mass for the core-nucleated instability [@Mizuno80; @Stevenson82; @Rafikov06]. However, a similar bi-modality of planets is found in Tidal Downsizing (Fig. \[fig:atmo\]).
When the parent gas fragment is disrupted, the mass of the gas remaining bound to the core is usually a small fraction of the core mass, for reasons quite analogous to those of Core Accretion (§\[sec:atm\]). This implies that the observed dichotomy of planets may be driven by the fundamental properties of matter (equation of state and opacities) rather than by how the planets are made. The bulk composition of planets is another example where the predictions of the two theories are not so different. In Core Accretion, the more massive the planet is, the smaller the fraction of the total planet mass made up by the core. This may account for the observed over-abundance of metals decreasing with the planet mass [@MillerFortney11]. In Tidal Downsizing, the more massive the gas giant is, the smaller the “pebble accretion boost” needed for it to collapse, and this may also account for the observations (see Fig. \[fig:Zpl\] & §\[sec:Zpl\_giants\]). The strong preference amongst gas giants for orbiting metal rich rather than metal poor hosts is well known [e.g., @Gonzalez99; @FischerValenti05; @SanterneEtal15], and is normally attributed to the more rapid assembly of massive cores in metal rich discs [@IdaLin04b; @MordasiniEtal09b]. However, if gas giants collapse due to “metal loading” [@Nayakshin15a] rather than due to the classical radiative collapse [@Bodenheimer74], then the frequency of their survival is also a strong function of the host disc metallicity [@Nayakshin15b; @NayakshinFletcher15]. These observations therefore cannot be claimed to support either of the two planet formation scenarios over the other.

### Observable differences between the theories

Tidal Downsizing, however, predicts that beyond the exclusion zone at $a\sim$ a few AU, there should be no correlation between gas giant presence and the host star metallicity, because the tidal disruption “filter” does not apply, or at least applies less strongly, there (§\[sec:cold\_giants\_Z\]).
Observations [@AdibekyanEtal13] have started to probe the few-AU region of the parameter space, and there is a hint that this prediction is supported by the data [@AdibekyanEtal15 see also Fig. \[fig:Vardan\]], but more observations are needed. Similarly, planets more massive than $\sim 5-10{{\,{\rm M}_{\rm J}}}$ and brown dwarfs should not correlate with the metallicity of the host in the Tidal Downsizing model (§\[sec:transition\]), whatever the separation from the star. Currently, this prediction is clearly supported by observations of brown dwarfs and low mass stellar companions to stars [@RaghavanEtal10; @TroupEtal16], but the transition region between planets and brown dwarfs is not well studied. Massive gas giant planets do appear to become less sensitive to the host metallicity above the mass of $5{{\,{\rm M}_{\rm J}}}$ (§\[sec:massive\_giants\_Z\] and Fig. \[fig:Z\_massive\]), but more data are desirable to improve the statistics. At the lower mass end, there are differences between the models too. In the framework of Tidal Downsizing, planetary debris is only made when the gas clumps – the future gas giant planets – are disrupted (see §\[sec:planetesimals\] & \[sec:hier\]). Since tidal disruption of the clumps anti-correlates with the host metallicity, as explained above, no simple correlation between the debris disc presence and host \[M/H\] is predicted [@FletcherNayakshin16a]. Secondary predictions of this picture (see §\[sec:Z\_debris\]) include a possible correlation of the debris disc presence with that of a sub-Saturn planet (that is, any downsized planet), and an anti-correlation with the presence of gas giant planets. [Further, post-collapse planets are too hot to permit the existence of asteroid or comet like debris inside of them. Pre-collapse planets are disrupted no closer than the exclusion zone, as mentioned above, so that debris belts made by Tidal Downsizing must never be closer than $\sim 1$ AU to the host solar type star.
This is different from Core Accretion, where planetesimals are postulated to exist as close as $\sim 0.1$ AU from the host star [e.g., @ChiangLaughlin13]. [@KenyonEtal16] identify the very low frequency ($\sim 2-3$%) of observed [*warm*]{} debris discs among young systems as a significant puzzle for Core Accretion, and offer a solution. Another difference is the likely much smaller mass of the debris rings made by Tidal Downsizing, and their significant birth eccentricities [up to $e\sim 0.1$; @NayakshinCha12].]{} For cores, the host star metallicity correlation is predicted to depend on the core mass in Tidal Downsizing. Low mass cores, $M_{\rm core} \lesssim$ a few ${{\,{\rm M}_{\oplus}}}$, are most abundant around low metallicity hosts because of the already mentioned tendency of the parent gas clumps to be disrupted more frequently at low metallicities. High mass cores, on the other hand, are mainly made in disruptions of gas clumps made by metal-rich discs [e.g., see the black curve in Fig. 3 in @FletcherNayakshin16a]. Therefore cores more massive than $\sim 10-15{{\,{\rm M}_{\oplus}}}$ are likely to correlate with the metallicity of the host. For a broad range of core masses, one gets no strong correlation with \[M/H\], somewhat as observed [@NayakshinFletcher15]. Future observations and modelling of core correlations with the metallicity of the host are a sensitive probe of the two planet formation scenarios. While some of the Core Accretion population synthesis models also predict no strong correlation between core-dominated planets and the host star metallicity [e.g., @MordasiniEtal09b], the degeneracy between the two models may be broken in two areas. Tidal Downsizing predicts that massive core formation is a very rapid process, even at $\sim 100$ AU, requiring less than $\sim 10^5$ years [@Nayakshin16a], whereas Core Accretion takes $\sim 1-3$ million years even at distances $a\lesssim 10$ AU.
ALMA observations of protoplanetary discs such as HL Tau (§\[sec:HLT\]), showing signs of very early planet formation, are key to constraining the timing of massive core growth and are a challenge to the classical version of Core Accretion.[^9] Another area where the two models differ is the expected core composition. Core Accretion predicts that ices may be the dominant contributor to the mass budget of massive cores [@PollackEtal96]. While these cores would form beyond the snow line, many would migrate all the way into the inner tenths of an AU region that is accessible to modern observations [e.g., see Fig. A1 in @ColemanNelson16]. Tidal Downsizing predicts that ices and organics are less likely than silicates to contribute to making planetary cores, because the ices and organics are too volatile to sediment into the centres of hot pre-collapse fragments [@HS08; @HelledEtal08 also §\[sec:composition\]]. Cores that are further than $\sim 0.1$ AU from their hosts, including the Solar System giants, do not present us with a clean composition test because their mass-radius relation is degenerate due to the unknown H/He mass fraction [e.g., see §5.1.2 in @HelledEtal13a]. However, moderately massive cores [$M_{\rm core}\lesssim 7{{\,{\rm M}_{\oplus}}}$, see @OwenWu13] lose their H/He envelopes due to photo-evaporation at separations less than $\sim 0.1$ AU. It is thus sensible to concentrate on these close-in cores when pitting Tidal Downsizing against Core Accretion. The close-in cores are (so far) observed to have a rocky Earth-like composition (§\[sec:core\_comp\]), but the current data are still scarce. Observations show a strong roll-over in the frequency of planets more massive than $\sim 20 {{\,{\rm M}_{\oplus}}}$ [@MayorEtal11] or larger than $\sim 4 R_\oplus$ [@HowardEtal12]. Building solid cores via accretion of planetesimals or via giant impacts has no obvious limit in this mass range except for the runaway gas accretion [@PollackEtal96; @MordasiniEtal09b].
This scenario should, however, not apply to metal-poor systems: if these are made in gas-free discs [@IdaLin04b], then their cores should be free to grow more massive than $M_{\rm crit}$. Very massive solid cores are, however, not observed around metal-poor stars. In Tidal Downsizing, the drop above the mass of $\sim 20 {{\,{\rm M}_{\oplus}}}$ may be due to the strong feedback unleashed by the massive cores onto their host gas fragments (§\[sec:feedback\] and Fig. \[fig:pmf\_fb\]). This mechanism should affect both metal rich and metal poor systems. Observations of stars more massive than the Sun may be helpful here, as these are expected to have more massive discs [@MordasiniEtal12], and thus their cores should be more massive if made by Core Accretion but not if made by Tidal Downsizing. Finally, planet formation in extreme systems such as binaries is a very tough test for any planet formation scenario. Kepler-444 may be an example of a system where the observed planets could not have been made by Core Accretion, as argued in §\[sec:kepler444\], due to the inner disc being both too hot to make the planets in situ, and yet not long lived enough to move them into place if made further out. However, it remains to be seen whether detailed simulations in the framework of Tidal Downsizing could produce such an extreme planetary system.

Open issues {#sec:dis_ass}
-----------

The population synthesis model of [@NayakshinFletcher15] assumes, for simplicity, that gas fragments evolve at a constant [*gas*]{} mass until they are disrupted or they collapse. The disruption is assumed to remove all of the gas envelope except for the dense layers of gas strongly bound to the core, the core atmosphere (§\[sec:atm\]). This is based on the fact that a polytropic gas clump with index $n=5/2$ is strongly unstable to the removal of mass, as it expands as $R_{\rm p}\propto M_{\rm p}^{-3}$ when the mass is lost.
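The quoted $R_{\rm p}\propto M_{\rm p}^{-3}$ scaling follows from the polytropic mass-radius relation $R\propto M^{(1-n)/(3-n)}$; a minimal check:

```python
# Check of the polytropic mass-radius scaling R ∝ M^x with x = (1-n)/(3-n):
# for the n = 5/2 clump discussed in the text, x = -3, so a clump losing
# mass expands, overflows its Roche lobe further, and the disruption runs away.

def mass_radius_exponent(n: float) -> float:
    """Exponent x in R ∝ M^x for a polytrope of index n (n != 3)."""
    return (1.0 - n) / (3.0 - n)

print(mass_radius_exponent(2.5))  # -3.0 for n = 5/2
print(mass_radius_exponent(1.5))  # -1/3: an n = 3/2 clump is far less unstable
```

The negative exponent is what makes the mass loss runaway: removing gas increases the radius, which promotes further Roche lobe overflow.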
Within these assumptions, the model requires gas clumps with the minimum initial mass min\[$M_{\rm in}$\]$\sim (0.5-1){{\,{\rm M}_{\rm J}}}$ to account for the observed gas giant planets, many of which have masses around that of Jupiter or less. This is somewhat uncomfortable, since most authors [e.g., @ForganRice13 and §\[sec:AorM\]] find that the minimum initial mass of a gas clump born by gravitational instability of a protoplanetary disc is $M_{\rm in} \sim 3-10{{\,{\rm M}_{\rm J}}}$, and that gas clumps may accrete more gas [e.g., @KratterEtal10]. This important disagreement needs to be investigated with 3D numerical simulations of both fragmenting discs and individual gas clumps. Similarly, 3D numerical simulations of gas fragment collapse are needed to ascertain the angular momentum evolution of gas clumps, which is of course not resolved in the current 1D population synthesis. This evolution may dictate how much of the clump collapses into the planet proper and how much into the circum-planetary disc [@BoleyEtal10; @GalvagniEtal12], and what the spins of the planet and the core are [@Nayakshin11a]. Formation of the circum-planetary disc is key to the formation of planet satellites. Further, grain sedimentation, core formation and especially planetesimal/debris formation within the fragment are certainly not spherically symmetric (e.g., see Fig. \[fig:num3D\]), so 3D coupled gas-grain simulations of gas clumps are urgently needed. Another unsolved issue is gas accretion onto gas clumps, which is likely to control the frequency with which planets are made as opposed to brown dwarfs [@ZhuEtal12a; @NayakshinCha13; @Stamatellos15 see also §\[sec:AorM\]]. 3D simulations are also needed to address how the presence of multiple gas clumps changes the predictions of population synthesis [@ForganRice13 allowed multiple gas fragments in their protoplanetary discs, but it was not possible to track stochastic clump-clump interactions or orbit interchanges].
So far, 3D numerical simulations of fragmenting discs have not resolved the internal processes within the fragments, and have also been performed for a relatively small number of test cases [e.g., @BoleyEtal10; @ChaNayakshin11a]. Ideally, the strengths of the 1D isolated clump models (grain physics, long term evolution of the clumps and the disc) should be imported into 3D simulations of global discs with self-consistent fragment formation in order to overcome these shortcomings. Another assumption made in the population synthesis presented here is that the dust opacity has not been modified much by grain growth inside the clumps. This is an approximation only. Grain growth clearly occurs in protoplanetary discs and should be included in the models. Numerical experiments of [@Nayakshin15c] suggest that grain opacity reduction by a factor of $\sim 3$ can be tolerated, but factors of tens would be too large. Self-consistent models of fragment evolution with grain growth [in the style of @HB11] and metal loading are needed to explore these issues better. The Tidal Downsizing hypothesis is very young and is so far untested on dozens of specific planet formation issues in the Solar System and beyond, such as the formation of short period tightly packed systems [e.g., @HandsEtal14], the role of ice lines in the model, and so on. These issues have not been covered here not because of the author’s desire to hide away from the data, but rather due to a lack of detailed work on these specific issues. Commenting on them without performing a thorough calculation first would amount to speculation one way or another. The author plans, and invites the community, to examine these additional constraints in the future.

This research was funded by STFC and made use of the ALICE and DiRAC High Performance Computing Facilities at the University of Leicester.
I thank Ed Vorobyov, Ravit Helled, Richard Alexander, Vardan Adibekyan, Duncan Forgan, Dimitris Stamatellos, Lucio Mayer, Eugene Chiang, Alexandre Santerne, and Mark Fletcher for valuable discussions and comments on the draft. Christoph Mordasini is thanked for providing his data for one of the figures. The Chief Editor of PASA, Daniel Price, is thanked for the invitation to collect the rather broad material into one, hopefully coherent, story, and for his encouragement and patience while this review was completed. A special thank you is to an anonymous referee whose detailed report improved presentation of this review significantly. natexlab\#1[\#1]{} , V., [Figueira]{}, P., & [Santos]{}, N. C. 2015, ArXiv e-prints, arXiv:1509.02429 , V. Z., [Figueira]{}, P., [Santos]{}, N. C., [et al.]{} 2013, , 560, A51 , R., [Pascucci]{}, I., [Andrews]{}, S., [Armitage]{}, P., & [Cieza]{}, L. 2014, Protostars and Planets VI, 475 , R. D., & [Pascucci]{}, I. 2012, , 422, L82 , Y., [Mordasini]{}, C., [Benz]{}, W., & [Winisdoerffer]{}, C. 2005, , 434, 343 , S. M., & [Williams]{}, J. P. 2005, , 631, 1134 , P. J., [Livio]{}, M., & [Pringle]{}, J. E. 2001, , 324, 705 , C., [Laurenz]{}, V., [M[ü]{}nker]{}, C., [et al.]{} 2013, Earth and Planetary Science Letters, 362, 237 , C., [Meru]{}, F., & [Paardekooper]{}, S.-J. 2011, , 416, 1971 , C., [Crida]{}, A., [Paardekooper]{}, S.-J., [et al.]{} 2014, Protostars and Planets VI, University of Arizona Press, Tucson, 667 , M. R., & [Bonnell]{}, I. A. 1997, , 285, 33 , E., [G[ü]{}ttler]{}, C., [Blum]{}, J., [et al.]{} 2011, , 736, 34 , K. R., & [Lin]{}, D. N. C. 1994, , 427, 987 , B. A., [Liu]{}, M. C., [Wahhaj]{}, Z., [et al.]{} 2013, , 777, 160 , B., & [Kley]{}, W. 2010, , 523, A30 , J., & [M[ü]{}nch]{}, M. 1993, , 106, 151 , J., & [Wurm]{}, G. 2008, , 46, 21 , P. 1974, Icarus, 23, 319 , A. C. 2009, , 695, L53 , A. C., & [Durisen]{}, R. H. 2010, , 724, 618 , A. C., [Hayfield]{}, T., [Mayer]{}, L., & [Durisen]{}, R. H. 
2010, Icarus, 207, 509 , A. C., [Helled]{}, R., & [Payne]{}, M. J. 2011, , 735, 30 , A. P. 1998, , 503, 923 —. 2011, , 731, 74 , B. P., [Liu]{}, M. C., [Shkolnik]{}, E. L., & [Tamura]{}, M. 2015, , 216, 7 , J. C., [Changela]{}, H. G., [Nayakshin]{}, S., [Starkey]{}, N. A., & [Franchi]{}, I. A. 2012, Earth and Planetary Science Letters, 341, 186 , C. L., [Perez]{}, L. M., [Hunter]{}, T. R., [et al.]{} 2015, ArXiv e-prints, arXiv:1503.02649 , G., [Beichman]{}, C. A., [Carpenter]{}, J. M., [et al.]{} 2009, , 705, 1226 , L. A., & [Latham]{}, D. W. 2015, , 808, 187 , L. A., [Latham]{}, D. W., [Johansen]{}, A., [et al.]{} 2012, , 486, 375 , L. A., [Bizzarro]{}, M., [Latham]{}, D. W., [et al.]{} 2014, , 509, 593 , A., [Hubbard]{}, W. B., [Lunine]{}, J. I., & [Liebert]{}, J. 2001, Reviews of Modern Physics, 73, 719 , A. G. W., [Decampli]{}, W. M., & [Bodenheimer]{}, P. 1982, Icarus, 49, 298 , T. L., [Barclay]{}, T., [Swift]{}, J. J., [et al.]{} 2015, , 799, 170 , R. M. 2008, ICARUS, 196, 518 , R. M., & [Asphaug]{}, E. 2001, , 412, 708 , P. M., [Smith]{}, B. F., [Miller]{}, R. H., & [Reynolds]{}, R. T. 1981, ICARUS, 48, 377 , S.-H., & [Nayakshin]{}, S. 2011, , 415, 3319 , G., [Vigan]{}, A., [Bonnefoy]{}, M., [et al.]{} 2015, , 573, A127 , E., & [Laughlin]{}, G. 2013, , 431, 3444 , C. J. 2007, , 376, 1350 , C. J., & [Lodato]{}, G. 2009, , 398, L6 , G. A. L., & [Nelson]{}, R. P. 2016, ArXiv e-prints, arXiv:1601.03608 , A., [Morbidelli]{}, A., & [Masset]{}, F. 2006, , 181, 587 , A., [Butler]{}, R. P., [Marcy]{}, G. W., [et al.]{} 2008, PASP, 120, 531 , P., [Calvet]{}, N., & [Hartmann]{}, L. 2001, , 553, 321 , W. M., & [Cameron]{}, A. G. W. 1979, ICARUS, 38, 367 , G., [Laibe]{}, G., [Price]{}, D. J., & [Lodato]{}, G. 2016, , arXiv:1602.07457 , G., [Price]{}, D., [Laibe]{}, G., [et al.]{} 2015, ArXiv 1507.06719, arXiv:1507.06719 , L., & [Tremaine]{}, S. 1993, Icarus, 103, 67 , J. R., & [Williams]{}, I. P. 1975, , 172, 257 , C. 
D., [Charbonneau]{}, D., [Dumusque]{}, X., [et al.]{} 2015, , 800, 135 , C. P., & [Dominik]{}, C. 2005, , 434, 971 , M. M., & [Vorobyov]{}, E. I. 2012, , 747, 52 , T. J., [Kratter]{}, K. M., [Kraus]{}, A. L., [et al.]{} 2015, ArXiv e-prints, arXiv:1512.03428 , R. H., [Boss]{}, A. P., [Mayer]{}, L., [et al.]{} 2007, Protostars and Planets V, 607 , C., [Fedele]{}, D., [Maldonado]{}, J., [et al.]{} 2010, , 518, L131 , J. A., & [Hillenbrand]{}, L. A. 2011, ArXiv e-prints, arXiv:1106.1440 , N., [Brahm]{}, R., [Jord[á]{}n]{}, A., [et al.]{} 2016, ArXiv e-prints, arXiv:1601.07608 , D. C., [Lissauer]{}, J. J., [Ragozzine]{}, D., [et al.]{} 2014, , 790, 146 , D. A., & [Valenti]{}, J. 2005, , 622, 1102 , M., & [Nayakshin]{}, S. 2016, , 461, 1850 , D., & [Rice]{}, K. 2011, , 417, 1928 —. 2013, , 430, 2082 —. 2013, , 432, 3168 , H. 2001, , 378, 192 , R., [Marois]{}, C., [Macintosh]{}, B., [et al.]{} 2016, ArXiv e-prints, arXiv:1607.08239 , M., [Hayfield]{}, T., [Boley]{}, A., [et al.]{} 2012, , 427, 1725 , M., & [Mayer]{}, L. 2014, , 437, 2909 , C. F. 1996, , 457, 355 —. 2001, , 553, 174 , R. B., [Halliday]{}, A. N., [Schauble]{}, E. A., & [Reynolds]{}, B. C. 2007, , 447, 1102 , R. T. 1968, Icarus, 8, 301 , P., & [Ward]{}, W. R. 1973, , 183, 1051 , R., [Levison]{}, H. F., [Tsiganis]{}, K., & [Morbidelli]{}, A. 2005, , 435, 466 , G. 1999, , 308, 447 , J. I., [Delgado-Mena]{}, E., [Sousa]{}, S. G., [et al.]{} 2013, , 552, A6 , J., & [Tan]{}, J. C. 2004, , 608, 108 , T. 2005, Annual Review of Earth and Planetary Sciences, 33, 493 , A., & [Tafalla]{}, M. 2011, , 533, A34 , Jr., K. E., [Lada]{}, E. A., & [Lada]{}, C. J. 2001, , 553, L153 , E., [Wang]{}, S. X., [Wright]{}, J. T., [et al.]{} 2014, , 126, 827 , M. J., & [Williams]{}, I. P. 1975, AP&SS, 38, 29 , T. O., [Alexander]{}, R. D., & [Dehnen]{}, W. 2014, , 445, 749 , A. W. 1977, Icarus, 31, 168 , L., [Calvet]{}, N., [Gullbring]{}, E., & [D’Alessio]{}, P. 1998, , 495, 385 , L., & [Kenyon]{}, S. J. 1996, , 34, 207 , W. 
K., & [Davis]{}, D. R. 1975, ICARUS, 24, 504 , R., [Anderson]{}, J. D., & [Schubert]{}, G. 2010, , 210, 446 , R., & [Bodenheimer]{}, P. 2011, , 211, 939 , R., & [Guillot]{}, T. 2013, , 767, 113 , R., [Podolak]{}, M., & [Kovetz]{}, A. 2008, Icarus, 195, 863 , R., & [Schubert]{}, G. 2008, Icarus, 198, 156 , R., [Bodenheimer]{}, P., [Podolak]{}, M., [et al.]{} 2014, Protostars and Planets VI, University of Arizona Press, Tucson, 643 , L. G., [Forbes]{}, J. E., & [Gould]{}, N. L. 1964, , 139, 306 , G. H. 1989, in European Southern Observatory Conference and Workshop Proceedings, Vol. 33, European Southern Observatory Conference and Workshop Proceedings, ed. [B. Reipurth]{}, 233–246 , Y., & [Ikoma]{}, M. 2011, , 416, 1419 , N., [Saitoh]{}, T. R., [Makino]{}, J., [Genda]{}, H., & [Ida]{}, S. 2016, ArXiv e-prints, arXiv:1602.00843 , A. W., [Marcy]{}, G. W., [Bryson]{}, S. T., [et al.]{} 2012, , 201, 15 , F. 1953, , 118, 513 , O., [Bodenheimer]{}, P., & [Lissauer]{}, J. J. 2005, , 179, 415 , E., & [Podolak]{}, M. 2007, , 187, 600 , S., & [Lin]{}, D. N. C. 2004, , 604, 388 —. 2004, , 616, 567 , M., [Nakazawa]{}, K., & [Emori]{}, H. 2000, , 537, 1013 , S., [Machida]{}, M. N., & [Matsumoto]{}, T. 2010, , 718, L58 , S., [Li]{}, S., [Isella]{}, A., [Li]{}, H., & [Ji]{}, J. 2016, ArXiv e-prints, arXiv:1601.00358 , A., [Blum]{}, J., [Tanaka]{}, H., [et al.]{} 2014, Protostars and Planets VI, University of Arizona Press, Tucson, 547 , A., [Jacquet]{}, E., [Cuzzi]{}, J. N., [Morbidelli]{}, A., & [Gounelle]{}, M. 2015, ArXiv e-prints, arXiv:1505.02941 , A., & [Lacerda]{}, P. 2010, , 404, 475 , A., [Mac Low]{}, M.-M., [Lacerda]{}, P., & [Bizzarro]{}, M. 2015, Science Advances, 1, 1500109 , A., [Oishi]{}, J. S., [Low]{}, M., [et al.]{} 2007, , 448, 1022 , A., [Youdin]{}, A., & [Mac Low]{}, M.-M. 2009, , 704, L75 , H., [Watanabe]{}, J., [Furusho]{}, R., [et al.]{} 2004, , 601, 1152 , S. J., & [Bromley]{}, B. C. 2015, ArXiv e-prints, arXiv:1501.05659 , S. J., & [Luu]{}, J. X. 
1999, , 118, 1101 , S. J., [Najita]{}, J. R., & [Bromley]{}, B. C. 2016, ArXiv e-prints, arXiv:1608.05410 , A. R., [Livio]{}, M., [Lubow]{}, S. H., & [Pringle]{}, J. E. 2013, , 431, 2655 , H., [Tanaka]{}, H., & [Krivov]{}, A. V. 2011, , 738, 35 , S., [M[ü]{}nker]{}, C., [Hohl]{}, S., [et al.]{} 2011, , 75, 2119 , [Á]{}., [Ardila]{}, D. R., [Mo[ó]{}r]{}, A., & [[Á]{}brah[á]{}m]{}, P. 2009, , 700, L73 , K. M., [Murray-Clay]{}, R. A., & [Youdin]{}, A. N. 2010, , 710, 1375 , G. P. 1951, Proceedings of the National Academy of Science, 37, 1 , W., [Looney]{}, L. W., & [Mundy]{}, L. G. 2011, , 741, 3 , M., & [Johansen]{}, A. 2012, , 544, A32 , R. B. 1969, , 145, 271 , G., & [Bodenheimer]{}, P. 1994, , 436, 335 , J., [Baraffe]{}, I., [Chabrier]{}, G., [Barman]{}, T., & [Levrard]{}, B. 2009, , 506, 385 , H. F., [Kretke]{}, K. A., & [Duncan]{}, M. J. 2015, , 524, 322 , D. N. C., [Bodenheimer]{}, P., & [Richardson]{}, D. C. 1996, , 380, 606 , D. N. C., & [Papaloizou]{}, J. 1986, , 309, 846 , D. N. C., & [Pringle]{}, J. E. 1987, , 225, 607 , J. J., & [Kary]{}, D. M. 1991, , 94, 126 , H. B., [Takami]{}, M., [Kudo]{}, T., [et al.]{} 2016, ArXiv e-prints, arXiv:1602.04068 , G., & [Clarke]{}, C. J. 2004, , 353, 841 , G., [Nayakshin]{}, S., [King]{}, A. R., & [Pringle]{}, J. E. 2009, , 398, 1392 , G., & [Rice]{}, W. K. M. 2004, , 351, 630 —. 2005, , 358, 1489 , K. 2003, , 591, 1220 , D., [Larionov]{}, V. M., [Giannini]{}, T., [et al.]{} 2009, , 693, 1056 , G. W., & [Shukolyukov]{}, A. 1998, , 62, 2863 , M. N., [Inutsuka]{}, S.-i., & [Matsumoto]{}, T. 2011, , 729, 42 , J., [Eiroa]{}, C., [Villaver]{}, E., [Montesinos]{}, B., & [Mora]{}, A. 2012, , 541, A40 , J., & [Villaver]{}, E. 2016, ArXiv e-prints, arXiv:1602.00835 , V., & [Barlow]{}, M. J. 1998, , 497, 330 , G.-D., & [Cumming]{}, A. 2014, , 437, 1378 , M. S., [Fortney]{}, J. J., [Hubickyj]{}, O., [Bodenheimer]{}, P., & [Lissauer]{}, J. J. 
2007, , 655, 541 , C., [Macintosh]{}, B., [Barman]{}, T., [et al.]{} 2008, Science, 322, 1348 , C., [Zuckerman]{}, B., [Konopacky]{}, Q. M., [Macintosh]{}, B., & [Barman]{}, T. 2010, , 468, 1080 , J. P., [Moro-Mart[í]{}n]{}, A., [Eiroa]{}, C., [et al.]{} 2014, , 565, A15 , H., & [Inutsuka]{}, S.-i. 2000, , 531, 350 , T., & [Hanawa]{}, T. 2003, , 595, 913 , L., [Quinn]{}, T., [Wadsley]{}, J., & [Stadel]{}, J. 2004, , 609, 1045 , M., & [Queloz]{}, D. 1995, , 378, 355 , M., [Marmier]{}, M., [Lovis]{}, C., [et al.]{} 2011, ArXiv e-prints (astro-ph 1109.2497), arXiv:1109.2497 , W. H., & [Williams]{}, I. P. 1965, Royal Society of London Proceedings Series A, 287, 143 , F., & [Bate]{}, M. R. 2011, , 411, L1 —. 2012, , 427, 2022 , F., & [Meyer-Hofmeister]{}, E. 1981, , 104, L10 —. 1984, , 132, 143 , S., [Durisen]{}, R. H., & [Boley]{}, A. C. 2011, , 737, L42+ , N., & [Fortney]{}, J. J. 2011, , 736, L29 , H. 1980, Progress of Theoretical Physics, 64, 544 , A. 2010, ArXiv e-prints, arXiv:1010.6221 , C. 2013, ArXiv e-prints, arXiv:1306.5746 , C., [Alibert]{}, Y., & [Benz]{}, W. 2009, , 501, 1139 , C., [Alibert]{}, Y., [Benz]{}, W., [Klahr]{}, H., & [Henning]{}, T. 2012, , 541, A97 , C., [Alibert]{}, Y., [Benz]{}, W., & [Naef]{}, D. 2009, , 501, 1161 , C., [Molli[è]{}re]{}, P., [Dittkrist]{}, K.-M., [Jin]{}, S., & [Alibert]{}, Y. 2015, International Journal of Astrobiology, 14, 201 , A., [Carpenter]{}, J. M., [Meyer]{}, M. R., [et al.]{} 2007, , 658, 1312 , A., [Marshall]{}, J. P., [Kennedy]{}, G., [et al.]{} 2015, , 801, 143 , M. A., & [Desch]{}, S. J. 2010, , 722, 1474 , S. 2010, , 408, L36 —. 2010, , 408, 2381 —. 2011, , 413, 1462 —. 2011, , 416, 2974 —. 2011, , 410, L1 —. 2014, , 441, 1380 —. 2015, , 446, 459 —. 2015, , 448, L25 —. 2015, ArXiv e-prints (arXiv: 1502.07585) —. 2015, , 454, 64 —. 2015, ArXiv e-prints, arXiv:1510.01630 , S., & [Cha]{}, S.-H. 2012, , 423, 2104 —. 2013, , 435, 2099 , S., [Cha]{}, S.-H., & [Bridges]{}, J. C. 
2011, , 416, L50 , S., & [Fletcher]{}, M. 2015, , 452, 1654 , S., [Helled]{}, R., & [Boley]{}, A. C. 2014, , 440, 3797 , S., & [Lodato]{}, G. 2012, , 426, 70 , P. 2004, , 171, 463 , C. W., & [Klahr]{}, H. H. 2010, , 520, A43 , C. W., [Shi]{}, J.-M., & [Kuiper]{}, R. 2015, , 447, 3512 , R. D., [van der Veen]{}, W. E. C. J., [Waters]{}, L. B. F. M., [et al.]{} 1992, , 96, 625 , J. E., & [Wu]{}, Y. 2013, , 775, 105 , S.-J. 2012, , 421, 3286 , S.-J., [Rein]{}, H., & [Kley]{}, W. 2013, , 434, 3018 , K., & [Stevenson]{}, D. J. 2007, Earth and Planetary Science Letters, 262, 438 , J. C. B., & [Terquem]{}, C. 1999, , 521, 823 , C. 1956, , 10, 230 , P. N., [Evans]{}, L. G., [Hauck]{}, S. A., [et al.]{} 2011, Science, 333, 1850 , S., [Davies]{}, M. B., [Gounelle]{}, M., [et al.]{} 2015, , 90, 068001 , G., & [Kley]{}, W. 2015, , 584, A110 , P., [Birnstiel]{}, T., [Ricci]{}, L., [et al.]{} 2012, , 538, A114 , C., [Dent]{}, W. R. F., [M[é]{}nard]{}, F., [et al.]{} 2016, , 816, 25 , J. B., [Hubickyj]{}, O., [Bodenheimer]{}, P., [et al.]{} 1996, Icarus, 124, 62 , S. L., [Irwin]{}, M., [Bouvier]{}, J., & [Clarke]{}, C. J. 2012, , 426, 3315 , D. J. 2012, Journal of Computational Physics, 231, 759 , R. R. 2005, , 621, L69 —. 2006, , 648, 666 , D., [McAlister]{}, H. A., [Henry]{}, T. J., [et al.]{} 2010, , 190, 1 , S., [Sanchis-Ojeda]{}, R., [Rogers]{}, L. A., [Levine]{}, A., & [Winn]{}, J. N. 2013, , 773, L15 , S. N., [Armitage]{}, P. J., [Moro-Mart[í]{}n]{}, A., [et al.]{} 2011, , 530, A62 , M. J. 1976, , 176, 483 , M., [Meyer]{}, M. R., [Chauvin]{}, G., [et al.]{} 2016, , 586, A147 , W. K. M., [Armitage]{}, P. J., [Wood]{}, K., & [Lodato]{}, G. 2006, , 373, 1619 , W. K. M., [Lodato]{}, G., & [Armitage]{}, P. J. 2005, , 364, L56 , W. K. M., [Lodato]{}, G., [Pringle]{}, J. E., [Armitage]{}, P. J., & [Bonnell]{}, I. A. 2004, , 355, 543 , L. A. 2015, , 801, 41 , P. D., & [Wadsley]{}, J. 2012, , 423, 1896 , G. P., [Juhasz]{}, A., [Booth]{}, R. A., & [Clarke]{}, C. J. 
2016, ArXiv e-prints, arXiv:1603.02141 , J. F., [Kleine]{}, T., & [Bourdon]{}, B. 2010, Nature Geoscience, 3, 439 , C., [Flores]{}, M., [Jaque Arancibia]{}, M., [Buccino]{}, A., & [Jofre]{}, E. 2016, ArXiv e-prints, arXiv:1602.01320 , V. S. 1972, [Evolution of the protoplanetary cloud and formation of the earth and planets.]{} (Jerusalem (Israel): Israel Program for Scientific Translations, Keter Publishing House, 212 p.) , J., [S[é]{}gransan]{}, D., [Queloz]{}, D., [et al.]{} 2011, , 525, A95 , A., [Moutou]{}, C., [Tsantaki]{}, M., [et al.]{} 2015, ArXiv e-prints, arXiv:1511.00643 , D., [Henning]{}, T., [Helling]{}, C., [Ilgner]{}, M., & [Sedlmayr]{}, E. 2003, , 410, 611 , N. I., & [Sunyaev]{}, R. A. 1973, , 24, 337 , Y., [Maoz]{}, D., [Udalski]{}, A., [et al.]{} 2015, ArXiv e-prints, arXiv:1510.04297 , A., [Mer[í]{}n]{}, B., [Hormuth]{}, F., [et al.]{} 2008, , 673, 382 , A., [Gaidos]{}, E., & [Wu]{}, Y. 2015, , 799, 180 , D. E., [Zuber]{}, M. T., [Phillips]{}, R. J., [et al.]{} 2012, Science, 336, 214 , D. S., & [Burrows]{}, A. 2012, , 745, 174 , D. 2015, , 810, L11 , D., & [Whitworth]{}, A. P. 2008, , 480, 879 , V., [Noack]{}, L., [Breuer]{}, D., & [Spohn]{}, T. 2012, , 748, 41 , D. J. 1982, P&SS, 30, 755 , D. P., [Fortney]{}, J. J., & [Lopez]{}, E. D. 2015, ArXiv e-prints, arXiv:1511.07854 , A., [Latham]{}, D. W., & [Mason]{}, B. D. 2015, ArXiv e-prints, arXiv:1504.06535 , A. 1964, , 139, 1217 , M., [Kleine]{}, T., [Bourdon]{}, B., [Palme]{}, H., & [Wieler]{}, R. 2007, , 450, 1206 , N. W., [Nidever]{}, D. L., [De Lee]{}, N., [et al.]{} 2016, ArXiv e-prints, arXiv:1601.00688 , K., [Gomes]{}, R., [Morbidelli]{}, A., & [Levison]{}, H. F. 2005, , 435, 459 , Y., [Takahashi]{}, S. Z., [Machida]{}, M. N., & [Inutsuka]{}, S. 2015, , 446, 1175 , A., & [Helled]{}, R. 2012, , 756, 90 , A., [Patience]{}, J., [Marois]{}, C., [et al.]{} 2012, , 544, A9 , E. I. 2011, , 728, L45+ , E. I., & [Basu]{}, S. 2005, , 633, L137 —. 2006, , 650, 956 , J., & [Fischer]{}, D. A. 
2013, ArXiv e-prints, arXiv:1310.7830 , S. J. 1977, , 180, 57 —. 1980, Icarus, 44, 172 , L. M., [Rogers]{}, L. A., [Isaacson]{}, H. T., [et al.]{} 2016, ArXiv e-prints, arXiv:1601.06168 , W. F., [Orosz]{}, J. A., [Carter]{}, J. A., [et al.]{} 2012, , 481, 475 , A. J., [Fakra]{}, S. C., [Gainsforth]{}, Z., [et al.]{} 2009, , 694, 18 , G. W. 1990, Annual Review of Earth and Planetary Sciences, 18, 205 , U., [Halliday]{}, A. N., [Lee]{}, D., [et al.]{} 2001, Science, 294, 345 , J. N., & [Fabrycky]{}, D. C. 2014, ArXiv e-prints (arXiv:1410.4199), arXiv:1410.4199 , R. A., [Butler]{}, R. P., [Tinney]{}, C. G., [et al.]{} 2016, ArXiv e-prints, arXiv:1601.05465 , D., [Desch]{}, S., [Harker]{}, D., [Gail]{}, H., & [Keller]{}, L. 2007, Protostars and Planets V, 815 , M. C. 2008, , 46, 339 , M. D., & [Clarke]{}, C. J. 2016, , 455, 1438 , K., [Blake]{}, G. A., & [Bergin]{}, E. A. 2015, , 806, L7 , Z., [Hartmann]{}, L., & [Gammie]{}, C. 2009, , 694, 1045 , Z., [Hartmann]{}, L., [Nelson]{}, R. P., & [Gammie]{}, C. F. 2012, , 746, 110 [^1]: E-mail: sn85@le.ac.uk [^2]: inclusion of radiative feedback would tend to stifle accretion of gas onto planets as explained in §\[sec:AorM\], favouring the planetary rather than the brown dwarf outcomes. [^3]: [@IaroslavitzP07] note that CHON composition is poorly known, so our results remain dependent on the exact properties of this material. [^4]: The feedback by the core may puff up contracting host fragment, cooling its centre and making it possible for some late volatile accretion onto the core (see §\[sec:rhd\]). However, creating of ice-dominated cores via this mechanism would appear too fine tuned. It would require the fragment to expand significantly to allow ices to sediment and yet not too strongly as to completely destroy it. [^5]: This is done for numerical convenience rather than a physical reason. The RHD code of [@Nayakshin14b] uses an explicit integration technique and so becomes very slow as the fragment contracts. 
For the case at hand, setting the opacity to lower values allows faster execution times without compromising the physics of feedback. For the sake of future coupled disc-planet evolution calculations, it is appropriate to note that the RHD code is impractical to use generally, and this is why the “follow adiabats” approach is used later on in §\[sec:1disc\]. [^6]: For example, [@ColemanNelson16] argue that the inner boundary of the disc is at $R\approx 0.05$ AU due to magnetospheric torques for a typical T Tauri star. In cases when the disc has created only one significant planet, and it migrated all the way to the inner disc edge, they find that the planet may survive at a separation somewhat smaller than $R_{\rm in}$. However, if the disc created several large planets, then the planets inside $R_{\rm in}$ interact via resonant torques with the ones migrating in next to them in the resonant planet “convoy”. The inner planets are then usually pushed further in and perish in the star completely. [^7]: I thank Cristoph Mordasini very much for providing me with the data from his simulations. [^8]: Interested readers may request details of the calculation from the author. [^9]: As an aside, the recently discovered rapid core growth via pebble accretion [e.g., @JohansenEtal14a; @JohansenEtal15a; @LevisonEtal15] may solve the HL Tau mystery in the context of Core Accretion, but then the classical framework for the metallicity correlations suggested by [@IdaLin04b; @MordasiniEtal09b] is in doubt because it is based on a long core growth time scale. Therefore, at present it appears that Core Accretion may account for either the well known gas giant planet – host star metallicity correlations (§\[sec:giants\_Z\]) or the HL Tau young cores, but not both.
--- abstract: 'In this paper we construct an unfolded formulation for the massive higher spin $N=1$ supermultiplets in four-dimensional $AdS$ space. We use the same frame-like gauge invariant multispinor formalism that was used previously for their Lagrangian formulation. We also consider an infinite spin limit of such supermultiplets.' author: - | M.V. Khabarov${}^{a}$[^1], Yu.M. Zinoviev${}^{ab}$[^2]\ *[${}^a$Institute for High Energy Physics of National Research Center “Kurchatov Institute”]{}\ *[Protvino, Moscow Region, 142281, Russia]{}\ *\ *[Dolgoprudny, Moscow Region, 141701, Russia]{}**** title: Massive higher spin supermultiplets unfolded --- Introduction {#introduction .unnumbered} ============ The Lagrangian formulation for the massless higher spin supermultiplets (both on-shell and off-shell, both in flat space and in $AdS$) has been known for a long time [@Cur79; @Vas80; @KSP93; @KS93; @KS94]. However, any attempt to deform the massless supermultiplets into massive ones leads to the introduction of very complicated higher derivative corrections to the supertransformations, without any evident pattern. Moreover, the higher the superspin of the supermultiplet, the higher the number of derivatives one has to consider. Even the use of the powerful superfield formalism has allowed the construction of only a couple of examples with relatively low superspins [@BGPL02; @BGLP02]. For the first time, massive arbitrary superspin $N=1$ supermultiplets in flat four-dimensional space were constructed in [@Zin07a] using the gauge invariant formulation for massive bosonic [@Zin01] and fermionic [@Met06] fields. The initial idea was that a massive supermultiplet can be constructed out of an appropriately chosen set of massless ones, in the same way as the gauge invariant description of a massive field can be constructed out of an appropriate set of massless ones.
The real picture (in the sense of the massless limit) turned out to be slightly more complicated, but the construction was nevertheless successful. Later on, the Lagrangian formulation for the higher spin massive supermultiplets in flat three-dimensional space was also constructed [@BSZ15], again using the gauge invariant formulation for massive bosonic and fermionic fields adapted for $d=3$ [@BSZ12a; @BSZ14a]. The correct procedure to deform such supermultiplets into $AdS_3$ space was not evident from the very beginning. It so happened that the unfolded formulation was constructed first [@BSZ16], based on the results of [@Zin15]. After that, the Lagrangian formulation for these supermultiplets in $AdS_3$ was also completed [@BSZ17; @BSZ17a]. Recently, we managed to construct the Lagrangian formulation for massive higher spin $N=1$ supermultiplets in $AdS_4$ [@BKhSZ19] using the frame-like gauge invariant formalism [@Zin08b] in its multispinor version adapted for $d=4$. Note that though the traditional classification of supermultiplets describes only massless and massive ones, it was recently shown [@GHR18] that in $AdS_4$ space there exist non-unitary higher spin supermultiplets containing partially massless fields. The explicit Lagrangian formulation for such supermultiplets was constructed in [@BKhSZ19a]. Note also that the first examples of infinite spin supermultiplets in flat space were constructed recently [@Zin17; @BKhSZ19b] (see also the recent paper [@Naj19]). Here again it was crucial that the gauge invariant formalism used for the description of massive finite spin fields works nicely in the infinite spin limit as well [@Met16; @Met17; @Zin17; @KhZ17; @Met18; @KhZ19]. The main aim of this paper is to construct an unfolded formulation for the massive higher spin (including the infinite spin limit) $N=1$ supermultiplets in $AdS_4$.
Recall that the unfolded formulation for massive higher spin bosons in arbitrary $d \ge 4$ was constructed in [@PV10], while such a formulation for both bosons and fermions in $AdS_4$ appeared recently in our work [@KhZ19]. Note here that, as far as we know, until now only the unfolded formulation of the scalar supermultiplet has been considered [@PV10a; @MV13]. The paper is organized as follows. In Section 1, as a simple illustration of our formalism, we provide an unfolded formulation for the massless $N=1$ supermultiplets. In Section 2 we give a pair of simple examples of the lower spin massive supermultiplets, namely the scalar and the vector ones. Section 3 is devoted to the main task: the construction of the unfolded formulation for the massive arbitrary superspin $N=1$ supermultiplets. We follow the same strategy as in the construction of their Lagrangian formulation in [@BKhSZ19]. Namely, first of all we provide the unfolded equations for the massive bosons and fermions. Then we consider a pair consisting of a boson and a fermion and construct supertransformations leaving their unfolded equations invariant. Finally, we consider the complete supermultiplets containing two bosons and two fermions and adjust their parameters so that the algebra of the supertransformations closes. Section 4 is devoted to the infinite spin supermultiplets. Massless higher spin supermultiplets ==================================== In this section we provide an unfolded formulation for the massless higher spin supermultiplets [@Cur79; @Vas80; @KSP93; @KS93; @KS94] in the frame-like multispinor formalism that we use later on for the construction of the massive supermultiplets. Unfolded equations ------------------ Let us briefly recall the unfolded description of massless higher spin fields (see e.g. [@DS14]).
To build a system of unfolded equations for a spin-$s$ boson, one needs a set of gauge one-forms $\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)}$, $0\le m <s$, and a set of gauge invariant zero-forms $W^{\alpha(k+s)\dot\alpha(k-s)}$, $k\ge s$, together with their conjugates. The field $\Omega^{\alpha(s-1)\dot\alpha(s-1)}$ is the physical one. The gauge transformations for the one-forms are: $$\begin{aligned} \delta\Omega^{\alpha(s-1)\dot\alpha(s-1)} &=& D\eta^{\alpha(s-1)\dot\alpha(s-1)} + e^{\alpha}{}_{\dot\beta} \eta^{\alpha(s-2)\dot\alpha(s-1)\dot\beta} + e_{\beta}{}^{\dot\alpha} \eta^{\alpha(s-1)\beta\dot\alpha(s-2)} \nonumber \\ \delta\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& D\eta^{\alpha(s-1+m)\dot\alpha(s-1-m)} + \lambda^2e^{\alpha}{}_{\dot\beta} \eta^{\alpha(s-2+m)\dot\alpha(s-m-1)\dot\beta} \nonumber \\ && + e_{\beta}{}^{\dot\alpha} \eta^{\alpha(s+m-1)\beta\dot\alpha(s-2-m)}, \qquad 0<m<s-1 \\ \delta\Omega^{\alpha(2s-2)} &=& D\eta^{\alpha(2s-2)} + \lambda^2e^{\alpha}{}_{\dot\alpha} \eta^{\alpha(2s-3)\dot\alpha} \nonumber\end{aligned}$$ A set of gauge invariant two-forms (“curvatures”) can be built from these one-forms: $$\begin{aligned} \mathcal{R}^{\alpha(s-1)\dot\alpha(s-1)} &=& D\Omega^{\alpha(s-1)\dot\alpha(s-1)} + e^{\alpha}{}_{\dot\beta} \Omega^{\alpha(s-2)\dot\alpha(s-1)\dot\beta} + e_{\beta}{}^{\dot\alpha} \Omega^{\alpha(s-1)\beta\dot\alpha(s-2)} \nonumber \\ \mathcal{R}^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& D\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} + \lambda^2e^{\alpha}{}_{\dot\beta} \Omega^{\alpha(s-2+m)\dot\alpha(s-m-1)\dot\beta} \nonumber \\ && + e_{\beta}{}^{\dot\alpha} \Omega^{\alpha(s+m-1)\beta\dot\alpha(s-2-m)}, \qquad 0<m<s-1 \\ \mathcal{R}^{\alpha(2s-2)} &=& D\Omega^{\alpha(2s-2)} + \lambda^2e^{\alpha}{}_{\dot\alpha} \Omega^{\alpha(2s-3)\dot\alpha} \nonumber\end{aligned}$$ The system of unfolded equations can then be split into three parts.
The first part consists of the zero-curvature conditions (an analogue of the zero torsion condition in gravity): $$\mathcal{R}^{\alpha(s-1+m)\dot\alpha(s-1-m)} = 0, \qquad m<s-1$$ while the second one connects the one-form and zero-form sectors: $$\mathcal{R}^{\alpha(2s-2)} = -2E_{\beta(2)}W^{\alpha(2s-2)\beta(2)},$$ and the third one contains the gauge invariant zero-forms only: $$\begin{aligned} 0 &=& DW^{\alpha(2s)} + e_{\beta\dot\alpha} W^{\alpha(2s)\beta\dot\alpha}, \nonumber \\ 0 &=& DW^{\alpha(2s+m)\dot\alpha(m)} + e_{\beta\dot\beta} W^{\alpha(2s+m)\beta\dot\alpha(m)\dot\beta} + \lambda^2e^{\alpha\dot\alpha} W^{\alpha(2s+m-1)\dot\alpha(m-1)},\quad m>0\end{aligned}$$ The unfolded equations can be regarded as a chain of equations of the form $DA_i = eA_{i+1}+O(\lambda^2)$. This means that the field $A_{i+1}$ parametrizes, up to gauge transformations, those derivatives of $A_i$ which do not vanish on-shell. In a similar fashion, the description of the massless fermion with spin ${\tilde{s}}=s+\iz$ is built. One needs a set of gauge one-forms $\Psi^{\alpha({\tilde{s}}-1+m)\dot\alpha({\tilde{s}}-1-m)}$, $\iz\le m < {\tilde{s}}$, and a set of gauge invariant zero-forms $Y^{\alpha(k+{\tilde{s}})\dot\alpha(k-{\tilde{s}})}$, $k\ge {\tilde{s}}$, together with their conjugates (where the indices $k,m$ are half-integer). Here, the pair of fields $\Psi^{\alpha({\tilde{s}}-1\pm\iz)\dot\alpha({\tilde{s}}-1\mp\iz)}$ play the role of the physical ones. Similarly, a set of curvatures $\mathcal{F}^{\alpha({\tilde{s}}-1+m)\dot\alpha({\tilde{s}}-1-m)}$ can be constructed.
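To make the bosonic chain concrete, consider its simplest instance, $s=1$ (our own specialization of the equations above; it is not spelled out in the text). The one-form sector collapses to a single field $\Omega = A$ with $\delta A = D\eta$, and the system reduces to $$dA = -2E_{\beta(2)}W^{\beta(2)} + h.c., \qquad 0 = DW^{\alpha(2)} + e_{\beta\dot\alpha} W^{\alpha(2)\beta\dot\alpha},$$ where the conjugate term (kept implicit in the text) restores reality. The lowest zero-form $W^{\alpha(2)}$ is thus identified with the self-dual part of the Maxwell field strength, the first equation serving as its definition and the second packaging the Maxwell equations, while the higher zero-forms $W^{\alpha(k+1)\dot\alpha(k-1)}$ parametrize its on-shell nontrivial derivatives.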
The expressions for the gauge transformations and curvatures are similar to the bosonic case (up to the change $s\to\tilde{s}$, $\Omega\to\Psi$ and half-integer $m$), with the only exception being the case $m=\iz$: $$\begin{aligned} \delta\Psi^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\tz)} &=& D\eta^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\tz)} + e^{\alpha}{}_{\dot\beta} \eta^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\pz)\dot\beta} \nonumber \\ && + \epsilon\lambda e_{\beta}{}^{\dot\alpha} \eta^{\alpha({\tilde{s}}-\pz)\beta\dot\alpha({\tilde{s}}-\iz)} \\ \mathcal{F}^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\tz)} &=& D\Psi^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\tz)} + e^{\alpha}{}_{\dot\beta} \Psi^{\alpha({\tilde{s}}-\iz)\dot\alpha({\tilde{s}}-\pz)\dot\beta} \nonumber \\ && + \epsilon\lambda e_{\beta}{}^{\dot\alpha} \Psi^{\alpha({\tilde{s}}-\pz)\beta\dot\alpha({\tilde{s}}-\iz)} \end{aligned}$$ In the case of $AdS$ space, the parameter $\epsilon=\pm 1$ here corresponds to the choice of the sign of the mass-like terms. In flat space, however, this parameter is arbitrary for a massless particle. In terms of the curvatures, the unfolded equations take exactly the same form as the bosonic ones: $$\begin{aligned} 0 &=& \mathcal{F}^{\alpha({\tilde{s}}-1+m)\dot\alpha({\tilde{s}}-1-m)}, \qquad m<{\tilde{s}}-1 \nonumber \\ 0 &=& \mathcal{F}^{\alpha(2{\tilde{s}}-2)} - 2E_{\beta(2)}Y^{\alpha(2{\tilde{s}}-2)\beta(2)}, \nonumber \\ 0 &=& DY^{\alpha(2{\tilde{s}})} + e_{\beta\dot\alpha}Y^{\alpha(2{\tilde{s}})\beta\dot\alpha}, \\ 0 &=& DY^{\alpha(2{\tilde{s}}+m)\dot\alpha(m)} + e_{\beta\dot\beta}Y^{\alpha(2{\tilde{s}}+m)\beta\dot\alpha(m)\dot\beta} \nonumber \\ && + \lambda^2e^{\alpha\dot\alpha} Y^{\alpha(2{\tilde{s}}+m-1)\dot\alpha(m-1)}, \quad m \ge\iz \nonumber\end{aligned}$$ Note once again that the numbers $k,m$ are half-integer here. Now we construct the massless supermultiplets.
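Before turning to the supermultiplets, a quick bookkeeping check of the field content above (our own illustration; the helper functions below are ours and not part of the formalism). A multispinor totally symmetric in $p$ undotted and $q$ dotted indices has $(p+1)(q+1)$ independent components, so the physical field $\Omega^{\alpha(s-1)\dot\alpha(s-1)}$ carries $s^2$ components, exactly those of a rank-$(s-1)$ symmetric traceless Lorentz tensor in $d=4$:

```python
from math import comb

def multispinor_components(p, q):
    # A block of r totally symmetric two-component indices has r + 1
    # independent entries, so the count factorizes over the two blocks.
    return (p + 1) * (q + 1)

def traceless_symmetric_components(n, d=4):
    # Symmetric rank-n tensors in d dimensions, minus the trace part
    # (itself a symmetric rank-(n-2) tensor).
    total = comb(n + d - 1, d - 1)
    trace = comb(n + d - 3, d - 1) if n >= 2 else 0
    return total - trace

# The frame-like field Omega^{alpha(s-1) dot-alpha(s-1)} matches a
# rank-(s-1) symmetric traceless tensor component-by-component.
for s in range(1, 11):
    assert multispinor_components(s - 1, s - 1) == s * s
    assert traceless_symmetric_components(s - 1) == s * s
```

Each symmetric block of rank $r$ over a two-component index contributes $r+1$ entries, which is all the counting uses.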
First, we introduce a supertransformation parameter $\zeta^{\alpha}$ with its hermitian conjugate $\zeta^{\dot\alpha}$, which obeys $D\zeta^{\alpha}=-\lambda e^{\alpha}{}_{\dot\alpha}\zeta^{\dot\alpha}$ (and similarly for the hermitian conjugate). In the supermultiplet, the spins of the boson and the fermion are connected by the relation $\tilde{s}-s=\pm \iz$, so there are two possibilities. Half-integer superspin ---------------------- Our task here is to construct supertransformations mapping the bosonic equations into the fermionic ones and vice versa. It is natural to begin with the gauge invariant zero-forms because they form a closed sector. The most general ansatz for their supertransformations is rather simple: $$\begin{aligned} \delta W^{\alpha(k+s)\dot\alpha(k-s)} &=& \delta^{-0}_k Y^{\alpha(k+s-1)\dot\alpha(k-s)} \zeta^{\alpha} + \delta^{0+}_k Y^{\alpha(k+s)\dot\alpha(k-s)\dot\beta} \zeta_{\dot\beta}, \nonumber \\ \delta Y^{\alpha(k+s-1)\dot\alpha(k-s)} &=& \tilde{\delta}^{+0}_{k-\iz} W^{\alpha(k+s-1)\beta\dot\alpha(k-s)}\zeta_{\beta} + \tilde{\delta}^{0-}_{k-\iz} W^{\alpha(k+s-1)\dot\alpha(k-s-1)}\zeta^{\dot\alpha}\end{aligned}$$ where all the coefficients are in general complex. The solution for these coefficients also turns out to be simple: $$\delta^{0+}_k = C_b, \qquad \delta^{-0}_k = \lambda C_b, \qquad \tilde{\delta}^{+0}_{k-\iz} = C_f, \qquad \tilde{\delta}^{0-}_{k-\iz} = \lambda C_f$$ where $C_b$ and $C_f$ are two independent parameters (see below).
Similarly, the supertransformations for the gauge one-forms (except a pair of the highest ones) look like: $$\begin{aligned} \delta\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& \gamma^{-0}_m \Psi^{\alpha(s-2+m)\dot\alpha(s-1-m)} \zeta^{\alpha} + \gamma^{0-}_m \Psi^{\alpha(s-1+m)\dot\alpha(s-2-m)} \zeta^{\dot\alpha}, \nonumber \\ \delta\Psi^{\alpha(s-2+m)\dot\alpha(s-1-m)} &=& \tilde{\gamma}^{+0}_{m-\iz} \Omega^{\alpha(s-2+m)\beta\dot\alpha(s-1-m)} \zeta_{\beta} + \tilde{\gamma}^{0+}_{m-\iz} \Omega^{\alpha(s-2+m)\dot\alpha(s-1-m)\dot\beta} \zeta_{\dot\beta}\end{aligned}$$ This gives the following solution for the coefficients with $m > 0$: $$\label{mlhssm_coeff1} \gamma_m^{0-} = C, \qquad \gamma_m^{-0} = \lambda C, \qquad \tilde{\gamma}^{+0}_{m+\iz} = \tilde{C}, \qquad \tilde{\gamma}^{0+}_{m+\iz} = \lambda \tilde{C}$$ where $C$ and $\tilde{C}$ are also independent. For $m=0$, we obtain $\gamma_{0}^{0-}=-C$, while for $m<0$, we have: $$\begin{aligned} \label{mlhssm_conj1} \gamma^{-0}_{m}=\epsilon\gamma^{0-}_{-m},\qquad \gamma^{0-}_{m}=\epsilon\gamma^{-0}_{-m},\qquad \tilde{\gamma}^{+0}_{m}=\epsilon\tilde{\gamma}^{0+}_{-m},\qquad \tilde{\gamma}^{0+}_{m}=\epsilon\tilde{\gamma}^{+0}_{-m}.\end{aligned}$$ At last, we have to consider two highest one-forms $\Omega^{\alpha(2s-2)}$ and $\Psi^{\alpha(2s-3)}$ (with their conjugates) because their equations connect the two sectors. 
The ansatz for the supertransformations is now: $$\begin{aligned} \delta\Omega^{\alpha(2s-2)} &=& \nu e_{\alpha\dot\alpha} Y^{\alpha(2s-1)} \zeta^{\dot\alpha} + \gamma^{0-}_{s-1} \Psi^{\alpha(2s-3)} \zeta^{\alpha}, \nonumber \\ \delta \Psi^{\alpha(2s-3)} &=& \tilde{\gamma}^{+0}_{s-\tz} \Omega^{\alpha(2s-3)\beta} \zeta_\beta + \tilde{\gamma}^{0+}_{s-\tz} \Omega^{\alpha(2s-3)\dot\alpha} \zeta_{\dot\alpha}\end{aligned}$$ and this provides the relations between the parameters of the two sectors and fixes the only remaining coefficient: $$C_b = C, \qquad C_f = \tilde{C}, \qquad \nu = \frac{C}{2}$$ Hermiticity requires that $C=-\epsilon C^*$, $\tilde{C}=\epsilon\tilde{C}^*$. Then, either $C$ is imaginary and $\epsilon=1$ or $C$ is real and $\epsilon=-1$. The sign of $C^2$ determines the parity of the boson: it is even if $C^2>0$ and odd if $C^2<0$. Thus, the parity of the boson and the sign of the fermionic mass terms are connected. It is impossible to link $C$ and $\tilde{C}$ by considering the unfolded equations only. However, these constants can be connected if one requires that the sum of their Lagrangians be invariant under the supertransformations. If one chooses the normalization of the Lagrangians as in [@KhZ19], it turns out that: $$\begin{aligned} C = 4i\epsilon(s-1) \tilde{C}\end{aligned}$$ Finally, we evaluate the commutator of two supertransformations to show that the superalgebra is indeed closed. Consider, for instance, the field $\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)}$ for $m>0$.
We obtain: $$\begin{aligned} [\delta_1,\delta_2] \Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& C\tilde{C} \big[\lambda \Omega^{\alpha(s-2+m)\beta\dot\alpha(s-1-m)} \eta_{\beta}{}^{\alpha} + \lambda \Omega^{\alpha(s-1+m)\dot\beta\dot\alpha(s-2-m)} \eta_{\dot\beta}{}^{\dot\alpha} \nonumber \\ && \quad + \lambda^2\Omega^{\alpha(s-1+m)\beta\dot\alpha(s-2-m)} \xi_{\beta}{}^{\dot\alpha} + \Omega^{\alpha(s-2+m)\dot\beta\dot\alpha(s-1-m)} \xi^{\alpha}{}_{\dot\beta} \big]\end{aligned}$$ where $$\xi^{\alpha\dot\alpha} = {\zeta_2}^{\alpha}{\zeta_1}^{\dot\alpha}-{\zeta_1}^{\alpha}{\zeta_2}^{\dot\alpha}, \qquad \eta^{\alpha(2)} = 2{\zeta_2}^{\alpha}{\zeta_1}^{\alpha}, \qquad \eta^{\dot\alpha(2)} = 2{\zeta_2}^{\dot\alpha}{\zeta_1}^{\dot\alpha},$$ and this is indeed a combination of pseudotranslations and Lorentz transformations. The expressions for the other fields are similar. Now let us consider the flat space case. Contrary to the $AdS$ case, the equations for the coefficients with positive and negative $m$ fall into two independent subsystems, so that we lose the hermiticity conditions on the parameters $C$ and $\tilde{C}$. The non-zero coefficients now are: $$\begin{aligned} \delta_{k}^{0+}&=& C, \qquad \tilde{\delta}_{k-\iz}^{+0}=\tilde{C}, \qquad \nu=\frac{C}{2}, \nonumber \\ \gamma_{m}^{0-}&=&C, \qquad \tilde{\gamma}^{+0}_{m+\iz}=\tilde{C}, \qquad m\ge 0, \\ \gamma_{m}^{-0}&=&C^*, \qquad \tilde{\gamma}^{0+}_{m-\iz}=\tilde{C}^*, \quad m\le 0. \nonumber\end{aligned}$$ To fix the phases of the coefficients $C$ and $\tilde{C}$, one has to consider the commutator of two supertransformations. Consider, for instance, the field $\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)}$, $m>0$. The commutator of the supertransformations parametrized by ${\zeta_1}^\alpha$, ${\zeta_2}^\alpha$ is: $$[\delta_1,\delta_2]\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} = C \tilde{C} \Omega^{\alpha(s+m)\dot\alpha(s-m-2)} \xi_{\alpha}{}^{\dot\alpha}.$$ Hermiticity requires that $C \tilde{C}$ is imaginary.
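For comparison, in the $AdS$ case considered above this property is automatic: the hermiticity conditions $C=-\epsilon C^*$, $\tilde{C}=\epsilon\tilde{C}^*$ imply (a one-line check of ours) $$(C\tilde{C})^* = C^*\tilde{C}^* = (-\epsilon C)(\epsilon\tilde{C}) = -C\tilde{C},$$ so that $C\tilde{C}$ is purely imaginary there as well; in flat space only this weaker condition on the product survives.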
With the requirement that the sum of the Lagrangians is invariant, a stronger condition can be obtained: $$\begin{aligned} C=4i(s-1) \tilde{C}^*\end{aligned}$$ Integer superspin ----------------- Again, we consider the $AdS$ case first. As in the previous case we begin with the sector of the gauge invariant zero-forms. In this case the most general ansatz for the supertransformations is: $$\begin{aligned} \delta W^{\alpha(k+s)\dot\alpha(k-s)} &=& \delta^{+0}_{k}Y^{\alpha(k+s)\beta\dot\alpha(k-s)}\zeta_{\beta} + \delta^{0-}_{k}Y^{\alpha(k+s)\dot\alpha(k-s-1)}\zeta^{\dot\alpha}, \nonumber \\ \delta Y^{\alpha(k+s+1)\dot\alpha(k-s)} &=& \tilde{\delta}^{-0}_{k+\iz}W^{\alpha(k+s)\dot\alpha(k-s)}\zeta^{\alpha} +\tilde{\delta}^{0+}_{k+\iz}W^{\alpha(k+s+1)\dot\alpha(k-s)\dot\beta}\zeta_{\dot\beta},\end{aligned}$$ where all coefficients are in general complex. The invariance of the unfolded equations under these supertransformations leads to: $$\delta_{k}^{+0} = C_b, \qquad \delta_{k}^{0-} = \lambda C_b, \qquad \tilde{\delta}_{k+\iz}^{0+}= C_f, \qquad \tilde{\delta}_{k+\iz}^{-0}=\lambda C_f.$$ where $C_b$ and $C_f$ are two independent parameters. Now let us consider a sector of gauge one-forms (except two highest ones $\Omega^{\alpha(2s-2)}$ and $\Psi^{\alpha(2s-1)}$ with their conjugates). 
Here the ansatz for the supertransformations looks like: $$\begin{aligned} \delta\Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)}&=& \gamma^{+0}_{m} \Psi^{\alpha(s-1+m)\beta\dot\alpha(s-1-m)}\zeta_{\beta} + \gamma^{0+}_{m} \Psi^{\alpha(s-1+m)\dot\alpha(s-1-m)\dot\beta} \zeta_{\dot\beta}, \nonumber \\ \delta\Psi^{\alpha(s+m)\dot\alpha(s-1-m)} &=& \tilde{\gamma}^{-0}_{m+\iz} \Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)}\zeta^{\alpha} + \tilde{\gamma}^{0-}_{m+\iz} \Omega^{\alpha(s+m)\dot\alpha(s-2-m)}\zeta^{\dot\alpha},\end{aligned}$$ and the solution gives us two additional independent parameters: $$\gamma^{+0}_{m} = C, \qquad \gamma^{0+}_{m} = \lambda C, \qquad \tilde{\gamma}_{m+\iz}^{0-}=\tilde{C}, \qquad \tilde{\gamma}_{m+\iz}^{-0} = \lambda \tilde{C} \qquad m > 0.$$ For $m=0$, we obtain $\gamma_{0}^{0+}=-C$, while for $m<0$, we have: $$\begin{aligned} \gamma^{-0}_{m}=\epsilon\gamma^{0-}_{-m},\qquad \gamma^{0-}_{m}=\epsilon\gamma^{-0}_{-m},\qquad \tilde{\gamma}^{+0}_{m}=\epsilon\tilde{\gamma}^{0+}_{-m}, \qquad \tilde{\gamma}^{0+}_{m}=\epsilon\tilde{\gamma}^{+0}_{-m}\end{aligned}$$ At last, we consider supertransformations for the remaining one-forms: $$\begin{aligned} \delta \Omega^{\alpha(2s-2)} &=& \gamma^{+0}_{s-1} \Psi^{\alpha(2s-2)\beta} \zeta_\beta + \gamma^{0+}_{s-1} \Psi^{\alpha(2s-2)\dot\beta} \zeta_{\dot\beta}, \nonumber \\ \delta\Psi^{\alpha(2s-1)} &=& \tilde{\nu} e_{\beta\dot\alpha} W^{\alpha(2s-1)\beta} \zeta^{\dot\alpha} + \tilde{\gamma}^{0-}_{s-\iz} \Omega^{\alpha(2s-2)}\zeta^{\alpha}\end{aligned}$$ which gives us the relations between the parameters of the two sectors and determines the only remaining one: $$C_b = C, \qquad C_f = \tilde{C}, \qquad \tilde{\nu} = \frac{\tilde{C}}{2}.$$ Again, this gives $C=-\epsilon C^*$, $\tilde{C}=\epsilon\tilde{C}^*$ together with the hermiticity requirement. Hence, the boson has the parity opposite to $\epsilon$, similarly to the half-integer superspin case. 
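Note that these reality conditions on $C$ and $\tilde{C}$ by themselves already make the product $C\tilde{C}$ purely imaginary: $(C\tilde{C})^* = C^*\tilde{C}^* = (-\epsilon C)(\epsilon\tilde{C}) = -C\tilde{C}$. This can also be illustrated numerically (a sketch of ours, not part of the construction itself; the explicit parametrization of the solutions of the reality conditions below is our illustration):

```python
# Illustration: the reality conditions C = -eps*conj(C), Ct = eps*conj(Ct)
# force the product C*Ct to be purely imaginary for either eps = +1 or -1.
import random

random.seed(1)
for eps in (+1, -1):
    for _ in range(100):
        a, b = random.uniform(-3, 3), random.uniform(-3, 3)
        # general solutions of the reality conditions:
        C = 1j * a if eps == +1 else complex(a)    # C  = -eps * conj(C)
        Ct = complex(b) if eps == +1 else 1j * b   # Ct =  eps * conj(Ct)
        assert abs(C + eps * C.conjugate()) < 1e-12
        assert abs(Ct - eps * Ct.conjugate()) < 1e-12
        assert abs((C * Ct).real) < 1e-12          # purely imaginary
```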
By considering the unfolded equations only, the only thing one can establish is that the product of the parameters $C$ and $\tilde{C}$ must be imaginary. The constants $C$ and $\tilde{C}$ can be linked by requirement that the sum of bosonic and fermionic Lagrangians is invariant under the supertransformations: $$\begin{aligned} (s-1)C = 4i\epsilon \tilde{C}\end{aligned}$$ The expression for the commutator of two supertransformations parametrized by ${\zeta_1}^\alpha$ and ${\zeta_2}^\alpha$ is the same as in the previous case: $$\begin{aligned} [\delta_1,\delta_2] \Omega^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& C\tilde{C} \big[\lambda\Omega^{\alpha(s-2+m)\beta\dot\alpha(s-1-m)} \eta_{\beta}{}^{\alpha} + \lambda \Omega^{\alpha(s-1+m)\dot\beta\dot\alpha(s-2-m)} \eta_{\dot\beta}{}^{\dot\alpha} \nonumber \\ && + \lambda^2\Omega^{\alpha(s-1+m)\beta\dot\alpha(s-2-m)} \xi_{\beta}{}^{\dot\alpha} + \Omega^{\alpha(s-2+m)\dot\beta\dot\alpha(s-1-m)} \xi^{\alpha}{}_{\dot\beta} \big],\end{aligned}$$ In the flat space, the invariance of the unfolded equations does not fix the phases of $C$ and $\tilde{C}$, so that the solution for the coefficients is: $$\begin{aligned} \delta_{k}^{+0} &=& C, \qquad \tilde{\delta}_{k+\iz}^{0+} = \tilde{C}, \qquad \tilde{\nu}=\frac{\tilde{C}}{2}, \nonumber \\ \gamma^{+0}_{m} &=& C, \qquad \tilde{\gamma}_{m+\iz}^{0-} = \tilde{C}, \qquad m\ge 0, \\ \gamma^{+0}_{m}&=&C^*, \qquad \tilde{\gamma}_{m+\iz}^{0-}=\tilde{C}^*, \qquad m < 0. \nonumber\end{aligned}$$ In this case the requirement that the $C\tilde{C}$ is imaginary follows only from the commutator of two supertransformations. A stronger relation $$\begin{aligned} (s-1)C = 4i \tilde{C}^*\end{aligned}$$ can still be obtained from the invariance of the sum of the two Lagrangians. Low spins examples ================== In this section we present two simplest examples of the massive $N=1$ supermultiplets: a scalar and a vector ones. 
Unfolded equations ------------------ First of all we need the unfolded equations for massive spin 1, spin $\iz$ and spin 0 fields. [**Massive vector**]{} In this case the unfolded formulation requires three infinite chains of the zero-forms: $W^{\alpha(k+m)\dot\alpha(k-m)}$, $k \ge 1$, $m = \pm 1,0$ corresponding to the three physical helicities $\pm 1,0$. The most general (up to the normalization) ansatz has the form: $$\begin{aligned} 0 &=& D W^{\alpha(k+1)\dot\alpha(k-1)} + e_{\beta\dot\beta} W^{\alpha(k+1)\beta\dot\alpha(k-1)\dot\beta} + \beta^{-+}_{k,1} e^\alpha{}_{\dot\beta} W^{\alpha(k)\dot\alpha(k-1)\dot\beta} + \beta^{--}_{k,1} e^{\alpha\dot\alpha} W^{\alpha(k)\dot\alpha(k-2)} \nonumber \\ 0 &=& D W^{\alpha(k)\dot\alpha(k)} + e_{\beta\dot\beta} W^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \beta^{-+}_{k,0} e^\alpha{}_{\dot\beta} W^{\alpha(k-1)\dot\alpha(k)\dot\beta} \nonumber \\ && + \beta^{+-}_{k,0} e_\beta{}^{\dot\alpha} W^{\alpha(k)\beta\dot\alpha(k-1)} + \beta^{--}_{k,0} e^{\alpha\dot\alpha} W^{\alpha(k-1)\dot\alpha(k-1)} \\ 0 &=& D W^{\alpha(k-1)\dot\alpha(k+1)} + e_{\beta\dot\beta} W^{\alpha(k-1)\beta\dot\alpha(k+1)\dot\beta} + \beta^{+-}_{k,1} e_\beta{}^{\dot\alpha} W^{\alpha(k-1)\beta\dot\alpha(k)} + \beta^{--}_{k,1} e^{\alpha\dot\alpha} W^{\alpha(k-2)\dot\alpha(k)} \nonumber\end{aligned}$$ The self-consistency of these equations leads to the following solution for the coefficients: $$\begin{aligned} \beta^{+-}_{k,0} &=& \beta^{-+}_{k,0} = \frac{1}{k(k+1)} \nonumber \\ \beta^{+-}_{k,1} &=& \beta^{-+}_{k,1} = \frac{2m^2}{(k+1)(k+2)} \nonumber \\ \beta^{--}_{k,1} &=& - \frac{1}{k(k+1)} [m^2 - k(k+1)\lambda^2] \\ \beta^{--}_{k,0} &=& - \frac{(k-1)(k+2)}{k^2(k+1)^2} [m^2 - k(k+1)\lambda^2] \nonumber\end{aligned}$$ As is well known, in flat Minkowski space all the members of the supermultiplet must have equal masses.
But in $AdS$ space, as it has been shown in [@BKhSZ19], there must be a small splitting between the bosonic and fermionic masses of the order of the cosmological constant. For the lower spins we consider in this section, the bosonic mass $m$ and the fermionic one $\tilde{m}$ must satisfy: $$m^2 = \tilde{m}(\tilde{m} \pm \lambda)$$ In this case the $\beta$-functions take the form: $$\begin{aligned} \beta^{+-}_{k,0} &=& \beta^{-+}_{k,0} = \frac{1}{k(k+1)} \nonumber \\ \beta^{+-}_{k,1} &=& \beta^{-+}_{k,1} = \frac{2\tilde{m}(\tilde{m}\pm\lambda)}{(k+1)(k+2)} \\ \beta^{--}_{k,1} &=& - \frac{1}{k(k+1)} [\tilde{m} \pm (k+1)\lambda][\tilde{m} \mp k\lambda] \nonumber \\ \beta^{--}_{k,0} &=& - \frac{(k-1)(k+2)}{k^2(k+1)^2} [\tilde{m} \pm (k+1)\lambda][\tilde{m} \mp k\lambda] \nonumber\end{aligned}$$ It is this factorization of the $\beta^{--}$-functions that appears to be crucial for the construction of the supermultiplets in what follows.\ [**Massive spinor**]{} In this case there are two physical helicities $\pm 1/2$ and we need a pair of (conjugated) chains of the zero-forms $Y^{\alpha(k+1)\dot\alpha(k)}$, $Y^{\alpha(k)\dot\alpha(k+1)}$, $k \ge 0$. 
We choose the following ansatz for the unfolded equations: $$\begin{aligned} 0 &=& D Y^{\alpha(k+1)\dot\alpha(k)} + e_{\beta\dot\beta} Y^{\alpha(k+1)\beta\dot\alpha(k)\dot\beta} + \tilde{\beta}^{-+}_k e^\alpha{}_{\dot\beta} Y^{\alpha(k)\dot\alpha(k)\dot\beta} + \tilde{\beta}^{--}_k e^{\alpha\dot\alpha} Y^{\alpha(k)\dot\alpha(k-1)} \nonumber \\ 0 &=& D Y^{\alpha(k)\dot\alpha(k+1)} + e_{\beta\dot\beta} Y^{\alpha(k)\beta\dot\alpha(k+1)\dot\beta} + \tilde{\beta}^{+-}_k e_\beta{}^{\dot\alpha} Y^{\alpha(k)\beta\dot\alpha(k)} + \tilde{\beta}^{--}_k e^{\alpha\dot\alpha} Y^{\alpha(k-1)\dot\alpha(k)}\end{aligned}$$ The self-consistency of these equations requires: $$\begin{aligned} \tilde{\beta}^{+-}_k &=& \tilde{\beta}^{-+}_k = \frac{\epsilon \tilde{m}}{(k+1)(k+2)}, \qquad \epsilon = \pm 1 \nonumber \\ \tilde{\beta}^{--}_k &=& - \frac{1}{(k+1)^2} [\tilde{m}{}^2 - (k+1)^2\lambda^2]\end{aligned}$$ Note that in what follows we always assume that the fermionic masses are positive and take into account the two possible signs of the $\tilde{\beta}^{+-}$ (which also play an important role in our construction) using the parameter $\epsilon = \pm 1$.\ [**Massive scalar**]{} In this case we have one chain of the zero-forms only with the unfolded equations: $$0 = D W^{\alpha(k)\dot\alpha(k)} + e_{\beta\dot\beta} W^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \beta^{--}_k e^{\alpha\dot\alpha} W^{\alpha(k-1)\dot\alpha(k-1)}$$ where $$\beta^{--}_k = - \frac{1}{k(k+1)} [m_0{}^2 - k(k+1)\lambda^2]$$ As in the spin 1 case, the factorization of the $\beta^{--}$ function is achieved at $m_0{}^2 = \tilde{m}(\tilde{m} \pm \lambda)$: $$\beta^{--}_k = - \frac{1}{k(k+1)} [\tilde{m} \pm (k+1)\lambda] [\tilde{m} \mp k\lambda]$$ Scalar supermultiplet --------------------- In the flat case such a supermultiplet was considered in [@PV10a; @MV13]. We begin with a single pair of spinor and scalar fields.
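The factorization of the $\beta^{--}$ functions under the mass relation $m^2 = \tilde{m}(\tilde{m}\pm\lambda)$, used repeatedly above, is elementary polynomial algebra. As a sketch (our illustration, not part of the original construction; `mt` stands for $\tilde{m}$), it can be verified numerically: both sides are polynomials of low degree in each variable, so checking the identity on a sufficiently large integer grid proves it.

```python
# Check of the factorization used above: with m^2 = mt*(mt + s*lam), s = +/-1,
#   m^2 - k*(k+1)*lam**2 == (mt + s*(k+1)*lam) * (mt - s*k*lam).
# Both sides are polynomials of degree <= 2 in each of (mt, lam, k), so
# verifying the identity on an integer grid with enough points proves it.
from itertools import product

for sign in (+1, -1):
    for mt, lam, k in product(range(-5, 6), range(-5, 6), range(1, 6)):
        m2 = mt * (mt + sign * lam)
        lhs = m2 - k * (k + 1) * lam**2
        rhs = (mt + sign * (k + 1) * lam) * (mt - sign * k * lam)
        assert lhs == rhs
```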
Our first task is to find supertransformations such that the variations of the fermionic equations are proportional to the bosonic ones and vice versa.\ [**Supertransformations for spinor**]{} We choose the following ansatz for the supertransformations where the coefficients are in general complex: $$\begin{aligned} \delta Y^{\alpha(k+1)\dot\alpha(k)} &=& \delta^{-0}_k W^{\alpha(k)\dot\alpha(k)} \zeta^\alpha + \delta^{0+}_k W^{\alpha(k+1)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} \nonumber \\ \delta Y^{\alpha(k)\dot\alpha(k+1)} &=& \delta^{+0}_k W^{\alpha(k)\beta\dot\alpha(k+1)} \zeta_\beta + \delta^{0-}_k W^{\alpha(k)\dot\alpha(k)} \zeta^{\dot\alpha} \label{eqs11}\end{aligned}$$ where $$\delta^{+0}_k = (\delta^{0+}_k)^*, \qquad \delta^{-0}_k = (\delta^{0-}_k)^*$$ The solution appears to be: $$\delta^{+0}_k = \epsilon \tilde{C}, \qquad \delta^{-0}_k = \frac{1}{(k+1)}[\tilde{m}\pm(k+1)\lambda] \tilde{C}, \qquad \tilde{C}^* = \mp \epsilon \tilde{C} \label{eqs12}$$ where the $\pm$-sign corresponds to that of the relation $m_0{}^2=\tilde{m}(\tilde{m}\pm\lambda)$ and $\epsilon$ comes from the $\tilde{\beta}^{+-}$ function.\ [**Supertransformations for scalar**]{} Similarly, for the spin-0 field we take the following supertransformations (also with complex coefficients): $$\delta W^{\alpha(k)\dot\alpha(k)} = \delta^{+0}_k Y^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta + \delta^{-0}_k Y^{\alpha(k-1)\dot\alpha(k)} \zeta^\alpha + \delta^{0+}_k Y^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} + \delta^{0-}_k Y^{\alpha(k)\dot\alpha(k-1)} \zeta^{\dot\alpha} \label{eqp1}$$ where $$\delta^{0+}_k = - (\delta^{+0}_k)^*, \qquad \delta^{0-}_k = - (\delta^{-0}_k)^*$$ with the solution: $$\delta^{+0}_k = C, \qquad \delta^{-0}_k = - \frac{\epsilon}{(k+1)}[\tilde{m}\pm(k+1)\lambda]C, \qquad C^* = \pm \epsilon C \label{eqp2}$$ Now having the explicit form of the supertransformations at our disposal, it is easy to calculate their commutators and find that their
superalgebra is not closed. The reason is clear: we must have an equal number of bosonic and fermionic degrees of freedom in each supermultiplet. As is well known, the scalar supermultiplet contains two scalar fields; moreover, it is important that they are a scalar and a pseudo-scalar. So we consider the supermultiplet $(1/2,0,0')$. For concreteness we take $\epsilon = +1$; then, to have opposite parities for the two scalars, we choose: $$m_1{}^2 = \tilde{m}(\tilde{m} + \lambda), \qquad m_2{}^2 = \tilde{m}(\tilde{m} - \lambda)$$ The complete set of the supertransformations for the spinor now has the form: $$\begin{aligned} \delta Y^{\alpha(k+1)\dot\alpha(k)} &=& i\tilde{\delta}_{1,k}^- W_1^{\alpha(k)\dot\alpha(k)} \zeta^\alpha - i\tilde{C}_1 W_1^{\alpha(k+1)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} \nonumber \\ && + \tilde{\delta}_{2,k}^- W_2^{\alpha(k)\dot\alpha(k)} \zeta^\alpha + \tilde{C}_2 W_2^{\alpha(k+1)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} \nonumber \\ \delta Y^{\alpha(k)\dot\alpha(k+1)} &=& i\tilde{C}_1 W_1^{\alpha(k)\beta\dot\alpha(k+1)} \zeta_\beta - i\tilde{\delta}_{1,k}^- W_1^{\alpha(k)\dot\alpha(k)} \zeta^{\dot\alpha} \\ && + \tilde{C}_2 W_2^{\alpha(k)\beta\dot\alpha(k+1)} \zeta_\beta + \tilde{\delta}_{2,k}^- W_2^{\alpha(k)\dot\alpha(k)} \zeta^{\dot\alpha} \nonumber\end{aligned}$$ where $$\begin{aligned} \tilde{\delta}_{1,k}^- &=& \frac{1}{(k+1)} [\tilde{m} - (k+1)\lambda] \tilde{C}_1 \nonumber \\ \tilde{\delta}_{2,k}^- &=& \frac{1}{(k+1)} [\tilde{m} + (k+1)\lambda] \tilde{C}_2\end{aligned}$$ For the supertransformations of the two scalars we have: $$\begin{aligned} \delta W_1^{\alpha(k)\dot\alpha(k)} &=& C_1 Y^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta + \delta_{1,k}^- Y^{\alpha(k-1)\dot\alpha(k)} \zeta^\alpha \nonumber \\ && - C_1 Y^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} - \delta_{1,k}^- Y^{\alpha(k)\dot\alpha(k-1)} \zeta^{\dot\alpha} \nonumber \\ \delta W_2^{\alpha(k)\dot\alpha(k)} &=& iC_2 Y^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta +
i\delta_{2,k}^- Y^{\alpha(k-1)\dot\alpha(k)} \zeta^\alpha \\ && + iC_2 Y^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} + i\delta_{2,k}^- Y^{\alpha(k)\dot\alpha(k-1)} \zeta^{\dot\alpha} \nonumber\end{aligned}$$ where $$\begin{aligned} \delta_{1,k}^- &=& - \frac{1}{(k+1)} [\tilde{m} + (k+1)\lambda] C_1 \nonumber \\ \delta_{2,k}^- &=& - \frac{1}{(k+1)} [\tilde{m} - (k+1)\lambda] C_2\end{aligned}$$ So we have four (real) arbitrary parameters $C_{1,2}$ and $\tilde{C}_{1,2}$. We proceed with calculations of the commutators. For the first scalar field we find: $$\begin{aligned} \ [\delta_1, \delta_2] W_1^{\alpha(k)\dot\alpha(k)} &=& - 2i C_1\tilde{C}_1 [ \xi_{\beta\dot\beta} W_1^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \beta^{--}_k \xi^{\alpha\dot\alpha} W_1^{\alpha(k-1)\dot\alpha(k-1)} \nonumber \\ && \qquad \qquad + \lambda (\eta^\alpha{}_\beta W_1^{\alpha(k-1)\beta\dot\alpha(k)} + \eta^{\dot\alpha}{}_{\dot\beta} W_1^{\alpha(k)\dot\alpha(k-1)\dot\beta}) ]\end{aligned}$$ where $$\xi^{\alpha\dot\alpha} = \zeta_1^\alpha \zeta_2^{\dot\alpha} - (1 \leftrightarrow 2), \qquad \eta^{\alpha(2)} = \zeta_1^\alpha \zeta_2^\alpha - (1 \leftrightarrow 2)$$ The results for the second scalar $W_2$ are the same provided $$C_2\tilde{C}_2 = - C_1\tilde{C}_1$$ At last, for the spinor we obtain: $$\begin{aligned} \ [\delta_1, \delta_2] Y^{\alpha(k+1)\dot\alpha(k)} &=& - 2iC_1\tilde{C}_1 [ \xi_{\beta\dot\beta} Y^{\alpha(k+1)\beta\dot\alpha(k)\dot\beta} + \tilde{\beta}^{-+}_k \xi^\alpha{}_{\dot\beta} Y^{\alpha(k)\dot\alpha(k)\dot\beta} + \tilde{\beta}^{--}_k \xi^{\alpha\dot\alpha} Y^{\alpha(k)\dot\alpha(k-1)} \nonumber \\ && \qquad \qquad + \lambda (\eta^\alpha{}_\beta Y^{\alpha(k)\beta\dot\alpha(k)} + \eta^{\dot\alpha}{}_{\dot\beta} Y^{\alpha(k+1)\dot\alpha(k-1)\dot\beta}) ] \end{aligned}$$ Comparison with the initial unfolded equations shows that the supertransformations close on-shell and give $AdS_4$ superalgebra: $$\{ Q^\alpha, Q^{\dot\alpha} \} \sim P^{\alpha\dot\alpha}, \qquad \{ 
Q^\alpha, Q^\beta \} \sim \lambda M^{\alpha\beta}, \qquad \{ Q^{\dot\alpha}, Q^{\dot\beta} \} \sim \lambda M^{\dot\alpha\dot\beta}$$ Vector supermultiplet --------------------- Let us turn to our second example — the vector supermultiplet. We begin with the vector-spinor pair.\ [**Supertransformations for vector**]{} The most general ansatz (taking into account the hermiticity conditions) has the form: $$\begin{aligned} \delta W^{\alpha(k+1)\dot\alpha(k-1)} &=& \delta^{-0}_{k,1} Y^{\alpha(k)\dot\alpha(k-1)} \zeta^\alpha - (\delta_{k,1}^{+0})^* Y^{\alpha(k+1)\dot\alpha(k-1)\dot\beta} \zeta_{\dot\beta} \nonumber \\ \delta W^{\alpha(k)\dot\alpha(k)} &=& \delta^{+0}_{k,0} Y^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta + \delta^{-0}_{k,0} Y^{\alpha(k-1)\dot\alpha(k)} \zeta^\alpha \nonumber \\ && - (\delta^{+0}_{k,0})^* Y^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} - (\delta^{-0}_{k,0})^* Y^{\alpha(k)\dot\alpha(k-1)} \zeta^{\dot\alpha} \label{eqv1} \\ \delta W^{\alpha(k-1)\dot\alpha(k+1)} &=& \delta^{+0}_{k,1} Y^{\alpha(k-1)\beta\dot\alpha(k+1)} \zeta_\beta - (\delta^{-0}_{k,1})^* Y^{\alpha(k-1)\dot\alpha(k)} \zeta^{\dot\alpha} \nonumber\end{aligned}$$ where all the coefficients are in general complex.
The invariance of the unfolded equations gives: $$\begin{aligned} \delta^{+0}_{k,1} &=& 2\epsilon(\tilde{m}\pm\lambda)C, \qquad \delta^{+0}_{k,0} = C, \qquad C^* = \mp \epsilon C \nonumber \\ \delta^{-0}_{k,1} &=& \frac{2}{(k+1)}[\tilde{m}\pm\lambda] [\tilde{m}\pm(k+1)\lambda] C \label{eqv2} \\ \delta^{-0}_{k,0} &=& \epsilon \frac{(k+2)}{k(k+1)} [\tilde{m}\pm(k+1)\lambda] C \nonumber\end{aligned}$$ [**Supertransformations for spinor**]{} Similarly, we introduce: $$\begin{aligned} \delta Y^{\alpha(k+1)\dot\alpha(k)} &=& \tilde{\delta}^{+0}_{k,1} W^{\alpha(k+1)\beta\dot\alpha(k)} \zeta_\beta + \tilde{\delta}^{-0}_{k,1} W^{\alpha(k)\dot\alpha(k)} \zeta^\alpha \nonumber \\ && + (\tilde{\delta}^{+0}_{k,0})^* W^{\alpha(k+1)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} + (\tilde{\delta}^{-0}_{k,0})^* W^{\alpha(k+1)\dot\alpha(k-1)} \zeta^{\dot\alpha} \nonumber \\ \delta Y^{\alpha(k)\dot\alpha(k+1)} &=& \tilde{\delta}^{+0}_{k,0} W^{\alpha(k)\beta\dot\alpha(k+1)} \zeta_\beta + \tilde{\delta}^{-0}_{k,0} W^{\alpha(k-1)\dot\alpha(k+1)} \zeta^\alpha \label{eqs21} \\ && + (\tilde{\delta}^{+0}_{k,1})^* W^{\alpha(k)\dot\alpha(k+1)\dot\beta} \zeta_{\dot\beta} + (\tilde{\delta}^{-0}_{k,1})^* W^{\alpha(k)\dot\alpha(k)} \zeta^{\dot\alpha} \nonumber\end{aligned}$$ and obtain: $$\begin{aligned} \tilde{\delta}^{+0}_{k,1} &=& \tilde{C}, \qquad \tilde{\delta}^{+0}_{k,0} = \epsilon m_1 \tilde{C}, \qquad \tilde{C}^* = \pm \epsilon \tilde{C} \nonumber \\ \tilde{\delta}^{-0}_{k,1} &=& - \frac{k}{(k+1)(k+2)} \tilde{m}[\tilde{m}\mp(k+1)\lambda]\tilde{C} \label{eqs22} \\ \tilde{\delta}^{-0}_{k,0} &=& - \epsilon \frac{1}{(k+1)} [\tilde{m}\mp(k+1)\lambda]\tilde{C} \nonumber\end{aligned}$$ It is straightforward to check that these supertransformations do not close and the reason is again that we have three physical degrees of freedom for the massive vector and only two — for spinor. So we turn to the complete vector supermultiplet $(1,1/2,1/2,0')$. 
In this case it is also important that the spin 1 and spin 0 have opposite parities. We assume that the coefficients for the vector field supertransformations are real and choose: $$m_v{}^2 = m_1(m_1+\lambda) = m_2(m_2-\lambda) = m_s{}^2, \qquad \epsilon_1 = - 1, \qquad \epsilon_2 = + 1$$ where $m_{1,2}$ are masses of the two spinors. This leads to the following expressions for the four possible boson-fermion pairs. For the vector and first spinor we have formulas (\[eqv1\]),(\[eqv2\]) with the parameter $C_1$ and (\[eqs21\]),(\[eqs22\]) with the parameter $i\tilde{C}_1$ (all with upper signs), while for the second spinor — the same formulas but with the parameters $C_2$, $i\tilde{C}_2$ (with lower signs). Similarly, for the first spinor and the pseudo-scalar we have formulas (\[eqs11\]),(\[eqs12\]) with the parameter $iC_3$ and (\[eqp1\]),(\[eqp2\]) with the parameter $\tilde{C}_3$ (with upper signs), while for the second spinor — the same with the parameters $iC_4$, $\tilde{C}_4$ (with lower signs). So we have eight (real) parameters $C_{1-4}$, $\tilde{C}_{1-4}$. Let us consider the commutators for these supertransformations. Note that all subsequent formulas are given up to the common multiplier $-2i(m_1+m_2)C_1\tilde{C}_1$. 
The closure of the superalgebra on the vector field requires: $$C_1\tilde{C}_1 + C_2\tilde{C}_2 = 0, \qquad m_2C_1\tilde{C}_3 + m_1C_2\tilde{C}_4 = 0$$ In this case we obtain: $$\begin{aligned} \ [\delta_1, \delta_2] W^{\alpha(k+1)\dot\alpha(k-1)} &\sim& \xi_{\beta\dot\beta} W^{\alpha(k+1)\beta\dot\alpha(k-1)\dot\beta} + \beta^{-+}_{k,1} \xi^\alpha{}_{\dot\beta} W^{\alpha(k)\dot\alpha(k-1)\dot\beta} + \beta^{--}_{k,1} \xi^{\alpha\dot\alpha} W^{\alpha(k)\dot\alpha(k-2)} \nonumber \\ && + \lambda [ \eta^\alpha{}_\beta W^{\alpha(k)\beta\dot\alpha(k-1)} + \eta^{\dot\alpha}{}_{\dot\beta} W^{\alpha(k+1)\dot\alpha(k-2)\dot\beta}] \\ \ [\delta_1, \delta_2] W^{\alpha(k)\dot\alpha(k)} &\sim& \xi_{\beta\dot\beta} W^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \beta^{-+}_{k,0} \xi^\alpha{}_{\dot\beta} W^{\alpha(k-1)\dot\alpha(k)\dot\beta} \nonumber \\ && + \beta^{+-}_{k,0} \xi_\beta{}^{\dot\alpha} W^{\alpha(k)\beta\dot\alpha(k-1)} + \beta^{--}_{k,0} \xi^{\alpha\dot\alpha} W^{\alpha(k-1)\dot\alpha(k-1)} \nonumber \\ && + \lambda [\eta^\alpha{}_\beta W^{\alpha(k-1)\beta\dot\alpha(k-1)} + \eta^{\dot\alpha}{}_{\dot\beta} W^{\alpha(k)\dot\alpha(k-1)\dot\beta} ] \end{aligned}$$ For the first spinor we obtain the conditions $$m_1C_1\tilde{C}_1 + C_3\tilde{C}_3 = 0, \qquad m_1C_2\tilde{C}_1 + C_4\tilde{C}_3 = 0$$ leading to $$\begin{aligned} \ [\delta_1, \delta_2] Y^{\alpha(k+1)\dot\alpha(k)} &\sim& \xi_{\beta\dot\beta} Y^{\alpha(k+1)\beta\dot\alpha(k)\dot\beta} + \gamma^{-+}_k \xi^\alpha{}_{\dot\beta} Y^{\alpha(k)\dot\alpha(k)\dot\beta} + \gamma^{--}_k \xi^{\alpha\dot\alpha} Y^{\alpha(k)\dot\alpha(k-1)} \nonumber \\ && + \lambda [\eta^\alpha{}_\beta Y^{\alpha(k)\beta\dot\alpha(k)} + \eta^{\dot\alpha}{}_{\dot\beta} Y^{\alpha(k+1)\dot\alpha(k-1)\dot\beta} ]\end{aligned}$$ The results for the second spinor are the same provided $$m_2C_2\tilde{C}_2 + C_4\tilde{C}_4 = 0, \qquad m_2C_1\tilde{C}_2 + C_3\tilde{C}_4 = 0$$ At last the commutator on the pseudo-scalar closes if $$C_3\tilde{C}_1 + 
C_4\tilde{C}_2 = 0$$ and gives $$\begin{aligned} \ [\delta_1, \delta_2] \tilde{W}^{\alpha(k)\dot\alpha(k)} &\sim& \xi_{\beta\dot\beta} \tilde{W}^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \beta^{--}_k \xi^{\alpha\dot\alpha} \tilde{W}^{\alpha(k-1)\dot\alpha(k-1)} \nonumber \\ && + \lambda [\eta^\alpha{}_\beta \tilde{W}^{\alpha(k-1)\beta\dot\alpha(k)} + \eta^{\dot\alpha}{}_{\dot\beta} \tilde{W}^{\alpha(k)\dot\alpha(k-1)\dot\beta} ]\end{aligned}$$ Thus we indeed obtain the correct on-shell superalgebra provided a number of relations on the parameters hold. It is easy to check that these relations are consistent, one of the possible simple solutions being $$C_2 = C_3 = C_4 = C_1, \qquad \tilde{C}_2 = - \tilde{C}_1, \qquad \tilde{C}_3 = - m_1\tilde{C}_1, \qquad \tilde{C}_4 = m_2\tilde{C}_1$$ Massive higher spin supermultiplets =================================== The Lagrangian formulation for the massive higher spin $N=1$ supermultiplets in $AdS_4$ was developed in [@BKhSZ19]. In this section we consider an unfolded formulation for these supermultiplets. First of all we recall the unfolded equations for massive bosonic and fermionic fields developed in [@KhZ19]. Then we consider the pairs of bosonic and fermionic fields which differ in spin by $\iz$ (we call such a pair a superblock) and construct the supertransformations transforming the bosonic equations into the fermionic ones and vice versa. Finally, we consider two types of massive supermultiplets (with integer and half-integer superspins) and adjust the parameters of their four superblocks so that the superalgebra is closed. Unfolded equations ------------------ Let us recall the unfolded equations developed in [@KhZ19].
### Bosonic case To describe a massive spin-$s$ boson, one needs gauge one-forms (physical, auxiliary and extra) $\Omega^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le k \le s-1$, Stueckelberg zero-forms $W^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le k \le s-1$, and gauge invariant zero-forms $W^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le s \le k$. We use a convenient normalization of the Stueckelberg zero-forms where their transformations are just shifts: $$\delta W^{\alpha(k+m)\dot\alpha(k-m)} = \eta^{\alpha(k+m)\dot\alpha(k-m)}$$ As for the gauge one-forms, their gauge transformations: $$\begin{aligned} \label{gauge_trans1} \delta\Omega^{\alpha(k+m){\dot{\alpha}}(k-m)} &=& D\eta^{\alpha(k+m)\dot\alpha(k-m)} + \alpha^{--}_{k,m} e^{\alpha\dot\alpha} \eta^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \alpha^{++}_{k,m} e_{\beta\dot\beta} \eta^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \alpha^{-+}_{k,m} e^\alpha{}_{\dot\beta} \eta^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \nonumber \\ && + \alpha^{+-}_{k,m} e_\beta{}^{\dot\alpha} \eta^{\alpha(k+m)\beta\dot\alpha(k-m-1)}, \\ \delta\Omega^{\alpha(2k)} &=& D\eta^{\alpha(2k)} + \alpha^{++}_{k,k} e_{\beta\dot\alpha} \eta^{\alpha(2k)\beta\dot\alpha} + \alpha^{-+}_{k,k} e^\alpha{}_{\dot\alpha} \eta^{\alpha(2k-1)\dot\alpha}, \nonumber\end{aligned}$$ are the modification of the massless ones by the cross terms with the coefficients $\alpha^{+-}$, $\alpha^{-+}$. 
In what follows we assume that all functions $\alpha$ are real and satisfy the hermiticity conditions: $$\alpha^{+-}_{k,m} = \alpha^{-+}_{k,-m}, \qquad \alpha^{++}_{k,m} = \alpha^{++}_{k,-m}, \qquad \alpha^{--}_{k,m} = \alpha^{--}_{k,-m}$$ All these functions can be expressed in terms of the main one $\alpha^{-+}_m$: $$\begin{aligned} \alpha^{-+}_{k,m} &=& \frac{\alpha^{-+}_{m}} {(k-m+1)(k-m+2)(k+m)(k+m+1)}, \quad m>0,\nonumber \\ \alpha^{++}_{k,m} &=& \frac{\alpha^{++}_{k}}{(k-m+1)(k-m+2)}, \quad m\ge 0, \nonumber\\ \alpha^{--}_{k,m} &=& \frac{\alpha^{--}_{k}}{(k+m)(k+m+1)}, \quad m\ge 0, \\ \alpha^{+-}_{k,m} &=& 1, \quad m \ge 0 \nonumber \\ \alpha^{++}_k{}^2 &=& k(k+1)\alpha^{-+}_{k+2}, \quad k\ge 2, \quad \alpha^{++}_{0}{}^2 = 2\alpha^{-+}_{2}, \quad \alpha^{--}_{k}{}^2 = \frac{\alpha^{-+}_{k+1}}{k(k-1)} \nonumber \end{aligned}$$ For the massive spin-$s$ boson considered in this subsection, the function $\alpha^{-+}_{m}$ is: $$\alpha^{-+}_{m} = (s-m+1)(s+m)[M^2-m(m-1)\lambda^2]$$ As is well known, in flat space the masses of all the members of the same supermultiplet must be equal. As was shown in [@BKhSZ19], in the $AdS_4$ case the bosonic $M$ and fermionic $\tilde{M}$ mass parameters must satisfy the relation $M^2 = \tilde{M}[\tilde{M} \pm \lambda]$. In this case the function $\alpha^{-+}_m$ takes the form: $$\alpha^{-+}_m = (s-m+1)(s+m)[\tilde{M} \pm m\lambda] [\tilde{M} \mp (m-1)\lambda]$$ and this factorization appears to be crucial for the construction of the superblocks and hence the supermultiplets.
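As in the low spin examples, the factorization of $\alpha^{-+}_m$ under $M^2 = \tilde{M}(\tilde{M}\pm\lambda)$ is elementary algebra; a numerical sketch of ours (with `Mt` standing for $\tilde{M}$ and `spin` for $s$):

```python
# Check of the factorization of alpha^{-+}_m quoted above: with
#   M^2 = Mt*(Mt + s*lam),  s = +/-1,
# one has, including the (spin-m+1)(spin+m) prefactor,
#   (spin-m+1)(spin+m)*[M^2 - m*(m-1)*lam**2]
#     == (spin-m+1)(spin+m)*[Mt + s*m*lam]*[Mt - s*(m-1)*lam].
# This is a polynomial identity, so an integer grid check suffices.
from itertools import product

for sign in (+1, -1):
    for Mt, lam, m, spin in product(range(-4, 5), range(-4, 5),
                                    range(-4, 5), range(1, 5)):
        pre = (spin - m + 1) * (spin + m)
        lhs = pre * (Mt * (Mt + sign * lam) - m * (m - 1) * lam**2)
        rhs = pre * (Mt + sign * m * lam) * (Mt - sign * (m - 1) * lam)
        assert lhs == rhs
```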
The explicit expressions for all the $\alpha$-functions given above were found in [@KhZ19] in the construction of the gauge invariant self-consistent two-forms (curvatures) for each gauge one-form $(0 \le m < k)$: $$\begin{aligned} \label{curvatures1} \mathcal{R}^{\alpha(k+m)\dot\alpha(k-m)} &=& D\Omega^{{\alpha(k+m)\dot\alpha(k-m)}} + \alpha^{--}_{k,m} e^{\alpha\dot\alpha} \Omega^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \alpha^{++}_{k,m} e_{\beta\dot\beta} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \alpha^{-+}_{k,m} e^\alpha{}_{\dot\beta} \Omega^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \nonumber \\ && + \alpha^{+-}_{k,m} e_\beta{}^{\dot\alpha} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m-1)}, \nonumber \\ \mathcal{R}^{\alpha(2k)} &=& D\Omega^{\alpha(2k)} + \alpha^{++}_{k,k} e_{\beta\dot\alpha} \Omega^{\alpha(2k)\beta\dot\alpha} + \alpha^{-+}_{k,k} e^\alpha{}_{\dot\alpha} \Omega^{\alpha(2k-1)\dot\alpha} \\ && - 2\alpha^{-+}_{k,k}\alpha^{--}_{k} E^{\alpha(2)} W^{\alpha(2k-2)} - 2\alpha^{++}_{k,k} E_{\beta(2)} W^{\alpha(2k)\beta(2)} \nonumber \\ && - \frac{\alpha^{-+}_{k+1}}{k+1} E^\alpha{}_\beta W^{\alpha(2k-1)\beta} \nonumber \\ \mathcal{R} &=& D\Omega + \alpha^{++}_{0,0} e_{\alpha\dot\alpha} \Omega^{\alpha\dot\alpha} -2\alpha^{++}_{0,0} E_{\alpha(2)} W^{\alpha(2)} -2\alpha^{++}_{0,0} E_{\dot\alpha(2)} W^{\dot\alpha(2)}.
\nonumber\end{aligned}$$ Due to the simple normalization for the Stueckelberg zero-forms we use, their gauge invariant one-forms are determined by the same $\alpha$-functions: $$\begin{aligned} \mathcal{C}^{\alpha(k+m)\dot\alpha(k-m)} &=& DW^{\alpha(k+m)\dot\alpha(k-m)} - \Omega^{\alpha(k+m)\dot\alpha(k-m)} + \alpha^{--}_{k,m} e^{\alpha\dot\alpha} W^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \alpha^{++}_{k,m} e_{\beta\dot\beta} W^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \alpha^{+-}_{k,m} e_\beta{}^{\dot\alpha} W^{\alpha(k+m)\beta\dot\alpha(k-m-1)} \nonumber \\ && + \alpha^{-+}_{k,m} e^\alpha{}_{\dot\beta} W^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta}. \label{curvatures2} \end{aligned}$$ Now we are ready to present a set of unfolded equations. The whole system can be subdivided into three subsystems. The first subsystem is just the zero curvature conditions for most of the gauge invariant two- and one-forms (except some highest ones, see below): $$\begin{aligned} \label{unf_eq1} 0 &=& \mathcal{R}^{\alpha(k+m)\dot\alpha(k-m)}, \qquad \qquad k<s-1 \nonumber \\ 0 &=& \mathcal{R}^{\alpha(s-1+m)\dot\alpha(s-1-m)}, \qquad |m|<s-1 \\ 0 &=& \mathcal{C}^{\alpha(k+m)\dot\alpha(k-m)}, \qquad \qquad k<s-1 \nonumber\end{aligned}$$ The second one contains the remaining gauge invariant curvatures and gives a connection with the sector of the gauge invariant zero-forms: $$\begin{aligned} \label{unf_eq2} 0 &=& \mathcal{R}^{\alpha(2s-2)}+ 2E_{\beta(2)} W^{\alpha(2s-2)\beta(2)} \nonumber \\ 0 &=& \mathcal{C}^{\alpha(s-1+m)\dot\alpha(s-1-m)}- e_{\beta\dot\beta} W^{\alpha(s-1+m)\beta\dot\alpha(s-1-m)\dot\beta}\end{aligned}$$ Finally, the third one contains the gauge invariant zero-forms only.
Its structure reproduces the structure of the unfolded equations for massless components with added cross terms $(m < k)$: $$\begin{aligned} \label{unf_eq3} 0 &=& DW^{\alpha(k+m)\dot\alpha(k-m)} + \beta^{--}_{k,m} e^{\alpha\dot\alpha} W^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \beta^{++}_{k,m} e_{\beta\dot\beta} W^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \beta^{+-}_{k,m} e_\beta{}^{\dot\alpha} W^{\alpha(k+m)\beta\dot\alpha(k-m-1)} \nonumber \\ && + \beta^{-+}_{k,m} e^\alpha{}_{\dot\beta} W^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta}, \\ 0 &=& D W^{\alpha(2k)} + \beta^{++}_{k,k} e_{\beta\dot\alpha} W^{\alpha(2k)\beta\dot\alpha} + \beta^{-+}_{k,k} e^\alpha{}_{\dot\alpha} W^{\alpha(2k-1)\dot\alpha} \nonumber\end{aligned}$$ Here we also assume that all the functions $\beta$ are real and satisfy the hermiticity conditions: $$\beta^{+-}_{k,m} = \beta^{-+}_{k,-m}, \qquad \beta^{++}_{k,m} = \beta^{++}_{k,-m}, \qquad \beta^{--}_{k,m} = \beta^{--}_{k,-m}.$$ The coefficients $\beta^{ij}_{k,m}$ are determined by the self-consistency of these equations (taking into account their connection with the gauge sector). 
It appears that all of them can be expressed via the very same function $\alpha^{-+}_{m}$: $$\begin{aligned} \beta^{-+}_{k,m} &=& \frac{\beta^{-+}_{m}}{(k+m)(k+m+1)}, \quad m\ge 0, \nonumber \\ \beta^{+-}_{k,m} &=& \frac{\beta^{+-}_{m}}{(k-m)(k-m+1)}, \quad m\ge 0, \nonumber \\ \beta^{--}_{k,m} &=& \frac{\alpha^{-+}_{k+1}} {(k+m)(k+m+1)(k-m)(k-m+1)}, \quad k>s, \quad \beta^{--}_{s,m}=0, \nonumber\\ \beta^{-+}_{m} &=& \frac{\alpha^{-+}_{m}}{(s-m)(s-m+1)}, \quad 1\le m<s, \qquad \beta_{s}^{-+} = \frac{\alpha^{-+}_s}{2}, \nonumber \\ \beta^{+-}_{m} &=& (s-m-1)(s-m), \quad 0\le m<s-1, \qquad \beta^{+-}_{s-1} = 2, \nonumber\end{aligned}$$ ### Fermionic case Similarly to the massive boson, to describe a massive spin-$\tilde{s}=s+\iz$ fermion, one needs one-forms (physical and extra ones) $\Psi^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le k \le \tilde{s}-1$, Stueckelberg zero-forms $Y^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le k \le \tilde{s}-1$, and gauge invariant zero-forms $Y^{\alpha(k+m)\dot\alpha(k-m)}$, $|m|\le \tilde{s} \le k$; the indices $k,m$ are half-integers now. The ansatz for gauge transformations and gauge invariant curvatures for the fermions has the same form as the corresponding expressions for bosons; but the coefficients $\tilde{\alpha}^{ij}_{k,m}$ are different from the corresponding bosonic ones. 
The gauge transformations are: $$\begin{aligned} \label{gauge_trans1a} \delta\Psi^{\alpha(k+m){\dot{\alpha}}(k-m)} &=& D\eta^{\alpha(k+m)\dot\alpha(k-m)} + \tilde{\alpha}^{--}_{k,m} e^{\alpha\dot\alpha} \eta^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \tilde{\alpha}^{++}_{k,m} e_{\beta\dot\beta} \eta^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \tilde{\alpha}^{-+}_{k,m} e^\alpha{}_{\dot\beta} \eta^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \nonumber \\ && + \tilde{\alpha}^{+-}_{k,m}e_\beta{}^{\dot\alpha} \eta^{\alpha(k+m)\beta\dot\alpha(k-m-1)}, \\ \delta\Psi^{\alpha(2k)} &=& D\eta^{\alpha(2k)} + \tilde{\alpha}^{++}_{k,k} e_{\beta\dot\alpha} \eta^{\alpha(2k)\beta\dot\alpha} + \tilde{\alpha}^{-+}_{k,k} e^\alpha{}_{\dot\alpha} \eta^{\alpha(2k-1)\dot\alpha}, \nonumber \\ \delta Y^{\alpha(k+m)\dot\alpha(k-m)} &=& \eta^{\alpha(k+m)\dot\alpha(k-m)}, \nonumber\end{aligned}$$ where all the functions $\tilde{\alpha}$ are assumed to be real and satisfy the hermiticity conditions: $$\tilde{\alpha}^{+-}_{k,m} = \tilde{\alpha}^{-+}_{k,-m}, \qquad \tilde{\alpha}^{++}_{k,m} = \tilde{\alpha}^{++}_{k,-m}, \qquad \tilde{\alpha}^{--}_{k,m} = \tilde{\alpha}^{--}_{k,-m}$$ All of them also can be expressed in terms of one main function $\tilde{\alpha}^{-+}_m$: $$\begin{aligned} \tilde{\alpha}^{-+}_{k,m} &=& \frac{\tilde{\alpha}^{-+}_{m}} {(k-m+1)(k-m+2)(k+m)(k+m+1)}, \quad m>\iz, \nonumber \\ \tilde{\alpha}^{-+}_{k,\iz} &=& \frac{\epsilon\sqrt{\tilde{\alpha}^{-+}_{\iz}} } {(k+\iz)(k+\tz)}, \nonumber \\ \tilde{\alpha}^{+-}_{k,m} &=& 1, \quad m\ge \iz, \nonumber \\ \tilde{\alpha}^{++}_{k,m} &=& \frac{\tilde{\alpha}^{++}_{k}}{(k-m+1)(k-m+2)}, \quad m\ge \iz, \\ \tilde{\alpha}^{--}_{k,m} &=& \frac{\tilde{\alpha}^{--}_{k}}{(k+m)(k+m+1)}, \quad m\ge \iz, \nonumber \\ \tilde{\alpha}^{++}_k{}^2 &=& (k+\iz)^2\tilde{\alpha}^{-+}_{k+2}, \qquad \tilde{\alpha}^{--}_{k}{}^2 = \frac{\tilde{\alpha}^{-+}_{k+1}}{(k-\iz)^2}, \nonumber \end{aligned}$$ Here, the function $\tilde{\alpha}^{-+}_{m}$ is: $$\begin{aligned}
\tilde{\alpha}^{-+}_{m}&=(\tilde{s}-m+1)(\tilde{s}+m)\left(\tilde{M}^2-(m-\iz)^2\lambda^2\right)\end{aligned}$$ In particular, setting $m=\iz$ gives $\tilde{\alpha}^{-+}_{\iz}=(\tilde{s}+\iz)^2\tilde{M}^2$, so that $\sqrt{\tilde{\alpha}^{-+}_{\iz}}=(\tilde{s}+\iz)\tilde{M}$. One of the essential differences between bosons and fermions is that the mass-like terms for bosons are proportional to $M^2$, while those for fermions are proportional to $\tilde{M}$. As was shown in [@BKhSZ19], the sign of the fermionic mass-like term plays an important role in the construction of the supermultiplets: the signs for the two fermions entering the supermultiplet must be opposite. Thus in the expressions given above we introduced the parameter $\epsilon=\pm 1$ corresponding to the choice of the sign of the mass-like terms, while we always assume that the parameters $M$ and $\tilde{M}$ are positive. As in the bosonic case, for each gauge one-form one can construct a gauge invariant two-form (curvature) $(0 \le m < k)$: $$\begin{aligned} \label{curvatures1a} \mathcal{F}^{\alpha(k+m)\dot\alpha(k-m)} &=& D\Psi^{{\alpha(k+m)\dot\alpha(k-m)}} + \tilde{\alpha}^{--}_{k,m} e^{\alpha\dot\alpha} \Psi^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \tilde{\alpha}^{++}_{k,m} e_{\beta\dot\beta} \Psi^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \tilde{\alpha}^{-+}_{k,m} e^\alpha{}_{\dot\beta} \Psi^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \nonumber \\ && + \tilde{\alpha}^{+-}_{k,m} e_\beta{}^{\dot\alpha} \Psi^{\alpha(k+m)\beta\dot\alpha(k-m-1)}, \\ \mathcal{F}^{\alpha(2k)} &=& D\Psi^{\alpha(2k)} + \tilde{\alpha}^{++}_{k,k} e_{\beta\dot\alpha} \Psi^{\alpha(2k)\beta\dot\alpha} + \tilde{\alpha}^{-+}_{k,k} e^\alpha{}_{\dot\alpha} \Psi^{\alpha(2k-1)\dot\alpha} \nonumber \\ && - 2\tilde{\alpha}^{-+}_{k,k}\tilde{\alpha}^{--}_{k,k-1} E^{\alpha(2)} Y^{\alpha(2k-2)} - 2\tilde{\alpha}^{++}_{k,k} E_{\beta(2)} Y^{\alpha(2k)\beta(2)} \nonumber \\ && - \frac{\tilde{\alpha}^{-+}_{k+1}}{k+1} E^\alpha{}_\beta Y^{\alpha(2k-1)\beta}, \nonumber\end{aligned}$$ as well as a gauge invariant one-form for each Stueckelberg zero-form:
$$\begin{aligned} \mathcal{D}^{\alpha(k+m)\dot\alpha(k-m)} &=& DY^{\alpha(k+m)\dot\alpha(k-m)} - \Psi^{\alpha(k+m)\dot\alpha(k-m)} + \tilde{\alpha}^{--}_{k,m} e^{\alpha\dot\alpha} Y^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + \tilde{\alpha}^{++}_{k,m} e_{\beta\dot\beta} Y^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \tilde{\alpha}^{+-}_{k,m} e_\beta{}^{\dot\alpha} Y^{\alpha(k+m)\beta\dot\alpha(k-m-1)} \nonumber \\ && + \tilde{\alpha}^{-+}_{k,m} e^\alpha{}_{\dot\beta} Y^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta}. \label{curvatures2a}\end{aligned}$$ Now let us consider the set of unfolded equations. Here the whole system can also be subdivided into three subsystems. The first subsystem is just the zero curvature conditions for most of the gauge invariant two- and one-forms: $$\begin{aligned} \label{unf_eq1a} 0 &=& \mathcal{F}^{\alpha(k+m)\dot\alpha(k-m)}, \qquad\qquad k<\tilde{s}-1 \nonumber \\ 0 &=& \mathcal{F}^{\alpha(\tilde{s}-1+m)\dot\alpha(\tilde{s}-1-m)}, \qquad |m|<\tilde{s}-1 \\ 0 &=& \mathcal{D}^{\alpha(k+m)\dot\alpha(k-m)}, \qquad\qquad k<\tilde{s}-1 \nonumber\end{aligned}$$ The second one contains the remaining gauge invariant curvatures and provides the connection with the sector of the gauge invariant zero-forms: $$\begin{aligned} \label{unf_eq2a} 0 &=& \mathcal{F}^{\alpha(2\tilde{s}-2)} + E_{\beta(2)} Y^{\alpha(2\tilde{s}-2)\beta(2)} \nonumber \\ 0 &=& \mathcal{D}^{\alpha(\tilde{s}-1+m)\dot\alpha(\tilde{s}-1-m)} - e_{\beta\dot\beta} Y^{\alpha(\tilde{s}-1+m)\beta\dot\alpha(\tilde{s}-1-m)\dot\beta}\end{aligned}$$ Finally, the third one contains the gauge invariant zero-forms only.
Its structure reproduces the structure of the unfolded equations for massless components with added cross terms $(m < k)$: $$\begin{aligned} \label{unf_eq3a} 0 &=& DY^{\alpha(k+m)\dot\alpha(k-m)} + \tilde{\beta}^{--}_{k,m} e^{\alpha\dot\alpha} Y^{\alpha(k+m-1)\dot\alpha(k-m-1)} \nonumber \\ && + e_{\beta\dot\beta} Y^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} + \tilde{\beta}^{+-}_{k,m} e_\beta{}^{\dot\alpha} Y^{\alpha(k+m)\beta\dot\alpha(k-m-1)} \nonumber \\ && + \tilde{\beta}^{-+}_{k,m} e^\alpha{}_{\dot\beta} Y^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \\ 0 &=& D Y^{\alpha(2k)} + e_{\beta\dot\alpha} Y^{\alpha(2k)\beta\dot\alpha} + \tilde{\beta}^{-+}_{k,k} e^\alpha{}_{\dot\alpha} Y^{\alpha(2k-1)\dot\alpha} \nonumber\end{aligned}$$ The coefficients $\tilde{\beta}^{ij}_{k,m}$ (which are assumed to be real and to satisfy hermiticity conditions similar to those for $\tilde{\alpha}$) are determined by the self-consistency of these equations (taking into account the connection with the gauge sector). They resemble the corresponding bosonic coefficients, the most significant difference being the behavior of some of the coefficients at $m=\pm\iz$.
As in the bosonic case, they all can be expressed via the same main function $\tilde{\alpha}^{-+}_m$: $$\begin{aligned} \tilde{\beta}^{-+}_{k,m} &=& \frac{\tilde{\beta}^{-+}_{m}}{(k+m)(k+m+1)}, \quad m\ge \iz, \nonumber \\ \tilde{\beta}^{+-}_{k,m} &=& \frac{\tilde{\beta}^{+-}_{m}}{(k-m)(k-m+1)}, \quad m\ge \iz, \nonumber \\ \tilde{\beta}^{--}_{k,m} &=& \frac{\tilde{\alpha}^{-+}_{k+1}} {(k+m)(k+m+1)(k-m)(k-m+1)}, \quad k>\tilde{s}, \quad \tilde{\beta}^{--}_{\tilde{s},m}=0, \\ \tilde{\beta}^{-+}_{m} &=& \frac{\tilde{\alpha}^{-+}_{m}}{(\tilde{s}-m)(\tilde{s}-m+1)}, \quad \iz\le m<\tilde{s}, \qquad \tilde{\beta}_{\iz}^{-+} = \epsilon\sqrt{\tilde{\alpha}^{-+}_{\iz}}, \quad \tilde{\beta}_{\tilde{s}}^{-+} = \frac{\tilde{\alpha}^{-+}_{\tilde{s}}}{2}, \nonumber \\ \tilde{\beta}^{+-}_{m} &=& (\tilde{s}-m-1)(\tilde{s}-m), \quad \iz\le m<\tilde{s}-1, \qquad \tilde{\beta}^{+-}_{\tilde{s}-1} = 2, \nonumber\end{aligned}$$ Superblocks ----------- Similarly to the massless supermultiplets, it is possible to construct a system of one massive higher spin boson and one fermion which is invariant under supertransformations; we call such a system a superblock. However, in contrast to the massless case, the algebra of these supertransformations is not closed. To close it, one needs four particles: two bosons and two fermions [@BKhSZ19; @BKhSZ19a; @BKhSZ19b]. Each pair of one boson and one fermion forms a superblock with its own transformations, so that each particle enters two such superblocks. Moreover, it is possible to adjust the parameters of these superblocks so that the superalgebra closes. We begin with the construction of the superblocks. Naturally, supersymmetry requires that the parameters of the particles be related. First, the well-known relation $\tilde{s}=s\pm\iz$ holds for the spins of the fermion and the boson. Second, as was shown in [@BKhSZ19], the mass parameters of the particles must also be related: $M^2=\tilde{M}(\tilde{M}\pm\lambda)$.
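As a simple sanity check of this relation (our observation, not a statement from the text): since the parameters $M$ and $\tilde{M}$ are both taken positive, the flat limit reproduces the usual mass degeneracy within a supermultiplet:

```latex
% Flat limit of the superblock mass relation:
M^2 = \tilde{M}\,(\tilde{M} \pm \lambda)
\;\xrightarrow{\;\lambda \to 0\;}\;
M^2 = \tilde{M}^2
\quad\Longrightarrow\quad
M = \tilde{M} .
```

In $AdS_4$ $(\lambda \ne 0)$ the bosonic and fermionic mass parameters thus differ by a $\lambda$-dependent shift, as expected for deformed supermultiplets.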
At first, we consider the general properties of these superblocks and then provide the explicit solutions for the two possible types with $\tilde{s} = s \pm \iz$. As we have seen, the whole set of unfolded equations both for the bosons as well for the fermions can be subdivided into the three sub-sectors. It is natural to begin with the subsector of the gauge invariant zero-forms since they must form a closed subsystem under the supertransformations as well. The most general ansatz is thus: $$\begin{aligned} \label{ansatz_gi0f} \delta W^{\alpha(k+m)\dot\alpha(k-m)} &=& \delta_{k,m}^{0+} Y^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + \delta_{k,m}^{0-} Y^{\alpha(k+m)\dot\alpha(k-m-1)} \zeta^{\dot\alpha} \nonumber \\ && + \delta_{k,m}^{+0} Y^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + \delta_{k,m}^{-0} Y^{\alpha(k+m-1)\dot\alpha(k-m)} \zeta^{\alpha} \nonumber \\ \delta Y^{\alpha(k+m)\dot\alpha(k-m)} &=& {\tilde{\delta}}_{k,m}^{0+} W^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + {\tilde{\delta}}_{k,m}^{0-} W^{\alpha(k+m)\dot\alpha(k-m-1)}\zeta^{\dot\alpha} \\ && + {\tilde{\delta}}_{k,m}^{+0} W^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + {\tilde{\delta}}_{k,m}^{-0} W^{\alpha(k+m-1)\dot\alpha(k-m)} \zeta^{\alpha} \nonumber\end{aligned}$$ Here $k,m$ are integers in the first equation and half-integers in the second one. All these functions $\delta$, $\tilde{\delta}$ are in general complex and satisfy the hermiticity conditions: $$\delta^{0+}_{k,-m} = - (\delta^{+0}_{k,m})^*, \qquad \delta^{0-}_{k,-m} = - (\delta^{-0}_{k,m})^*, \qquad \tilde{\delta}^{0+}_{k,-m} = (\tilde{\delta}^{+0}_{k,m})^*, \qquad \tilde{\delta}^{0-}_{k,-m} = (\tilde{\delta}^{-0}_{k,m})^*.$$ For lower $k$, some of the fields $W$ or $Y$ on the right-hand side may turn out to be the Stueckelberg ones. 
Such terms are forbidden by gauge invariance, so we must impose the following boundary conditions depending on the type of the superblock: $$\begin{aligned} \tilde{\delta}^{-0}_{\tilde{s},m} = 0 && \tilde{s} = s - \iz \nonumber \\ \delta^{-0}_{s,m} = 0 && \tilde{s} = s + \iz\end{aligned}$$ The requirement that the gauge invariant subsector of the unfolded equations be preserved by these supertransformations leads to a number of equations on the functions $\delta$, $\tilde{\delta}$, given in the Appendix. These equations completely determine the functions up to two arbitrary constants; their explicit solutions are given in the two subsequent subsubsections. Note that the relation $M^2 = \tilde{M}[\tilde{M} \pm \lambda]$ appears already at this level. Next, we consider the supertransformations for the gauge sector. The most general ansatz for the Stueckelberg zero-forms is: $$\begin{aligned} \label{ansatz_g0f} \delta W^{\alpha(k+m)\dot\alpha(k-m)} &=& \gamma_{k,m}^{0+} Y^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + \gamma_{k,m}^{0-} Y^{\alpha(k+m)\dot\alpha(k-m-1)} \zeta^{\dot\alpha} \nonumber \\ && + \gamma_{k,m}^{+0} Y^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + \gamma_{k,m}^{-0} Y^{\alpha(k+m-1)\dot\alpha(k-m)}\zeta^{\alpha} \nonumber \\ \delta Y^{\alpha(k+m)\dot\alpha(k-m)} &=& {\tilde{\gamma}}_{k,m}^{0+}W^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + {\tilde{\gamma}}_{k,m}^{0-} W^{\alpha(k+m)\dot\alpha(k-m-1)}\zeta^{\dot\alpha} \\ && + {\tilde{\gamma}}_{k,m}^{+0} W^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + {\tilde{\gamma}}_{k,m}^{-0} W^{\alpha(k+m-1)\dot\alpha(k-m)} \zeta^{\alpha} \nonumber\end{aligned}$$ where all the functions $\gamma$, $\tilde{\gamma}$ are in general complex and satisfy hermiticity conditions similar to those for $\delta$, $\tilde{\delta}$: $$\gamma^{0+}_{k,-m} = - (\gamma^{+0}_{k,m})^*, \qquad \gamma^{0-}_{k,-m} = - (\gamma^{-0}_{k,m})^*, \qquad \tilde{\gamma}^{0+}_{k,-m} =
(\tilde{\gamma}^{+0}_{k,m})^*, \qquad \tilde{\gamma}^{0-}_{k,-m} = (\tilde{\gamma}^{-0}_{k,m})^*.$$ Most of the unfolded equations for the Stueckelberg zero-forms are just the zero-curvature conditions. Thus the invariance of these equations under the supertransformations is equivalent to the following transformations for the curvatures: $$\begin{aligned} \delta\mathcal{C}^{\alpha(k+m)\dot\alpha(k-m)} &=& \gamma_{k,m}^{0+}\mathcal{D}^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + \gamma_{k,m}^{0-} \mathcal{D}^{\alpha(k+m)\dot\alpha(k-m-1)}\zeta^{\dot\alpha} \nonumber \\ && + \gamma_{k,m}^{+0} \mathcal{D}^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + \gamma_{k,m}^{-0} \mathcal{D}^{\alpha(k+m-1)\dot\alpha(k-m)} \zeta^{\alpha} \nonumber \\ \delta\mathcal{D}^{\alpha(k+m)\dot\alpha(k-m)} &=& {\tilde{\gamma}}_{k,m}^{0+} \mathcal{C}^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + {\tilde{\gamma}}_{k,m}^{0-} \mathcal{C}^{\alpha(k+m)\dot\alpha(k-m-1)} \zeta^{\dot\alpha} \\ && + {\tilde{\gamma}}_{k,m}^{+0} \mathcal{C}^{\alpha(k+m)\beta\dot\alpha(k-m)} \zeta_{\beta} + {\tilde{\gamma}}_{k,m}^{-0} \mathcal{C}^{\alpha(k+m-1)\dot\alpha(k-m)} \zeta^{\alpha} \nonumber\end{aligned}$$ This leads to a number of equations on the functions $\gamma$, $\tilde{\gamma}$, also given in the Appendix. Their solutions determine all the functions $\gamma$, $\tilde{\gamma}$ up to two arbitrary constants. Note that the supertransformations for the Stueckelberg zero-forms can (and have to) contain gauge invariant zero-forms at the highest possible value $k=\max\{s,\tilde{s}\}$. The ansatz (\[ansatz\_g0f\]) has to be modified in a different way for each of the two types of superblocks; we present the modified ansatz in the following subsubsections. At last let us turn to the gauge one-forms. Recall that the Stueckelberg field curvatures have the general form $\mathcal{C}=DW+\Omega+\ldots$, $\mathcal{D}=DY+\Psi+\ldots$.
This fixes the supertransformations for the gauge one-forms entirely. Except for $|m|=k$, the structure and coefficients of the supertransformations for the one-forms are the same: $$\begin{aligned} \label{ansatz_g1f} \delta\Omega^{\alpha(k+m)\dot\alpha(k-m)} &=& \gamma_{k,m}^{0+}\Psi^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + \gamma_{k,m}^{0-} \Psi^{\alpha(k+m)\dot\alpha(k-m-1)} \zeta^{\dot\alpha} \nonumber \\ && + \gamma_{k,m}^{+0} \Psi^{\alpha(k+m)\beta\dot\alpha(k-m)}\zeta_{\beta} + \gamma_{k,m}^{-0} \Psi^{\alpha(k+m-1)\dot\alpha(k-m)}\zeta^{\alpha} \nonumber \\ \delta\Psi^{\alpha(k+m)\dot\alpha(k-m)} &=& {\tilde{\gamma}}_{k,m}^{0+} \Omega^{\alpha(k+m)\dot\alpha(k-m)\dot\beta} \zeta_{\dot\beta} + {\tilde{\gamma}}_{k,m}^{0-} \Omega^{\alpha(k+m)\dot\alpha(k-m-1)}\zeta^{\dot\alpha} \\ && + {\tilde{\gamma}}_{k,m}^{+0} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m)} \zeta_{\beta} + {\tilde{\gamma}}_{k,m}^{-0} \Omega^{\alpha(k+m-1)\dot\alpha(k-m)}\zeta^{\alpha} \nonumber\end{aligned}$$ The supertransformations for the one-forms $\Omega^{\alpha(2k)}$, $\Psi^{\alpha(2k+1)}$ must contain terms with zero-forms (both the Stueckelberg and the gauge invariant ones). Now we consider the two cases $\tilde{s}=s\pm\iz$. ### Superblock $\tilde{s} = s - \iz$ We begin with the ansatz (\[ansatz\_gi0f\]) for the gauge invariant zero-forms. The gauge invariant sector of the unfolded system is preserved under the conditions given in the Appendix (\[superblock\_eqs1\]).
Those conditions require that $M^2=\tilde{M}(\tilde{M}\pm\lambda)$; the explicit expressions for the coefficients $\delta^{ij}_{k,m}$ are $(m \ge 0)$: $$\begin{aligned} \delta^{+0}_{k,m} &=& (s-m)(s-m-1)C_b, \nonumber \\ \delta^{0-}_{k,m} &=& \pm \frac{(k+s+1)(\tilde{M}\pm(k+1)\lambda)}{(k-m)(k-m+1)} \delta^{+0}_{k,m}, \nonumber \\ \delta^{0+}_{k,m} &=& \pm (s+m)(\tilde{M}\pm m\lambda)C_b, \quad m > 0, \\ \delta^{0+}_{k,0} &=& \pm \epsilon s(s-1) C_b \nonumber \\ \delta^{-0}_{k,m} &=& \pm \frac{(k+s+1)(\tilde{M}\pm(k+1)\lambda)}{(k+m)(k+m+1)} \delta^{0+}_{k,m}, \nonumber \end{aligned}$$ while those for the functions $\tilde{\delta}^{ij}_{k,m}$ are $(m \ge \iz)$: $$\begin{aligned} \tilde{\delta}^{+0}_{k+\iz,m+\iz} &=& C_f, \nonumber \\ \tilde{\delta}^{0-}_{k+\iz,m+\iz} &=& \mp \frac{(k-s+1)(\tilde{M}\mp(k+1)\lambda)}{(k-m)(k-m+1)}C_f, \nonumber \\ \tilde{\delta}^{0+}_{k+\iz,m+\iz} &=& \pm \frac{(\tilde{M}\mp m\lambda)}{(s-m-1)}C_f, \\ \tilde{\delta}^{-0}_{k+\iz,m+\iz} &=& \mp \frac{(k-s+1)(\tilde{M}\mp(k+1)\lambda)}{(k+m+1)(k+m+2)} \tilde{\delta}^{0+}_{k+\iz,m+\iz}. \nonumber\end{aligned}$$ The sign choice corresponds to the sign in the relation $M^2=\tilde{M}(\tilde{M}\pm\lambda)$. Note that $\tilde{\delta}^{-0}_{\tilde{s},m} = 0$ as it should be. Thus all the functions $\delta$, $\tilde{\delta}$ are determined up to two arbitrary complex parameters $C_b$ and $C_f$. Moreover, in the $AdS$ case, i.e. when $\lambda \ne 0$, we obtain a pair of additional relations on these constants: $$C^*_b = \mp \epsilon C_b, \qquad C^*_f = \pm \epsilon C_f$$ Now let us turn to the gauge sector. The invariance of the corresponding set of unfolded equations under the supertransformations (\[ansatz\_g0f\]) leads to a number of equations (\[superblock\_eqs2\]) given in the Appendix. These equations determine all the functions $\gamma$ and $\tilde{\gamma}$ up to two arbitrary complex constants $C$ and $\tilde{C}$.
Explicit expressions for the functions $\gamma$ look like $(m \ge 0)$: $$\begin{aligned} \gamma^{+0}_{k,m} &=& \mp \sqrt{k(s-k-1)(\tilde{M}\mp(k+1)\lambda)} C, \quad k>0, \nonumber \\ \gamma^{+0}_{0,0} &=& \mp \sqrt{2(s-1)(\tilde{M}\mp\lambda)} C, \nonumber \\ \gamma^{0-}_{k,m} &=&- \sqrt{\frac{(s+k+1)(\tilde{M}\pm(k+1)\lambda)}{k}}C, \\ \gamma^{0+}_{k,m} &=& \pm \frac{(s+m)(\tilde{M}\pm m\lambda)}{(k-m+1)(k-m+2)} \gamma^{+0}_{k,m}, \quad m > 0, \nonumber \\ \gamma^{-0}_{k,m} &=& \pm \frac{(s+m)(\tilde{M}\pm m \lambda)}{(k+m)(k+m+1)} \gamma^{0-}_{k,m}, \quad m> 0, \nonumber \end{aligned}$$ while those for the $\tilde{\gamma}$ $(m \ge \iz)$: $$\begin{aligned} \tilde{\gamma}^{+0}_{k+\iz,m+\iz} &=& \mp \sqrt{(k+1)(s+k+2)(\tilde{M}\pm(k+2)\lambda)}\tilde{C}, \nonumber \\ \tilde{\gamma}^{0-}_{k+\iz,m+\iz} &=& - \sqrt{\frac{(s-k-1)(\tilde{M}\mp(k+1)\lambda)}{k}} \tilde{C}, \quad k>\iz, \nonumber \\ \tilde{\gamma}^{0-}_{\iz,\iz} &=& - \sqrt{\frac{(s-1)(\tilde{M}\mp\lambda)}{2}} \tilde{C}, \\ \tilde{\gamma}^{0+}_{k+\iz,m+\iz} &=& \pm \frac{(s-m)(\tilde{M}\mp m \lambda)}{(k-m+1)(k-m+2)} \tilde{\gamma}^{+0}_{k+\iz,m+\iz}, \nonumber \\ \tilde{\gamma}^{-0}_{k+\iz,m+\iz} &=& \pm \frac{(s-m)(\tilde{M}\mp m\lambda)}{(k+m+1)(k+m+2)} \tilde{\gamma}^{0-}_{k+\iz,m+\iz} \nonumber\end{aligned}$$ Similarly to the previous case, for $\lambda \ne 0$ we obtain a pair of additional relations on these constants: $$C^* = \mp \epsilon C, \qquad \tilde{C}^* = \pm \epsilon \tilde{C}$$ Similarly to the case with the gauge invariant two-forms, the supertransformations for one-forms at $m = \pm k$ differ from the general case and have to contain zero-forms: $$\begin{aligned} \label{superblock_diagonal} \delta\Omega^{\alpha(2k)} &=& \gamma^{0+}_{k,k} \Psi^{\alpha(2k)\dot\beta} \zeta_{\dot\beta} + \gamma^{+0}_{k,k} \Psi^{\alpha(2k)\beta} \zeta_{\beta} + \gamma^{-0}_{k,k} \Psi^{\alpha(2k-1)} \zeta^{\alpha} \nonumber \\ && + \gamma^{0-}_{k,k} 
\frac{\tilde{\alpha}^{-+}_{k+\iz}}{(2k+1)}e^{\alpha}{}_{\dot\alpha} Y^{\alpha(2k-1)} \zeta^{\dot\alpha} + \gamma^{0-}_{k,k} \tilde{\alpha}^{++}_{k-\iz}e_{\beta\dot\alpha}Y^{\alpha(2k)\beta}\zeta^{\dot\alpha}, \quad k>0, \nonumber \\ \delta\Omega &=& \gamma^{+0}_{0,0} \Psi^{\beta} \zeta_{\beta} + a_0 e_{\alpha\dot\alpha} Y^{\alpha} \zeta^{\dot\alpha}+h.c., \\ \delta\Psi^{\alpha(2k)} &=& {\tilde{\gamma}}^{0+}_{k,k}\Omega^{\alpha(2k)\dot\beta} \zeta_{\dot\beta} + {\tilde{\gamma}}^{+0}_{k,k} \Omega^{\alpha(2k)\beta} \zeta_{\beta} + {\tilde{\gamma}}^{-0}_{k,k} \Omega^{\alpha(2k-1)} \zeta^{\alpha} \nonumber \\ && + {\tilde{\gamma}}^{0-}_{k,k} \frac{\alpha^{-+}_{k+\iz}}{(2k+1)}e^{\alpha}{}_{\dot\alpha} W^{\alpha(2k-1)} \zeta^{\dot\alpha} + {\tilde{\gamma}}^{0-}_{k,k} \alpha^{++}_{k-\iz} e_{\beta\dot\alpha}W^{\alpha(2k)\beta}\zeta^{\dot\alpha}. \nonumber \end{aligned}$$ where the coefficient $a_0$ is given by: $$\begin{aligned} a_0=-(s+1)(\tilde{M}\pm\lambda)\sqrt{2(s-1)(\tilde{M}\mp\lambda)}C\end{aligned}$$ Finally, we have to consider the remaining unfolded equations, which connect the gauge sector with the sector of the gauge invariant zero-forms.
The corresponding supertransformations have the form: $$\begin{aligned} \delta\Omega^{\alpha(2s-2)} &=& \gamma^{0+}_{s-1,s-1}\Psi^{\alpha(2s-2)\dot\beta} \zeta_{\dot\beta} + \gamma^{+0}_{s-1,s-1} \Psi^{\alpha(2s-2)\beta} \zeta_{\beta} + \gamma^{-0}_{s-1,s-1} \Psi^{\alpha(2s-3)} \zeta^{\alpha} \nonumber \\ && + \gamma^{0-}_{s-1,s-1} \frac{\tilde{\alpha}^{-+}_{s-\iz}}{(2s-1)}e^{\alpha}{}_{\dot\alpha} Y^{\alpha(2s-3)} \zeta^{\dot\alpha} + \frac{\gamma^{0-}_{s-1,s-2}}{2} e_{\beta\dot\alpha} Y^{\alpha(2s-2)\beta} \zeta^{\dot\alpha} \nonumber \\ \delta W^{\alpha(s-1+m)\dot\alpha(s-1-m)} &=& \gamma_{s-1,m}^{0-}Y^{\alpha(s+m-1)\dot\alpha(s-m-2)} \zeta^{\dot\alpha} + \gamma_{s,m}^{-0} Y^{\alpha(s+m-2)\dot\alpha(s-1-m)}\zeta^{\alpha} \\ && + \frac{\gamma^{+0}_{s-1,m}}{\tilde{\alpha}^{++}_{s-\tz,m+\iz}}Y^{\alpha(s-1+m)\dot\alpha(s-1-m)\beta} \zeta_{\beta} + \frac{\gamma^{0+}_{s-1,m}}{\tilde{\alpha}^{++}_{s-\tz,m-\iz}} Y^{\alpha(s-1+m)\dot\beta\dot\alpha(s-1-m)} \zeta_{\dot\beta} \nonumber \\ \delta W^{\alpha(2s-2)} &=& \frac{2\gamma^{+0}_{s-1,s-1}}{\tilde{\alpha}^{++}_{s-\tz}} Y^{\alpha(2s-2)\beta} \zeta_{\beta} + \frac{\gamma^{0+}_{s-1,s-1}}{\tilde{\alpha}^{++}_{s-\tz,s-\tz}} Y^{\alpha(2s-2)\dot\beta} \zeta_{\dot\beta} + \gamma_{s-1,s-1}^{-0} Y^{\alpha(2s-3)} \zeta^{\alpha} \nonumber\end{aligned}$$ In particular, this gives us the relations between the constants $C$, $\tilde{C}$ and $C_b$, $C_f$: $$\begin{aligned} C_b = \mp \frac{C}{\sqrt{2s(s-1)(\tilde{M}\pm s)}}, \qquad C_f = \mp \tilde{C}\sqrt{2s(s-1)(\tilde{M}\pm s)}.\end{aligned}$$ The parameters $C$, $\tilde{C}$ are restricted by the hermiticity conditions only. Similarly to the massless case, their product $C\tilde{C}$ is always imaginary. It is possible to restrict them further by requiring the invariance of the sum of the bosonic and fermionic Lagrangians. 
If one takes the normalization of the Lagrangians as in [@KhZ19], the connection between the parameters is: $$\tilde{C} = 4i\epsilon C$$ One can see that this relation is in agreement with the hermiticity conditions: using $C^* = \mp \epsilon C$ one finds $\tilde{C}^* = -4i\epsilon C^* = \pm 4iC = \pm\epsilon\tilde{C}$, as required. ### Superblock $\tilde{s} = s + \iz$ Now we repeat the same steps. The ansatz for the supertransformations for the sector of gauge invariant zero-forms, as well as the ansatz for the gauge sector, are the same as before: (\[ansatz\_gi0f\]), (\[ansatz\_g0f\]) and (\[ansatz\_g1f\]), respectively. Hence, the equations on the parameters of the supertransformations are also the same (\[superblock\_eqs1\]), (\[superblock\_eqs2\]). However, the fermionic functions $\tilde{\alpha}$, $\tilde{\beta}$ are different now, and this leads to an essentially different solution. For the sector of the gauge invariant zero-forms we obtain for the bosonic functions $\delta$ $(m \ge 0)$: $$\begin{aligned} \delta^{+0}_{k,m} &=& C_b, \nonumber \\ \delta^{0-}_{k,m} &=& \pm \frac{(k-s)(\tilde{M}\pm(k+1)\lambda)}{(k-m)(k-m+1)}C_b, \nonumber \\ \delta^{0+}_{k,m} &=& \mp \frac{(\tilde{M}\pm m\lambda)}{(s-m)}C_b, \quad m > 0, \quad \delta^{0+}_{k,0} = \mp \epsilon C_b, \\ \delta^{-0}_{k,m} &=& \pm \frac{(k-s)(\tilde{M}\pm(k+1)\lambda)}{(k+m)(k+m+1)} \delta^{0+}_{k,m}, \nonumber \end{aligned}$$ and for the fermionic functions $\tilde{\delta}$ $(m \ge \iz)$: $$\begin{aligned} \tilde{\delta}^{+0}_{k+\iz,m+\iz} &=& (s-m)(s-m-1) C_f, \nonumber \\ \tilde{\delta}^{0-}_{k+\iz,m+\iz} &=& \mp \frac{(k+s+2)(\tilde{M}\mp(k+1)\lambda)}{(k-m)(k-m+1)} \tilde{\delta}^{+0}_{k+\iz,m+\iz}, \nonumber \\ \tilde{\delta}^{0+}_{k+\iz,m+\iz} &=& \mp(s+m+1) (\tilde{M}\mp m\lambda)C_f, \\ \tilde{\delta}^{-0}_{k+\iz,m+\iz} &=& \mp \frac{(k+s+2)(\tilde{M}\mp(k+1)\lambda)}{(k+m+1)(k+m+2)} \tilde{\delta}^{0+}_{k+\iz,m+\iz}. \nonumber \end{aligned}$$ Note that in this case $\delta^{0-}_{s,m} = 0$ as it should be.
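This vanishing can be traced directly in the explicit solution (our check, not an additional statement of the text): both $\delta^{0-}_{k,m}$ and $\delta^{-0}_{k,m}$ carry the factor $(k-s)$, so

```latex
% Boundary-condition check for the superblock \tilde{s} = s + 1/2:
\delta^{-0}_{k,m} \;\propto\; (k-s)\,\bigl(\tilde{M} \pm (k+1)\lambda\bigr)
\quad\Longrightarrow\quad
\delta^{-0}_{s,m} = 0 ,
\qquad\text{and likewise}\qquad
\delta^{0-}_{s,m} = 0 ,
```

in agreement with the boundary conditions imposed on the superblocks above.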
As in the previous case, for $\lambda \ne 0$ we obtain a pair of additional relations on the two arbitrary constants: $$C^*_b = \pm \epsilon C_b, \qquad C^*_f = \mp \epsilon C_f$$ For the gauge sector supertransformation parameters $\gamma$ we obtain $(m \ge 0)$: $$\begin{aligned} \gamma^{+0}_{k,m} &=& \pm \sqrt{k(s+k+2)(\tilde{M}\mp(k+1)\lambda)} C, \quad k>0, \nonumber \\ \gamma^{+0}_{0,0} &=& \pm \sqrt{2(s+2)(\tilde{M}\mp\lambda)} C, \nonumber \\ \gamma^{0-}_{k,m} &=& - \sqrt{\frac{(s-k)(\tilde{M}\pm(k+1)\lambda)}{k}}C, \\ \gamma^{0+}_{k,m} &=& \mp \frac{(s-m+1)(\tilde{M}\pm m\lambda)}{(k-m+1)(k-m+2)} \gamma^{+0}_{k,m}, \quad m > 0, \nonumber \\ \gamma^{-0}_{k,m} &=& \mp \frac{(s-m+1)(\tilde{M}\pm m \lambda)}{(k+m)(k+m+1)}\gamma^{0-}_{k,m}, \quad m> 0, \nonumber\end{aligned}$$ while for the parameters $\tilde{\gamma}$, correspondingly $(m \ge \iz)$: $$\begin{aligned} \tilde{\gamma}^{+0}_{k+\iz,m+\iz} &=& \pm \sqrt{(k+1)(s-k-1)(\tilde{M}\pm(k+2)\lambda)}\tilde{C} , \nonumber \\ \tilde{\gamma}^{0-}_{k+\iz,m+\iz} &=& - \sqrt{\frac{(s+k+2)(\tilde{M}\mp(k+1)\lambda)}{k}}\tilde{C}, \quad k>\iz, \nonumber \\ \tilde{\gamma}^{0-}_{\iz,\iz} &=& - \sqrt{\frac{(s+2)(\tilde{M}\mp\lambda)}{2}} \tilde{C}, \\ \tilde{\gamma}^{0+}_{k+\iz,m+\iz} &=& \mp \frac{(s+m+1)(\tilde{M}\mp m \lambda)}{(k-m+1)(k-m+2)} \tilde{\gamma}^{+0}_{k+\iz,m+\iz}, \nonumber \\ \tilde{\gamma}^{-0}_{k+\iz,m+\iz} &=& \mp \frac{(s+m+1)(\tilde{M}\mp m\lambda)}{(k+m+1)(k+m+2)} \tilde{\gamma}^{0-}_{k+\iz,m+\iz}. \nonumber \end{aligned}$$ In flat space $C$ and $\tilde{C}$ are two arbitrary complex constants, while in $AdS$ $(\lambda \ne 0)$ they must satisfy relations similar to those for $C_b$ and $C_f$: $$C^* = \pm \epsilon C, \qquad \tilde{C}^* = \mp \epsilon \tilde{C}$$ The supertransformations for the one-forms with $m=\pm k$ have to contain zero-forms as well.
The expressions for their supertransformations are still given by (\[superblock\_diagonal\]), but the expression for the coefficient $a_0$ is now: $$a_0=-s(\tilde{M}\pm\lambda)\sqrt{2(s+2)(\tilde{M}\mp\lambda)}C$$ At last let us turn to the remaining unfolded equations connecting the two sectors. In this case, it is the supertransformations of the fermionic fields that have to be modified: $$\begin{aligned} \delta\Psi^{\alpha(2{\tilde{s}}-2)} &=& \tilde{\gamma}^{0+}_{{\tilde{s}}-1,{\tilde{s}}-1}\Omega^{\alpha(2{\tilde{s}}-2)\dot\beta} \zeta_{\dot\beta} + \tilde{\gamma}^{+0}_{{\tilde{s}}-1,{\tilde{s}}-1}\Omega^{\alpha(2{\tilde{s}}-2)\beta} \zeta_{\beta} + \tilde{\gamma}^{-0}_{{\tilde{s}}-1,{\tilde{s}}-1}\Omega^{\alpha(2{\tilde{s}}-3)} \zeta^{\alpha} \nonumber \\ && + \tilde{\gamma}^{0-}_{{\tilde{s}}-1,{\tilde{s}}-1} \frac{\alpha^{-+}_{{\tilde{s}}-\iz}}{(2{\tilde{s}}-1)} e^{\alpha}{}_{\dot\alpha} W^{\alpha(2{\tilde{s}}-3)} \zeta^{\dot\alpha} + \frac{\tilde{\gamma}^{0-}_{{\tilde{s}}-1,{\tilde{s}}-2}}{2} e_{\alpha\dot\alpha} W^{\alpha(2{\tilde{s}}-1)} \zeta^{\dot\alpha} \nonumber \\ \delta Y^{\alpha({\tilde{s}}-1+m)\dot\alpha({\tilde{s}}-1-m)} &=& \tilde{\gamma}_{{\tilde{s}}-1,m}^{-0} W^{\alpha({\tilde{s}}+m-2)\dot\alpha({\tilde{s}}-1-m)} \zeta^{\alpha} + \tilde{\gamma}_{{\tilde{s}}-1,m}^{0-} W^{\alpha({\tilde{s}}+m-1)\dot\alpha({\tilde{s}}-m-2)} \zeta^{\dot\alpha} \\ && + \frac{\tilde{\gamma}^{0+}_{{\tilde{s}}-1,m}}{\alpha^{++}_{{\tilde{s}}-\tz,m-\iz}} W^{\alpha({\tilde{s}}-1+m)\beta\dot\alpha({\tilde{s}}-1-m)}\zeta_{\beta} + \frac{\tilde{\gamma}^{+0}_{{\tilde{s}}-1,m}}{\alpha^{++}_{{\tilde{s}}-\tz,m+\iz}} W^{\alpha({\tilde{s}}-1+m)\dot\alpha({\tilde{s}}-1-m)\dot\beta}\zeta_{\dot\beta} \nonumber \\ \delta Y^{\alpha(2{\tilde{s}}-2)} &=& \frac{2\tilde{\gamma}^{+0}_{{\tilde{s}}-1,{\tilde{s}}-1}}{\alpha^{++}_{{\tilde{s}}-\tz}} W^{\alpha(2{\tilde{s}}-2)\beta} \zeta_{\beta} + \frac{\tilde{\gamma}^{0+}_{{\tilde{s}}-1,{\tilde{s}}-1}}{\alpha^{++}_{{\tilde{s}}-\tz,{\tilde{s}}-\tz}}
W^{\alpha(2{\tilde{s}}-2)\dot\beta} \zeta_{\dot\beta} + \tilde{\gamma}_{{\tilde{s}}-1,{\tilde{s}}-1}^{-0}W^{\alpha(2{\tilde{s}}-3)} \zeta^{\alpha} \nonumber\end{aligned}$$ For consistency, the constants $C$, $\tilde{C}$ have to be related to the constants $C_b$, $C_f$ as follows: $$C_b = \pm C\sqrt{(s-1)(2s+1)(\tilde{M}\mp s)}, \qquad C_f = \pm\frac{\tilde{C}}{\sqrt{(s-1)(2s+1)(\tilde{M}\mp s)}}$$ Apart from the hermiticity conditions, the constants $C$ and $\tilde{C}$ are arbitrary. If the sum of the Lagrangians is required to be invariant, these constants turn out to be connected: $$\tilde{C} = 4i\epsilon C$$ Again, this relation is in agreement with the hermiticity conditions. Supermultiplets --------------- Now we construct the supermultiplets. A massive supermultiplet contains two bosons and two fermions; each pair of one boson and one fermion forms a superblock. It was shown in [@BKhSZ19] that the two bosons must have opposite parity and the two fermions must have opposite signs of the mass-like terms. This leaves four possible structures of the supermultiplet, shown in Figure \[fig:hsm\_structure\].
[Figure \[fig:hsm\_structure\]: the two possible structures of a massive supermultiplet. First structure: bosons $(s,M_+)$ with $P=\pm1$ and $(s,M_-)$ with $P=\mp1$, connected to fermions $(s-\iz,\tilde{M})$ with $\epsilon=\mp1$ and $(s+\iz,\tilde{M})$ with $\epsilon=\pm1$. Second structure: bosons $(s,M)$ with $P=\pm1$ and $(s+1,M)$ with $P=\mp1$, connected to fermions $(s+\iz,\tilde{M}_+)$ with $\epsilon=\pm1$ and $(s+\iz,\tilde{M}_-)$ with $\epsilon=\mp1$. Each boson-fermion pair is connected by a pair of arrows labeled by the superblock constants $C_i$, $\tilde{C}_i$, $i=1,\ldots,4$.] Each pair of fields connected by a pair of arrows forms a superblock. One can see that the commutator of two supertransformations transforms a field into a combination of two fields, one of which corresponds to another particle.
The coefficients $C_i$ and $\tilde{C}_i$ have to be tuned to get rid of such terms. This gives certain equalities for the products $C_i\tilde{C}_i$. The rest of the terms must form the transformations of the $AdS$ algebra. Again, we consider the integer and half-integer superspin (i.e. the average spin $\langle s \rangle$ of the supermultiplet) cases separately. ### Integer superspin case In the case of integer superspin, the coefficients $C_i$ and $\tilde{C}_i$ must satisfy: $$\label{coeff_products} C_1\tilde{C}_1 = - C_2\tilde{C}_2 = C_3\tilde{C}_3 = - C_4\tilde{C}_4 = iC^2, \qquad C_1C_3 = C_2C_4, \qquad \tilde{C}_1\tilde{C}_3 =\tilde{C}_2\tilde{C}_4$$ If one also requires the invariance of the sum of the Lagrangians for all four members, the coefficients become fixed up to a single scale factor $C$. If the highest-spin fermion has $\epsilon=1$, the constants are: $$\begin{aligned} C_1 &=& \frac{C}{2}, \qquad C_2=\frac{C}{2}, \qquad C_3 = i\frac{C}{2}, \qquad C_4 = i\frac{C}{2}, \nonumber \\ \tilde{C}_1 &=& 2iC, \qquad \tilde{C}_2 = -2iC, \qquad \tilde{C}_3 = 2C, \qquad \tilde{C}_4 = -2C.\end{aligned}$$ If the highest-spin fermion has $\epsilon=-1$, the constants are: $$\begin{aligned} C_1 &=& -i\frac{C}{2}, \qquad C_2 = -i\frac{C}{2}, \qquad C_3 = \frac{C}{2}, \qquad C_4 = \frac{C}{2}, \nonumber \\ \tilde{C}_1 &=& -2C, \qquad \tilde{C}_2 = 2C, \qquad \tilde{C}_3 = 2iC, \qquad \tilde{C}_4 = -2iC.\end{aligned}$$ We give the resulting expression for the commutator for the bosonic field $\Omega^{\alpha(k+m)\dot\alpha(k-m)}$ as an example: $$\begin{aligned} [\delta_1,\delta_2]\Omega^{\alpha(k+m)\dot\alpha(k-m)} &=& 4 i C^2 \tilde{M} (\langle s \rangle+\iz) \times \nonumber \\ && \bigg[ \lambda \Omega^{\alpha(k+m)\dot\alpha(k-m-1)\dot\beta} \eta_{\dot\beta}{}^{\dot\alpha} + \lambda \Omega^{\alpha(k+m-1)\beta\dot\alpha(k-m)} \eta_{\beta}{}^{\alpha} \nonumber \\ && + \alpha^{-+}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \xi^\alpha{}_{\dot\beta} + \Omega^{\alpha(k+m)\beta\dot\alpha(k-m-1)}\xi_\beta{}^{\dot\alpha} \nonumber \\ && + \alpha^{--}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m-1)}\xi^{\alpha\dot\alpha} + \alpha^{++}_{k,m} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} \xi_{\beta\dot\beta} \bigg]\end{aligned}$$ Recall that $$\eta^{\alpha(2)} = 2{\zeta_1}^\alpha{\zeta_2}^\alpha, \qquad \eta^{\dot\alpha(2)} = 2{\zeta_1}^{\dot\alpha}{\zeta_2}^{\dot\alpha},\qquad \xi^{\alpha\dot\alpha} = {\zeta_1}^\alpha {\zeta_2}^{\dot\alpha} - {\zeta_1}^{\dot\alpha}{\zeta_2}^\alpha$$ The factor $4iC^2\tilde{M}(\langle s \rangle+\iz)$ is the same for all fields. The coefficients $\alpha^{ij}_{k,m}$ correspond to the same particle as the field $\Omega^{\alpha(k+m)\dot\alpha(k-m)}$. By comparing this expression with the unfolded equations, one can see that it is indeed a combination of pseudotranslations and Lorentz transformations. ### Half-integer superspin case In the case of half-integer superspin, the products of the coefficients $C_i$ and $\tilde{C}_i$ are fixed by the same relations (\[coeff\_products\]). The requirement of invariance for the sum of the Lagrangians fixes the coefficients up to a single scale factor.
If the highest-spin boson is parity-even, the coefficients are: $$\begin{aligned} C_1 &=& i\frac{C}{2}, \qquad C_2 = \frac{C}{2}, \qquad C_3 = \frac{C}{2}, \qquad C_4 = i\frac{C}{2}, \nonumber \\ \tilde{C}_1 &=& 2C, \qquad \tilde{C}_2 = -2iC, \qquad \tilde{C}_3 = 2iC, \qquad \tilde{C}_4 = -2C.\end{aligned}$$ If the highest-spin boson is parity-odd, the constants are: $$\begin{aligned} C_1 &=& \frac{C}{2}, \qquad C_2 = -i\frac{C}{2}, \qquad C_3 = -i\frac{C}{2}, \qquad C_4 = \frac{C}{2}, \nonumber \\ \tilde{C}_1 &=& 2iC, \qquad \tilde{C}_2 = 2C, \qquad \tilde{C}_3 = -2C, \qquad \tilde{C}_4 = -2iC.\end{aligned}$$ Again, we present a commutator of the supertransformations for the field $\Omega^{\alpha(k+m)\dot\alpha(k-m)}$ as an example: $$\begin{aligned} [\delta_1,\delta_2]\Omega^{\alpha(k+m)\dot\alpha(k-m)} &=& 2 i C^2 (\tilde{M}_++\tilde{M}_-) (\langle s \rangle+\iz) \nonumber \\ && \times \bigg[ \lambda \Omega^{\alpha(k+m)\dot\alpha(k-m-1)\dot\beta}\eta_{\dot\beta}{}^{\dot\alpha} + \lambda \Omega^{\alpha(k+m-1)\beta\dot\alpha(k-m)} \eta_{\beta}{}^{\alpha} \nonumber \\ && + \alpha^{-+}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta}\xi^\alpha{}_{\dot\beta} + \Omega^{\alpha(k+m)\beta\dot\alpha(k-m-1)}\xi_\beta{}^{\dot\alpha} \nonumber \\ && + \alpha^{--}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m-1)}\xi^{\alpha\dot\alpha} + \alpha^{++}_{k,m} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} \xi_{\beta\dot\beta} \bigg]\end{aligned}$$ One can see that the structure of the commutator is the same as in the previous case. The factor $2 i C^2 (\tilde{M}_++\tilde{M}_-) (\langle s \rangle+\iz)$ is slightly different now. Again, it is the same for all fields. The coefficients $\alpha^{ij}_{k,m}$ correspond to the same particle as the field $\Omega^{\alpha(k+m)\dot\alpha(k-m)}$.
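As a quick consistency check (written out here for the reader's convenience), the parity-even values above indeed reproduce the relations (\[coeff\_products\]):

```latex
\begin{aligned}
C_1\tilde{C}_1 &= \left(i\tfrac{C}{2}\right)(2C) = iC^2, &
C_2\tilde{C}_2 &= \left(\tfrac{C}{2}\right)(-2iC) = -iC^2, \\
C_3\tilde{C}_3 &= \left(\tfrac{C}{2}\right)(2iC) = iC^2, &
C_4\tilde{C}_4 &= \left(i\tfrac{C}{2}\right)(-2C) = -iC^2, \\
C_1C_3 &= i\tfrac{C^2}{4} = C_2C_4, &
\tilde{C}_1\tilde{C}_3 &= 4iC^2 = \tilde{C}_2\tilde{C}_4
\end{aligned}
```

The parity-odd set can be checked in exactly the same way.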
Infinite spin supermultiplets ============================= Recently it became clear that the gauge invariant formalism we use for the description of massive higher spin fields nicely works in the infinite spin limit as well [@Met16; @Met17; @Zin17; @KhZ17; @Met18; @KhZ19]. Moreover, the first examples of the infinite spin supermultiplets in flat space were constructed [@Zin17; @BKhSZ19b] (see also the recent paper [@Naj19]). In this section we consider the unfolded formulation of the infinite spin supermultiplets both in the flat and $AdS_4$ spaces. These two cases turn out to be rather different, so we consider them separately in the two subsequent subsections. Let us begin with the general considerations. In the infinite spin limit the gauge invariant formulation does not contain any gauge invariant zero-forms, so we have only the gauge one-forms $\Omega$, $\Psi$ and the Stueckelberg zero-forms $W$, $Y$. In this case, the unfolded equations are just an infinite set of zero-curvature conditions: $$\begin{aligned} \mathcal{R}^{\alpha(k+m)\dot\alpha(k-m)} &=& 0, \qquad \mathcal{C}^{\alpha(k+m)\dot\alpha(k-m)} = 0 \nonumber \\ \mathcal{F}^{\alpha(k+m)\dot\alpha(k-m)} &=& 0, \qquad \mathcal{D}^{\alpha(k+m)\dot\alpha(k-m)} = 0\end{aligned}$$ The expressions for the bosonic curvatures $\mathcal{R}$ and $\mathcal{C}$ are still given by (\[curvatures1\]), (\[curvatures2\]), while the fermionic ones are still defined by (\[curvatures1a\]), (\[curvatures2a\]) but with different functions $\alpha$, $\tilde{\alpha}$ (see below). Similarly, the general ansatz for the supertransformations for the Stueckelberg zero-forms is still (\[ansatz\_g0f\]) and for the one-forms is still (\[ansatz\_g1f\]) and (\[superblock\_diagonal\]). Flat space ---------- In the infinite spin limit the gauge invariant formalism leads to massless and tachyonic solutions for bosons and only massless ones for fermions (because the tachyonic ones are non-unitary) [@Met16; @Met17; @KhZ19].
This leaves us with only one possibility: a massless infinite spin supermultiplet, in agreement with the classification in [@BKRX02]. For the massless infinite spin boson the functions $\alpha$ have a rather simple form: $$\begin{aligned} \alpha^{++}_{k,m} &=& \frac{\sqrt{k(k+1)}\mu}{(k-m+1)(k-m+2)} \nonumber \\ \alpha^{-+}_{k,m} &=& \frac{\mu^2}{(k+m)(k+m+1)(k-m+1)(k-m+2)} \nonumber \\ \alpha^{-+}_{k,0} &=& 1 \\ \alpha^{--}_{k,m} &=& \frac{\mu}{(k+m)(k+m+1)\sqrt{k(k-1)}} \nonumber\end{aligned}$$ where $\mu$ is a dimensionful parameter related to the eigenvalue of the second Casimir operator of the Poincare group. Similarly, for the massless infinite spin fermions we have: $$\begin{aligned} \tilde{\alpha}^{++}_{k,m} &=& \frac{(k+1)\tilde{\mu}}{(k-m+1)(k-m+2)} \nonumber \\ \tilde{\alpha}^{-+}_{k,m} &=& \frac{\tilde{\mu}^2}{(k-m+1)(k-m+2)(k+m+1)(k+m+2)} \nonumber \\ \tilde{\alpha}^{-+}_{k,0} &=& \epsilon \frac{\tilde{\mu}}{(k+1)(k+2)}, \qquad \epsilon = \pm 1 \\ \tilde{\alpha}^{--}_{k,m} &=& \frac{\tilde{\mu}}{(k+m+1)(k+m+2)k} \nonumber\end{aligned}$$ [**Superblock**]{} Let us consider a superblock containing one such boson and one fermion. First of all, supersymmetry requires that their dimensionful parameters must be equal: $\mu = \tilde{\mu}$.
Then we obtain the following expressions for the parameters of the supertransformations for the boson: $$\begin{aligned} \gamma^{+0}_{k,m} &=& \sqrt{k} C \nonumber \\ \gamma^{-0}_{k,m} &=& \frac{\mu}{(k+m)(k+m+1)\sqrt{k}} C \nonumber \\ \gamma^{0+}_{k,m} &=& - \epsilon \frac{\sqrt{k}\mu}{(k-m+1)(k-m+2)} C^* \\ \gamma^{0-}_{k,m} &=& - \epsilon \frac{1}{\sqrt{k}} C^* \nonumber\end{aligned}$$ and for the fermion: $$\begin{aligned} \tilde{\gamma}^{+0}_{k,m} &=& \sqrt{(k+1)} \tilde{C} \nonumber \\ \tilde{\gamma}^{-0}_{k,m} &=& \frac{\mu}{(k+m+1)(k+m+2)\sqrt{k}} \tilde{C} \nonumber \\ \tilde{\gamma}^{0+}_{k,m} &=& \epsilon \frac{\sqrt{(k+1)}\mu}{(k-m+1)(k-m+2)} \tilde{C}^* \\ \tilde{\gamma}^{0-}_{k,m} &=& \epsilon \frac{1}{\sqrt{k}} \tilde{C}^* \nonumber\end{aligned}$$ Here $C$ and $\tilde{C}$ are two arbitrary complex constants. It is easy to check that the algebra of these supertransformations is not closed, so to construct a supermultiplet we have to consider a pair of bosons and a pair of fermions.\ [**Supermultiplet**]{} In the flat space, there exists only one infinite spin supermultiplet, with its structure shown in Figure \[fig:fs\_issm\].
[Figure \[fig:fs\_issm\]: structure of the flat-space infinite spin supermultiplet. Two bosons with parities $P=\pm1$ and two fermions with $\epsilon=\pm1$ are connected by the supertransformations with constants $C_i$, $\tilde{C}_i$, $i=1,\ldots,4$.] As in the Lagrangian formulation [@BKhSZ19b], we have found that the two bosons must have opposite parity, while the two fermions must have opposite signs of the mass-like terms, $\epsilon_2 = - \epsilon_1$. Moreover, all the products $C_i\tilde{C}_i$, $i=1,2,3,4$ must be imaginary and satisfy the following relations: $$\begin{aligned} C_1\tilde{C}_1 &=& - C_2\tilde{C}_2 = C_3\tilde{C}_3 = - C_4\tilde{C}_4 \nonumber \\ C_2\tilde{C}_3 &=& - C_1\tilde{C}_4, \qquad C_3\tilde{C}_4 = - C_2\tilde{C}_1.\end{aligned}$$ For definiteness, we assume that the first boson is parity-even, and the first fermion has $\epsilon_1 = 1$.
If we also require that not only the unfolded equations but also the sum of the four Lagrangians is invariant under the supertransformations, we obtain $$\begin{aligned} C_1 &=& \frac{C}{2}, \qquad C_2 = \frac{C}{2}, \qquad C_3 = i\frac{C}{2}, \qquad C_4 = i\frac{C}{2}, \nonumber \\ \tilde{C}_1 &=& 2iC, \qquad \tilde{C}_2 = -2iC, \qquad \tilde{C}_3 = 2C, \qquad \tilde{C}_4 = -2C.\end{aligned}$$ Once again, we provide as an example the explicit expression for the commutator of two supertransformations on the one-form $\Omega$: $$\begin{aligned} [\delta_1,\delta_2]\Omega^{\alpha(k+m)\dot\alpha(k-m)} &=& 2 i C^2 \bigg[ \alpha^{-+}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m)\dot\beta} \xi^\alpha{}_{\dot\beta} + \Omega^{\alpha(k+m)\beta\dot\alpha(k-m-1)}\xi_\beta{}^{\dot\alpha} \nonumber \\ && + \alpha^{--}_{k,m} \Omega^{\alpha(k+m-1)\dot\alpha(k-m-1)}\xi^{\alpha\dot\alpha} + \alpha^{++}_{k,m} \Omega^{\alpha(k+m)\beta\dot\alpha(k-m)\dot\beta} \xi_{\beta\dot\beta} \bigg]\end{aligned}$$ $AdS_4$ space ------------- In this case the gauge invariant formalism provides in the infinite spin limit a whole range of unitary solutions both for bosons and for fermions [@Met16; @Met17; @KhZ19]. But as we have already noted, for the construction of the supermultiplets it is crucial to have a factorization of the main functions $\alpha^{-+}$ and $\tilde{\alpha}^{-+}$. The only such possibility we have found is the so-called “partially massless” infinite spin particles, for which the spectrum of helicities is $s \le |h| < \infty$, where the integer or half-integer $s$ denotes the lowest helicity.
In this case the main functions look very similar to the massive finite spin case: $$\begin{aligned} \alpha^{-+}_m &=& (m-s-1)(m+s) [m(m-1)\lambda^2 - M^2] \nonumber \\ \tilde{\alpha}^{-+}_m &=& (m-\tilde{s}-1)(m+\tilde{s}) [(m-\iz)^2\lambda^2 - \tilde{M}^2]\end{aligned}$$ Moreover, it appears that the bosonic and fermionic mass parameters must still satisfy the same relation $M^2 = \tilde{M}[\tilde{M} \pm\lambda]$. As a result, we obtain: $$\alpha^{-+}_m = (m-s-1)(m+s)[m\lambda \pm \tilde{M}] [(m-1)\lambda \mp \tilde{M}]$$ As in the massive case, we begin with the construction of the two possible superblocks with $\tilde{s} = s \pm \iz$.\ [**Superblock $\tilde{s} = s - \iz$**]{} For the bosonic functions $\gamma$ we obtain ($k \ge s$, $m \ge 0$): $$\begin{aligned} \gamma^{+0}_{k,m} &=& \sqrt{k(k+1-s)((k+1)\lambda\mp\tilde{M})} C, \nonumber \\ \gamma^{0-}_{k,m} &=& \sqrt{\frac{(k+s+1)((k+1)\lambda\pm\tilde{M})}{k}}C, \\ \gamma^{0+}_{k,m} &=& \frac{(s+m)(\tilde{M}\pm m\lambda)}{(k-m+1)(k-m+2)}\gamma^{+0}_{k,m}, \qquad m > 0, \nonumber \\ \gamma^{-0}_{k,m} &=& \frac{(s+m)(\tilde{M}\pm m \lambda)}{(k+m)(k+m+1)} \gamma^{0-}_{k,m}, \qquad m> 0, \nonumber \end{aligned}$$ while for the fermionic functions $\tilde{\gamma}$ ($k \ge \tilde{s}$, $m \ge \iz$): $$\begin{aligned} \tilde{\gamma}^{+0}_{k+\iz,m+\iz} &=& \sqrt{(k+1)(s+k+2)((k+2)\lambda\pm\tilde{M})} \tilde{C} \nonumber \\ \tilde{\gamma}^{0-}_{k+\iz,m+\iz} &=& \sqrt{\frac{(k+1-s)((k+1)\lambda\mp\tilde{M})}{k}} \tilde{C} \nonumber \\ \tilde{\gamma}^{0+}_{k+\iz,m+\iz} &=& \frac{(s-m)(\tilde{M}\mp m \lambda)}{(k-m+1)(k-m+2)} \tilde{\gamma}^{+0}_{k+\iz,m+\iz}, \\ \tilde{\gamma}^{-0}_{k+\iz,m+\iz} &=& \frac{(s-m)(\tilde{M}\mp m\lambda)}{(k+m+1)(k+m+2)} \tilde{\gamma}^{0-}_{k+\iz,m+\iz}, \nonumber\end{aligned}$$ Since $\lambda \ne 0$, we also obtain a pair of relations for these two parameters $C$ and $\tilde{C}$: $$C^* = \mp \epsilon C, \qquad \tilde{C}^* = \pm \epsilon \tilde{C}$$ At the same time, the relation between $C$
and $\tilde{C}$, which follows from the invariance of the sum of the two Lagrangians, appears to be different from the massive case: $$\tilde{C} = \pm 4i\epsilon C$$ and this turns out to be important (see below). [**Superblock $\tilde{s} = s + \iz$**]{} In this case the bosonic functions $\gamma^{ij}_{k,m}$ are ($k \ge s$, $m \ge 0$): $$\begin{aligned} \gamma^{+0}_{k,m} &=& \sqrt{k(s+k+2)((k+1)\lambda\mp\tilde{M})} C, \nonumber \\ \gamma^{0-}_{k,m} &=& \sqrt{\frac{(k-s)((k+1)\lambda\pm\tilde{M})}{k}}C \\ \gamma^{0+}_{k,m} &=& \frac{(s-m+1)(\tilde{M}\pm m\lambda)}{(k-m+1)(k-m+2)} \gamma^{+0}_{k,m}, \qquad m > 0, \nonumber \\ \gamma^{-0}_{k,m} &=& \frac{(s-m+1)(\tilde{M}\pm m \lambda)}{(k+m)(k+m+1)} \gamma^{0-}_{k,m}, \qquad m> 0, \nonumber \end{aligned}$$ and for the fermionic ones $\tilde{\gamma}$ ($k \ge \tilde{s}$, $m \ge \iz$): $$\begin{aligned} \tilde{\gamma}^{+0}_{k+\iz,m+\iz} &=& \sqrt{(k+1)(k+1-s)((k+2)\lambda\pm\tilde{M})} \tilde{C}, \nonumber \\ \tilde{\gamma}^{0-}_{k+\iz,m+\iz} &=& \pm \sqrt{\frac{(s+k+2)((k+1)\lambda\mp\tilde{M})}{k}} \tilde{C}, \nonumber \\ \tilde{\gamma}^{0+}_{k+\iz,m+\iz} &=& \mp \frac{(s+m+1)(\tilde{M}\mp m \lambda)}{(k-m+1)(k-m+2)} \tilde{\gamma}^{+0}_{k+\iz,m+\iz} \\ \tilde{\gamma}^{-0}_{k+\iz,m+\iz} &=& \mp \frac{(s+m+1)(\tilde{M}\mp m\lambda)}{(k+m+1)(k+m+2)} \tilde{\gamma}^{0-}_{k+\iz,m+\iz}, \nonumber \end{aligned}$$ In this case we also obtain $$C^* = \pm \epsilon C, \qquad \tilde{C}^* = \mp \epsilon \tilde{C}$$ Again, the relation between $C$ and $\tilde{C}$, which follows from the Lagrangian invariance, is slightly different: $$\tilde{C} = \mp 4i\epsilon C$$ [**Supermultiplets**]{} Similarly to the massive case, there exist two different solutions for the infinite spin supermultiplet in $AdS_4$, which resemble those with integer superspin and half-integer superspin. Their structure is the same as in the massive case (see Figure \[fig:hsm\_structure\]).
The coefficients $C_i,\tilde{C}_i$ are restricted by the same conditions as in (\[coeff\_products\]): $$C_1\tilde{C}_1 = - C_2\tilde{C}_2 = C_3\tilde{C}_3 = - C_4\tilde{C}_4 = iC^2, \qquad C_1C_3 = C_2C_4, \qquad \tilde{C}_1\tilde{C}_3 = \tilde{C}_2\tilde{C}_4$$ The expressions for the commutators are also the same as in the massive supermultiplet case. However, the restrictions following from the Lagrangian invariance cannot be satisfied, as they require, for instance, the bosons to have the same parity. A possible way to restore the invariance is to change the sign of one bosonic and one fermionic Lagrangian so that the connection between $C_i$ and $\tilde{C}_i$ becomes $\tilde{C}_i=4i\epsilon C$ as in the massive case. But this spoils the unitarity of the theory and resembles the situation with the non-unitary partially massless finite spin supermultiplets constructed in [@BKhSZ19a]. Conclusion {#conclusion .unnumbered} ========== In this paper we have constructed the unfolded formulation for the massive higher spin $N=1$ supermultiplets in $AdS_4$. Our results are in complete agreement with those of [@BKhSZ19], where the Lagrangian formulation of such supermultiplets was developed. We have also considered the infinite spin limit of these supermultiplets, with results consistent with those of [@BKhSZ19a]. Acknowledgements {#acknowledgements .unnumbered} ================ The authors are grateful to I. L. Buchbinder and T. V. Snegirev for collaboration. M.Kh. is grateful to the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS” for their support of the work. Notations and conventions ========================= In the paper, we adopt the “condensed notation” for the indices. Namely, if an expression contains $n$ consecutive indices denoted by the same letter with different subscripts (e.g. $\alpha_1,\alpha_2,\ldots,\alpha_n$) and is symmetric in them, we simply write the letter, with the number $n$ in parentheses if $n>1$ (e.g.
$\alpha(n)$). For example: $$\Phi^{\alpha_1,\alpha_2,\alpha_3} = \Phi^{\alpha(3)}, \qquad \zeta^{\alpha_1} \Omega^{\alpha_2\alpha_3} = \zeta^\alpha \Omega^{\alpha(2)}$$ We define symmetrization over indices as the sum of the minimal number of terms necessary, without a normalization factor. We use the multispinor formalism in four dimensions as in the paper [@DS14]. Every vector index is transformed into a pair of spinor indices: $V^\mu\sim V^{\alpha,\dot{\alpha}}$, where $\alpha,\dot{\alpha}=1,2$. Dotted and undotted indices are transformed into one another under hermitian conjugation: $$\left(\Omega^{\alpha{\dot{\alpha}(2)}}\right)^\dagger=\Omega^{\alpha(2){\dot{\alpha}}}$$ The spin-tensors, i.e. fields with an odd number of indices, are Grassmannian. For example, $$A^{\alpha(2)\dot\alpha} \eta^{\alpha} = - \eta^{\alpha} A^{\alpha(2)\dot\alpha}$$ Under hermitian conjugation, the order of fields is reversed: $$\left(A^{\alpha(2)\dot\alpha}\eta^{\alpha}\right)^\dagger = \eta^{\alpha} A^{\alpha(2)\dot\alpha} = - A^{\alpha(2)\dot\alpha}\eta^{\alpha}$$ The metric for the spinor indices is the antisymmetric bispinor: $$\epsilon_{\alpha\beta} \xi^\beta = - \xi_\alpha, \qquad \epsilon^{\alpha\beta} \xi_\beta = \xi^\alpha,$$ and similarly for the dotted indices. Hence, symmetry over a set of indices implies tracelessness. This feature greatly simplifies the work with traceless mixed symmetry tensors and spin-tensors.
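As a concrete illustration of the symmetrization convention just stated (the sum over the minimal number of terms, with no normalization factor), the symmetrization of one index with two reads:

```latex
\zeta^{\alpha}\Omega^{\alpha(2)} \equiv
\zeta^{\alpha_1}\Omega^{\alpha_2\alpha_3}
+ \zeta^{\alpha_2}\Omega^{\alpha_1\alpha_3}
+ \zeta^{\alpha_3}\Omega^{\alpha_1\alpha_2}
```

i.e. three terms and no overall factor of $1/3$.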
The mixed symmetry tensor $\Phi^{\mu(k),\nu(l)}$, which corresponds to the two-row Young tableau $Y(k,l)$ [@BB06], is described in the multispinor formalism by a pair of multispinors $\Phi^{\alpha(k+l)\dot\alpha(k-l)}$, $\Phi^{\alpha(k-l)\dot\alpha(k+l)}$; if the tensor $\Phi^{\mu(k),\nu(l)}$ is real, then: $$\left(\Phi^{\alpha(k+l)\dot\alpha(k-l)}\right)^\dagger = \Phi^{\alpha(k-l)\dot\alpha(k+l)}.$$ Similarly, the mixed symmetry spin-tensor $\Psi^{\mu(k),\nu(l)}$, which corresponds to the Young tableau $Y(k+\iz,l+\iz)$, is described by a pair of multispinors $\Psi^{\alpha(k+l+1)\dot\alpha(k-l)}$, $\Psi^{\alpha(k-l)\dot\alpha(k+l+1)}$; if the spin-tensor $\Psi^{\mu(k),\nu(l)}$ is a Majorana one, then $$\left(\Psi^{\alpha(k+l+1)\dot\alpha(k-l)}\right)^\dagger = \Psi^{\alpha(k-l)\dot\alpha(k+l+1)}.$$ In the frame-like formalism two bases, namely the world one and the local one, are used. We denote the local basis vectors as $e^{\alpha\dot\alpha}$; the world indices are omitted, and all the fields are assumed to be differential forms with respect to them. Similarly, all the products are exterior with respect to the world indices. In the paper, we use the basis forms, i.e. antisymmetrized products of the basis vectors $e^{\alpha\dot\alpha}$: the 2-form $E^{\alpha(2)}+h.c.$, the 3-form $E^{\alpha\dot\alpha}$ and the 4-form $E$. The transformation law of these forms under hermitian conjugation is: $$(e^{\alpha\dot\alpha})^\dagger=e^{\alpha\dot\alpha} \qquad (E^{\alpha(2)})^\dagger=E^{\dot\alpha(2)} \qquad (E^{\alpha\dot\alpha})^\dagger=-E^{\alpha\dot\alpha} \qquad (E)^\dagger=-E$$ Equations on the parameters of superblock ========================================= Here we provide the complete set of equations which follows from the requirement that the unfolded equations be invariant under the supertransformations.
For the supertransformations of the bosonic sector of gauge invariant zero-forms we obtain: $$\begin{aligned} \label{superblock_eqs1} \frac{\delta_{k,m}^{0+}\tilde{\beta}^{i,-}_{k+\iz,m-\iz} +\beta^{i,+}_{k,m}\delta_{k+\iz(1+i),m-\iz(1-i)}^{0-} -\lambda\delta_{k,m}^{i0}}{k-m} &=& \beta^{i,-}_{k,m}\delta_{k-\iz(1-i),m+\iz(1+i)}^{0+} -\delta_{k,m}^{0+}\tilde{\beta}^{i,-}_{k+\iz,m-\iz} \nonumber \\&=& \delta_{k,m}^{0-}\tilde{\beta}^{i,+}_{k-\iz,m+\iz}-\beta^{i,+}_{k,m}\delta_{k+(1+i)\iz,m-\iz(1-i)}^{0-} , \nonumber \\ \frac{\delta_{k,m}^{+0}\tilde{\beta}^{-,i}_{k+\iz,m+\iz} +\beta^{+,i}_{k,m}\delta_{k+\iz(1+i),m+\iz(1-i)}^{-0} -\lambda\delta_{k,m}^{0i}}{k+m} &=& \beta^{-,i}_{k,m}\delta_{k-(1-i)\iz,m-(1+i)\iz}^{+0}-\delta_{k,m}^{+0}\tilde{\beta}^{-,i}_{k+\iz,m+\iz} \nonumber \\&=& \delta_{k,m}^{-0}\tilde{\beta}^{+i}_{k-\iz,m-\iz}-\beta^{+i}_{k,m}\delta_{k+\iz(1+i),m+(1+i)\iz}^{-0}, \nonumber \\ \beta^{ij}_{k,m}\delta_{k+\iz(i+j),m+\iz(i-j)}^{i0}&=&\delta_{k,m}^{i0}\tilde{\beta}^{ij}_{k+\iz i,m+\iz i}, \nonumber \\ \beta^{ij}_{k,m}\delta_{k+\iz(i+j),m+\iz(i-j)}^{0j}&=&\delta_{k,m}^{0j}\tilde{\beta}^{ij}_{k+\iz j,m-\iz j}\end{aligned}$$ and similar conditions with inverted tildes (i.e. tilde is added above the coefficients which do not possess one and removed from those which have one) for the fermionic sector with half-integer $k,m$. Here $i,j$ are numbers $\pm 1$; when written as upper indices of the coefficients, they stand for $+$ and $-$ respectively. 
Similarly, for the gauge sector supertransformation parameters we get: $$\begin{aligned} \label{superblock_eqs2} \frac{\gamma_{k,m}^{0+}\tilde{\alpha}^{i,-}_{k+\iz,m-\iz} +\alpha^{i,+}_{k,m}\gamma_{k+\iz(1+i),m-\iz(1-i)}^{0-} -\lambda\gamma_{k,m}^{i0}}{k-m} &=& \alpha^{i,-}_{k,m}\gamma_{k-\iz(1-i),m+\iz(1+i)}^{0+} -\gamma_{k,m}^{0+}\tilde{\alpha}^{i,-}_{k+\iz,m-\iz} \nonumber \\&=& \gamma_{k,m}^{0-}\tilde{\alpha}^{i,+}_{k-\iz,m+\iz}-\alpha^{i,+}_{k,m}\gamma_{k+(1+i)\iz,m-\iz(1-i)}^{0-} , \nonumber \\ \frac{\gamma_{k,m}^{+0}\tilde{\alpha}^{-,i}_{k+\iz,m+\iz} +\alpha^{+,i}_{k,m}\gamma_{k+\iz(1+i),m+\iz(1-i)}^{-0} -\lambda\gamma_{k,m}^{0i}}{k+m} &=& \alpha^{-,i}_{k,m}\gamma_{k-(1-i)\iz,m-(1+i)\iz}^{+0}-\gamma_{k,m}^{+0}\tilde{\alpha}^{-,i}_{k+\iz,m+\iz} \nonumber \\&=& \gamma_{k,m}^{-0}\tilde{\alpha}^{+i}_{k-\iz,m-\iz}-\alpha^{+i}_{k,m}\gamma_{k+\iz(1+i),m+(1-i)\iz}^{-0}, \nonumber \\ \alpha^{ij}_{k,m}\gamma_{k+\iz(i+j),m+\iz(i-j)}^{i0}&=&\gamma_{k,m}^{i0}\tilde{\alpha}^{ij}_{k+\iz i,m+\iz i}, \nonumber \\ \alpha^{ij}_{k,m}\gamma_{k+\iz(i+j),m+\iz(i-j)}^{0j}&=&\gamma_{k,m}^{0j}\tilde{\alpha}^{ij}_{k+\iz j,m-\iz j}\end{aligned}$$ The relations for $\tilde{\gamma}^{ij}_{k,m}$ are obtained by inverting tildes. [10]{} T. Curtright [*“Massless field supermultiplets with arbitrary spin”,*]{} Phys. Lett. [**B85**]{} (1979) 219. M. A. Vasiliev [*“’Gauge’ form of description of massless fields with arbitrary spin”,*]{} Sov. J. Nucl. Phys. [**32**]{} (1980) 439. S. M. Kuzenko, A. G. Sibiryakov, V. V. Postnikov [*“Massless gauge superfields of higher half integer superspins”,*]{} JETP Lett. [**57**]{} (1993) 534. S. M. Kuzenko, A. G. Sibiryakov [*“Massless gauge superfields of higher integer superspins”,*]{} JETP Lett. [**57**]{} (1993) 539. S. M. Kuzenko, A. G. Sibiryakov [*“Free massless higher superspin superfields on the anti-de Sitter superspace”,*]{} Phys. Atom. Nucl. [**57**]{} (1994) 1257, arXiv:1112.4612. I.L. Buchbinder, Jr. S. J. Gates, J. Phillips, W. D. 
Linch [*“New 4D, N = 1 Superfield Theory: Model of Free Massive Superspin-3/2 Multiplet”,*]{} Phys. Lett. [**B535**]{} (2002) 280-288, arXiv:hep-th/0201096. I.L. Buchbinder, S. J. Gates Jr, W.D. Linch III, J. Phillips [*“Dynamical Superfield Theory of Free Massive Superspin-1 Multiplet”,*]{} Phys. Lett. [**B549**]{} (2002) 229-236, arXiv:hep-th/0207243. Yu. M. Zinoviev [*“Massive N=1 supermultiplets with arbitrary superspins”,*]{} Nucl. Phys. [**B785**]{} (2007) 98-114, arXiv:0704.1535. Yu. M. Zinoviev [*“On Massive High Spin Particles in (A)dS”,*]{} arXiv:hep-th/0108192. R. R. Metsaev [*“Gauge invariant formulation of massive totally symmetric fermionic fields in (A)dS space”,*]{} Phys. Lett. [**B643**]{} (2006) 205-212, arXiv:hep-th/0609029. I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian formulation of the massive higher spin supermultiplets in three dimensional space-time”,*]{} JHEP [**10**]{} (2015) 148, arXiv:1508.02829. I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Gauge invariant Lagrangian formulation of massive higher spin fields in $(A)dS_3$ space”,*]{} Phys. Lett. [**B716**]{} (2012) 243-248, arXiv:1207.1215. I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Frame-like gauge invariant Lagrangian formulation of massive fermionic higher spin fields in $AdS_3$ space”,*]{} Phys. Lett. [**B738**]{} (2014) 258, arXiv:1407.3918. I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Unfolded equations for massive higher spin supermultiplets in $AdS_3$”,*]{} JHEP [**08**]{} (2016) 075, arXiv:1606.02475. Yu. M. Zinoviev [*“Massive higher spins in d=3 unfolded”,*]{} J. Phys. A [**49**]{} (2016) 095401, arXiv:1509.00968. I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian description of massive higher spin supermultiplets in $AdS_3$ space”,*]{} JHEP [**08**]{} (2017) 021, arXiv:1705.06163. I. L. Buchbinder, T. V. Snegirev, Yu. M. 
Zinoviev [*“Supersymmetric higher spin models in three dimensional spaces”,*]{} Symmetry [**10**]{} (2018) 9, arXiv:1711.11450. I. L. Buchbinder, M.V. Khabarov, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian formulation of the massive higher spin $N=1$ supermultiplets in $AdS_4$ space”,*]{} Nucl. Phys. [**B942**]{} (2019) 1-29, arXiv:1901.09637. Yu. M. Zinoviev [*“Frame-like gauge invariant formulation for massive high spin particles”,*]{} Nucl. Phys. [**B808**]{} (2009) 185, arXiv:0808.1778. Sebastian Garcia-Saenz, Kurt Hinterbichler, Rachel A. Rosen [*“Supersymmetric Partially Massless Fields and Non-Unitary Superconformal Representations”,*]{} JHEP [**11**]{} (2018) 166, arXiv:1810.01881. I. L. Buchbinder, M. V. Khabarov, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian description of the partially massless higher spin N=1 supermultiplets in $AdS_4$ space”,*]{} JHEP [**08**]{} (2019) 116, arXiv:1904.01959. Yu. M. Zinoviev [*“Infinite spin fields in d = 3 and beyond”,*]{} Universe [**3**]{} (2017) 63, arXiv:1707.08832. I. L. Buchbinder, M. V. Khabarov, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian formulation for the infinite spin N=1 supermultiplets in d=4”,*]{} Nucl. Phys. [**B946**]{} (2019) 114717, arXiv:1904.05580. Mojtaba Najafizadeh [*“Supersymmetric Continuous Spin Gauge Theory”,*]{} arXiv:1912.12310. R.R. Metsaev [*“Continuous spin gauge field in (A)dS space”,*]{} Phys. Lett. [**B767**]{} (2017) 458, arXiv:1610.00657. R.R. Metsaev [*“Fermionic continuous spin gauge field in (A)dS space”,*]{} Phys. Lett. [**B773**]{} (2017) 135, arXiv:1703.05780. M. V. Khabarov, Yu. M. Zinoviev [*“Infinite (continuous) spin fields in the frame-like formalism”,*]{} Nucl. Phys. [**B928**]{} (2018) 182, arXiv:1711.08223. R.R. Metsaev [*“BRST-BV approach to continuous-spin field”,*]{} Phys. Lett. [**B781**]{} (2018) 568, arXiv:1803.08421. M.V. Khabarov, Yu. M. Zinoviev [*“Massive higher spin fields in the frame-like multispinor formalism”,*]{} Nucl. Phys. 
[**B948**]{} (2019) 114773, arXiv:1906.03438. D. S. Ponomarev, M. A. Vasiliev [*“Frame-Like Action and Unfolded Formulation for Massive Higher-Spin Fields”,*]{} Nucl. Phys. [**B839**]{} (2010) 466, arXiv:1001.0062. D.S. Ponomarev, M.A. Vasiliev [*“Unfolded Scalar Supermultiplet”,*]{} JHEP [**1012**]{} (2012) 152, arXiv:1012.2903. N. G. Misuna, M. A. Vasiliev [*“Off-Shell Scalar Supermultiplet in the Unfolded Dynamics Approach”,*]{} JHEP [**05**]{} (2014) 140, arXiv:1301.2230. V. E. Didenko, E. D. Skvortsov [*“Elements of Vasiliev theory”,*]{} arXiv:1401.2975. Lars Brink, Abu M. Khan, Pierre Ramond, Xiaozhen Xiong [*“Continuous Spin Representations of the Poincare and Super-Poincare Groups”,*]{} J.Math.Phys. [**43**]{} (2002) 6279, arXiv:hep-th/0205145. Xavier Bekaert, Nicolas Boulanger [*“The unitary representations of the Poincare group in any spacetime dimension”,*]{} arXiv:hep-th/0611263. [^1]: maksim.khabarov@ihep.ru [^2]: Yurii.Zinoviev@ihep.ru
--- abstract: 'Gene regulatory network (GRN) modeling is a well-established theoretical framework for the study of cell-fate specification during developmental processes. Recently, dynamical models of GRNs have been taken as a basis for formalizing the metaphorical model of Waddington’s epigenetic landscape, providing a natural extension for the general protocol of GRN modeling. In this contribution we present in a coherent framework a novel implementation of two previously proposed general frameworks for modeling the [*Epigenetic Attractors Landscape*]{} associated with Boolean GRNs: the [*inter-attractor*]{} and [*inter-state*]{} transition approaches. We implement novel algorithms for estimating inter-attractor transition probabilities without necessarily depending on intensive single-event simulations. We analyze the performance and sensitivity to parameter choices of the algorithms for estimating inter-attractor transition probabilities using three real GRN models. Additionally, we present a side-by-side analysis of downstream analysis tools such as the attractors’ temporal and global ordering in the EAL. Overall, we show how the methods complement each other using a real case study: a cellular-level GRN model for epithelial carcinogenesis. We expect the toolkit and comparative analyses put forward here to be a valuable additional resource for the systems biology community interested in modeling cellular differentiation and reprogramming both in normal and pathological developmental processes.' author: - | Jose Davila-Velderrain^1,2,\*^, Luis Juarez-Ramiro^3^\ Juan C. Martinez-Garcia^3^, Elena R. Alvarez-Buylla^1,2,\*^\ bibliography: - 'sample.bib' title: Methods for Characterizing the Epigenetic Attractors Landscape Associated with Boolean Gene Regulatory Networks --- [ **[1]{} Instituto de Ecología, Universidad Nacional Autónoma de México, Cd. Universitaria, México, D.F.
04510, México\ **[2]{} Centro de Ciencias de la Complejidad (C3), Universidad Nacional Autónoma de México, Cd. Universitaria, México, D.F. 04510, México\ **[3]{} Departamento de Control Automático, Instituto Politécnico Nacional, A. P. 14-740, 07300 México, DF, México****** ]{} Introduction {#introduction .unnumbered} ============ The postulation of experimentally grounded gene regulatory network (GRN) dynamical models, their qualitative analysis and dynamical characterization in terms of control parameters, and the validation of GRN predictions against experimental observations have become a well-established framework in systems biology – see, for example: [@mendoza1998dynamics; @espinosa2004gene; @huang2007bifurcation; @davila2015descriptive]. There are multiple tools available for the straightforward implementation and analysis of dynamical models of GRNs [@Azpeitia2014FlowerDev]. These models are well-suited for the study of cell-fate specification during developmental processes. More recently, dynamical models of GRNs have been taken as a basis for formalizing a century-old developmental metaphor: Waddington’s epigenetic landscape [@waddington1957strategy; @alvarez2008floral; @huang2012molecular; @Villarreal2012; @davila2015reshaping]. The present authors recently introduced the term [*Epigenetic Attractors Landscape (EAL)*]{} in order to distinguish this modern view of the EL from its metaphorical counterpart (see [@davila2015modeling]). Accordingly, here we will use the term EAL to refer to a group of dynamical models grounded in dynamical systems theory which operationally define an underlying EL associated with GRN dynamics.
In this contribution we focus on the EAL associated with the discrete-time Boolean description of GRNs grounded on experimental data.\ Despite growing interest in modeling the EAL, as evidenced by recent model proposals in the study of stem cell differentiation [@li2013quantifying] and reprogramming [@wang2014epigenetic], as well as the study of carcinogenesis [@wang2014quantitative; @zhu2015endogenous] and cancer therapeutics [@choi2012attractor; @wang2013therapeutic], and unlike the case of GRNs, there are no available tools for the straightforward implementation of EAL models. Furthermore, different EAL models have not been compared directly through side-by-side analysis of the same biological system. This has arguably precluded the widespread applicability of EALs.\ One of the first methodological frameworks proposed to explore the EAL associated with a Boolean GRN was presented by Alvarez-Buylla and collaborators [@alvarez2008floral]. Briefly, in its original form this framework rests on three steps: (1) introducing stochasticity into the Boolean dynamics by means of the so-called stochasticity in nodes (SIN) model, (2) estimating an [*inter-attractor*]{} transition probability matrix by simulation, and (3) analyzing the temporal evolution of the probability distribution over attractor states (see methods). For the purpose of this contribution, we refer to this framework as the [*inter-attractor*]{} transition approach (IAT). Recently, a related framework was presented by Zhou and his collaborators [@Zhou2014Discrete]. The main differences with the former method are that the latter (1) avoids simulation by introducing stochasticity directly into a deterministic transition matrix, and (2) is based on the estimation of an [*inter-state*]{} transition probability matrix. We refer to this latter framework as the [*inter-state*]{} transition approach (IST).
Additionally, Zhou and collaborators introduced the idea of a global ordering of attractors in the EAL, defined by analyzing the relative stability of attractor states [@zhou2014relative], where stability is quantified in terms of the capacity of the attractors (phenotypes) to endure stochastic disturbances.\ In this contribution we present, in a coherent framework, a novel implementation of the two methodologies, as well as associated analysis tools such as the global ordering of the attractors based on relative stabilities, the computation of a quasi-potential landscape based on a stationary probability distribution, and additional tools for downstream analyses and plotting. We use the popular R statistical programming environment (www.R-project.org). For the first framework (IAT), we implement novel algorithms for estimating [*inter-attractor*]{} transition probabilities without necessarily depending on intensive single-event simulations. For both frameworks (IAT and IST) we exploit the vector-based programming capability of the R language. We analyze the performance and the sensitivity to parameter choices of the algorithms for estimating [*inter-attractor*]{} transition probabilities using three GRN models: the Arabidopsis (1) root stem cell niche [@azpeitia2010single] and (2) early flower development [@davila2015reshaping] GRNs; and (3) a cellular-level GRN model for epithelial carcinogenesis. Additionally, for the latter model we present, for the first time, a side-by-side analysis of the two frameworks and show how the methods complement each other. Importantly, we show that the attractor time-ordered transitions obtained by directly estimating an inter-attractor transition matrix are consistent with the global ordering of the attractors obtained by means of their corresponding relative stabilities.
All the necessary code for applying the methods and examples shown herein is made publicly available (see methods below); we expect this toolkit to be a valuable additional resource for the systems biology community. Results {#results .unnumbered} ======= Characterizing the Epigenetic Attractors Landscape {#characterizing-the-epigenetic-attractors-landscape .unnumbered} -------------------------------------------------- In this work we organize previously existing, yet dispersed, mathematical analyses into a coherent framework for the characterization of the EAL associated with the discrete-time Boolean description of GRNs grounded in experimental data. Figure 1 schematically represents a general workflow for such characterization. The workflow is meant to be applied to an already available and validated experimentally grounded Boolean GRN model (see [@Azpeitia2014FlowerDev]). The first necessary step (Fig. 1a) consists of characterizing the state-space associated with the GRN in terms of the attained attractors and their basins, a standard practice in the dynamical analysis of Boolean GRNs (see methods). The second main step consists of estimating an inter-attractor or an inter-state transition probability matrix (or both) (Fig. 1b). The former is the main mathematical structure for the IAT approach, and the latter for the IST approach (see methods). Downstream analyses of the underlying EAL, such as the temporal order of attractor attainment, the attractor relative stability and global ordering, and the construction of a probabilistic landscape, are based on the transition matrices and can be applied afterwards (Fig. 1c). Inter-attractor Transitions {#inter-attractor-transitions .unnumbered} --------------------------- A first necessary step in order to explore the EAL associated with a Boolean GRN using the IAT approach is to calculate the probabilities of transition from one attractor to another.
In this contribution we present two algorithms for this task (see methods). Algorithm 1 implements what we will refer to as an intuitive mapping-guided random walk in state space. The reasoning is as follows. An initial state is taken at random, which is then mapped to a next state using the stochastic mapping in Equation (3). The basins corresponding to the two states are recorded in order. Subsequently, another state is picked at random from the latter basin, and the mapping procedure is repeated. The procedure is repeated $Nsteps$ times, each time taking at random a state from the present basin, and the goal is to record a stochastic realization of the transitions from one basin to another. Algorithm 2, on the other hand, considers all the possible states, repeats them $Nreps$ times in a single data structure, and maps them using Equation (3) as well (for details, see methods). An important technical issue is then how to select the parameters $Nsteps$ and $Nreps$, respectively, especially because this type of simulation approach has been described as requiring a large amount of time-consuming sampling [@Zhou2014Discrete].\ For each algorithm we tested how the estimate of the inter-attractor transition matrix changes as the parameter value increases. We used three real GRN models for testing: the [*Arabidopsis*]{} single-cell root stem cell niche GRN (root-GRN) [@azpeitia2010single], the [*Arabidopsis*]{} floral organ determination GRN (flower-GRN) [@Azpeitia2014FlowerDev], and a cellular-level GRN model for epithelial carcinogenesis (cancer-GRN). We found that for models of a size common to GRN developmental modules (i.e., $8-15$ genes) the estimation obtained with small values of the parameter rapidly converges to that obtained by using large values (e.g., $\approx 10^6$).
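The logic of Algorithm 1 can be sketched in Python (the paper's actual toolkit is in R; see methods). The three-gene majority-rule network below is a hypothetical toy stand-in for the real GRNs, chosen because its two fixed-point attractors (000 and 111) and their basins are easy to verify by hand; `F`, `basin`, and the parameter values are illustrative assumptions, not part of the published models.

```python
import itertools
import random

N = 3          # genes in a toy majority-rule network (hypothetical stand-in)
XI = 0.05      # per-gene error probability xi of the SIN model

def F(x):
    """Deterministic Boolean update: every gene copies the majority value."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

def stochastic_map(x, xi, rng):
    """SIN mapping: each gene flips its deterministic value with probability xi."""
    return tuple(b ^ (rng.random() < xi) for b in F(x))

def basin(x):
    """Attractor label of the basin containing x (iterate F to a fixed point)."""
    while F(x) != x:
        x = F(x)
    return 0 if x == (0, 0, 0) else 1   # two attractors: 000 and 111

def estimate_iat(nsteps, xi=XI, seed=1):
    """Algorithm 1: mapping-guided random walk over the basins."""
    rng = random.Random(seed)
    states = list(itertools.product((0, 1), repeat=N))
    pools = {k: [s for s in states if basin(s) == k] for k in (0, 1)}
    counts = [[0, 0], [0, 0]]
    x = rng.choice(states)              # random initial state
    k = basin(x)
    for _ in range(nsteps):
        j = basin(stochastic_map(x, xi, rng))   # noisy one-step transition
        counts[k][j] += 1
        k = j
        x = rng.choice(pools[k])        # resample a state from the new basin
    # maximum-likelihood (row-normalized) estimate of the matrix Pi
    return [[c / max(sum(row), 1) for c in row] for row in counts]

Pi = estimate_iat(nsteps=20000)
```

For this toy network the estimated matrix is strongly diagonal-dominant, since leaving a basin requires at least two simultaneous gene errors.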
Figure 2 shows how the distance between the estimate obtained using a value $Nsteps \, (Nreps) = i$ and that obtained using $Nsteps=10^6$ and $Nreps=10^3$, for Algorithms 1 and 2 respectively, decreases as $i$ increases. These results correspond to the three GRN models: root (Fig. 2a-b), cancer (Fig. 2c-d), and flower (Fig. 2e-f). Additionally, we show that the estimate obtained with one of the algorithms also rapidly converges to that obtained with the other algorithm. Figure 3 shows how the distance between the estimate obtained using one algorithm with a parameter value $i$ and that obtained using the other algorithm with a large parameter value decreases as $i$ increases. Based on this latter analysis we conclude that, for GRNs of $8-15$ genes, using a value of the order of $Nsteps = 10^4$ for Algorithm 1 and $Nreps=10^2$ for Algorithm 2 would be sufficient to achieve an accuracy similar to that achieved using large values (i.e., $10^6$ and $10^3$, respectively), thus decreasing the computational cost involved. Characterizing the EAL {#characterizing-the-eal .unnumbered} ---------------------- In this section we provide as an example the analysis of the EAL underlying a cellular-level GRN model for epithelial carcinogenesis. The details of the construction and validation of this network model are being published by the authors elsewhere. The GRN comprises 9 main regulators of epithelial carcinogenesis (Fig. 4), and its dynamical characterization uncovers three fixed-point attractors corresponding to the epithelial, senescent, and mesenchymal stem-like cellular phenotypes. We applied the two approaches (IAT and IST) to the cancer-GRN, and for the IAT approach we applied the two algorithms proposed herein. Accordingly, we estimated two inter-attractor transition matrices and one inter-state transition matrix. For simplicity, in all cases we kept fixed a single value of the error parameter $\xi = 0.05$. Using the estimated matrices, we applied the downstream analyses depicted in Figure 1c.
Figure 5 shows two graphs plotting the temporal evolution of the occupation probability distribution over the attractor states epithelial (black), senescent (red) and mesenchymal (green), conditioned on an initial distribution in which all the cellular population is in the epithelial attractor state. The uncovered attractor time-order is indicated by sequential vertical lines: the order is epithelial $\rightarrow$ senescent $\rightarrow$ mesenchymal. Importantly, the two algorithms give the same qualitative result.\ Subsequently, we uncovered the global ordering of attractors by calculating the relative stabilities and net transition rates between pairs of attractors using the two inter-attractor transition matrices estimated with the two algorithms (for details, see methods). Figure 6 shows two graphs in which a red arrow appears if the calculated net transition rate between two attractors is positive in the indicated direction. The global ordering corresponds to the path comprised by directed arrows passing through the three attractors, here: epithelial $\rightarrow$ senescent $\rightarrow$ mesenchymal. Thus, the global ordering is consistent with the attractor time-order, as long as the latter is conditioned on having the total probability mass in the epithelial attractor as the initial state. Again, the two algorithms produce the same qualitative result.\ Finally, we used the estimated inter-state transition matrix obtained with the IST approach to derive a graphical probabilistic landscape (see methods). The landscape is based on the stationary probability distribution $\mathbf{u}_{ss}$ obtained by numerical simulation (see methods). Figures 7 and 8 show a 3D-surface and a contour plot, respectively. The graphical landscape was derived by first mapping all the state vectors in the state-space into a low-dimensional space by means of principal component analysis, a dimensionality-reduction technique.
The first two components are taken as the coordinates in the 3D plot, where the z-coordinate corresponds to the values $-log(\mathbf{u}_{ss})$. The surface is inferred by interpolating the spaced data points using the technique of thin plate spline regression [@furrer2009fields]. The 3D-surface plot nicely shows the relative stability of the states by means of their probability: states located lower in the landscape display a higher relative stability than those located higher up. The route from the least stable attractors to the most stable one is consistent with the global ordering uncovered above. Moreover, in the case of the IST transition matrix and the probabilistic landscape we have additional information concerning the relative stability of all the transitory states in state space. Discussion {#discussion .unnumbered} ========== Boolean GRN models are well-established tools for the mechanistic study of the establishment of cellular phenotypes during developmental dynamics. Their simplicity and deterministic nature are well-suited for answering questions regarding the sufficiency of the molecular players and interactions necessary to explain observed cellular phenotypes. In the present contribution we present methods to study an extended Boolean GRN model which takes stochasticity into consideration, necessary for studying cell-state transition events.\ In the case of stochastic Boolean GRNs, the model of interest involves random samples with a non-trivial dependence structure. In such cases, efficient simulation algorithms are needed in order to explore and characterize the underlying structure and to understand the behavioral (dynamical) consequences of the constraints imposed by such structure. Accordingly, we propose two algorithms of general applicability, and show how these can be used to efficiently estimate transition probabilities from moderate-size GRNs similar to those proposed as developmental modules driving developmental processes.
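The landscape construction just described can be sketched end-to-end in Python (the published toolkit is in R). The block below assumes the same hypothetical three-gene majority-rule toy network used in the other sketches, builds its IST matrix, obtains $\mathbf{u}_{ss}$ by iterating $\mathbf{u}(t+1) = \mathbf{u}(t)\mathbf{\Pi}$, and projects the binary state vectors onto their first two principal components; the thin-plate-spline surface fit is omitted.

```python
import itertools
from math import comb
import numpy as np

N, XI = 3, 0.05   # toy majority-rule network; illustrative error probability

def F(x):
    """Deterministic update: every gene copies the majority value."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

states = list(itertools.product((0, 1), repeat=N))
dim = len(states)
index = {s: i for i, s in enumerate(states)}

# Inter-state matrix Pi = (1 - xi)^n T + N (cf. Eqs. 2, 5, 6), row-normalized
Pi = np.zeros((dim, dim))
for i, si in enumerate(states):
    Pi[i, index[F(si)]] += (1 - XI) ** N
    for j, sj in enumerate(states):
        if i != j:
            d = sum(a != b for a, b in zip(si, sj))   # Hamming distance
            Pi[i, j] += comb(N, d) * XI ** d * (1 - XI) ** (N - d)
Pi /= Pi.sum(axis=1, keepdims=True)

# Long-run (stationary) distribution via iteration of u(t+1) = u(t) Pi
u = np.full(dim, 1.0 / dim)
for _ in range(5000):
    u = u @ Pi
quasi_potential = -np.log(u)          # z-coordinate of the landscape

# First two principal components of the binary state vectors (x-y coordinates)
arr = np.array(states, dtype=float)
centered = arr - arr.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ Vt[:2].T
```

As expected, the two attractor states 000 and 111 sit at the minima of the quasi-potential, i.e., at the bottom of the landscape.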
Although we show that the two algorithms generate consistent estimates, one or the other may be preferred depending on the GRN in question, as well as the computational resources at hand. Algorithm 1 is likely to be preferred in the case of larger GRNs, as it is constrained not by the size of the GRN per se, but by the number of steps chosen in the simulation. On the other hand, given the declarative representation used in Algorithm 2, its performance is constrained by the memory available. Algorithm 2, however, may be preferred for fast estimates in small to moderate-size GRNs ($< 15$ genes). Importantly, although we tested the performance of the algorithms in terms of the number of steps chosen for the simulations, the results should not be generalized without caution, given that we only used three real GRNs, and the results may vary for larger GRNs or for state spaces with more complex structures.\ For illustrative purposes we applied all the methods and downstream analyses presented herein to a specific GRN: a cellular-level GRN model for the description of the phenotypic transitions involved in epithelial carcinogenesis. We show that, for this case, the uncovered temporal order of attractor attainment is consistent with the global ordering based on the exploration of the relative stability of the uncovered attractors, both calculated from an inter-attractor transition probability matrix. The result of the former is conditioned on the initial occupation probability taken. An interesting open problem would be to generalize this relationship using GRNs with diverse structures, for example by asking whether the global ordering of attractors is robust enough to drive most initial distributions into a consistent temporal ordering. An additional interesting question is what this relationship tells us about the structural constraints imposed by the GRN.
The tools and implementation presented here may prove useful for such theoretical studies.\ Finally, we present tools for deriving a probabilistic landscape from an estimated inter-state transition matrix in terms of the stationary probability distribution over state space. This latter analysis and the associated graphical tools can be applied to systematically study how the system responds to perturbations that result in a reshaped EAL. Structural alterations of the EAL may predict the induction of preferential cell-state transitions, as in the case of reprogramming strategies [@zhou2011understanding] or therapeutic interventions against the stabilization of a cancer attractor [@huang2013escape; @wang2013therapeutic].\ Overall, in this contribution we present, in a coherent framework, a novel implementation of general frameworks for modeling the [*Epigenetic Attractors Landscape*]{} associated with Boolean GRNs. We provide analyses of the methods' performance and show how they can be applied to real-case GRNs. We expect the toolkit and comparative analyses put forward here to be a valuable additional resource for the systems biology community interested in modeling cellular differentiation and reprogramming in both normal and pathological developmental processes. Materials and Methods {#materials-and-methods .unnumbered} ===================== Boolean Gene Regulatory Networks {#boolean-gene-regulatory-networks .unnumbered} -------------------------------- A Boolean network models a dynamical system assuming both discrete time and discrete state variables.
This is expressed formally with the mapping: $$x_i(t+1) = F_i(x_1(t),x_2(t),...,x_k(t)),$$ where the set of functions $F_i$ are logical propositions (or truth tables) expressing the relationships between the genes that share regulatory interactions with the gene $i$, and where the state variables $x_i(t)$ can take the discrete values $1$ or $0$, indicating whether the gene $i$ is expressed or not at a certain time $t$, respectively.\ A completely specified Boolean GRN model is analyzed by either of two methods: (1) by exhaustive computational characterization of the state space in terms of attained attractors and their basins of attraction (used in IAT), or (2) by defining a matrix explicitly encoding the mapping in Equation (1) (used in IST). Specifically, for the latter method, following [@zhou2014relative] the mapping in Equation (1) is used to define a single-step $2^n \times 2^n$ transition matrix $\mathbf{T}$ with elements $t_{i,j}$, where: $$t_{i,j} = \left\{ \begin{aligned} & 1, && \mathbf{x}_j = \mathbf{F}(\mathbf{x}_i) \\ & 0, && Otherwise. \end{aligned} \right.$$ Here $\mathbf{x}_i$ is the network state $i$ from the state-space of size $2^n$ corresponding to a network of $n$ genes, and $\mathbf{F}$ represents the vector of $n$ functions represented element-wise in Equation (1). Given the deterministic character of the mapping in Equation (1), the matrix $\mathbf{T}$ is sparse, each row $i$ having only one element where $t_{i,j}=1$. The matrix $\mathbf{T}$ constitutes a declarative representation which includes the complete information of the mapping in Equation (1): the matrix $\mathbf{T}$ assigns to each of the states $\mathbf{x}_k$, where $k \in \{1,...,2^n\}$, its corresponding state at time $t+1$.
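The construction of the deterministic matrix $\mathbf{T}$ of Equation (2) can be sketched in a few lines of Python (the toolkit itself is in R). The three-gene majority-rule update used below is a hypothetical toy, not one of the published GRNs.

```python
import itertools
import numpy as np

N = 3   # toy majority-rule network (hypothetical stand-in for a real GRN)

def F(x):
    """Eq. (1): deterministic update; here every gene copies the majority value."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

# Enumerate the state space of size 2^n and index each state
states = list(itertools.product((0, 1), repeat=N))
index = {s: i for i, s in enumerate(states)}

# Eq. (2): sparse single-step matrix T with t_ij = 1 iff x_j = F(x_i)
T = np.zeros((2 ** N, 2 ** N))
for i, s in enumerate(states):
    T[i, index[F(s)]] = 1.0
```

Because the mapping is deterministic, each row of `T` contains exactly one nonzero entry, and the rows of the fixed points 000 and 111 are ones on the diagonal.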
Inter-Attractor Transition Approach {#inter-attractor-transition-approach .unnumbered} ----------------------------------- ### Including Stochasticity {#including-stochasticity .unnumbered} Following [@alvarez2008floral; @Azpeitia2014FlowerDev; @davila2015modeling], a Boolean GRN is extended into a discrete stochastic model by means of the so–called stochasticity in nodes (SIN) model. In this model, a constant probability of error $\xi$ is introduced for the deterministic Boolean functions as follows: $$\begin{aligned} & P_{x_i(t+1)}[F_i(\mathbf{x}_{reg_i}(t))] = 1- \xi, \\ & P_{x_i(t+1)}[1 - F_i(\mathbf{x}_{reg_i}(t))] = \xi. \end{aligned}$$ It is assumed that the probability that the value of the random variable $x_i(t+1)$ (a gene) is determined or not by its associated logical function $F_i(\mathbf{x}_{reg_i}(t))$ is $1- \xi$ or $\xi$, respectively. The probability $\xi$ is a scalar constant parameter acting independently per gene. The vector $\mathbf{x}_{reg_i}$ represents the regulators of gene $i$. ### Inter-Attractor Transition Probability Estimation {#inter-attractor-transition-probability-estimation .unnumbered} An attractor transition probability matrix $\Pi$ with components: $$\pi_{ij} = P(A_{t+1}=j|A_t=i),$$ representing the probability that attractor $j$ is reached from attractor $i$, is estimated by either of the two simulation-based algorithms proposed herein (see results).
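The SIN rule of Equation (3) amounts to flipping each gene's deterministic output independently with probability $\xi$. A minimal Python sketch, again using the hypothetical three-gene majority-rule toy network as the deterministic mapping:

```python
import random

XI = 0.05   # constant per-gene error probability xi (illustrative value)

def F(x):
    """Deterministic update of a toy 3-gene majority-rule network."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

def sin_update(x, xi, rng):
    """Eq. (3): gene i takes 1 - F_i(x) with probability xi, else F_i(x)."""
    return tuple((1 - f) if rng.random() < xi else f for f in F(x))

rng = random.Random(0)
next_state = sin_update((1, 1, 0), XI, rng)   # one noisy step from state 110
```

Setting `xi=0` recovers the deterministic mapping exactly, and over many draws the empirical per-gene flip rate approaches $\xi$.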
**Algorithm 1** (mapping-guided random walk): (1) allocate storage[1..$Nsteps$]; (2) pick at random an initial state $\mathbf{x}_i$ from the state space $\{1,...,2^n\}$; (3) storage[1] $\leftarrow$ basin $k$ of $\mathbf{x}_i$; (4) for stepN $= 2,...,Nsteps$: obtain $\mathbf{x}_j$ by applying the stochastic mapping of Equation (3) to $\mathbf{x}_i$, set storage[stepN] $\leftarrow$ basin $k$ of $\mathbf{x}_j$, and pick at random a new state $\mathbf{x}_i$ from basin $k$; (5) return storage and the $j \times j$ count matrix $\Pi$, $j \in \{1, ..., n_{attractors} \}$. **Algorithm 2** (vectorized perturbation of the full state space): (1) generate the state space $\{\mathbf{x}_1,...,\mathbf{x}_{2^n}\}$; (2) generate the set $\mathbf{X_{t+1}} = \mathbf{F}(state \, space)$; (3) $\mathbf{X_{t+1}^{pert}}$ $\leftarrow$ repeat $\mathbf{X_{t+1}}$ element-wise $Nreps$ times; (4) generate a perturbation indicator vector $\mathbf{piv}$ by simulating $Nreps \times n \times 2^n$ observations from $Bin(n=1,\xi)$; (5) apply an error in $\mathbf{X_{t+1}^{pert}}[i]$ wherever $\mathbf{piv}[i]=1$, $i \in \{1,...,Nreps \times n \times 2^n \}$; (6) split $\mathbf{X_{t+1}^{pert}}$ into $n$-size state vectors $\mathbf{x}_k$, $k \in \{1,...,Nreps \, \times \, 2^n\}$, map each original state $\mathbf{x}_i$ and each perturbed state $\mathbf{x}_k$ to its basin $j$, and update the corresponding entry $\pi_{j,j'}$ of the storage matrix. In Algorithm 2, $Bin(n=1,\xi)$ refers to a binomial distribution given by $Bin(k|n,\xi) = \binom {n} {k} \ \xi^k(1-\xi)^{n-k}$. In the special case used here (with $n=1$) the distribution corresponds to a Bernoulli distribution. Thus, what we call the [*perturbation indicator vector*]{} effectively simulates tossing a biased coin $Nreps \, \times \, n \, \times \, 2^n$ times. Each outcome $x=1$ indicates a position where an error in the mapping has occurred, according to Equation (3).\ The elements $\pi_{ij}$ of the matrix $\Pi$ are obtained as maximum likelihood estimates based on the empirical transition frequencies resulting from the simulations of either Algorithm 1 or 2.
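Algorithm 2 can be sketched with array operations in Python (the published implementation exploits R's vectorization instead). The toy three-gene majority-rule network, the seed, and the parameter values below are illustrative assumptions.

```python
import itertools
import numpy as np

N, XI, NREPS = 3, 0.05, 200   # toy 3-gene majority network; xi and Nreps

def F(x):
    """Deterministic update: every gene copies the majority value."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

def basin(x):
    """Attractor label (0 for 000, 1 for 111) of the basin containing x."""
    while F(x) != x:
        x = F(x)
    return 0 if x == (0, 0, 0) else 1

rng = np.random.default_rng(7)
states = np.array(list(itertools.product((0, 1), repeat=N)))   # (2^n, n)
X_next = np.array([F(tuple(s)) for s in states])               # F(state space)

# Repeat all deterministic images Nreps times, then XOR with a Bernoulli(xi)
# perturbation-indicator array: each 1 marks a gene where Eq. (3) errs.
X_pert = np.tile(X_next, (NREPS, 1))
piv = rng.binomial(1, XI, size=X_pert.shape)
X_pert = X_pert ^ piv

# Tally basin-to-basin transitions and row-normalize (ML estimate of Pi)
origins = np.tile([basin(tuple(s)) for s in states], NREPS)
counts = np.zeros((2, 2))
for o, target in zip(origins, X_pert):
    counts[o, basin(tuple(target))] += 1
Pi = counts / counts.sum(axis=1, keepdims=True)
```

Note that all $Nreps \times 2^n$ perturbed images are produced in one vectorized draw; only the final basin lookup is a loop here, for clarity.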
Inter-State Transition Probability Approach {#inter-state-transition-probability-approach .unnumbered} ------------------------------------------- ### Including Stochasticity {#including-stochasticity-1 .unnumbered} For the IST approach, following [@Zhou2014Discrete; @zhou2014relative], stochasticity is introduced in a declarative manner (i.e., by means of a single-structure representation) using a binomial distribution. Specifically, the effect of noise on each possible single-state transition is represented by introducing a noise matrix $\mathbf{N}$ with elements $$N_{i,j} = \left\{ \begin{aligned} & \binom {n} {d_{ij}} \ \xi^{d_{ij}}(1-\xi)^{n-d_{ij}}, && i \neq j \\ & 0, && i = j \end{aligned} \right.$$ where $d_{ij}$ is the Hamming distance between the states $i$ and $j$ (i.e., $d_{ij} = \lVert \mathbf{x}_i - \mathbf{x}_j \rVert_H$ ). This representation formalizes an intuitive notion: the effect of noise on the system is more (less) likely to produce a state more (less) similar to the initial state.\ ### Inter-State Transition Probability Estimation {#inter-state-transition-probability-estimation .unnumbered} A single object including both the stochastic perturbations and the deterministic mapping is obtained by adding the noise matrix $\mathbf{N}$ and the deterministic single-step transition matrix $\mathbf{T}$ (see Equation 2) as follows $$\mathbf{\Pi} = (1-\xi)^n \mathbf{T} + \mathbf{N}$$ After normalization, a transition probability matrix $\Pi$ is obtained with components $$\pi_{ij} = P(\mathbf{x}_{t+1}=j|\mathbf{x}_t=i).$$ The components $\pi_{ij}$ represent the probability that a state $j$ is reached from a state $i$, where $i,j \in \{1, ... 2^n \}$. Temporal Evolution of States/Attractors Probability {#temporal-evolution-of-statesattractors-probability .unnumbered} --------------------------------------------------- In both approaches (IAT and IST) a sequence of random variables $\{C_t : t \in \mathbb{N}\}$ is considered as a Markov chain (MC).
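The IST construction of Equations (5)-(7) can be sketched directly in Python (the toolkit itself is in R), again on the hypothetical three-gene majority-rule toy network used in the other sketches:

```python
import itertools
from math import comb
import numpy as np

N, XI = 3, 0.05   # toy 3-gene majority network; illustrative error probability

def F(x):
    """Deterministic update: every gene copies the majority value."""
    m = 1 if sum(x) >= 2 else 0
    return (m, m, m)

states = list(itertools.product((0, 1), repeat=N))
dim = 2 ** N
index = {s: i for i, s in enumerate(states)}

# Deterministic single-step matrix T, Eq. (2)
T = np.zeros((dim, dim))
for i, s in enumerate(states):
    T[i, index[F(s)]] = 1.0

# Noise matrix N, Eq. (5): binomial weight in the Hamming distance d_ij
Noise = np.zeros((dim, dim))
for i, si in enumerate(states):
    for j, sj in enumerate(states):
        if i != j:
            d = sum(a != b for a, b in zip(si, sj))
            Noise[i, j] = comb(N, d) * XI ** d * (1 - XI) ** (N - d)

# Eq. (6): combine, then row-normalize to obtain the matrix Pi of Eq. (7)
Pi = (1 - XI) ** N * T + Noise
Pi = Pi / Pi.sum(axis=1, keepdims=True)
```

For small $\xi$ the deterministic successor dominates each row, so, for example, the fixed point 000 retains most of its probability mass in one step.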
In IAT (IST), $C_t$ takes as values the different attractors (states), with the elements $\pi_{i,j}$ representing inter-attractor (inter-state) transition probabilities, and the matrix $\Pi$ the (one-step) transition probability matrix. As the probabilities do not depend on time, the MC is homogeneous.\ The occupation probability distribution $P(C_t = j)$ – i.e., the probability that the chain is in state (attractor or state) $j$ at a given time $t$ – is denoted by the row vector $\mathbf{u}(t)$. The probabilities temporally evolve according to the dynamic equation $$\mathbf{u}(t+1) = \mathbf{u}(t) \mathbf{\Pi}.$$ Taking $\mathbf{u}(0)$ as the initial distribution of the MC, the equation reads $ \mathbf{u}(1) = \mathbf{u}(0) \mathbf{\Pi}.$ By linking the occupation probabilities iteratively we get $ \mathbf{u}(t) = \mathbf{u}(0) \mathbf{\Pi}^t$: the occupation probability distribution at time $t$ can be obtained directly by taking the $t$-th power of the transition matrix. EAL Analyses {#eal-analyses .unnumbered} ------------ ### Temporal-order of Attractor Attainment {#temporal-order-of-attractor-attainment .unnumbered} Having obtained the temporal evolution of the occupation probability distribution $\mathbf{u}(t)$ given an initial distribution $\mathbf{u}(0)$ by numerically solving Equation (8), and following [@alvarez2008floral], it is assumed that the most likely time for an attractor to be reached is when the probability of reaching that particular attractor is maximal. Therefore, the temporal sequence in which attractors are attained is obtained by determining the sequence in which their maximum probabilities are reached using $\mathbf{u}(t)$. ### Probabilistic Landscape {#probabilistic-landscape .unnumbered} A stationary probability distribution of a MC is a distribution $\mathbf{u}_{ss}$ which satisfies the steady-state equation $\mathbf{u}_{ss} = \mathbf{u}_{ss} \mathbf{\Pi}$.
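The time-order criterion can be sketched in Python. The $3 \times 3$ inter-attractor matrix below is hypothetical: its entries are not taken from the paper, but are chosen so that the epithelial $\rightarrow$ senescent $\rightarrow$ mesenchymal cascade of the cancer-GRN example is qualitatively reproduced.

```python
import numpy as np

# Hypothetical 3x3 inter-attractor matrix mimicking the E -> S -> M cascade
Pi = np.array([[0.90, 0.09, 0.01],
               [0.00, 0.90, 0.10],
               [0.00, 0.00, 1.00]])
labels = ["epithelial", "senescent", "mesenchymal"]

u = np.array([1.0, 0.0, 0.0])     # all probability mass starts in epithelial
history = [u.copy()]
for _ in range(200):
    u = u @ Pi                     # Eq. (8): u(t+1) = u(t) Pi
    history.append(u.copy())
history = np.array(history)

# Time-order of attractor attainment: when each occupation probability peaks
peak_times = history.argmax(axis=0)
order = [labels[i] for i in np.argsort(peak_times)]
```

The epithelial probability peaks at $t=0$, the senescent probability rises and falls at an intermediate time, and the mesenchymal probability accumulates last, reproducing the reported ordering.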
The stationary probability distribution, if it exists, is calculated either by solving the equation $\mathbf{u}_{ss}(\mathbf{I}-\mathbf{\Pi})=0$, where $\mathbf{I}$ is the $2^n \, \times \, 2^n$ identity matrix [@wilkinson2011stochastic]; or by numerically solving Equation (8), as $\mathbf{u}_{ss}$ corresponds to the [*long-run distribution*]{} of the MC: $\mathbf{u}_{ss} = \lim_{t \to \infty} \mathbf{u}(t)$ [@bolstad2011understanding]. A probabilistic landscape $U$ – also called a quasi-potential – can be obtained by mapping the distribution $\mathbf{u}_{ss}$ using $-ln(\mathbf{u}_{ss})$. Such a landscape reflects the probability of the states, and it provides a global characterization and a stability measure of the GRN system [@wang2015landscape]. ### Attractor Relative Stability and Global Ordering Analyses {#attractor-relative-stability-and-global-ordering-analyses .unnumbered} A relative stability matrix $\mathbf{M}$ is calculated which reflects the transition barrier between any two attractors based on the mean first passage time (MFPT). The transition barrier in the EAL epitomizes the ease of transitioning from one attractor to another. The ease of transitions, in turn, offers a notion of relative stability. Zhou and collaborators recently proposed that a GRN has a consistent global ordering of all of its attractors, which can be uncovered by considering their relative stabilities [@Zhou2014Discrete; @zhou2014relative]. A net transition rate between attractors $i$ and $j$ is defined in terms of the MFPT as follows: $$d_{i,j} = \frac{1}{MFPT_{i,j}} - \frac{1}{MFPT_{j,i}}$$ The consistent global ordering of the attractors is defined based on the formula proposed in [@zhou2014relative]. Briefly, the consistent global ordering of the attractors is given by the attractor permutation in which all transitory net transition rates from an initial attractor to a final attractor are positive.
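The MFPT-based global ordering can be sketched in Python. MFPTs are obtained from the standard first-step linear system (one of the matrix-based routes mentioned in the methods); the $3 \times 3$ inter-attractor matrix is again hypothetical, with small backward rates added so that the chain is ergodic and every MFPT is finite.

```python
from itertools import permutations
import numpy as np

# Hypothetical ergodic 3x3 inter-attractor matrix (illustrative entries)
Pi = np.array([[0.90, 0.09, 0.01],
               [0.01, 0.90, 0.09],
               [0.01, 0.01, 0.98]])
labels = ["epithelial", "senescent", "mesenchymal"]
n = Pi.shape[0]

def mfpt_to(Pi, j):
    """MFPT m_i = E[first hitting time of j | start in i], from the linear
    first-step system m_i = 1 + sum_{k != j} Pi[i, k] m_k."""
    keep = [k for k in range(Pi.shape[0]) if k != j]
    Q = Pi[np.ix_(keep, keep)]                    # transitions avoiding j
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(Pi.shape[0])
    out[keep] = m
    return out

M = np.column_stack([mfpt_to(Pi, j) for j in range(n)])   # M[i, j] = MFPT i->j

# Eq. (9): net transition rates d_ij = 1/MFPT_ij - 1/MFPT_ji
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            D[i, j] = 1.0 / M[i, j] - 1.0 / M[j, i]

# Global ordering: the permutation along which every net rate is positive
order = next(p for p in permutations(range(n))
             if all(D[p[k], p[k + 1]] > 0 for k in range(n - 1)))
```

For these toy rates all forward net rates are positive along epithelial $\rightarrow$ senescent $\rightarrow$ mesenchymal, so the recovered global ordering matches the time-ordering found from Equation (8).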
The MFPTs are calculated either by implementing the matrix-based algorithm proposed in [@sheskin1995computing] or by means of numerical simulation. Implementation {#implementation .unnumbered} -------------- All the methods presented here were implemented using the [*R*]{} statistical programming environment (www.R-project.org). The code relies on the following packages: [*BoolNet*]{}, for the dynamical analysis of Boolean networks [@mussel2010boolnet]; [*expm*]{}, for matrix computations [@goulet2013expm]; [*igraph*]{}, for network (graph) analyses [@csardi2006igraph]; [*markovchain*]{}, for MC analysis and inference; and [*fields*]{}, for surface plotting [@furrer2009fields]. The code, including a tutorial and examples, is publicly available at [*https://github.com/JoseDDesoj/Epigenetic-Attractors-Landscape-R*]{}. Figure legends {#figure-legends .unnumbered} ============== **Fig 1. Schematic representations of the general workflow for characterizing the EAL.** a) The starting point is the dynamical characterization of an experimentally grounded Boolean GRN model in terms of attained attractors and corresponding basins. b) Depending on the downstream analyses of interest, one may proceed by calculating an inter-attractor (IAT) or inter-state (IST) transition matrix, or both. c) Using the calculated transition matrix as input, downstream analysis tools can be applied: the attractor time-order and global ordering from the IAT matrix, and the probabilistic landscape from the IST matrix.\ **Fig. 2 Distance between estimates as a function of the parameters $Nsteps$ and $Nreps$.** The plots show the euclidean distance (y axis) between the estimated transition probabilities using a value $i$ of $Nsteps$ for Algorithm 1 and of $Nreps$ for Algorithm 2 (x axis) and the corresponding estimates using a value of $Nsteps=10^6 $ and $Nreps=10^3$. Plots show calculations for the three GRNs used: root (Fig. 2a-b), cancer (Fig. 2c-d), and flower (Fig. 2e-f).\ **Fig.
3 Distance between estimates obtained with Algorithms 1 and 2.** Plots a, c, and e show the euclidean distance between the estimated transition probabilities obtained using Algorithm 1 with a value $i$ of $Nsteps$ (x axis) and the corresponding estimates obtained using Algorithm 2 with a value of $Nreps=10^3$. Plots b, d, and f show the euclidean distance between the estimated transition probabilities obtained using Algorithm 2 with a value $i$ of $Nreps$ (x axis) and the corresponding estimates obtained using Algorithm 1 with a value of $Nsteps=10^6$. Vertical, dotted lines indicate a tentative minimal value for the corresponding parameter ($Nsteps$ or $Nreps$) able to provide estimates comparable with those obtained using large values. Plots show calculations for the three GRNs used: root (a-b), cancer (c-d), and flower (e-f).\ **Fig 4. Gene regulatory network for epithelial carcinogenesis.** Nodes represent genes, and arrows represent experimentally characterized interactions. The nature of each interaction (activation or inhibition) is not specified, given that this information is implicit in the logical rules specifying the Boolean dynamical model.\ **Fig 5. Temporal sequence of the cell–fate attainment pattern under the stochastic Boolean GRN model during epithelial carcinogenesis.** The plots show the probability $P$ of attaining each attractor as a function of time (in iteration steps). Vertical lines mark the time at which the maximal probability of each attractor occurs. The most probable sequence of cell-fate attainment is: epithelial (E) $\rightarrow$ senescent (S) $\rightarrow$ mesenchymal (M). Both algorithms uncover the same time-order pattern.\ **Fig 6. Graph-based representation of attractor transitions.** Attractor transitions having a positive net transition rate are connected by arrows, which indicate the directionality of the transitions.
The global ordering corresponds to the path comprised by directed arrows passing through the three attractors, here: epithelial $\rightarrow$ senescent $\rightarrow$ mesenchymal, resulting in a global probability flow across the EAL.\ **Fig. 7 3D-surface (a) and contour plot (b) representations of the probabilistic landscape.** The landscape is based on the stationary probability distribution $\mathbf{u}_{ss}$ and was derived by mapping the state-space into a low-dimensional space using principal component analysis. The first two components are taken as x-y coordinates, with the corresponding $-log(\mathbf{u}_{ss})$ values as the z-coordinate. The surface is inferred by interpolation. ![[Schematic representations of the general work flow for characterizing the EAL.]{}[]{data-label="fig:hgscores"}](./Fig1.pdf){width="150mm"} ![[Distance between estimates as a function of parameters $Nsteps$ and $Nreps$.]{}[]{data-label="fig:hgscores"}](./Fig2.pdf){width="150mm"} ![[Distance between estimates obtained with Algorithms 1 and 2.]{}[]{data-label="fig:hgscores"}](./Fig3.pdf){width="150mm"} ![[Gene regulatory network for epithelial carcinogenesis.]{}[]{data-label="fig:hgscores"}](./Fig4.pdf){width="150mm"} ![[Temporal sequence of cell–fate attainment pattern under the stochastic Boolean GRN model during epithelial carcinogenesis.]{}[]{data-label="fig:hgscores"}](./Fig5.pdf){width="150mm"} ![[Graph-based representation of attractors transitions.]{}[]{data-label="fig:hgscores"}](./Fig6.pdf){width="150mm"} ![[3D-surface representation of the probabilistic landscape.]{}[]{data-label="fig:hgscores"}](./Fig7.pdf){width="150mm"} ![[Contour plot representation of the probabilistic landscape.]{}[]{data-label="fig:hgscores"}](./Fig8.pdf){width="150mm"}
--- abstract: 'Planar topological superconductors with power-law-decaying pairing display different kinds of topological phase transitions where quasiparticles dubbed nonlocal-massive Dirac fermions emerge. These exotic particles form through long-range interactions between distant Majorana modes at the boundary of the system. We show how these propagating massive Dirac fermions neither mix with bulk states nor Anderson-localize up to large amounts of static disorder, despite being at finite energy. Analyzing the density of states (DOS) and the band spectrum of the long-range topological superconductor, we identify the formation of an edge gap and a surprising double-peak structure in the DOS, which can be linked to a twisting of energy bands with nontrivial topology. Our findings are amenable to experimental verification in the near future using atom arrays on conventional superconductors, planar Josephson junctions on two-dimensional electron gases, and Floquet driving of topological superconductors.' author: - 'T. O. Puel' - 'O. Viyuela' bibliography: - 'refs.bib' title: 'Band twisting and resilience to disorder in long-range topological superconductors' --- Introduction ============ Symmetry-protected topological (SPT) orders are quantum phases of matter characterized by nonlocal order parameters (topological invariants) and protected edge states at the boundary [@rmp1; @rmp2]. SPT phases with particle-hole symmetry give rise to topological superconductors [@Read_et_al00; @LibroBernevig] with unconventional pairing and gapless edge states, dubbed Majorana zero modes (MZMs). MZMs are nonabelian anyons, which can be braided to perform topological quantum computation, and they are protected against thermal fluctuations by a superconducting gap [@rmp3; @rmp4; @Alicea_et_al_11; @Baranov_et_al_13; @Mazza_et_al13]. These unpaired Majorana particles were first shown to arise at the ends of a chain of fermions with $p$-wave superconducting pairing [@Kitaev01].
However, the impracticality of $p$-wave pairing in nature was initially believed to be a roadblock, until proximity-induced superconductivity schemes proved a way to circumvent this obstacle [@Fu_Kane_2008]. In recent years, different experiments have shown Majorana physics by means of a conventional superconductor proximitized to the surface of a topological insulator [@Fu_Kane_2008; @bib:Xu2015; @bib:Sun2016], semiconductor nanowires with strong spin-orbit coupling and subject to Zeeman fields [@Sau_et_al_2010; @Alicea_2010; @Sau2_et_al_2010; @Sau3_et_al_2010; @Oreg_et_al_2010; @Mourik_et_al12; @Deng_et_al12; @Das_et_al12; @Wang_et_al12; @bib:Rokhinson2012; @bib:Deng2013; @He_et_al14; @Albrecht_et_al16; @bib:Sun2016; @bib:Deng2016; @He_et_al17; @bib:Lutchyn2018], quantum anomalous Hall insulator-superconductor structures [@He_et_al17], and atomic arrays on superconducting substrates [@Pientka2013; @bib:Braunecker2013; @Nadj_et_al13; @Klinovaja_et_al13; @Pientka2014; @Li_et_al_16; @Kaladzhyan_et_al16; @Kaladzhyan_et_al16B; @Nadj_et_al14; @bib:Ruby2015; @Pawlak_et_al16; @Ruby_et_al17; @Menard_et_al15; @Menard_et_al17; @Heinrich_et_al17; @Ronty_et_al_15; @Li_et_al_16_2D]. In particular, one-dimensional arrays of magnetic impurities [@Pascual_et_al16; @Ruby_et_al17], where the length of the chain is relatively small compared to the coherence length of the host superconductor [@Pientka2013], generate an effective $p$-wave Hamiltonian with long-range pairing [@Nadj_et_al13; @Pientka2013; @Klinovaja_et_al13; @Pientka2014; @Li_et_al_16; @Kaladzhyan_et_al16; @Kaladzhyan_et_al16B]. Floquet driving a $p$-wave superconductor [@Benito_et_al14] and planar Josephson junctions proximitized to a 2D electron gas (2DEG) with spin-orbit coupling and a Zeeman field [@PhysRevX.7.021032; @Liu_et_al18; @Fornieri:2019aa] also give rise to effective models of topological superconductivity with long-range couplings.
Inspired by these recent experimental developments, $p$-wave Hamiltonians with long-range couplings have been thoroughly studied [@Niui_12; @DeGottardi_13; @Vodola_et_al14; @Tudela_15; @Vodola_et_al16; @Viyuela_et_al16; @Gong2016_1; @Gong2016_2; @Pachos_17; @Lepori_17; @Alecce_17; @Vodola_et_al17; @Dutta_17; @Cats_et_al18; @Giuliano_18; @Viyuela_et_al18; @Lepori_18]. A long-range extension of the Kitaev chain with power-law-decaying hopping and pairing amplitudes gives rise to a combined exponential and algebraic decay of correlations, breakdown of conformal symmetry and violation of the area law of entropy [@Vodola_et_al14; @Vodola_et_al16]. The topological nature of this new model has also been unveiled [@Viyuela_et_al16], demonstrating the existence of fractional topological numbers associated with nonlocal-massive Dirac fermions [@Viyuela_et_al16; @Lepori_17; @Alecce_17]. These particles are fermions with a highly nonlocal extension, as they are formed out of the long-range interaction of distant Majorana particles at the edge, and their localization properties are indeed robust to weak static disorder [@Viyuela_et_al16]. Interestingly, a staircase of higher-order topological phase transitions can be induced by tuning the exponent of the power-law-decaying pairing amplitude [@Cats_et_al18]. Generalizations of the long-range Kitaev chain to two dimensions have been constructed [@Viyuela_et_al18; @Lepori_18], where the $p$-wave character of the superconductor is preserved while including power-law-decaying couplings that extend over the plane. In these systems, topological phases holding propagating Majorana edge states with different chiralities get significantly enhanced by long-range couplings. In one of the topological phases, propagating Majorana fermions at each edge pair nonlocally and become gapped for sufficiently long-range interactions, while remaining topological and localized at the boundary [@Viyuela_et_al18].
However, the robustness of these new chiral edge states with respect to general static disorder was unclear, and the effects of the long-range couplings on the band spectrum of the topological superconductor were not explored. In this article, we study how propagating Majorana states, which become gapped by the effect of long-range interactions, are affected by the inclusion of static disorder. We show how the localization at the edge is preserved even for very strong disorder, demonstrating that the propagating massive Dirac fermions at the edge are neither pushed to the bulk nor delocalized. This is one of the characteristic features of all topologically protected edge states. Moreover, we study how the band spectrum of a planar $p$-wave topological superconductor is modified by the effect of long-range couplings. We show how a characteristic (and previously unnoticed) double peak structure in the density of states (DOS) of the topological superconductor is enhanced by the inclusion of power-law-decaying amplitudes. Associated with that effect, we find a band twisting in the energy spectrum provided the phase is topologically nontrivial. The paper is structured as follows. In Sec. \[sec:II\], we introduce the 2D $p$-wave Hamiltonian with long-range couplings and perform a detailed study of the band structure and the density of states as a function of the decay exponents. In Sec. \[sec:III\] we demonstrate the robustness of the nonlocal-massive Dirac fermions against disorder and compare it to the case with unpaired Majoranas through the spatial distribution of those nonlocal-massive Dirac fermions. Sec. \[sec:IV\] is devoted to conclusions. In the Appendix we perform a finite size scaling of ingap states and their dependence on the decay exponent $\alpha$, and analyze the robustness of the system with respect to different types of static disorder.
Band Structure $\&$ Density of States {#sec:II} ===================================== The model studied in this paper is that of a two-dimensional spinless $p$-wave superconductor with long-range hopping and long-range superconducting coupling. In real space the Hamiltonian can be written as $$\begin{aligned} H & =-\left(\mu-4t\right)\sum_{\boldsymbol{r}=1}^{N}\left(c_{\boldsymbol{r}}^{\dagger}c_{\boldsymbol{r}}-c_{\boldsymbol{r}}c_{\boldsymbol{r}}^{\dagger}\right)\nonumber \\ & -\sum_{\boldsymbol{r}}\sum_{\boldsymbol{r}'\neq\boldsymbol{r}}\frac{t}{R^{\beta}}\left(c_{\boldsymbol{r}'}^{\dagger}c_{\boldsymbol{r}}+c_{\boldsymbol{r}}^{\dagger}c_{\boldsymbol{r}'}\right)\nonumber \\ & -\sum_{\boldsymbol{r}}\sum_{\boldsymbol{r}'\neq\boldsymbol{r}}\frac{\Delta}{R^{\alpha+1}}\left[\left(R_{x}+iR_{y}\right)c_{\boldsymbol{r}'}^{\dagger}c_{\boldsymbol{r}}^{\dagger}+\left(R_{x}-iR_{y}\right)c_{\boldsymbol{r}}c_{\boldsymbol{r}'}\right],\nonumber \\ \label{eq:Hamiltonian real space}\end{aligned}$$ where both $\boldsymbol{r}$ and $\boldsymbol{r}'$ run over all sites of a square lattice, labelled from $1$ to $N$, with $N$ the total number of sites. We have defined $\boldsymbol{R}=\left(R_{x},R_{y}\right)\equiv\boldsymbol{r}-\boldsymbol{r}'$ and $\left|\boldsymbol{R}\right|=\sqrt{R_{x}^{2}+R_{y}^{2}}\equiv R$. The hopping amplitude $t$ sets the band width and $\Delta$ is the superconducting coupling strength. The exponents $\alpha$ and $\beta$ control the decay of the superconducting coupling and of the hopping, respectively. The chemical potential $\mu$ drives the system through phase transitions; for example, in the regime of fast decay (large values of the decaying exponents) we find a transition from a trivial superconducting phase (SC) to a topological superconducting phase characterized by Majorana fermions ($\cal M$). Interestingly, it is known that long-range superconducting couplings give rise to new topological phases characterized by massive Dirac fermions ($\cal D$).
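As a concrete illustration, the Hamiltonian of Eq. (\[eq:Hamiltonian real space\]) can be assembled numerically as a Bogoliubov-de Gennes (BdG) matrix and diagonalized exactly. The sketch below is not the authors' code: the Nambu basis ordering (particles first, holes second) and overall sign and normalization factors are our own conventions. It does, however, verify a model-independent feature: the antisymmetry of the $p$-wave pairing block guarantees a particle-hole-symmetric Bogoliubov spectrum.

```python
import numpy as np
from itertools import product

def bdg_hamiltonian(L, mu=1.0, t=0.5, delta=0.5, alpha=1.6, beta=10.0):
    """Assemble a 2N x 2N BdG matrix for the 2D long-range p-wave model.

    Basis ordering and overall factors are illustrative assumptions.
    """
    N = L * L
    sites = list(product(range(L), range(L)))
    h = np.zeros((N, N), dtype=complex)  # hopping + chemical-potential block
    d = np.zeros((N, N), dtype=complex)  # pairing block (antisymmetric)
    for i, (x1, y1) in enumerate(sites):
        h[i, i] = -(mu - 4.0 * t)
        for j, (x2, y2) in enumerate(sites):
            if i == j:
                continue
            Rx, Ry = x1 - x2, y1 - y2
            R = np.hypot(Rx, Ry)
            h[i, j] = -t / R**beta                              # t / R^beta hopping
            d[i, j] = -delta * (Rx + 1j * Ry) / R**(alpha + 1)  # p-wave pairing
    # d(-R) = -d(R), so the BdG block structure below is particle-hole symmetric
    return np.block([[h, d], [d.conj().T, -h.T]])

E = np.linalg.eigvalsh(bdg_hamiltonian(L=6, alpha=1.6))
print(np.allclose(np.sort(E), -np.sort(E)[::-1]))  # prints True: +/- energy pairs
```

Diagonalizing this matrix for a finite lattice is the exact-diagonalization step described below; the ingap states of the $\cal M$ and $\cal D$ phases appear among the eigenvalues inside the bulk gap.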
This phase transition happens at the critical value $\alpha=2$ and only exists for one of the two topological phases [@Viyuela_et_al18]. This differs from the semi-2D Hamiltonian [@Lepori_18], where the long-range terms appear only along the $x$ and $y$ directions; there, the phase transition occurs at $\alpha = 1$ and is present in both topological phases. A phase diagram illustrating the former case is depicted in Fig.\[fig1\]a. Unless explicitly mentioned, we have used $t=0.5$ as the reference parameter, $\Delta=0.5$ following Ref. \[\], and $\beta=10$, i.e. fast-decaying hopping. For instance, we have verified that any $\beta, \alpha \geq 20$ gives the same energy spectrum as the purely short-range model with next-nearest-neighbor hopping. Massive Dirac fermions ---------------------- The first step is to identify the differences and similarities between the Majorana phase and the massive Dirac phase. For that, the edge-state excitations will be analysed. By exact diagonalization of $H\left|\psi_{n}\right\rangle =E_{n}\left|\psi_{n}\right\rangle $ we obtained the Bogoliubov energy spectrum, $E_{n}$ with $n=1,\ldots,2N$, of a finite (square) system with $L^{2}\equiv N$ lattice sites. The results are depicted in Fig.\[fig1\], in which we exemplify the two different topological phases $\cal M$ and $\cal D$. The parameters are indicated in the phase diagram, panel (a), by the diamond-shaped markers; namely, we set $\alpha=1.6$ and $\alpha=3$, with $\mu=1$. In both phases, the superconducting gap (referred to here as the bulk-gap) is easily noticed from either the energy spectrum in panel (b) or its respective density of states (DOS) in panel (c). The topological properties are manifested as ingap states; in particular, the inset of panel (c) makes explicit the difference between the two topological phases[^1].
While the Majorana states manifest as a finite DOS over the entire gap, the massive Dirac states leave open a smaller gap (referred to here as the edge-gap, since it is the energy difference between edge-state excitations). One may also look at the localization of the massive Dirac states by plotting the probability of occupancy related to the $n$-th eigenstate (corresponding to energy $E_{n}$ inside the bulk-gap) on each site, i.e. ${\cal P}_{n}\left(\boldsymbol{r}\right)\equiv a_{n}\left(\boldsymbol{r}\right)a_{n}^{*}\left(\boldsymbol{r}\right)$, where the amplitude $a_{n}\left(\boldsymbol{r}\right)$ is obtained from $\left|\psi_{n}\right\rangle =\sum_{\boldsymbol{r}}a_{n}\left(\boldsymbol{r}\right)\left|\psi_{n}\left(\boldsymbol{r}\right)\right\rangle $, and the normalization implies $\sum_{\boldsymbol{r}}{\cal P}_{n}\left(\boldsymbol{r}\right)=1$. Figs.\[fig1\]d and \[fig1\]e exemplify this probability for an energy inside the bulk and for the smallest finite energy inside the bulk-gap, respectively. The probability of occupancy is better analysed on a logarithmic scale, thus for convenience we have defined a normalized logarithmic localization $\Phi \equiv 1 - \log{\cal P}_{n}\left(\boldsymbol{r}\right) / \log {\cal P}_{\text{min}}$, such that $\Phi = 1$ if ${\cal P}_{n}\left(\boldsymbol{r}\right) = 1$ and $\Phi = 0$ if ${\cal P}_{n}\left(\boldsymbol{r}\right) = {\cal P}_{\text{min}}$. Here ${\cal P}_{\text{min}}$ is the global minimum of ${\cal P}_{n}\left(\boldsymbol{r}\right)$, i.e. among all energies $E_n$ and all sites $\boldsymbol{r}$. Analogous to the Majorana excitations in the planar topological superconductor, the massive Dirac states are confined to the edges, see Fig.\[fig1\]e, and form propagating modes protected by particle-hole symmetry. Technically speaking, the system still belongs to class D of topological superconductors [@Schnyder2009] with $\mathbb Z$ topological invariant [^2].
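The normalized logarithmic localization $\Phi$ defined above is a direct transformation of the occupancy probabilities. A minimal sketch (the function name and the toy probabilities are ours, for illustration only):

```python
import numpy as np

def phi(P, P_min):
    """Normalized logarithmic localization Phi = 1 - log P / log P_min.

    Phi = 1 when P = 1, and Phi = 0 when P = P_min, where P_min is the
    global minimum of P_n(r) over all energies E_n and all sites r.
    """
    return 1.0 - np.log(P) / np.log(P_min)

# toy probabilities spanning several orders of magnitude
print(phi(1.0, 1e-8))   # prints 1.0
print(phi(1e-8, 1e-8))  # prints 0.0
print(phi(1e-2, 1e-8))  # approximately 0.75
```

The log-ratio maps probabilities spanning many decades onto the fixed interval $[0,1]$, which is what makes the color maps of Figs. 1d-e readable.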
In Fig.\[fig1\]d we see the bulk energy excitations remaining spread over the sample. A thorough study of the robustness of the massive Dirac states is one of the main goals of this work and will be discussed in section \[sec:Localization-and-Robustness\]. ![Panel (a) presents the phase diagram for a range of chemical potential, $\mu$, and long-range superconducting coupling, $\alpha$, parameters. Three different phases can be identified: (i) a trivial superconducting phase, SC; (ii) a topological superconducting phase with Majorana fermions, $\cal M$; (iii) a topological superconducting phase with massive Dirac fermions, $\cal D$. We note that the two different phases $\cal M$ have opposite chiralities. Plots (b) and (c) show the energy spectrum and the DOS, respectively, for the two topological phases (signalled in the phase diagram, using the same color code) in a system of size $N = 1681$. Panels (d) and (e) show the probability of occupancy associated with the $n$-th energy of the 2D finite square system (top view) in the $\cal D$ phase, as described in the previous panels. In particular, a representative probability of occupancy for a bulk energy is plotted in panel (d), while the lowest finite energy inside the gap is plotted in panel (e). Note that the probability of occupancy is plotted in log scale, thus written in terms of $\Phi$ as defined in the main text.[]{data-label="fig1"}](Fig1){width="0.65\columnwidth"} Twisted bands and double peak structure --------------------------------------- We discovered that the band spectrum and the DOS of our long-range topological superconductor provide valuable information regarding the energy distribution of the different eigenstates (see Fig.\[fig1\]c). In addition, we may extract useful quantities such as the magnitude of the superconducting gap, the group velocity and the band dispersion. For convenience, we consider a semi-infinite system, finite in the $x$ direction and periodic in the $y$ direction.
As an example, let us take two points in the phases $\cal M$ of the phase diagram with different chiral edge states, namely $\mu=1$ and $\mu=3$, with $\alpha=3$. Fig.\[fig2\]a shows the DOS of these two points, while Figs.\[fig2\]b and \[fig2\]c show their respective band spectra for a semi-infinite system. From these figures we highlight the following: (i) associated with the peak structures we notice an unusual band twisting (highlighted by the arrows), and (ii) there is a significant band overlap as a consequence of this band twisting. The double peak structure in the DOS is a measurable consequence of band inversion in topological superconductors. For instance, if the two particle-hole-symmetric bands overlap for small values of $\Delta$, then as we enlarge the superconducting amplitude a gap opens and a band inversion forms. Such a band inversion does not happen in the trivial phases. Most notably, in the long-range system with slow-decaying coupling strength, the band twist (or band inversion) occurs even when the particle-hole bands do not overlap in the limit $\Delta \rightarrow 0$. This behavior leads to a higher concentration of density of states around the two areas where the twisting of bands occurs, which in turn generates a double peak structure in the DOS. Next we observe that longer-range superconducting couplings are responsible for the enhancement of the peak structure, in particular within the massive Dirac phase $\cal D$. Figs.\[fig2\]d-f show the results for a smaller value of the superconducting coupling exponent, already inside the phase $\cal D$, i.e. $\alpha=1.6$. We clearly see a more pronounced structure of the peaks; more precisely, they split into two peaks, which comes along with an enlargement of the band overlap. We further note that the two-peak structure is present in both topological phases, and that it is enhanced by decreasing $\alpha$; however, it does not appear in the trivial superconducting phase (not shown in this figure).
The superconducting coupling strength is also responsible for changing the peak structure. In particular, decreasing $\Delta$ also makes the peak split into two, as shown in Fig.\[fig3\]a. Associated with that, from the semi-infinite system band spectrum shown in Figs.\[fig3\]b and \[fig3\]c, we again notice an enlargement of the band overlap. Indeed, we checked that by lowering $\Delta$ (while keeping it finite) the two-peak structure can always be retrieved in all topological phases. The two-peak structure is not a uniquely long-range feature. In Figs.\[fig3\]d-f we show the presence of the two peaks even in the fast-decaying limit ($\left(\beta,\alpha\right)\gg1$), and we have verified that these results match those from a system with short-range hopping. In short, both topological phases present in this work ($\cal M$ and $\cal D$) exhibit a double peak structure in their DOS which is associated with a band twisting, which in turn leads to a band overlap. This association is highlighted by the colored arrows in Figs. \[fig2\] and \[fig3\]. Surprisingly, the DOS double peak structure only appears within the topological phases. It is always achieved for small but finite values of the superconducting coupling strength and is enhanced by longer-range couplings. Therefore, within the limitations of the present model, these double peak structures witness nontrivial band topology, due to the effect of band twisting. These results may help us distinguish more easily between different topologically trivial and nontrivial phases in experiments. Physical relevance of long-range couplings ------------------------------------------ As already mentioned in the introduction, $p$-wave superconductors with long-range couplings naturally appear in different experimental realizations of these materials. A 2D sublattice of magnetic impurities, deposited on the surface of a conventional superconductor, leads to effective long-range pairing and hopping terms with a $1/\sqrt{r}$ decay [@Menard_et_al17].
In particular, Mn adatoms deposited on top of Pb (001) have been shown to present long-range oscillations of up to 7-8 nm [@Heinrich_et_al17], which proves the relevance of long-range interactions in these experiments. We can also consider a different construction, where proximitizing a planar Josephson junction to a 2DEG with Rashba spin-orbit coupling and Zeeman field produces an effective one-dimensional (1D) Kitaev chain with long-range pairing and hopping terms [@PhysRevX.7.021032; @Liu_et_al18; @Fornieri:2019aa]. The couplings of the effective 1D system can be tuned by varying the superconducting phase difference of the junction $\phi$, the in-plane magnetic field $B$ and the chemical potential $\mu$. The emerging long-range couplings can be intuitively understood as arising from integrating out closely spaced modes residing along the transverse direction of the 2DEG. A similar construction could be used so that the integration of a 3D structure leads to effective long-range couplings in 2D. Finally, periodically driving a short-range topological insulator produces interesting effective models of 1D $p$-wave superconductors where long-range superconductivity arises [@Benito_et_al14]. Analogously, Floquet driving a planar $p$-wave superconductor would allow the tuning of effective long-range couplings. In conclusion, we have identified several experimentally relevant situations where the inclusion of long-range coupling terms is needed and where the physics of the topological superconductors described in this paper can be potentially tested. ![Panel (a) shows the DOS of a finite square system for the two phases $\cal M$ with different chiralities, namely $\mu = 1$ and $\mu = 3$; the inset is a zoom into the ingap states. Panels (b) and (c), respectively, show the corresponding band spectrum for a semi-infinite system, i.e. periodic in the $y$ direction. The many different colors represent different energy levels.
The arrows indicate the two-peak structure in the DOS and the associated band twist in the band spectrum. Panels (d)-(f) show equivalent results for longer-range couplings; in particular, note that for $\mu=1$ the system is in the phase $\cal D$.[]{data-label="fig2"}](Fig2){width="0.95\columnwidth"} ![Here we present results analogous to those in Fig.\[fig2\]. Panels (a)-(c) show different values of the superconducting coupling strength $\Delta$, inside the phase $\cal M$ with $\mu = 1$. Panels (d)-(f) show different values of both the superconducting coupling range ($\alpha$) and the hopping range ($\beta$), for $\mu=1$ and $\Delta = 0.3$. Note that the change of $\beta$ is not represented in the phase diagram of Fig.\[fig1\]a, but for all the parameter values shown here the system remains in the phase $\cal M$.[]{data-label="fig3"}](Fig3){width="0.95\columnwidth"} Robustness of the massive edge states against disorder\[sec:Localization-and-Robustness\] {#sec:III} ========================================================================================= Here we discuss the effect of static disorder in the presence of massive Dirac states. We first analyse the normalized DOS computed for a finite 2D system with different disorder strengths. The disorder is added to the Hamiltonian as $$H_{\text{disorder}}=\nu\sum_{\boldsymbol{r}=1}^{N}D_{\boldsymbol{r}}\left(c_{\boldsymbol{r}}^{\dagger}c_{\boldsymbol{r}}-c_{\boldsymbol{r}}c_{\boldsymbol{r}}^{\dagger}\right),$$ where $\nu$ is the disorder strength and $D_{\boldsymbol{r}}$, with $\left|D_{\boldsymbol{r}}\right|\le1$, is a random number uniformly distributed over the sites’ positions $\boldsymbol{r}$. Other realistic disorder distributions, such as a Gaussian peaked at $\mu$, would be less detrimental to our system and would serve as a less effective test of robustness for the edge states[^3]. Fig.\[fig4\] analyses the results for a representative point within the phase $\cal D$ (namely $\mu=1$, $\alpha=1.6$, and system size $N=1681$).
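In a BdG description, the disorder term above amounts to random on-site shifts entering with opposite signs in the particle and hole blocks, mirroring the $c_{\boldsymbol{r}}^{\dagger}c_{\boldsymbol{r}}-c_{\boldsymbol{r}}c_{\boldsymbol{r}}^{\dagger}$ structure. The following hedged sketch (the particles-first Nambu ordering and the toy BdG matrix in the demo are our own assumptions) makes explicit that such disorder preserves the particle-hole symmetry of the spectrum:

```python
import numpy as np

def add_onsite_disorder(H, nu, seed=None):
    """Add nu * D_r (|D_r| <= 1, uniformly distributed) to the particle-block
    diagonal and -nu * D_r to the hole-block diagonal of a 2N x 2N BdG
    matrix H (particles-first ordering assumed)."""
    rng = np.random.default_rng(seed)
    N = H.shape[0] // 2
    D = rng.uniform(-1.0, 1.0, size=N)
    Hd = H.copy()
    Hd[np.arange(N), np.arange(N)] += nu * D
    Hd[np.arange(N, 2 * N), np.arange(N, 2 * N)] -= nu * D
    return Hd

# demo: a generic real BdG matrix; disorder shifts levels but keeps +/- pairing
rng = np.random.default_rng(0)
h = rng.normal(size=(8, 8)); h = (h + h.T) / 2   # Hermitian normal block
d = rng.normal(size=(8, 8)); d = (d - d.T) / 2   # antisymmetric pairing block
H = np.block([[h, d], [d.T, -h.T]])
E = np.linalg.eigvalsh(add_onsite_disorder(H, nu=0.5, seed=1))
print(np.allclose(np.sort(E), -np.sort(E)[::-1]))  # prints True
```

Because the perturbation keeps the $[[h,d],[d^{\dagger},-h^{T}]]$ block structure intact, any gap closing caused by disorder must come from level repulsion rather than from broken particle-hole symmetry.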
Fig.\[fig4\]a shows the DOS for different disorder strengths. First, we clearly observe how the DOS peak decreases with this disorder. Second, we show that the bulk-gap shrinks faster than the edge-gap. In addition, the plateau formed by the massive Dirac edge states (i.e. the finite energies between the bulk-gap and edge-gap) remains quantitatively the same even for values of disorder that are large compared to the superconducting gap size, which provides an indication of the robustness of the new massive edge states. One may also look at the Anderson localization effect through the participation ratio (PR), which gives the degree of localization of each state after one disorder realization, such that $$\text{PR}\equiv\frac{1}{N}\frac{1}{\sum_{\boldsymbol{r}}{\cal P}_{n}^{2}\left(\boldsymbol{r}\right)}.$$ For instance, for a completely delocalized state, where all sites are equally probable to be occupied, one finds $\text{PR}=1$, while for a completely localized state, where only one site is probable to be occupied, one finds $\text{PR}=1/N$, which goes to zero in the thermodynamic limit. Moreover, for an edge state perfectly localized at the boundary, i.e. equally distributed along the edge sites of the 2D system, one finds $\text{PR}=4/\sqrt{N}$. Fig.\[fig4\]c shows the histogram of the participation ratio (which here we call the density of participation ratio, DOPR) with respect to the energy index ($n$) for different strengths $\nu$. Note that our results consider $100$ disorder realizations, and are an average over them. Thus, in this figure one easily notices that the DOPR concentrates near $\text{PR}=1$, instead of $\text{PR}\sim10^{-3}$ for this particular system size, which signals that the bulk states are delocalized. In addition, we notice that they continue to be delocalized even for large disorder strength, i.e.
we have considered a maximum disorder of $0.5$ while the bulk-gap is nearly $1.0$ (in units of the hopping $t$) and the edge-gap is even smaller. For the edge states we expect a peak near $\text{PR}\approx0.1$ for this system size, since they are not localized at one point but spread all over the boundary. Thus the inset shows a zoom into the DOPR near $\text{PR}=0.1$. The existing peaks are clear and shift towards the left with increasing disorder strength, which reflects a trend of the edge states to become more and more localized along the edges. The spatial localization over all the states is quantified by the mean participation ratio (MPR), namely $$\text{MPR}=\left\langle \frac{1}{2N}\sum_{n=1}^{2N}\text{PR}\right\rangle ,$$ where the average $\left\langle \cdots\right\rangle $ is over disorder realizations. Thus, Fig.\[fig4\]d shows the MPR decreasing roughly from $0.6$ to $0.4$ as $\nu$ goes from $0$ to $0.5$. This shows a trend of the whole system to become more localized, although still orders of magnitude above the completely localized value, typically $\text{PR}\approx6\times10^{-4}$ for this system size. ![This picture illustrates the behavior of a nonlocal-massive Dirac state (precisely for $\mu=1$ and $\alpha=1.6$, for a system size $N=1681$) in the presence of disorder. Panel (a) shows the DOS, while the inset is a zoom into the ingap states. Panel (b) is the legend, which holds for all other panels. Panel (c) shows the DOPR as a function of PR for different disorder strengths, in which the inset gives a zoom into the peak coming from the edge states. Panel (d) shows the MPR for a range of disorder strengths.[]{data-label="fig4"}](Fig4){width="0.95\columnwidth"} Spatial distribution of states {#sec: spatial distribution} ------------------------------ Here we analyse the spatial distribution of states subject to static disorder, both for the massive Dirac and Majorana phases.
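The limiting PR values quoted above ($\text{PR}=1$, $1/N$, and $4/\sqrt{N}$) and the MPR average are simple to verify numerically. A minimal sketch in Python (the idealized test states and array layout are our own construction, not data from the paper):

```python
import numpy as np

def participation_ratio(P):
    """PR = 1 / (N * sum_r P_n(r)^2) for a normalized probability P_n(r)."""
    P = np.asarray(P, dtype=float)
    return 1.0 / (P.size * np.sum(P**2))

def mean_participation_ratio(P):
    """MPR = <(1/2N) sum_n PR_n>: PR averaged over all 2N states and over
    disorder realizations; P has shape (realizations, 2N states, N sites)."""
    N = P.shape[-1]
    PR = 1.0 / (N * np.sum(P**2, axis=-1))
    return PR.mean()

L = 41
N = L * L                                         # N = 1681, as in the main text
uniform = np.full(N, 1.0 / N)                     # fully delocalized state
point = np.zeros(N); point[0] = 1.0               # fully localized state
edge = np.zeros(N); edge[:4 * L] = 1.0 / (4 * L)  # idealized boundary state

print(round(participation_ratio(uniform), 6))               # 1.0
print(round(participation_ratio(point) * N, 6))              # 1.0, i.e. PR = 1/N
print(round(participation_ratio(edge) * np.sqrt(N) / 4, 6))  # 1.0, i.e. PR = 4/sqrt(N)
```

For $N=1681$ the idealized edge state gives $\text{PR}=4/\sqrt{N}\approx0.098$, consistent with the $\text{PR}\approx0.1$ peak position discussed above.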
Each row of Fig.\[fig5\] depicts representative states associated with different energy levels. We have considered $100$ disorder realizations, and the average was made after sorting the energy spectrum and taking equivalent energy levels; for instance, the minimum energy, labelled as $E_1$, was computed as $E_1 = \left< \text{min}(E_{n}) \right>$, where $\left< \cdots \right>$ is the average over disorder realizations and $\text{min}(E_{n})$ takes the minimum energy value among all the energy levels. The columns of the plot stand for different disorder strengths. We notice that the energies $E_1$ to $E_4$ are not the four smallest energies of the energy spectrum, but rather energies which correspond to the following behaviors: $E_1$ is the smallest finite energy inside the gap; $E_2$ is a finite energy inside the gap, which will merge into the bulk after including enough disorder; $E_3$ and $E_4$ are two different energies inside the bulk. Remarkably, the topological robustness of the massive Dirac phase is indeed very similar to that of the Majorana phase. The topological energy states inside the gap display clear localization along the edges with a short tail towards the bulk. We have checked that the tail is shortened after including disorder, adding some degree of additional stability to the boundary of the system. The increase in edge localization through disorder was already noticed in the inset of Fig.\[fig4\]c, where the peak moves to the left (i.e. towards more localized). Moreover, Fig.\[fig4\]a shows that the bulk-gap is shrinking faster than the edge-gap, which means that edge states with higher energies are merging with the bulk. This behavior is illustrated in Fig.\[fig5\] by the frames with energy $E_2$, in which more localized states (like clusters of probability density) are formed inside the bulk. One may notice the formation of those clusters for $\nu \geq 0.25$. Finally, the bulk states ($E_3$ and $E_4$) remain fairly delocalized after incorporating disorder.
However, for strong disorder we notice the formation of clusters of probability density inside the bulk. ![Spatial distribution of states within the phases ${\cal D}$ and ${\cal M}$. We plotted the probability of occupancy, $\Phi$, associated with the $n$-th energy for a 2D finite square system (top view), as described in Fig.\[fig1\]d-e. Each row corresponds to a different representative quantum state with energy $E_n$, such that: $E_1$ is the smallest finite energy inside the gap; $E_2$ illustrates the finite energies inside the gap which merge into the bulk with strong enough disorder; $E_3$ and $E_4$ represent two different bulk energies. For each of the phases we show what happens to those states after including different values of disorder strength $\nu$.[]{data-label="fig5"}](Fig5){width="\columnwidth"} Long range disorder {#sec: long range disorder} ------------------- ![Panels (a)-(c) show the DOS in the $\cal{D}$ phase and illustrate the effect of different types of long-range disorder, where we used the same parameters as described in Fig. \[fig4\]. Panel (a) considers disorder on the hopping strengths, (b) considers disorder on the superconducting coupling strengths, and (c) considers both previous cases plus chemical potential disorder. In the second row we plotted the probability of occupancy, $\Phi$, associated with the $n$-th energy for a 2D finite square system (top view), as described in Fig.\[fig1\]d-e. In each case, we show the smallest finite energy inside the gap, $E_1$, for two different values of disorder strength, namely $\nu = 0.10$ and $\nu = 0.50$.[]{data-label="fig6"}](Fig6){width="0.9\columnwidth"} Some experimental realizations of topological superconductors with long-range couplings may also introduce disorder in the hopping and pairing terms.
Therefore, in order to complete the stability analysis of the topological phase, we also introduce disorder perturbations in the hopping and superconducting coupling strengths, and compare their relative robustness. The disorder is introduced by replacing $ t \rightarrow t +\nu D_{\boldsymbol{r}}\left(R\right)$ and $\Delta\rightarrow\Delta+\nu D_{\boldsymbol{r}}\left(R\right)$ in Eq. (\[eq:Hamiltonian real space\]), with $\nu$ setting the disorder strength and $D_{\boldsymbol{r}}\left(R\right)$, with $\left|D_{\boldsymbol{r}}\left(R\right)\right|\leq1$, a random number uniformly distributed over the sites' positions $\boldsymbol{r}$ and the long-range parameter $R$. In Fig. \[fig6\] we depict three different situations: (a) the disorder is included only in the hopping strength; (b) the disorder is considered only in the superconducting coupling strength; (c) the disorder is included in all couplings: the hopping, the pairing and the chemical potential. From panel (a) we notice that long-range disorder affects the edge states more than short-range disorder; however, the massive Dirac edge modes are clearly robust against weak and moderate disorder, i.e. the ingap states are present even for $\nu=0.25$, which is already large compared with the size of the bulk gap. On the other hand, disorder in the superconducting coupling strength is even less harmful. In panel (b) we see a lowering of the gap’s peak with increasing disorder strength, but the bulk-gap is nearly constant. When compared with Fig. \[fig4\]a, we see that long-range disorder in the superconducting coupling strength affects the system even less than chemical potential disorder. Finally, from panel (c) we see that, even after including all possible disorder types, the largest contribution comes from the hopping, since panels (a) and (c) are very similar. In the second row of Fig. \[fig6\] we depict the fate of the massive Dirac modes after including long-range disorder in each case described above.
Remarkably, even after including a considerable amount of disorder in all couplings, the edge states are still robust and localized. This is explained by the topological nature of the edge states, even in the presence of long-range couplings. Discussions {#sec:IV} =========== We have studied the robustness and localization properties of the nonlocal-massive Dirac fermions that appear as exotic energy quasiparticles in 2D topological superconductors with long-range interactions. Analyzing the density of states (DOS) and the energy spectrum, we identify how these topological subgap states at finite energy remain bound to the edge and propagating even for large static disorder. By means of the ingap states we compute the phase diagram for different chemical potentials and long-range couplings. The propagating massive Dirac fermion is identified from a subgap within the superconducting gap. Looking at the probability of occupancy of the energy spectrum, we can clearly identify the localization properties of massive Dirac fermions along the edges of a 2D square lattice. The robustness of these quasiparticles is tested by including chemical potential disorder and long-range disorder. The DOS analysis indicates a strong resistance of the ingap states to disorder, which is confirmed using a participation ratio analysis of all quantum states in the system. The massive Dirac modes are surprisingly resistant against weak and moderate disorder in the hopping strength, while being practically insensitive to disorder in the superconducting coupling strength. Remarkably, the stability of the probability of occupation for the edge states shows that the robustness of the massive Dirac fermions is analogous to that of the Majorana states. Complementarily, for a semi-infinite periodic system, we notice that a band twisting in the band structure is always accompanied by a double peak in the DOS.
We show that this behavior also appears for purely short-range interactions; however, we notice it is an exclusive feature of topological phases and can possibly be used as a probe to identify nontrivial topology. In addition, we show that long-range couplings and small pairing strengths strongly enhance the double-peak structure. This enhancement can potentially be used to experimentally detect topological phases using STM measurements [@Nadj_et_al14]. Acknowledgments =============== We thank Pablo San-Jose for providing the MathQ package online, and Tilen Cadez and Liang Fu for useful discussions. This work was supported by the Chinese Agency NSFC under grant numbers 11750110429 and U1530401, the Chinese Research Center CSRC, Fundación Ramón Areces, and RCC Harvard. Finite size scaling of ingap states {#app: finite size scaling} =================================== ![Here we present the finite size scaling analysis of the ingap states. Panels (b)-(d) show the $\text{DOS}/N$ for three different system sizes and two representative points in the phase diagram, namely $\mu=1$ and $\mu=3$. Each panel (b)-(d) is computed for a different $\alpha$ value, and the insets are a zoom into the ingap states. Panel (a) shows the scaling behavior of the ingap value $\text{DOS}/4L$. The color code represents different $\mu$ and $\alpha$ and follows the legends of panels (b)-(d); in particular, note that the results corresponding to the parameters used in panel (b) are degenerate. The inset shows the dependence of the ingap-state value on the parameter $\alpha$ for the two different values of $\mu$ (the solid lines in the inset are guides to the eye).[]{data-label="figA1"}](FigA1){width="0.95\columnwidth"} In the main text, Fig. \[fig1\]c, we show a finite DOS inside the superconducting gap. Since the bulk states and the ingap states are expected to have different finite size scalings, here we analyze them in detail.
In Fig. \[figA1\]b-d we show the DOS for different system sizes and superconducting couplings (controlled by $\alpha$). In particular, we have used three different system sizes, $N=441,961,1681$, and show results for two representative points inside the phase diagram, namely $\mu=1$ and $\mu=3$. The insets show a zoom into the ingap states. We note that here, as well as in the main text, the DOS is normalized by the system size, i.e. $\text{DOS}\rightarrow\text{DOS}/N$, which explains why the curves lie on top of each other for different system sizes; here we write this denominator explicitly. On the other hand, the DOS inside the gap (due to the presence of edge states) is expected to scale with the perimeter ($4\sqrt{N}\equiv4L$) of the finite system, i.e. rewriting $\text{DOS}/N\rightarrow\text{DOS}/4L$ one finds ingap states independent of system size, as shown in Fig. \[figA1\]a. Finally, we notice that the values of the ingap states depend on $\alpha$. The inset in Fig. \[figA1\]a shows how the exponent $\alpha$ influences the ingap states. Notice that in the case of $\mu=1$ we have a phase transition, which is accompanied by a change in the behavior of $\text{DOS}/4L$. Gaussian disorder {#app: Gaussian disorder} ================= Here we compare two different types of static disorder. Beyond the uniformly distributed disorder discussed in the main text, we also analyze Gaussian-distributed disorder, which is added to the Hamiltonian in Eq.
(\[eq:Hamiltonian real space\]) as $$H_{\text{disorder}}^{\text{G}}=\nu\sum_{\boldsymbol{r}=1}^{N}D_{\boldsymbol{r}}^{\text{G}}\left(c_{\boldsymbol{r}}^{\dagger}c_{\boldsymbol{r}}-c_{\boldsymbol{r}}c_{\boldsymbol{r}}^{\dagger}\right),$$ with $\nu$ setting the disorder strength, and $D_{\boldsymbol{r}}^{G}$ ($\equiv x(\xi)$ in the following) a random number weighted by the Gaussian distribution with mean value $\mu=0$ and standard deviation $\sigma=0.25$, for each site position $\boldsymbol{r}$. Namely, from a random number $\xi$ generated in the range $\xi\in(0,1)$ we can generate a corresponding $x\left(\xi\right)\in(-\infty,+\infty)$ weighted by the Gaussian distribution through the equation $x\left(\xi\right)=\mu+\sigma\sqrt{2}\,\text{erf}^{-1}\left(2\xi-1\right)$, where $\text{erf}^{-1}$ is the inverse error function. This expression is obtained by inverting the cumulative distribution function of the Gaussian. In principle, the cumulative distribution of any normalized distribution can be associated with the random variable $\xi$; for the Gaussian distribution in particular we have $\xi=\int_{-\infty}^{x}\text{e}^{-\left(x'-\mu\right)^{2}/\left(2\sigma^{2}\right)}/\left(\sigma\sqrt{2\pi}\right)dx'$. As we can see in Fig. \[figA2\], Gaussian disorder is less harmful to the system than uniformly distributed disorder, which is the case considered throughout the paper as a benchmark for robustness. ![This figure shows the DOS in the massive Dirac phase (same parameters as in Fig. \[fig4\]) for different types of static disorder, namely uniformly distributed versus Gaussian-distributed disorder in the chemical potential, both computed for $\nu = 0.50$. We also plot the nondisordered case, $\nu = 0$, for reference, and the inset shows a zoom into the ingap states.[]{data-label="figA2"}](FigA2){width="0.6\columnwidth"} [^1]: See Appendix \[app: finite size scaling\] for a finite size scaling analysis of the ingap states and their dependence on the decay exponent $\alpha$.
[^2]: For instance, in $k$-space the Hamiltonian assumes the form $H = \text{even($k$)}\sigma_z + \text{odd($k$)} (\sigma_x + i \sigma_y)$, where $\sigma$ acts on the Nambu basis. Thus, the particle-hole operator is ${\cal P} \equiv \sigma_x K$, which satisfies the relation $H_k = - {\cal P} H_{-k} {\cal P}$, or the relation $H = - {\cal P} H^T {\cal P}$ in real space. It also has inversion symmetry, whose operator is ${\cal I} \equiv \sigma_z$ and respects the relation $H_k = {\cal I} H_{-k} {\cal I}$, or the relation $H_{\boldsymbol{R}} = {\cal I} H_{-\boldsymbol{R}} {\cal I}$ in real space. [^3]: See Appendix \[app: Gaussian disorder\] where we compare different types of disorder.
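The inverse-transform sampling used for the Gaussian disorder in Appendix \[app: Gaussian disorder\] can be sketched as follows (a minimal illustration; the function name, seed, and sample count are ours, and Python’s `statistics.NormalDist.inv_cdf` implements exactly the inverse Gaussian CDF $x(\xi)=\mu+\sigma\sqrt{2}\,\text{erf}^{-1}(2\xi-1)$):

```python
import random
from statistics import NormalDist, mean, stdev

def gaussian_disorder(n_sites, mu=0.0, sigma=0.25, seed=1):
    """One disorder value per site: draw xi uniformly in (0, 1) and map
    it through the inverse Gaussian CDF, x(xi) = mu + sigma*sqrt(2) *
    erfinv(2*xi - 1), which NormalDist.inv_cdf implements."""
    rng = random.Random(seed)
    inv_cdf = NormalDist(mu, sigma).inv_cdf
    return [inv_cdf(rng.random()) for _ in range(n_sites)]

vals = gaussian_disorder(100000)
```

For a large sample the empirical mean and standard deviation approach the target values $\mu=0$ and $\sigma=0.25$.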
--- abstract: 'We demonstrate resonances due to coherent population pumping in a bright state (CBS), using magnetic sublevels of the closed $ F_g = 2 \rightarrow F_e = 3 $ transition in $^{87}$Rb. The experiments are performed in room-temperature vapor in two kinds of cells—one that is pure and a second that contains a buffer gas of Ne at 20 torr. We also present the effect of pump-power variation on the CBS linewidth, and explain the behavior by using a power-dependent scattering rate. The experimentally observed CBS resonances are supported by a density-matrix analysis of the system.' author: - Sumanta Khan - Vineet Bharti - Vasant Natarajan title: Coherent population pumping in a bright state --- Introduction {#introduction .unnumbered} ============ Coherent population trapping (CPT) is a well-studied phenomenon in many atoms. It occurs when atoms are optically pumped into a dark non-absorbing state by two phase-coherent beams. Once pumped, the atoms remain trapped in the dark state and cannot fluoresce, because the coupling of this superposition state to the excited state cancels [@ARI96]. The easiest way to observe this experimentally is to use magnetic sublevels of a degenerate transition. The required phase coherence is then achieved by deriving both beams from the same laser. A narrow absorption dip then appears at line center when one of the two beams is scanned, line center being the point at which the two-photon Raman resonance condition is satisfied. The linewidth of the dip is much smaller than the natural linewidth of the excited state, and is limited by decoherence among the ground sublevels. A similar arrangement with two phase-coherent beams can be used to create a bright superposition state. The result is enhanced absorption at line center, exactly opposite to the dip seen in CPT. The linewidth is similar to that obtained in CPT, and is again limited by decoherence among the magnetic sublevels of the ground state.
However, unlike in CPT, the population does not get trapped in this state, because it can decay by coupling to the excited state. The conditions for observing this in an $F_g \rightarrow F_e $ transition are: - It is a closed transition, so that there is no decay out of the system. - $ F_e = F_g + 1 $, so that the correct superposition state can be formed. - $ F_g \neq 0 $, so that there are multiple magnetic sublevels in the ground state. All these conditions are met for the $ F_g = 3 \rightarrow F_e = 4 $ transition in $^{85}\rm Rb $, which was therefore used for the first observation of such increased absorption in Refs. [@LBA99; @LBL99]. The authors called the phenomenon electromagnetically induced absorption (EIA), in order to highlight the fact that there was increased absorption at line center. However, we feel that a more appropriate term is CBS, standing for coherent population pumping in a bright state, while the term EIA is better reserved for enhanced absorption of a weak probe beam in the presence of two or more strong pump beams in a multilevel system [@GWR04; @KTW07; @BMW09; @CPN12; @BWN16]. In this work, we study a CBS resonance satisfying the above conditions but in the other isotope of Rb, namely on the $ F_g = 2 \rightarrow F_e = 3 $ transition in $^{87}$Rb. We experimentally study these resonances in two kinds of vapor cells—one that is pure and contains both isotopes in their natural abundances, and a second that contains only $^{87}$Rb and has a buffer gas of Ne at 20 torr. The presence of the buffer gas is advantageous because it increases the coherence time among the magnetic sublevels, and hence results in a smaller linewidth for the resonance. The explanation of enhanced absorption at line center for this transition is borne out by a numerical density-matrix analysis, which takes into account Doppler averaging in room-temperature vapor.
We also study the effect of power variation on the linewidth of the CBS resonances, and find that it follows the power-dependent scattering rate from the excited state. Experimental details ==================== The experimental setup is shown schematically in Fig. \[cbsschematic\]. The probe and pump beams are derived from the same laser to achieve the required phase coherence. The laser is a grating-stabilized diode laser system, as described in Ref. [@MRS15]. The linewidth of the laser after stabilization is 1 MHz. The size of the output beam is 3 mm $\times$ 4 mm. The power in the beams is controlled using $\lambda/2$ waveplates in front of the respective PBSs. ![(Color online) Experimental setup for the CBS experiment. The required phase coherence is achieved by deriving both beams from a single laser. The probe beam is locked while the pump beam is scanned by scanning the frequency of the double-passed AOM. Figure key: $\lambda/2$ – half-wave retardation plate; $\lambda/4$ – quarter-wave retardation plate; PBS – polarizing beam splitter cube; AOM – acousto-optic modulator; PD – photodiode.[]{data-label="cbsschematic"}](cbsschematic.eps){width=".8\textwidth"} The two beams are given orthogonal linear polarizations so that they can be combined on a PBS. The experiment requires them to have circular polarizations, which is achieved by using a $ \lambda/4 $ waveplate before the cell. The laser is locked to the $ F_g = 2 \rightarrow F_e = 3 $ transition using a saturated-absorption (SAS) signal from another vapor cell. The orthogonal circular polarizations of the two beams mean that the probe beam couples $ m_{F_g} \rightarrow m_{F_g} + 1 $ transitions, while the pump beam couples $ m_{F_g} \rightarrow m_{F_g} - 1 $ transitions. As mentioned before, the probe beam frequency is fixed while that of the pump beam is scanned.
This scanning is achieved by using two AOMs in its path—one providing a downshift of 180 MHz, and a second, double-passed, AOM with an upshift of 90 MHz that compensates this shift. The double passing ensures that the direction of the beam does not change when the frequency is scanned. The frequency of the AOM driver is set using a commercial function generator. Two kinds of vapor cells were used for the experiment—one pure and a second with a buffer gas of Ne (at a pressure of 20 torr). Both cells are cylindrical, with dimensions of 25 mm diameter $ \times $ 50 mm length. The cell is placed inside a 3-layer $ \upmu $-metal magnetic shield, which reduces stray external fields to less than 1 mG. The polarizations after the cell are made linear using a second $\lambda/4$ waveplate, and the beams are separated using another PBS. The probe beam alone is detected using a photodiode; therefore, the photodiode signal is proportional to probe transmission. Since the SAS signal used for locking corresponds to absorption by zero-velocity atoms, detecting the non-scanning probe beam gives a flat Doppler-free background for the CBS signal. CBS in a pure cell ================== Experimental results -------------------- An experimental spectrum for CBS on the $F_g = 2 \rightarrow F_e = 3 $ transition obtained in a pure cell is shown in Fig. \[cbspure\]. Probe transmission as a function of detuning of the pump beam shows a dip—the CBS resonance—at line center; the photodiode signal is scaled so that the percentage absorption is about 8%. This behavior is opposite to the CPT resonance seen on the $F_g = 1 \rightarrow F_e = 1 $ transition in the same isotope [@KKB17]. The difference arises because the $ 1 \rightarrow 1 $ transition does not satisfy the requirements for a CBS resonance mentioned earlier.
![(Color online) CBS resonance obtained in a pure cell.[]{data-label="cbspure"}](cbspure.eps){width=".5\textwidth"} Theoretical analysis -------------------- The experimental spectrum can be explained by a detailed density-matrix analysis of the sublevel structure of this transition. The calculations were carried out using the atomic density matrix (ADM) package written by Simon Rochester [@ROCadm]. It numerically solves the following time-evolution equation for the density-matrix elements involved: $$\dot{\rho} = - \dfrac{i}{\hbar}[H,\rho] - \dfrac{1}{2}\{\Gamma, \rho \} + \text{repopulation terms}$$ where $ \Gamma $ is the relaxation matrix—its diagonal terms give the total decay rates (radiative and non-radiative) of the respective populations, and its off-diagonal terms represent the decoherence between states $ \ket{i} $ and $ \ket{j} $, such that $$\Gamma_{ij} = \dfrac{\Gamma_{ii} + \Gamma_{jj}}{2}$$ The repopulation terms account for the decay of atoms from the excited state to the ground state. The magnetic sublevel structure of the transition is shown in Fig. \[cbslevels\]. The pump beam is $ \sigma^- $ polarized—hence it couples sublevels with the selection rule $ \Delta m = -1 $. The probe beam is $ \sigma^+ $ polarized and couples sublevels with the selection rule $ \Delta m = +1 $. The probe beam has no detuning for zero-velocity atoms while the pump beam has a detuning for the same atoms, but the actual detuning seen in the atom’s frame depends on its velocity. The following parameters are input to the calculation: - The $ F $ values for the ground and excited states of the transition. - The proper polarizations for the probe and pump beams. - A uniform intensity, equal for both beams. - A decay rate among ground sublevels of 10 kHz. - A decay rate from an excited sublevel to a ground sublevel of 6 MHz. - A repopulation term for a particular ground sublevel equal to the 6 MHz decay rate multiplied by the appropriate branching ratio.
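The relaxation matrix can be assembled from the diagonal decay rates alone via $\Gamma_{ij} = (\Gamma_{ii}+\Gamma_{jj})/2$; a minimal sketch using the rates listed above (the function name and the choice of a toy three-state example are ours):

```python
def relaxation_matrix(decay_rates):
    """Build Gamma from the total decay rates Gamma_ii of each state:
    diagonal entries are the population decay rates, off-diagonal
    entries the decoherence rates Gamma_ij = (Gamma_ii + Gamma_jj)/2."""
    n = len(decay_rates)
    return [[(decay_rates[i] + decay_rates[j]) / 2.0 for j in range(n)]
            for i in range(n)]

# e.g. two ground sublevels at 10 kHz and one excited sublevel at 6 MHz (in Hz)
G = relaxation_matrix([1.0e4, 1.0e4, 6.0e6])
```

The resulting matrix is symmetric by construction, as required of the decoherence rates.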
The probe transmission spectrum is Doppler averaged over atomic velocities according to the Maxwell-Boltzmann distribution for Rb atoms at room temperature. The results of the simulation are shown in Fig. \[cbssim\]. The calculated spectrum reproduces the experimental one quite well in terms of linewidth. The only difference is that the calculation assumes a constant intensity of 21 W/cm$^2$, which only appears in the wings of the Gaussian distribution for the 30 W power used in the experiment. ![(Color online) Magnetic sublevels of the $F_g=2 \rightarrow F_e=3$ transition in the D$_2$ line of $^{87}$Rb.[]{data-label="cbslevels"}](cbslevels.eps){width=".8\textwidth"} ![(Color online) Simulated probe transmission spectrum versus Raman detuning for the $F_g=2 \rightarrow F_e=3$ transition with probe and pump beam intensities of 21 W/cm$^2$. []{data-label="cbssim"}](cbssim.eps){width=".5\textwidth"} Effect of pump power -------------------- In a CBS experiment—as in CPT—the pump beam causes decoherence through the excited state. The scattering rate is intensity dependent, and is given by $$R = \dfrac{\Gamma}{2} \dfrac{I/I_s}{1 + I/I_s}$$ where $ \Gamma $ is the natural linewidth of the state, $ I $ is the intensity, and $ I_s $ is the saturation intensity (the intensity at which the transition gets power broadened by a factor of $ \sqrt{2} $). Since the intensity is directly proportional to the power through a geometric factor, $ I=g P $, the scattering rate can be rewritten as $$R = \dfrac{\Gamma}{2} \dfrac{gP/I_s}{1 + gP/I_s} \label{scrate}$$ This equation shows that the scattering rate increases initially but asymptotes to a saturation value at high powers. Thus the linewidth of the CBS resonance shows the same behavior. The results are shown in Fig. \[cbspowervar\]. The solid line is a fit to Eq. (\[scrate\]), with an offset to account for linewidth from experimental noise. The fit describes the experimental results quite well.
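The fitted model, Eq. (\[scrate\]) plus a constant offset, can be sketched as follows (parameter names are ours: $A$ plays the role of $\Gamma/2$ and `gIs` lumps the factor $g/I_s$, all to be determined by the fit):

```python
def cbs_linewidth(P, A, gIs, offset):
    """Saturation model for the CBS linewidth versus pump power P:
    offset + A * s / (1 + s) with s = gIs * P.  The offset absorbs the
    linewidth contribution from experimental noise."""
    s = gIs * P
    return offset + A * s / (1.0 + s)

# linear growth at low power, asymptote A + offset at high power
low = cbs_linewidth(0.01, 3000.0, 1.0, 50.0)
high = cbs_linewidth(1e6, 3000.0, 1.0, 50.0)
```

The model rises monotonically and saturates at $A + \text{offset}$, reproducing the asymptotic behavior described above.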
![(Color online) Effect of pump power on the linewidth of the CBS resonance, showing an increase in linewidth due to increased decoherence through the upper level. The solid line is a fit to the scattering-rate expression in Eq. (\[scrate\]).[]{data-label="cbspowervar"}](cbspowervar.eps){width=".5\textwidth"} CBS in a buffer cell ==================== Before concluding, we turn to experimental results in a buffer cell—one filled with 20 torr of Ne as buffer gas. The role of the buffer gas is to increase the coherence time among the magnetic sublevels of the ground state, which results in a smaller linewidth for the CBS resonance. The results shown in Fig. \[cbsbuffer\] bear out this expectation—the linewidth reduces to 9 kHz in such a cell. In this case, the absorption is a factor of 2 lower than that in a pure cell, and the photodiode signal is scaled to reflect this. ![(Color online) CBS resonance obtained in a buffer-gas-filled cell.[]{data-label="cbsbuffer"}](cbsbuffer.eps){width=".5\textwidth"} Conclusions =========== In summary, we have studied an enhanced-absorption or CBS resonance on a closed transition in room-temperature vapor of $^{87}$Rb atoms. The observation requires the proper superposition state to be formed, which is achieved by using magnetic sublevels of the ground state and by establishing phase coherence between the probe and pump beams through deriving them from the same laser. The observed linewidth is limited by decoherence among the magnetic sublevels. This explanation is borne out by a density-matrix analysis of the sublevels involved in the transition. The calculation takes into account Doppler averaging in room-temperature $^{87}$Rb vapor. We study the effect of pump power on the CBS linewidth, and find that the behavior follows the power-dependent scattering rate from the excited state. We also study the same CBS resonance in a buffer-gas-filled cell, and find that the linewidth is reduced because the buffer gas increases the coherence time among the magnetic sublevels.
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the Department of Science and Technology, India. S K acknowledges financial support from an INSPIRE Fellowship, Department of Science and Technology, India. The authors thank S Raghuveer for help with the manuscript preparation, and Harish Ravi and Mangesh Bhattarai for helpful discussions.

- <https://doi.org/10.1103/PhysRevA.59.4732>
- <https://doi.org/10.1103/PhysRevA.61.013801>
- <https://doi.org/10.1103/PhysRevA.69.053818>
- <https://doi.org/10.1016/j.optcom.2006.08.017>
- <http://stacks.iop.org/0953-4075/42/i=7/a=075503>
- <http://stacks.iop.org/0295-5075/98/i=4/a=44009>
- <https://doi.org/10.1016/j.physleta.2016.05.038>
- <https://doi.org/10.1140/epjd/e2017-70676-x>
--- author: - 'Allison Lewko[^1] and Mark Lewko' title: Estimates for the Square Variation of Partial Sums of Fourier Series and their Rearrangements --- Introduction ============ Let $\T:=[0,1]$ denote the unit interval with Lebesgue measure $dx$ and let $\Phi:=\{\phi_{n}\}_{n \in \N}$ denote an orthonormal system (ONS) of real- or complex-valued functions on $\T$. By an ONS, we will always mean the set of orthonormal functions $\{\phi_n\}_{n \in \N}$ *and* the ordering inherited from the index set $\N$. For $f \in L^2$, we let $a_{n}= \left<f, \phi_n \right>$ denote the Fourier coefficients of $f$ with respect to the system $\Phi$. Associated to an ONS is the maximal partial sum operator $$\mathcal{M}f(x) := \sup_{N}\left|\sum_{n=1 }^{N} a_{n}\phi_{n}(x)\right|.$$ It is well known that the $L^2$ boundedness of the operator $\mathcal{M}$ implies the almost everywhere convergence of the partial sums of the expansion of $f \in L^2$ in terms of the ONS $\Phi$. Almost everywhere convergence is known to fail for some ONS; hence the maximal function $\mathcal{M}$ is an unbounded operator on $L^2$ for some ONS. There is an optimal estimate known for general ONS. \[RM\]*(Rademacher-Menshov)* Let $\{\phi_{n}\}_{n\in\N}=\Phi$ and $f \in L^2$ be as above. Then, $$||\mathcal{M}f||_{L^2} \ll \left(\sum_{n=1}^{\infty} |a_{n}|^2\ln^2(n+1)\right)^{\frac{1}{2}}$$ where the implied constant is absolute. Moreover, the function $\ln^2(n+1)$ cannot be replaced with any function that is $o(\ln^2(n+1))$. This last claim is quite deep and is due solely to Menshov. While this estimate is optimal in general, it can be improved for many specific systems. For instance, the inequality $||\mathcal{M}f||_{L^2} \ll ||f||_{L^2}$ is known to hold when $\Phi$ is taken to be the trigonometric, Rademacher, or Haar system. We recall the definitions of these systems in the next section. Recently, variational norm refinements of the maximal function results stated above have been investigated.
To state these results, we first need to introduce some notation. Let $a=\{a_{n}\}_{n=1}^{\infty}$ be a sequence of complex numbers. Then we define the $r$-variation as: $$||a||_{V^{r}}:= \lim_{K\rightarrow \infty} \sup_{\mathcal{P}_{K}} \left( \sum_{I \in \mathcal{P}_{K}}\left|\sum_{n \in I} a_n \right|^{r} \right)^{1/r},$$ where the supremum is taken over all partitions $\mathcal{P}_K$ of $[K]$ (i.e. all ways of dividing $[K]$ into disjoint subintervals). When $a$ is a finite sequence of length $K$, the quantity is defined by dropping the $\lim_{K\rightarrow \infty}$. One can easily verify that this is a norm and that it is nondecreasing as $r$ decreases. Now we will denote the sequence $\{a_n \phi_n(x)\}_{n=1}^{\infty}$ by $S[f](x)$. (Note that this is slightly different from the notation used in [@OSTTW].) When we write $||S[f]||_{V^r}(x)$, we mean the function on $\T$ whose value at $x\in\T$ is the $r$-variation of the sequence $S[f](x)$. Furthermore, $||S[f]||_{L^p(V^{r})}$ is the $L^p$ norm of this function. Alternatively, we have $$||S[f]||_{V^{2}}(x) = \sup_{K} \sup_{n_{0}<\ldots<n_{K}}\left(\sum_{l=1}^{K}|S_{n_{l}}[f](x) - S_{n_{l-1}}[f](x)|^{2} \right)^{1/2},$$ where $S_{n_l}[f](x) = \sum_{n=1}^{n_l}a_n\phi_n(x)$ is the $n_l$-th partial sum. We note that the function $||S[f]||_{V^\infty}(x)$ is essentially the maximal function. More precisely, $\mathcal{M}f(x) \ll ||S[f]||_{V^\infty}(x) \ll \mathcal{M}f(x)$. Since the quantity $||a||_{V^{r}}$ is nondecreasing as $r$ decreases, we see that $||S[f]||_{V^r}(x)$ majorizes the maximal function whenever $r< \infty$. In [@OSTTW], the following is proved for the trigonometric system $\{e^{2\pi i n x}\}_{n=1}^{\infty}$: \[varCarleson\]Let $r>2$ and $r' < p < \infty$, where $\frac{1}{r} + \frac{1}{r'} = 1$. Then $$||S[f]||_{L^{p}(V^{r})} \leq C_{p,r} ||f||_{L^p},$$ where $C_{p,r}$ is a constant depending only on $p$ and $r$.
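For a finite sequence, the supremum over partitions in the definition above can be computed exactly with an $O(K^2)$ dynamic program over prefix sums (our illustration, not a construction from the paper):

```python
def r_variation(a, r=2.0):
    """||a||_{V^r} for a finite sequence: sup over divisions of [K]
    into disjoint consecutive blocks of (sum_I |sum_{n in I} a_n|^r)^(1/r).
    best[j] is the optimal r-th power total for the first j terms,
    extended by a final block (i, j] with block sum P[j] - P[i]."""
    K = len(a)
    P = [0.0] * (K + 1)  # prefix sums
    for i, x in enumerate(a):
        P[i + 1] = P[i] + x
    best = [0.0] * (K + 1)
    for j in range(1, K + 1):
        best[j] = max(best[i] + abs(P[j] - P[i]) ** r for i in range(j))
    return best[K] ** (1.0 / r)
```

For the alternating sequence $1,-1,1,-1$, the optimal division is into singletons, giving a square variation of $2$ even though every partial sum is $0$ or $1$.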
This result is rather deep, being a strengthened version of the celebrated work of Carleson and Hunt on the almost everywhere convergence of Fourier series. The analogous inequalities were previously obtained in [@JonesWang] in the simpler situation of Cesàro partial sums of the trigonometric system. Moreover, the above inequality is known to hold for the Haar system, and more generally for martingale differences, by Lépingle’s inequality, a variational variant of Doob’s maximal inequality. In [@OSTTW], it is shown that the condition $r>2$ is necessary in the case of the trigonometric system. Our focus here will be to study the case $p=r=2$ for general ONS. In this direction, we prove (closely following the classical proof): \[main\]Let $\Phi$ be an ONS. Then $$\label{var} ||S[f]||_{L^{2}(V^{2})} \ll \left( \sum_{n=1}^{\infty}|a_{n}|^2 \ln^{2}(n+1) \right)^{1/2}.$$ If $||\mathcal{M}f||_{L^2} \ll \Delta(N)||f||_{L^2}$ for all $f= \sum_{n=1}^{N}a_n \phi_n$ for some real-valued function $\Delta(N)$, then $$\label{Maxrelate} ||S[f]||_{L^2(V^2)} \ll \left( \sum_{n=1}^{N} \Delta(n)\ln(n+1) |a_n|^2 \right)^{1/2}.$$ Interestingly, the first inequality strengthens the Rademacher-Menshov theorem stated above, since the right-hand sides are the same (up to implicit constants), yet we have replaced the maximal function with the square variation operator $V^2$ on the left-hand side. Since the $V^2$ operator dominates the maximal operator, this implies the Rademacher-Menshov theorem, and the claim that this result is sharp follows from the sharpness of Rademacher-Menshov. This might lead one to think that the two operators behave similarly; however, we will see that the $V^2$ operator is much larger than the maximal operator for the classical systems. Theorem \[main\] can be refined further for certain classes of ONS; see Section \[sec7\] for a discussion of this.
We can apply (\[Maxrelate\]) to the trigonometric system with $\Delta(N)=O(1)$, the Carleson-Hunt inequality, and obtain the following corollary: \[varTrig\] Let $\{e^{2\pi i n x}\}_{n=1}^{\infty}$ be the trigonometric system. We then have $$\label{trigsys} ||S[f]||_{L^{2}(V^{2})} \ll \left(\sum_{n=1}^{\infty}|a_{n}|^2 \ln(n+1)\right)^{1/2}.$$ Moreover, the function $\ln(n+1)$ cannot be replaced by a function that is $o(\ln(n+1))$. The lower bound can be obtained by considering the Dirichlet kernel $D_N(x)=\sum_{n=1}^{N} e^{2 \pi i n x}$. A proof of this is contained in Section 2 of [@OSTTW]. Strictly speaking, they work with the de la Vallée Poussin kernel there, but the same proof works for the Dirichlet kernel. As we will see below, it is easy to construct an infinite ONS such that $||S[f]||_{L^{2}(V^{2})} \ll ||f||_{L^2}$ holds, by choosing the basis functions $\phi_{n}(x)$ to have disjoint supports. However, this is a very contrived ONS, and it is then natural to ask if there exists a complete ONS such that $||S[f]||_{L^{2}(V^{2})} \ll ||f||_{L^2}$. This is not possible. In fact, we show slightly more: \[completeDiverg\]Let $\{\phi_{n}\}$ be a complete orthogonal system. There exists an $L^{\infty}$ function such that $||S[f]||_{V^{2}}(x)=\infty$ for almost every $x$. In general, this divergence cannot be made quantitative. We show that for any function $w(n)\rightarrow \infty$, there exists a complete ONS such that $||S[f]||_{L^{2}(V^{2})} \ll w(N) ||f||_{L^2}$ whenever $f(x) = \sum_{n=1}^{N}a_n\phi_n(x)$. However, a quantitative refinement is possible if we restrict our attention to uniformly bounded ONS: \[boundedDiverg\] In the case of a uniformly bounded ONS, it is not possible for $w(N)=o(\sqrt{\ln\ln(N)})$. However, there do exist uniformly bounded ONS such that $w(N)=O(\sqrt{\ln\ln(N)})$. The Rademacher system provides an example of the second claim. See Theorem \[varRad\] below.
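To make the disjoint-support construction mentioned above concrete (this explicit choice is ours, for illustration): take $\phi_n(x) = 2^{n/2}\,\mathbf{1}_{(2^{-n},2^{-n+1}]}(x)$, which are orthonormal since their supports are disjoint and $\int_\T \phi_n^2\,dx = 2^{n}\cdot 2^{-n} = 1$. For $x$ in the support of $\phi_m$, the sequence $S[f](x)$ has at most one nonzero entry, so $||S[f]||_{V^2}(x) = |a_m\phi_m(x)|$, and hence $$||S[f]||_{L^{2}(V^{2})} = \left(\sum_{n=1}^{\infty}|a_{n}|^{2}\right)^{1/2} \leq ||f||_{L^2}.$$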
Recall that we defined an ONS to be a sequence of orthonormal functions with a specified ordering. This is essential, since the behavior of the maximal and variational operators depends heavily on the ordering. For instance, the Carleson-Hunt bound on the maximal function for the trigonometric system makes essential use of the ordering of the system, and the result is known to fail for other orderings. It is thus natural to ask what one can say about the $V^2$ operator for reorderings of the trigonometric system. Surprisingly, it turns out that the $O(\sqrt{\ln (N)})$ bound can be improved to $O(\sqrt{\ln\ln(N)})$ for any choice of coefficients by reordering the system. More generally: \[mod1Perm\]Let $\{\phi_n\}_{n=1}^{N}$ be an ONS such that $|\phi_n(x)|=1$ for all $x$ and $n$, and let $f(x) = \sum_{n=1}^{N} a_n \phi_n(x)$. Then there exists a permutation $\pi:[N]\rightarrow [N]$ such that $$||f||_{L^2(V^2)} \ll \sqrt{\ln \ln (N)} ||f||_{L^2}$$ holds (for sufficiently large $N$) with respect to the rearranged ONS $\{\psi_n\}_{n=1}^{N}$, where $\psi_n(x) := \phi_{\pi(n)}(x)$. This is perhaps the most technically interesting part of the paper. This result should be compared to Garsia’s theorem [@Garsia2], which states that the Fourier series of an arbitrary function with respect to an arbitrary ONS can be rearranged so that the maximal function is bounded on $L^2$. Garsia’s proof proceeds by selecting a uniformly random permutation and arguing that it satisfies the claim with positive probability. In our case, however, we randomize over a subset of all permutations. This subset is chosen based on structural information about the Fourier coefficients of the function. It is unclear whether this restriction is necessary or an artifact of our proof techniques. It would be interesting to extend this result to more general ONS.
We note that it can be seen from the work of Qian [@Qian] (see also our refinement [@Lewko]) that $|| \sum_{n=1}^{N}r_n||_{L^2(V^2)} \gg \sqrt{N\ln\ln(N)}= \sqrt{\ln\ln(N)}\; ||\sum_{n=1}^{N}r_n||_{L^2}$, regardless of the ordering of the Rademacher functions $r_n$, hence the $\sqrt{\ln \ln (N)}$ term in the statement of the theorem is sharp. A similar result can be obtained for general ONS when the coefficients are multiplied by random signs: \[randomSigns\] Let $\{\phi_n\}_{n=1}^{N}$ be an ONS and $f(x) = \sum_{n=1}^{N} a_n \phi_n(x)$. Then there exists a sequence of signs $\epsilon_{n}$ such that $$||g||_{L^2(V^2)} \ll_{M} \sqrt{\ln \ln (N)} \; ||g||_{L^2}$$ holds, where $g(x) = \sum_{n=1}^{N} \epsilon_n a_n \phi_n(x)$. This easily follows from the following inequality: \[varRad\] Let $\{r_n\}_{n=1}^{N}$ be a sequence of uniformly bounded independent random variables. Then $$\left|\left| \sum_{n=1}^{N} a_n r_n \right|\right|_{L^2(V^2)} \ll \sqrt{\ln\ln(N)} \left( \sum_{n=1}^{N} a_n^2 \right)^{1/2}.$$ In particular, combining this with Theorem \[boundedDiverg\], we see that the $L^2$ norm of the $V^2$ operator for the Rademacher system grows like $\sqrt{\ln\ln(N)}$. Finally, we prove that the $V^p$ norm of some systems can be improved uniformly for all choices of coefficients by a rearrangement, for $p>2$. \[varVp\]Let $\{\phi_{n}\}_{n=1}^{N}$ be an ONS such that $||\phi_{n}||_{L^\infty}\leq M$ for each $n$, and let $p >2$. There exists a permutation $\pi:[N]\rightarrow [N]$ such that the orthonormal system $\{\phi_{\pi(n)}\}_{n=1}^{N}$ satisfies $$\label{varperm} ||S[f]||_{L^{2}(V^{p})} \ll_{M,p} \ln\ln(N)||f||_{L^2}$$ for all $f = \sum_{n=1}^{N} a_n \phi_n $. The maximal $V^{\infty}$ version of this result is due to Bourgain [@Bour] and represents the best progress known towards Garsia and Kolmogorov’s rearrangement conjectures. Our methods rely heavily on those developed in that paper. 
This also leads us to perhaps the most interesting open problem relating to $V^2$ operators: Does there exist a permutation $\pi: [N] \rightarrow [N]$ such that the $L^2$ norm of the associated $V^2$ operator on the trigonometric system grows like $o(\sqrt{\ln(N)})$? Our Theorems \[mod1Perm\] and \[varVp\] may be viewed as evidence that this may in fact be possible. It is consistent with our knowledge that one could get growth as slow as $\sqrt{\ln\ln(N)}$. It is known that purely probabilistic techniques in the maximal ($V^\infty$) case can only go as far as Bourgain’s bound of $\ln \ln(N)$ (see Remark 2 of [@Bour]). Thus, finding a permutation that reduces the growth further (Garsia’s conjecture is the assertion that there exists a rearrangement that gets to $O(1)$) would require fundamentally new ideas. However, it is consistent with our current knowledge that the purely probabilistic techniques could get one down to $\ln\ln(N)$ in the $V^2$ case. If true, this would certainly require a much more delicate analysis than the methods used here. Theorem \[main\] combined with the $V^\infty$ case of the previous theorem does give a bound of $\sqrt{\ln(N)}\ln\ln(N)$ for general bounded ONS for the $V^2$ operator. This is a nontrivial improvement for some systems, but not for the most interesting classical systems. Notation and General Remarks ============================ We will work with ONS defined on the unit interval $\mathbb{T}$. The underlying space $\mathbb{T}$ plays almost no role in our proofs (the role is similar to that of a probability space in probability theory), and one could replace it with an abstract probability space. We assume that the ONS is real valued in most of our results. In these cases, one can obtain the same results for complex valued ONS by splitting into real and imaginary parts and applying the arguments to each. The details are routine so we omit them.
The proof of Theorem \[mod1Perm\] is the one place where this requires some care, and thus we work with complex valued functions directly there. We define the trigonometric system to be the system of complex exponentials $\{e^{2\pi i n x}\}_{n=1}^{\infty}$. Typically the trigonometric system is defined to be the doubly infinite system $\{e^{2\pi i n x}\}_{n=-\infty}^{\infty}$ and the maximal and variational operators are defined with respect to the symmetric partial sums. However, we find it more convenient to define the trigonometric system this way and avoid having to state all of the following results for both singly and doubly infinite systems. All of our results can easily be transferred to the doubly infinite setting (using symmetric partial sums) by splitting the Fourier series of a function $f \in L^2(\T)$ with respect to a doubly infinite system into two functions with singly infinite Fourier series and applying the results in this setting. For instance, note that $$\mathcal{M}f(x) := \sup_{N}\left|\sum_{n= -N}^{N} a_{n}\phi_{n}(x)\right| \ll \sup_{N}\left|\sum_{n= -N}^{0} a_{n}\phi_{n}(x)\right| + \sup_{N}\left|\sum_{n= 1}^{N} a_{n}\phi_{n}(x)\right|.$$ Thus it follows that the $L^2$ boundedness of the maximal operator associated to the system $\{e^{2\pi i n x}\}_{n=1}^{\infty}$ implies the $L^2$ boundedness of the symmetric maximal operator associated to $\{e^{2\pi i n x}\}_{n=-\infty}^{\infty}$, and similarly for the $V^p$ operators. The Haar system, which we denote by $\{\mathcal{H}_{n}\}_{n=0}^\infty$, is a complete ONS comprised of the following functions.
For $k \in \N$ and $1 \leq j \leq 2^k$, we define $\{\mathcal{H}_{k,j}\}$ by $$\mathcal{H}_{k,j}(x) = \begin{cases}\sqrt{2^{k}} \quad & x \in \left(\frac{j-1}{2^{k}}, \frac{j-1/2}{2^k}\right),\\ -\sqrt{2^{k}} & x \in \left(\frac{j-1/2}{2^{k}}, \frac{j}{2^k}\right), \\0& \mbox{otherwise.}\end{cases}$$ We form the system $\mathcal{H}_{n}$ by ordering the basis functions $\{\mathcal{H}_{k,j}\}$ first by the parameter $k$ and then by the parameter $j$; that is, $\mathcal{H}_{n}=\mathcal{H}_{k,j}$ for $n=2^{k}+j-1$. Lastly, we set $\mathcal{H}_{0}=1$. The Rademacher system, denoted $\{r_n(x)\}_{n=1}^{\infty}$, is defined by $$r_n(x) = \text{sign} \sin \left(2^n \pi x \right).$$ The Rademacher system can also be thought of as independent random variables which take each of the values $\{-1,1\}$ with probability $1/2$. Variational Rademacher-Menshov-Type Results {#sec:rad} =========================================== We start by giving a proof of Theorem \[main\]. It suffices to assume that $N$ is a power of $2$, say $N=2^{\ell}$. For all $i,k$ such that $0 \leq i \leq \ell$ and $0 \leq k \leq 2^{\ell-i}-1$, we consider the collection of intervals $I_{k,i} := (k 2^{i},(k+1)2^{i}]$. \[lem:binarydecomp\] Any subinterval $S \subset [0,2^{\ell}]$ can be expressed as the disjoint union of intervals of the form $I_{k,i}$, say $$\label{Idec} S = \bigcup_{m} I_{k_{m},i_{m}}$$ where at most two of the intervals $I_{k_{m},i_{m}}$ in the union are of each size, and where the union consists of at most $2\ell$ intervals. Let $S=[a,b]$ and set $i' := \max_{I_{k,i} \subseteq S} i$. It follows that there are at most two intervals of the form $I_{k,i'}$ contained in $S$ (otherwise $S$ would contain an interval of the form $I_{k,i'+1}$). Let $r$ denote the right-most element of the interval with the largest $k$ value satisfying $I_{k,i'} \subseteq S$. Now $b-r$ has a unique binary expansion.
It easily follows from this that $(r,b]$ can be written as $(r,b] = \bigcup_{m} I_{k_{m},i_{m}}$ where the union contains only one interval of the form $I_{k_{m},i_{m}}$ of any particular size, and these intervals are disjoint. An analogous argument allows us to obtain a decomposition of this form also for $[a,r']$, where $r'$ is the left-most element of an interval with the smallest $k$ value satisfying $I_{k,i'} \subseteq S$. The lemma follows by taking the union of these two decompositions. We now prove \[varRM1\]In the notation above, we have that $$\label{var} ||S[f]||_{L^{2}(V^{2})} \ll \ln(N) \left(\sum_{n=1}^{\infty}|a_{n}|^2 \right)^{1/2}.$$ By rounding up to the nearest power of two, we can assume without loss of generality that $N = 2^\ell$ for some positive integer $\ell$ (this change will only affect the constants absorbed by the $\ll$ notation). Now, for each $x$, we have some disjoint intervals $J_1, \ldots, J_b \subseteq [N]$ such that: $$||S[f]||_{V^{2}}(x) = \sqrt{ \sum_{j=1}^b \left(\sum_{n \in J_j} a_n \phi_n (x)\right)^2}.$$ It is important to note that these intervals depend on $x$. By Lemma \[lem:binarydecomp\], each $J_j$ can be decomposed as a disjoint union of the form (\[Idec\]). In this disjoint union of intervals $I_{k_m, i_m}$, each value of $i_m$ appears at most twice. For each $j$ and $i$, we let $I^j_{i}$ denote the union of the (at most two) intervals in the decomposition of $J_j$ which are of length $2^i$.
We then have: $$||S[f]||_{V^{2}}(x) = \sqrt{ \sum_{j=1}^b \left( \sum_{i=0}^\ell \sum_{n \in I^j_i} a_n \phi_n (x)\right)^2}.$$ Applying the triangle inequality for the $\ell^2$ norm, this is: $$\leq \sum_{i=0}^\ell \sqrt{\sum_{j=1}^b \left(\sum_{n \in I^j_i} a_n \phi_n(x)\right)^2}.$$ Now, since each $I^j_i$ is a union of at most two intervals, this implies: $$\label{pointwise} ||S[f]||_{V^{2}}(x) \ll \sum_{i=0}^\ell \sqrt{\sum_{k = 0}^{2^{\ell-i}-1} \left( \sum_{n \in I_{k,i}} a_n \phi_n(x) \right)^2 }.$$ Notice that we are now summing over all intervals $I_{k,i}$ for each $i$, regardless of the value of $x$. We take the $L^2$ norm of both sides of (\[pointwise\]), and apply the triangle inequality to obtain: $$\label{triangle} ||S[f]||_{L^{2}(V^{2})} \ll \sum_{i=0}^\ell \left|\left| \sqrt{\sum_{k = 0}^{2^{\ell-i}-1} \left( \sum_{n \in I_{k,i}} a_n \phi_n(x) \right)^2 }\right|\right|_{L^2}.$$ By linearity of the integral and Parseval’s identity, we have that $$\left|\left| \sqrt{\sum_{k = 0}^{2^{\ell-i}-1} \left( \sum_{n \in I_{k,i}} a_n \phi_n(x) \right)^2 }\right|\right|_{L^2} = \left(\sum_{k=0}^{2^{\ell-i}-1} \sum_{n \in I_{k,i}} a_n^2\right)^{\frac{1}{2}} = \left( \sum_{n=1}^{N} a_n^2\right)^{\frac{1}{2}},$$ for each $i$. Combining this with (\[triangle\]) and noting that there are $\ll \ln N$ values of $i$, we have: $$||S[f]||_{L^{2}(V^{2})} \ll \ln(N) \left(\sum_{n=1}^{\infty}|a_{n}|^2 \right)^{1/2}.$$ We now define a variant of the function $||S[f]||_{V^2}(x)$ which we will denote by $||S_{\text{L}}[f]||_{V^2}(x)$. For each $x$, we define $S_{\text{L}}[f](x)$ to be the sequence of differences of lacunary partial sums of $f$ at $x$, i.e. $S_{\text{L}}[f](x):=\{S_{2^{0}}[f](x),S_{2^{1}}[f](x)-S_{2^0}[f](x),S_{2^{2}}[f](x)-S_{2^1}[f](x),\ldots \}$. As usual, we let $||S_{\text{L}}[f]||_{V^2}(x)$ denote the 2-variation of this function. 
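The decomposition of Lemma \[lem:binarydecomp\], which drives the proof above, is entirely constructive. The following sketch is our illustration only (the function name and interface are ours): it greedily peels off the largest aligned dyadic interval, and the piece sizes first increase and then decrease, which is why each size occurs at most twice and at most $2\ell$ pieces appear.

```python
def dyadic_decompose(a, b, ell):
    """Write (a, b] inside (0, 2^ell] as a disjoint union of dyadic
    intervals I_{k,i} = (k*2^i, (k+1)*2^i].  Greedy: at each step take
    the largest aligned dyadic interval starting at a that fits in (a, b]."""
    assert 0 <= a < b <= 2 ** ell
    pieces = []
    while a < b:
        i = ell
        while a % (2 ** i) != 0 or a + 2 ** i > b:
            i -= 1  # terminates: i = 0 always works since a < b
        pieces.append((a, a + 2 ** i))
        a += 2 ** i
    return pieces

print(dyadic_decompose(3, 13, 4))
# [(3, 4), (4, 8), (8, 12), (12, 13)] -- sizes 1, 4, 4, 1:
# each size occurs at most twice, and there are at most 2*ell pieces
```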
\[longRMvar\]In the notation above we have that $$||S_{\text{L}}[f]||_{L^2(V^2)} \ll \left( \sum_{n=1}^{\infty} \ln(n+1)|a_{n}|^2 \right)^{1/2}.$$ We will need the inequality $|a|^2 \leq 2|a-b|^2 + 2|b|^2$ for any real numbers $a,b$. For each $x$, there exists some sequence $m_0(x), m_1(x), m_2(x), \ldots$ such that: $$\label{sequence} ||S_{\text{L}}[f]||^2_{V^2}(x) = \left| S_{2^{m_0(x)}} [f](x)\right|^2 + \sum_{i=1}^{\infty} \left| S_{2^{m_i(x)}}[f](x) - S_{2^{m_{i-1}(x)}}[f](x)\right|^2.$$ Setting $a := S_{2^{m_i(x)}}[f](x) - S_{2^{m_{i-1}(x)}}[f](x)$ and $b:= f(x)- S_{2^{m_{i-1}(x)}}[f](x)$, we can apply the inequality above to obtain: $$\left| S_{2^{m_i(x)}}[f](x) - S_{2^{m_{i-1}(x)}}[f](x)\right|^2 \leq 2\left| S_{2^{m_i(x)}} [f](x) - f(x)\right|^2 + 2\left|S_{2^{m_{i-1}(x)}}[f](x) - f(x)\right|^2$$ for each $i\geq 1$. Combining this with (\[sequence\]), we have: $$\begin{aligned} \nonumber ||S_{\text{L}}[f]||^2_{V^2}(x) &\ll & \left| S_{2^{m_0(x)}} [f](x)\right|^2 + \sum_{i=1}^\infty \left| S_{2^{m_i(x)}} [f](x) - f(x)\right|^2 + \left|S_{2^{m_{i-1}(x)}}[f](x) - f(x)\right|^2\\ \nonumber &\ll & \left| S_{2^{m_0(x)}} [f](x)\right|^2 + \sum_{i=0}^\infty \left| S_{2^{m_i(x)}} [f](x) - f(x)\right|^2 \\ \nonumber & \ll & \left| S_{2^{m_0(x)}} [f](x)\right|^2 + \sum_{m=0}^\infty \left| S_{2^m} [f](x) - f(x) \right|^2.\end{aligned}$$ Note that in this last quantity, we are always summing over all values of $m$, instead of summing over a subsequence dependent on $x$. 
This gives us $$||S_{\text{L}}[f]||_{V^2}(x) \ll \left(\left| S_{2^{m_0(x)}} [f](x)\right|^2 + \sum_{m=0}^\infty \left| S_{2^m} [f](x) - f(x) \right|^2\right)^{\frac{1}{2}}.$$ Now we take the $L^2$ norm of both sides of this inequality to obtain: $$||S_{\text{L}}[f]||_{L^2(V^2)} \ll \left( \sum_{n=1}^\infty \ln(n+1) a_n^2\right)^{\frac{1}{2}}.$$ To see this, note that $\left|S_{2^m} [f](x) -f(x) \right| = \left| \sum_{n=2^m+1}^\infty a_n \phi_n(x)\right|$ and each $n$ is greater than $2^m$ for $\ll \ln(n)$ values of $m$. The result then follows from Parseval’s identity. We now combine these two results to prove the following theorem. \[StrongRMvar\]For an arbitrary ONS, in the notation above, we have $$||S[f]||_{L^2(V^2)} \ll \left( \sum_{n=1}^\infty \ln^2 (n+1) a_n^2\right)^{\frac{1}{2}}.$$ We write $U_{k}(x):= \sum_{n= 2^{k-1}+1}^{2^{k}} a_n \phi_n(x)$ (when $k=0$, $U_0 (x) := a_1 \phi_1(x)$.). We claim that $$||S[f]||_{L^2(V^2)}^2 \ll \int_{\T} \left( ||S_{\text{L}}[f]||_{V^2}^2(x) + \sum_{k=0}^{\infty} || U_k||_{V^2}^2(x) \right) dx.$$ To see this, note that any interval $[a,b]$ can be decomposed as the disjoint union of at most three intervals $I_l, I_c, I_r$, where $I_c=(2^{k},2^{k'}]$ and $I_l \subseteq (2^{k-1},2^{k}]$ and $I_r \subseteq (2^{k'},2^{k'+1})$ (here, $2^k$ can be set as the smallest integral power of 2 contained in $[a,b]$, and $2^{k'}$ can be set as the largest integral power of 2 contained in $[a,b]$). Now, $\int_{\T} ||S_{\text{L}}[f]||_{V^2}^2(x) dx \ll \sum_{n=1}^{\infty} \ln(n+1)|a_{n}|^2 $ from the previous lemma, which is clearly bounded by $ \sum_{n=1}^\infty \ln^2 (n+1) a_n^2$. By Lemma \[varRM1\], we have $$\int_{\T} || U_k||_{V^2}^2(x) dx \ll \ln^2(2^{k}+1) \sum_{n= 2^{k-1}+1}^{2^k} a_n^2 \ll \sum_{n= 2^{k-1}+1}^{2^k} \ln^2(n+1) a_n^2.$$ Combining these estimates completes the proof. Next we show that these estimates can be improved if one has additional information regarding the ONS. 
In particular, if the partial sum maximal operator $\mathcal{M}$ associated to the system is bounded then one can replace the $\ln^2(n)$ above with $\ln(n)$. Let $f(x) = \sum_{n=1}^{N}a_n \phi_n(x)$ and assume that $||\mathcal{M}f||_{L^2} \ll \Delta(N) \left( \sum_{n=1}^{N}a_n^2 \right)^{1/2}$ for any choice of $f$. Then $$||f||_{L^2(V^2)} \ll \Delta(N) \sqrt{\ln(N)} \left( \sum_{n=1}^{N}a_n^2 \right)^{1/2}$$ and $$||f||_{L^2(V^2)} \ll \left( \sum_{n=1}^{N} \Delta^2(n)\ln(n+1) a_n^2 \right)^{1/2}.$$ In particular, if the quantity on the right is finite, then the variational operator applied to $f$ must be finite almost everywhere. As before, without loss of generality, we may assume that $N=2^\ell$ for some positive integer $\ell$, and we consider the collection of dyadic subintervals of $[1,N]$ of the form $I_{k,i} = (k 2^{i},(k+1)2^{i}]$ for each $0 \leq i \leq \ell$, $0 \leq k \leq 2^{\ell-i}-1$. We will refer to intervals of this form as admissible intervals. Now we note that an arbitrary interval $J=[a,b] \subseteq [N]$ can be written as a disjoint union $J = J_l \cup J_r$, where $J_r \subseteq I_{{k_r},{i_r}}$ and $J_l \subseteq I_{{k_l},{i_l}}$ and $|J_l| \geq \frac{1}{2}|I_{{k_l},{i_l}}|$ and $|J_r| \geq \frac{1}{2}|I_{{k_r},{i_r}}|$. We allow one of the intervals to be empty if needed, although in the following we will always assume that the intervals are not empty, since estimating the contribution from an empty interval is trivial. That is, we can write an arbitrary interval $J$ as the union of two intervals, each of which is contained in an admissible interval and occupies at least a constant fraction of that admissible interval. For $J \subseteq [N]$, let $S_J := \sum_{n \in J} a_n \phi_n(x)$. We now claim the pointwise inequality $$||f||_{V^2}^{2}(x) \ll \sum_{0 \leq i \leq \ell} \; \sum_{0 \leq k \leq 2^{\ell-i}-1} |\mathcal{M}S_{I_{k,i}}(x)|^2.$$ Note that the sum on the right is only over all admissible intervals.
To see that this inequality holds, let $ \{J_{i}\}_{i=1}^{m}$ be a partition of $[N]$ that maximizes the square variation (at $x$). From the discussion above, we can associate disjoint $J_i^{l}$ and $J_i^{r}$ to $J_i$ such that $J_i \subset J_i^{l} \cup J_i^{r}$. Moreover, we can find disjoint admissible intervals $I_i^{l}$ and $I_i^{r}$ such that $J_{i}^{s} \subseteq I_i^{s}$ and $|J_i^s| \geq \frac{1}{2} |I_i^{s}|$ ($s \in \{r,l\}$). We observe that $|S_{J_i}(x)|^2 \ll |\mathcal{M}S_{I_i^{l}}(x)|^2 + |\mathcal{M}S_{I_i^{r}}(x)|^2$. Moreover, any particular admissible interval $I$ will be associated to at most two intervals in the partition $\{J_i\}$ since the intervals in the partition are disjoint and have at least half the length of the associated admissible interval. The pointwise inequality above now follows. Now integrating each side, applying the hypothesized inequality $||\mathcal{M}S_J||_{L^2}^2 \ll \Delta^2(N) \sum_{n \in J}a_n^2$, and noting that every point in $[N]$ is in $O(\ln(N))$ admissible intervals, we have that $$\int_{\T}||f||_{V^2}^{2} dx \ll\sum_{0 \leq i \leq \ell}\; \sum_{0 \leq k \leq 2^{\ell-i}-1} \int_{\T}|\mathcal{M}S_{I_{k,i}}(x)|^2 dx$$ $$\ll \Delta^2(N) \ln(N) \sum_{n=1}^{N} a_n^2.$$ Taking the square root of each side completes the proof of the first inequality in the theorem statement. The second statement follows from the first via the argument used to prove Theorem \[StrongRMvar\]. Note that we obtained a bound on the lacunary partial sums in Lemma \[longRMvar\] of the order $\sqrt{\ln(n)}$. This estimate was better than we needed for the proof of Theorem \[StrongRMvar\]; however, it is exactly the order we need here. This completes the proof of Theorem \[main\] and Corollary \[varTrig\] follows. Lower bounds ============ In this section, we prove: Let $\{\phi_n(x)\}$ be a complete ONS.
Then there exists a function $f \in L^{\infty}(\mathbb{T})$ such that for almost every $x \in \T$ $$\label{fail} ||f||_{V^2}(x) = \infty.$$ Here, as before, $||f||_{V^{2}}(x) = \sup_{K} \sup_{n_{0}<\ldots<n_{K}}\left(\sum_{l=1}^{K}|S_{n_{l}}[f](x) - S_{n_{l-1}}[f](x)|^{2} \right)^{1/2}$ where $S_{n_l}[f](x) = \sum_{n=1}^{n_l}a_n\phi_n(x)$ is the $n_l$-th partial sum. Using Lemma \[rw\] below and properties of the Dirichlet kernel, Jones and Wang showed (\[fail\]) for the trigonometric system. In the case of general orthonormal systems, we do not have analytic information regarding the partial summation operator and need to proceed differently. We start by establishing the result for the Haar system. We let $E_{k}:L^1 \rightarrow L^1$ denote the conditional expectation operator defined as follows. For $x \in [ l 2^{-k}, (l+1) 2^{-k} )$, $0 \leq l < 2^{k}$, $l \in \N$ we define $$E_{k}f(x) = 2^{k}\int_{l 2^{-k}}^{(l+1) 2^{-k}}f(t)\,dt.$$ Using a probabilistic result of Qian [@Qian], Jones and Wang [@JonesWang] showed that: \[rw\] (Proposition 8.1 of [@JonesWang]) There exists $f\in L^{\infty}(\T)$ such that $$\sup_{K} \sup_{n_{0}<\ldots<n_{K}}\left(\sum_{\ell=1}^{K}|E_{n_{\ell}}f(x) - E_{n_{\ell-1}}f(x)|^{2} \right)^{1/2} = \infty$$ almost everywhere. If we let $S_n[f]$ denote the partial summation operator with respect to the Haar system, then it easily follows that $E_{k}f(x) = S_{n_{k}}[f](x)$ for the increasing sequence $n_k = 2^{k}$, so that differences of conditional expectations are differences of partial sums. Therefore, there exists $f \in L^{\infty}(\T)$ such that $||f||_{V^2}(x)= \infty$ for almost every $x \in \T$, where the operator $V^2$ is associated to the Haar system. For future use, let us define $\{b_{n}\}$ to be the Haar coefficients of the function $f$, that is $$\label{bdef} b_{n} = \left<f(x), \mathcal{H}_{n}(x) \right>.$$ We will also need a theorem of Olevskii (see [@Olev] Chapter 3), which requires that we introduce some additional notation.
Let $\{g_n\}$ and $\{f_n\}$ be two sequences of real-valued measurable functions on $\T$. We say that they are weakly isomorphic if for each $n \in \N$ there exists an invertible measure-preserving mapping $T_{n}:\T \rightarrow \T$ that is one-to-one on a set of full measure and satisfies $$f_{k}(T_n x ) = g_{k}(x)$$ for all $1\leq k \leq n$. \[thm:olevskii\] (Olevskii) Let $\{\phi_{n}\}_{n=1}^{\infty}$ be a complete real-valued orthonormal system. There exists an orthonormal system $\{H_{k}\}_{k=1}^{\infty}$ that is weakly isomorphic to the Haar system, and a sequence $\{n_{k}\}_{k=1}^{\infty}$ such that $$\left|\left|\sum_{i=n_{k}+1}^{n_{k+1}} \left< H_{j}, \phi_{i} \right> \phi_{i}(x)\right|\right|_{L^2} \leq 2^{-k-j}$$ whenever $j \neq k$. We now set $ \tilde{f}(x) := \sum_{n=1}^{\infty}b_n H_{n}(x)$, for $b_n$ defined in (\[bdef\]). Using the fact that the (finite) partial sums of the series defining $\tilde{f}(x)$ are weakly isomorphic to the partial sums of the Haar expansion of $f$, it follows that the partial sums of the function $\tilde{f}$ are uniformly bounded, hence $\tilde{f} \in L^{\infty}(\T)$. For $\tilde{f}$ defined as above, we set $c_{n}:= \left<\tilde{f},\phi_n\right>$. It follows that $$\sum_{n= n_{k}+1}^{n_{k+1}} c_{n}\phi_n(x) = b_{k} H_{k}(x) + e_{k}(x),$$ where $\sum_{k}|e_{k}(x)| < \infty$ for almost every $x$. 
Since $\tilde{f}(x)= \sum_{j=1}^{\infty} b_j H_j (x)$, we have $$\sum_{n=n_{k}+1}^{n_{k+1}} c_{n}\phi_n(x) = \sum_{n=n_{k}+1}^{n_{k+1}} \left<\sum_{j=1}^{\infty} b_j H_j (x) , \phi_n(x) \right>\phi_n(x)$$ $$= \sum_{n=n_{k}+1}^{n_{k+1}} b_k\left< H_k (x) , \phi_n(x) \right>\phi_n(x) + \sum_{n=n_{k}+1}^{n_{k+1}} \left<\sum_{j \neq k} b_j H_j (x) , \phi_n(x) \right>\phi_n(x).$$ By applying the triangle inequality, we obtain: $$\left|\left|b_k H_k(x) - \sum_{n=n_{k}+1}^{n_{k+1}} c_{n}\phi_n(x) \right|\right|_{L^2} \leq |b_k| \left|\left| \sum_{n \notin [n_k+1,n_{k+1}]} \left< H_k (x) , \phi_n(x) \right>\phi_n(x) \right| \right|_{L^2}$$ $$+ \sum_{j \neq k}|b_j| \left| \left| \sum_{n=n_{k}+1}^{n_{k+1}} \left< H_j (x) , \phi_n(x) \right>\phi_n(x) \right| \right|_{L^2}.$$ Now applying Theorem \[thm:olevskii\], we have that $$\left|\left|b_k H_k(x) - \sum_{n=n_{k}+1}^{n_{k+1}} c_{n}\phi_n(x) \right|\right|_{L^2} \ll 2^{-k} \left( |b_k| \sum_{j\neq k} 2^{-j} +\sum_{j\neq k} |b_j| 2^{-j} \right) \ll 2^{-k}||\tilde{f}||_{L^2}.$$ The last bound follows from the fact that $|b_j| \leq ||\tilde{f}||_{L^2} = \left(\sum_{i=1}^{\infty} b_i^2\right)^{1/2}$ for all $j$. Denoting the expression on the inside of the norm on the left as $e_k(x)$, we see that $\big|\big|\sum_{k=1}^{\infty} |e_k| \big|\big|_{L^2} \ll ||\tilde{f}||_{L^2}$ and hence $\sum_{k=1}^{\infty}|e_k(x)|$ is finite for almost every $x \in \mathbb{T}$. We now prove Theorem \[completeDiverg\]. We let $V_{\phi}$ and $V_{H}$ denote the variation operators associated to the systems $\{\phi_{n}\}$ and $\{H_{n}\}$ respectively. Moreover, we let $V^2$ be the variation operator associated to the partial sums of the absolutely convergent function $E(x)=\sum_{k=1}^{\infty}e_{k}(x)$.
We have, for almost every $x \in \T$, $$|| E||_{V^2}(x) \leq \sum_{k=1}^{\infty} |e_{k}(x)| < \infty.$$ Since the partial sums of $\sum_{k} b_k H_k$ agree with a subsequence of the partial sums of the $\phi$-expansion of $\tilde{f}$ up to the accumulated errors $e_k$, it follows that, for almost every $x$, $$||\tilde{f}||_{V_H^2}(x) = \left|\left| \sum_{k=1}^{\infty} b_k H_{k} \right|\right|_{V_H^2}(x) \leq ||\tilde{f}||_{V_\phi^{2}}(x) + ||E||_{V^2}(x).$$ Since the quantity on the left is infinite almost everywhere, and $||E||_{V^2}(x)$ is finite almost everywhere, it must hold that $||\tilde{f}||_{V_\phi^2}(x)$ is infinite almost everywhere. This completes the proof of the theorem. Our proof of Theorem \[completeDiverg\] was purely qualitative, a feature we inherit from Theorem \[thm:olevskii\], which relies on the Riemann-Lebesgue lemma. Next we show that it is impossible to obtain a quantitative lower bound on the growth of the variation in Theorem \[completeDiverg\]. One could obtain the conclusion of Theorem \[completeDiverg\] for functions in more restrictive classes. Combining the above argument with known perturbation techniques, one can show that the $f$ in the statement of the theorem can be taken to be continuous. The proof of this relies on the fact that one already has an example in $L^{\infty}$ (an example in $L^2$ is not sufficient). See [@Olev] p.67 and the associated references for details. Additionally, one can show that for any nonconstant function $f$, there exists an invertible measure preserving transformation $T:\T \rightarrow \T$ such that the conclusion holds for $g(x) = f(T(x))$. See [@Olev] p.69 and the related references for details. From this, we see that one cannot hope to prove that $V^2$ is bounded on $L^2$ even in “restricted weak type” form, at least not for complete systems. Since the details of these arguments are not essential to our current investigation, and are essentially a combination of the above argument and the ideas of the cited papers, we omit them. \[V2bex\]Let $w(\cdot)$ denote a positive real-valued function monotonically increasing to infinity.
Then there exists a complete orthonormal system $\{\phi_n\}_{n=1}^{\infty}$ such that for all sufficiently large $N \in \N$, $$|| f ||_{L^2(V^2)} \ll w(N) \left(\sum_{n=1}^{N} |a_n|^2 \right)^{\frac{1}{2}}$$ for all $f$ of the form $f(x) = \sum_{n=1}^{N} a_n \phi_n(x)$. Our example will be a rearrangement of the Haar system. We let $\Psi =\{\psi_n(x)\}_{n=1}^{\infty}$ be a subsequence of the Haar system with disjoint supports. We let $\{\rho_{n}(x)\}_{n=1}^{\infty}$ denote the subsequence of the Haar system consisting of all the elements of the Haar system that are not included in $\Psi$. We now form a complete orthonormal system $\{\phi_{n}\}$ by sparsely inserting elements of the sequence $\{\rho_{n}(x)\}_{n=1}^{\infty}$ into the sequence $\{\psi_n(x)\}_{n=1}^{\infty}$, maintaining the relative ordering of each sequence. Clearly we may do this so that the first $N$ elements of the system $\{\phi_n\}$ contain at most $w(N)$ elements from the $\rho$’s. We thus may partition the indices $[N]$ of the system $\{\phi_{n}\}_{n=1}^{N}$ into two classes. We let $S$ be the subset of indices $n$ for which $\phi_n = \rho_m$ for some $m$ and $S^c := [N] \setminus S$. We note that for $n \in S^c$, $\phi_n$ is an element of the subsequence $\Psi$, and so all of these have disjoint supports. We then have: $$\left|\left| \sum_{n \in S} a_n \phi_n + \sum_{n \in S^{c}} a_n \phi_n \right|\right|_{L^2(V^2)} \leq \left|\left| \sum_{n \in S} a_n \phi_n\right|\right|_{L^2(V^2)} + \left|\left|\sum_{m \in S^{c}} a_m \phi_m \right|\right|_{L^2(V^2)}$$ $$\ll \ln(w(N))||f||_{L^2} + ||f||_{L^2} \ll \ln(w(N))||f||_{L^2} \ll w(N)||f||_{L^2}.$$ Here, we have employed the triangle inequality, Lemma \[varRM1\] (applied to the at most $w(N)$ terms indexed by $S$), and the fact that $\{\phi_n\}_{n \in S^c}$ have disjoint supports. Lastly, we show that if a system is uniformly bounded, then a quantitative lower bound on the growth of the $V^2$ operator is available, even without assuming completeness.
Let $\{\phi_{n}\}_{n=1}^N$ be an ONS uniformly bounded by $M$. Then there exists a function of the form $f=\sum_{n=1}^{N} a_n \phi_n (x)$ such that $$||S[f]||_{L^2(V^2)} \gg_{M} \sqrt{\ln\ln(N)}||f||_{L^2}.$$ In light of Theorem \[varRad\], this is best possible. To prove this, we will rely on the following lemma: \[lem:Qian\] We let $c_1, \ldots, c_N$ denote real numbers, all $\geq \delta$ for some constant $\delta >0$. We let $X_1, \ldots, X_N$ denote independent Gaussian random variables, each with mean 0 and variance 1. Then $$\mathbb{E}\left[ \left|\left|\sum_{n=1}^N c_n X_n \right|\right|_{V^2}\right] \gg \delta \sqrt{N \ln \ln (N)}.$$ We essentially follow the proof of Theorem 2.1 in [@Qian] (pp. 1373-1375), with minor modifications. We let $\Phi(x)$ denote the standard normal distribution function. By Lemma 2.1 of [@Qian] (p. 1373), we have that $$\label{Qian} 1-\Phi(x) \geq (1/12) \exp(-3x^2/4) \text{ for } x\geq 1.$$ We define $S_k = \sum_{n=1}^k c_n X_n$ and we set $K:= 25$. We also set $$\ell := \ell(N):= \left\lfloor \frac{\ln N}{4 \ln K}\right\rfloor \text{ and } m:= m(N) := \left\lfloor \frac{\ln N}{2 \ln K}\right\rfloor.$$ We let $L x:= \max\{1, \ln x\}$. For each $\omega \in \Omega$ (where $\Omega$ denotes the probability space), we define $E_N(\omega)$ to be the subset of values $t \in \{1, 2, \ldots, N-\sqrt{N}\}$ such that, for some $\ell \leq j \leq m$, $|S_{t+K^j}(\omega) - S_t(\omega)| \geq \delta \sqrt{K^j LL(N)}/2$.
Additionally, for each fixed $t$ and $j$, we define the event $$E_N^j(t):= \left\{\omega: |S_{t+K^j}(\omega)-S_{t+K^{j-1}}(\omega)|\geq \delta \sqrt{K^j LL(N)}\right\}.$$ Now, $S_{t+K^j} -S_{t+K^{j-1}}$ is distributed as a Gaussian random variable with mean 0 and variance equal to $$\sigma^2 := Var[S_{t+K^j} -S_{t+K^{j-1}}] = \sum_{n=t+K^{j-1}+1}^{t+K^j} c_n^2.$$ For any $\lambda \in \mathbb{R}$, $$\mathbb{P} \left[ S_{t+K^j}(\omega)-S_{t+K^{j-1}} \geq \lambda\right] = 1 - \Phi \left(\frac{\lambda}{\sigma}\right).$$ We apply this with $\lambda := \delta \sqrt{K^j LL(N)}$, and since each $c_n \geq \delta$, we have: $$\frac{\lambda}{\sigma} \leq \sqrt{\frac{K^j LL(N)}{K^j - K^{j-1}}}.$$ Therefore, using (\[Qian\]), we obtain: $$\mathbb{P}[E_N^j(t)] \geq 1 - \Phi \left(\frac{\lambda}{\sigma}\right) \geq 1 - \Phi \left(\sqrt{\frac{K^j LL(N)}{K^j - K^{j-1}}}\right) \geq \frac{1}{12} \exp\left( -\frac{3}{4} \frac{K^j}{K^j - K^{j-1}} LL (N)\right).$$ This is $\geq \frac{1}{12} \exp \left(-\frac{4}{5} LL (N)\right) = \frac{1}{12} (\ln (N))^{-4/5}$. We observe that if $|S_{t+K^j}(\omega) - S_{t+ K^{j-1}}(\omega)| \geq \delta \sqrt{K^j LL(N)}$ for some $\ell < j \leq m$, then either $|S_{t+K^j}(\omega) - S_t (\omega)| \geq \delta \sqrt{K^j LL(N)}/2$ or $|S_{t+K^{j-1}}-S_t| \geq \delta \sqrt{K^j LL(N)}/2 \geq \delta \sqrt{K^{j-1}LL(N)}/2$. Thus, $$\omega \in \bigcup_{j=\ell+1}^m E_N^j(t) \Rightarrow t \in E_N(\omega).$$ Therefore, for any $t \in \{1, 2, \ldots, N -\lfloor \sqrt{N}\rfloor\}$, we have: $$\mathbb{P}\left[ \omega: t \in E_N(\omega)\right] \geq \mathbb{P}\left[ \bigcup_{j = \ell+1}^m E_N^j(t)\right].$$ We note that for $j' \neq j$, $E_N^j (t)$ and $E_N^{j'}(t)$ depend on disjoint sets of the random variables $X_i$, and so are independent events.
Therefore, letting $\overline{E}_N^j(t)$ denote the complement of $E_{N}^j(t)$, we have $$\mathbb{P}\left[ \bigcup_{j = \ell+1}^m E_N^j(t)\right] = 1 - \mathbb{P}\left[ \bigcap_{j = \ell+1}^m \overline{E}_N^j(t)\right] = 1 - \prod_{j=\ell+1}^m \mathbb{P}[\overline{E}_N^j(t)].$$ By the above computations, this is $$\geq 1 - \exp\left( -(1/12)(m-\ell)(\ln N)^{-4/5}\right).$$ For sufficiently large $N$, we can bound this by: $$> 1 - \exp \left( - (\ln N)^{1/5}/(52 \ln K)\right) =: 1 - p_N.$$ This shows that for each $t$, $\mathbb{P}\left[ \omega: t \in E_N(\omega)\right] > 1 - p_N$. We can alternately express this as: $$\int_{\Omega} 1_{E_N}(t) d\mathbb{P} > 1- p_N,$$ where $1_{E_N}(t)$ denotes the function that is equal to 1 when $t \in E_N(\omega)$ and equal to 0 otherwise. We define the subset $\mathcal{S} \subseteq \Omega$ to be the set of $\omega \in \Omega$ such that $|E_N(\omega)| > (1-\sqrt{p_N})(N-\sqrt{N})$. Then $$\label{Qian2} \mathbb{P}[\mathcal{S}] > 1 - \sqrt{p_N}.$$ To see this, observe that $$\int_{\Omega} \sum_{t=1}^{N-\sqrt{N}} 1_{E_N}(t) d\mathbb{P} = \sum_{t=1}^{N-\sqrt{N}} \int_{\Omega}1_{E_N}(t) d \mathbb{P} > (N - \sqrt{N})(1 - p_N).$$ Now, if $\mathbb{P}[\mathcal{S}] \leq 1 - \sqrt{p_N}$ held, this would imply that the integral on the left hand side of the above is also $$\leq \sqrt{p_N}\left(1- \sqrt{p_N}\right)\left(N - \sqrt{N}\right) + \left(1-\sqrt{p_N}\right)\left(N - \sqrt{N}\right) = \left(N - \sqrt{N}\right)\left(1-p_N\right),$$ which is a contradiction. We next use the following Vitali covering lemma: \[Vitali\] ([@Folland], Lemma 3.15) Let $\mu(A)$ denote the Lebesgue measure of a set $A \subseteq \mathbb{R}$. Let $\mathcal{U}$ be a collection of open intervals in $\R$ with bounded union $W$. Then for any $\lambda < \mu(W)$, there is a finite, disjoint subcollection $\{V_1, V_2, \ldots, V_q\} \subseteq \mathcal{U}$ such that $\sum_{i=1}^q \mu(V_i) \geq \lambda/3$.
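Lemma \[Vitali\] admits a short greedy proof: repeatedly keep the longest remaining interval and discard everything that meets it; each discarded interval then lies inside the 3-fold dilation of a kept one, so the kept intervals account for at least a third of the measure of the union. The following sketch of the selection step is ours and purely illustrative.

```python
def vitali_select(intervals):
    """Greedy selection for the finite Vitali covering lemma: given open
    intervals (a, b), return a disjoint subcollection whose total length
    is at least one third of the measure of the union."""
    remaining = sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True)
    chosen = []
    while remaining:
        a, b = remaining.pop(0)  # longest interval still in play
        chosen.append((a, b))
        # discard every interval meeting (a, b); each such interval is no
        # longer than (a, b), so it lies inside (a - (b-a), b + (b-a))
        remaining = [(c, d) for (c, d) in remaining if d <= a or c >= b]
    return chosen

chosen = vitali_select([(0, 4), (3, 6), (5, 9), (1, 2)])
print(chosen)  # [(0, 4), (5, 9)]: disjoint, total length 8 >= 9/3
```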
For sufficiently large $N$, (\[Qian2\]) implies that with probability $> 1- \sqrt{p_N}$, for $\geq N':= \lfloor (1-\sqrt{p_N})(N-\sqrt{N}-1)\rfloor$ integers $t \in \{1,2, \ldots, N-\sqrt{N}\}$ (we will call them $t_1, t_2, \ldots, t_{N'}$), we have corresponding values $j_1, \ldots, j_{N'}$ (all $\leq m$) such that $|S_{t_i + K^{j_i}} - S_{t_i}| \geq \delta \sqrt{K^{j_i} LL (N)}/2$ for each $i$ from 1 to $N'$. We consider the collection $\mathcal{U}$ of the open intervals $(t_i, t_i + K^{j_i})$ for $i$ from 1 to $N'$. We note that each $K^{j_i} > 1$. We fix some positive constant $\alpha < 1$. For $N$ sufficiently large, we have $N' > \alpha N$. (Note that $p_N$ approaches 0 as $N$ goes to infinity). Therefore, the union of the intervals in $\mathcal{U}$ is a subset of $(0,N]$ with Lebesgue measure $\geq N' > \alpha N$. Applying Lemma \[Vitali\], we conclude that there is disjoint subcollection of these open intervals, denoted by $\{(t_i, t_i + K^{j_i})\}_{i \in Q}$, where $Q \subseteq [N']$, such that $$\sum_{i \in Q} K^{j_i} \geq \alpha N/3.$$ The closures of the intervals in $Q$ are non-overlapping except for possibly at their endpoints. Relabeling the $t_i$’s for $i \in Q$ as $t_1, \ldots, t_q$ (where $q = |Q|$), we have $t_1 < t_1 + K^{j_1} \leq t_2 < t_2 + K^{j_2} \leq \cdots \leq t_q < t_q + K^{j_q} \leq N$. Then, $$\sum_{i=1}^q \left(S_{t_i+K^{j_i}} - S_{t_i}\right)^2 \geq (1/4) \delta^2 \sum_{i=1}^q K^{j_i} LL(N) \geq (\alpha/12)\delta^2 N LL(N).$$ This implies that $$\mathbb{P}\left[ \left|\left| \sum_{n=1}^N c_nX_n\right|\right|_{V^2} \geq \delta \sqrt{(\alpha/12) N \ln \ln N}\right] > 1 - \sqrt{p_N},$$ for all sufficiently large $N$. Hence, by Markov’s inequality, $$\mathbb{E}\left[ \left|\left|\sum_{n=1}^N c_n X_n \right|\right|_{V^2}\right] \geq \delta \sqrt{(\alpha/12) N \ln \ln N} (1- \sqrt{p_N}) \gg \delta \sqrt{N \ln \ln N}.$$ We now prove Theorem \[boundedDiverg\].
We begin by noting that for each $n$, $\int_{\T} \phi_n^2(x) dx = 1$ and $|\phi_n(x)| \leq M \; \forall x$ implies that there are positive constants $\epsilon, \delta >0$ (depending on $M$) such that for some sets $U_n \subseteq \T$ each of measure $\geq \epsilon$, $|\phi_n(x)| \geq \delta$ for all $x \in U_n$. For each $n$, we let $\chi_n$ denote the characteristic function of the set $U_n$. We then have: $$\label{characteristic} \int_{\T} \sum_{n=1}^{N} \chi_n(x) dx = \sum_{n=1}^{N} \int_{\T} \chi_n(x) dx \geq N\epsilon.$$ We define $\epsilon' :=\frac{\epsilon}{2}$. Then the function $\sum_{n=1}^N \chi_n (x)$ must be $\geq \epsilon' N$ on a set of measure $\geq \epsilon'$. To see this, note that $0 \leq \sum_{n=1}^N \chi_n(x)\leq N$ for all $N$. If this function is less than $\epsilon' N$ on a set of measure $> 1 - \epsilon'$, this would imply $$\int_{\T} \sum_{n=1}^N \chi_n(x) dx < \epsilon' N (1-\epsilon') + \epsilon' N = (1-\epsilon/4) N\epsilon,$$ contradicting (\[characteristic\]). Thus, there is some set $U$ of measure $\geq \epsilon'$ such that for every $x \in U$, $|\phi_n(x)| \geq \delta$ for at least $\epsilon' N$ values of $n$. We let $X_1, \ldots, X_N$ denote independent Gaussian random variables with mean 0 and variance 1. We consider the quantity $$\mathbb{E}\left[ \left|\left| \{X_n \phi_n(x)\}_{n=1}^N \right|\right|^2_{L^2(V^2)}\right].$$ This can be written as: $$\mathbb{E}\left[ \int_{\T} \left|\left| \{X_n\phi_n(x)\}_{n=1}^N \right|\right|^2_{V^2} dx \right] = \int_{\Omega} \int_{\T} \left|\left| \{X_n\phi_n(x)\}_{n=1}^N \right|\right|^2_{V^2} dx d\mathbb{P}.$$ By Fubini’s theorem, we may exchange the integrals to obtain $$= \int_{\T} \int_{\Omega} \left|\left| \{X_n\phi_n(x)\}_{n=1}^N \right|\right|^2_{V^2} d\mathbb{P} dx.$$ Since the inner integral is a non-negative quantity, this is $$\geq \int_U \mathbb{E}\left[ \left|\left|\{X_n\phi_n(x)\}_{n=1}^N\right|\right|^2_{V^2}\right] dx.$$ We consider a fixed $x \in U$.
By definition of $U$, we have $|\phi_n(x)|\geq \delta$ for at least $\epsilon' N$ values of $n$. We now define new independent Gaussian random variables $Y_1, \ldots, Y_{\widetilde{N}}$ for $\widetilde{N} \geq \epsilon' N$ as follows. We start from $n=1$, and we define $Y_1$ to be the first partial sum $\sum_{n=1}^{n_1} \phi_n(x) X_n$ such that $\sum_{n=1}^{n_1} |\phi_n(x)|^2 \geq \delta^2$. We then similarly define $Y_2$ to be $\sum_{n=n_1 +1}^{n_2} \phi_n(x) X_n$ for the smallest $n_2$ such that $\sum_{n = n_1+1}^{n_2} |\phi_n(x)|^2\geq \delta^2$. We continue this process, defining the $Y_i$’s to be disjoint sums of the $\phi_n(x) X_n$’s. Since $x \in U$, at least $\epsilon' N$ indices $n$ satisfy $|\phi_n(x)| \geq \delta$, and each such index immediately terminates a group, so we will have $Y_1, \ldots, Y_{\widetilde{N}}$ for $\widetilde{N} \geq \epsilon' N$. Since the sum of independent Gaussians is distributed as a Gaussian (with variance equal to the sum of the variances), the $Y_i$’s are independent, mean zero Gaussians, each with variance $\sum_n \phi_n(x)^2 \geq \delta^2$ by the stopping rule. Thus, applying Lemma \[lem:Qian\], we have for each $x \in U$: $$\mathbb{E}\left[ \left|\left| \{X_n\phi_n(x)\}_{n=1}^N\right|\right|_{V^2}^2 \right] \geq \mathbb{E}\left[ \left|\left| \{Y_i\}_{i=1}^{\widetilde{N}}\right|\right|^2_{V^2} \right] \geq \delta^2 \widetilde{N} \ln \ln (\widetilde{N}) \gg \delta^2 N \ln \ln (N).$$ Therefore, we have $$\label{expectationbound} \mathbb{E}\left[ \left|\left| \{X_n \phi_n(x)\}_{n=1}^N \right|\right|^2_{L^2(V^2)}\right] \gg \int_U \delta^2 N \ln \ln (N) dx \gg N \ln \ln N.$$ We note that the constants being subsumed by the $\gg$ notation above depend on $M$. Now, we consider the contribution to this expectation from points $\omega$ in the probability space $\Omega$ such that $\sum_{n=1}^N X_n(\omega)^2$ is much larger than $N$. We will show this contribution is small. To do this, we will upper bound the quantity $\mathbb{P}\left[\sum_{n=1}^N X_n^2 \geq kN\right]$ for each positive integer $k\geq 2$. We rely on the following version of the Berry-Esseen theorem.
\[lem:strongbe\]([@Petrov], p. 132) Let $Z_1, \ldots, Z_N$ be independent, mean zero random variables with $\mathbb{E}[|Z_n|^{2+\gamma}] < \infty$ for all $n$ for some $0 < \gamma \leq 1$. Let $\sigma^2_n := \mathbb{E}[Z_n^2]$ and $B_N := \sum_{n=1}^N \sigma_n^2$. Then, for all $x \in \R$: $$\left| \mathbb{P}\left[ B_N^{-\frac{1}{2}} \sum_{n=1}^N Z_n< x\right] - \Phi(x) \right| \leq \frac{A}{B_N^{1+\gamma/2} (1+|x|)^{2+\gamma}} \sum_{n=1}^N \mathbb{E}[|Z_n|^{2+\gamma}],$$ where $A$ is a constant and $\Phi(x)$ denotes the standard normal distribution function. Now, letting $X_1, \ldots, X_N$ denote the independent, mean zero, variance one Gaussians as above, we define $Z_1, \ldots, Z_N$ by $Z_n := X_n^2-1$. Then the $Z_n$’s are independent, mean zero random variables. We note that $\mathbb{E}[Z_n^2] = \mathbb{E}[X_n^4] -1 = 2$ for each $n$. Also, $$\mathbb{E}[|Z_n|^3] = \mathbb{E}[|X_n^6-3X_n^4+3X_n^2-1|] \leq \mathbb{E}[X_n^6] + 3\mathbb{E}[X_n^4] + 3\mathbb{E}[X_n^2]+1 = 28.$$ We will apply Lemma \[lem:strongbe\] for $Z_1, \ldots, Z_N$, with $\gamma := 1$ and $B_N = 2N$ (since $\sigma_n^2 = 2$ for each $n$). We observe: $$\mathbb{P}\left[\sum_{n=1}^N X_n^2 \geq kN\right] = \mathbb{P}\left[\sum_{n=1}^N Z_n \geq (k-1)N\right] = \mathbb{P}\left[ B_N^{-\frac{1}{2}}\sum_{n=1}^N Z_n \geq 2^{-\frac{1}{2}}(k-1)N^{\frac{1}{2}}\right]$$ $$= 1 - \mathbb{P}\left[ B_N^{-\frac{1}{2}}\sum_{n=1}^N Z_n < x \right] \leq 1 - \Phi(x) + \frac{A}{B_N^{3/2}(1+|x|)^{3}} \sum_{n=1}^N \mathbb{E}\left[|Z_n|^3\right],$$ where $x:= 2^{-1/2}(k-1)N^{1/2}$. 
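The moment computations above can be checked against the closed form $\mathbb{E}[X^{2k}] = (2k-1)!!$ for a standard Gaussian. A small sketch (the helper `gaussian_moment` is ours):

```python
def gaussian_moment(k):
    """E[X^k] for X ~ N(0,1): zero for odd k, the double factorial
    (k-1)!! = 1*3*5*...*(k-1) for even k."""
    if k % 2 == 1:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m


# E[Z_n^2] with Z_n = X_n^2 - 1: E[X^4] - 2 E[X^2] + 1 = 3 - 2 + 1 = 2
assert gaussian_moment(4) - 2 * gaussian_moment(2) + 1 == 2

# Crude bound for E[|Z_n|^3] via the triangle inequality:
# E[X^6] + 3 E[X^4] + 3 E[X^2] + 1 = 15 + 9 + 3 + 1 = 28
assert gaussian_moment(6) + 3 * gaussian_moment(4) + 3 * gaussian_moment(2) + 1 == 28
```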
Since $\mathbb{E}\left[|Z_n|^3\right]$ is a constant, this is $$\ll \int_{x}^\infty e^{-\frac{y^2}{2}} dy + \frac{1}{N^{1/2}(1+|x|)^3}.$$ Using that $x = 2^{-1/2} (k-1)N^{1/2}$, we have $$\label{errorterm} \frac{1}{N^{1/2}(1+|x|)^3}\ll \frac{1}{N^2(k-1)^3}.$$ Since $x \geq 1$ (recall that $k \geq 2$), we have $$\label{mainterm} \int_{x}^\infty e^{-\frac{y^2}{2}}dy \leq \int_x^\infty y e^{-\frac{y^2}{2}} dy = e^{-\frac{x^2}{2}}= e^{-\frac{1}{4} N(k-1)^2}.$$ Combining (\[errorterm\]) and (\[mainterm\]), we see that $$\mathbb{P}\left[\sum_{n=1}^N X_n^2 \geq kN\right] \ll \frac{1}{N^2(k-1)^3} + e^{-\frac{1}{4}N(k-1)^2},$$ for each positive integer $k \geq 2$. Now, by Lemma \[varRM1\], for each $\omega \in \Omega$ such that $ kN \leq \sum_{n=1}^N X_n^2(\omega) < (k+1)N$, we have that the quantity $\left|\left| \{X_n\phi_n(x)\}_{n=1}^N\right|\right|^2_{L^2(V^2)}$ evaluated at $\omega$ is $\ll (k+1)\ln^2(N)N$. Thus, the contribution to the expectation bounded in (\[expectationbound\]) coming from such points $\omega$ for all $k\geq 2$ is upper bounded as: $$\ll \sum_{k=2}^{\infty} (k+1)\ln^2(N) N \left( e^{-\frac{1}{4}N(k-1)^2}+ \frac{1}{N^2(k-1)^3}\right)$$ $$= \ln^2(N) N e^{-\frac{1}{4}N} \sum_{k=2}^\infty (k+1)\left(e^{-\frac{1}{4}N}\right)^{k^2-2k} + \frac{\ln^2(N)}{N}\sum_{k=2}^{\infty} \frac{k+1}{(k-1)^3}.$$ Both of these sums are convergent, and it is easy to see that this quantity is $o(N \ln \ln N)$. Therefore, by (\[expectationbound\]) and the above bounds, we have proven that there exists some point $\omega \in \Omega$ such that when we define $a_n:= X_n(\omega)$ and define $f(x) =\sum_{n=1}^{N} a_n \phi_n (x)$, we have $$||S[f]||_{L^2(V^2)} \gg_{M} \sqrt{\ln\ln(N)}||f||_{L^2}.$$ Here, we have used that we can choose $\omega$ so that $||S[f]||_{L^2(V^2)}^2 \gg_M N \ln \ln (N)$ and $||f||^2_{L^2} = \sum_{n=1}^N a_n^2 \leq 2N$ simultaneously.
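For the convergence claim, note that after reindexing $m = k-1$ the second series is $\sum_{m \geq 1} (m+2)/m^3 = \zeta(2) + 2\zeta(3) \approx 4.049$, while the first is dominated by its $k=2$ term $3e^{-N/4}$. A quick numerical check (illustrative only; the specific truncation points are ours):

```python
import math

# Second series: sum_{k>=2} (k+1)/(k-1)^3 = zeta(2) + 2*zeta(3) ~ 4.049
partial = sum((k + 1) / (k - 1) ** 3 for k in range(2, 100001))
assert 4.04 < partial < 4.05

# First series at N = 100: dominated by the k = 2 term, 3*exp(-25)
tail = sum((k + 1) * math.exp(-100 * (k - 1) ** 2 / 4) for k in range(2, 60))
assert tail < 1e-9
```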
Systems of Bounded Independent Random Variables {#sec:probability} =============================================== In this section, we prove the following theorem: Let $\{X_i\}_{i=1}^N$ be a sequence of mean zero independent random variables such that $|X_i|\leq C$ and $\mathbb{E}\left[ |X_i|^2 \right] =1$ for all $i \in [N]$. Then $$\mathbb{E}\left[\left| \left| \{a_i X_i\}_{i=1}^N\right|\right|_{V^2}\right] \ll_C \sqrt{\ln\ln(N)} \left(\sum_{i=1}^{N} a_i^2 \right)^{1/2} .$$ We will require the following lemmas. The first is a form of Hoeffding’s inequality [@Hoeffding]. \[lem:Hoeffding\]Let $\{X_i\}$ be independent random variables such that $\mathbb{P}[X_i \in [a_i,b_i]]=1$. Then $$\mathbb{P}\left[ \left| S_n - \mathbb{E}\left[S_n\right] \right| \geq t \right] \leq 2\exp\left( - \frac{2t^2}{\sum_{i=1}^{n}(b_i-a_i)^2} \right)$$ where $S_n = \sum_{i=1}^{n} X_i$. \[lem:etemadi\] (Etemadi’s Inequality). (See Theorem 1 in [@E].) Let $X_1, X_2, \ldots, X_n$ denote independent random variables and let $a > 0$. Let $S_\ell := X_1 + \cdots + X_\ell$ denote the partial sum. Then $$\mathbb{P} [\max_{1 \leq \ell \leq n} |S_\ell| \geq 3 a] \leq 3 \max_{1 \leq \ell \leq n} \mathbb{P}[|S_\ell| \geq a].$$ \[lem:Rosenthal\] (Rosenthal’s Inequality). (See Theorem 3 in [@R].) Let $2 < p < \infty$. Then there exists a constant $K_p$ depending only on $p$, so that if $X_1, \ldots, X_n$ are independent random variables with $\mathbb{E}[X_i] = 0$ for all $i$ and $\mathbb{E}[|X_i|^p] < \infty$ for all $i$, then: $$\left(\mathbb{E}[|S_n|^p]\right)^{1/p} \leq K_p\; \max \left\{ \left( \sum_{i=1}^{n} \mathbb{E}[|X_i|^p]\right)^{1/p}, \left(\sum_{i=1}^{n} \mathbb{E}[|X_i|^2]\right)^{1/2}\right\}.$$ We also use the following consequence of Doob’s inequality. For an interval $I \subseteq [n]$, we define $S_I:= \sum_{i\in I} X_i$. 
We also define $$\tilde{S}_n := \max_{I \subseteq [n]} |S_I|.$$ We then have: \[lem:Doob\] For $p >1$ and independent random variables $X_1, \ldots, X_n$ with $\mathbb{E}[X_i] = 0$ for all $i$, $$\mathbb{E}\left[ |\tilde{S}_n|^p\right] \leq 2^p \mathbb{E}\left[ \max_{1\leq \ell \leq n} \left| \sum_{i=1}^\ell X_i\right|^p\right] \leq 2^p \left(\frac{p}{p-1}\right)^p \mathbb{E}\left[ |S_n|^p\right].$$ The first inequality is a consequence of the following observation. For a subinterval $I \subseteq [n]$, we let $I_0$ be the subinterval that starts at 1 and ends just before $I$, and we let $I_1$ be the interval $I_0 \cup I$. Then $I_0$ and $I_1$ are both intervals starting at 1, and $S_{I_0} + S_{I} = S_{I_1}$. Therefore, $\max \{ |S_{I_0}|, |S_{I_1}|\} \geq \frac{1}{2} |S_I|$. The second inequality follows from Theorem 3.4 on p. 317 in [@Doob]. We begin by decomposing $[N]$ into a family of subintervals according to a concept of mass defined with respect to the $a_i$ values. We define the *mass* of a subinterval $I \subseteq [N]$ as $M(I) := \sum_{n \in I} a_{n}^2$. By normalization, we may assume that $M([N])=1$. We define $I_{0,1} := [N]$ and we iteratively define $I_{k,s}$, for $1\leq s\leq 2^k$, as follows. Assuming we have already defined $I_{k-1,s}$ for all $1 \leq s \leq 2^{k-1}$, we will define $I_{k,2s-1}$ and $I_{k,2s}$, which are subintervals of $I_{k-1,s}$. $I_{k,2s-1}$ begins at the left endpoint of $I_{k-1,s}$ and extends to the right as far as possible while covering strictly less than half the mass of $I_{k-1,s}$, while $I_{k,2s}$ ends at the right endpoint of $I_{k-1,s}$ and extends to the left as far as possible while covering at most half the mass of $I_{k-1,s}$. More formally, we define $I_{k,2s-1}$ as the maximal subinterval of $I_{k-1,s}$ which contains the left endpoint of $I_{k-1,s}$ and satisfies $M(I_{k,2s-1}) < \frac{1}{2} M(I_{k-1,s})$.
We also define $I_{k,2s}$ as the maximal subinterval of $I_{k-1,s}$ which contains the right endpoint of $I_{k-1,s}$ and satisfies $M(I_{k,2s}) \leq \frac{1}{2} M(I_{k-1,s})$. We note that these subintervals are disjoint. We may express $I_{k-1,s} = I_{k,2s-1} \cup \{i_{k,s}\} \cup I_{k,2s}$, where $i_{k,s} \in I_{k-1,s}$. In other words, $i_{k,s}$ denotes the single element which lies between $I_{k,2s-1}$ and $I_{k,2s}$ (note that such a point always exists because we have required that $I_{k,2s-1}$ contains strictly less than half of the mass of the interval). Here it is acceptable, and in many instances necessary, for some choices of the intervals in this decomposition to be empty. By construction we have that $$\label{eq:mass} M(I_{k,s}) \leq 2^{-k}.$$ We call an interval $J \subseteq [N]$ admissible if it is an element of the decomposition given above. We denote the collection of admissible intervals by $\mathcal{A}$. We additionally refer to the subset $\{I_{k,s}\mid 1\leq s\leq 2^k\}$ of $\mathcal{A}$ as the admissible intervals on level $k$ and the subset $\{i_{k,s} \mid 1 \leq s \leq 2^{k-1}\}$ as the admissible points on level $k$. We note that every point in $[N]$ is an admissible point on some level. (Eventually, we have subdivided all intervals down to being single elements.) We consider an arbitrary interval $J \subseteq [N]$. We would like to approximate $J$ by an admissible interval $\tilde{J}$ such that $J \subseteq \tilde{J}$ and $M(\tilde{J}) \leq c M(J)$, for some constant $c$. This may be impossible, however, since $J$ could span the boundary between adjacent admissible intervals for all comparable masses. To address this, we will instead approximate $J$ by the union of two admissible intervals and one point. \[lem:decomposition\] For every $J \subseteq [N]$, ($J \neq \emptyset$) there exist $\tilde{J}_\ell, \tilde{J}_r \in \mathcal{A}$ and $i_J \in [N]$ such that $\tilde{J}:= \tilde{J}_{\ell} \cup i_J \cup \tilde{J}_r$ is an interval (i.e.
$\tilde{J}_\ell, i_J, \tilde{J}_r$ are adjacent), $J \subseteq \tilde{J}$, and $M(\tilde{J}) \leq 2M(J)$. We consider the minimal value $k$ such that $J$ contains an admissible point on level $k$. We note that this point is unique, and we define $i_J$ to be equal to it. To see why a unique such point exists, first note that if $J$ contained at least two admissible points on level $k$, then it would also contain an admissible point between them on level $k-1$. Now we consider the subinterval $J_\ell$ consisting of elements of $J$ that lie to the left of $i_J$. Since the rightmost endpoint of this subinterval is at the rightmost endpoint of an admissible interval on level $k$, it is also a rightmost endpoint of some admissible interval on every level $> k$. We define $\tilde{J}_\ell$ to be the admissible interval with this right endpoint on the highest level $k_\ell$ such that $J_\ell \subseteq \tilde{J}_\ell$. We note that the admissible interval with this right endpoint on level $k$ contains $J_\ell$, so such an interval $\tilde{J}_\ell$ must exist, and $k_\ell \geq k$. We claim that $M(\tilde{J}_\ell) \leq 2M(J_\ell)$. To prove this, we consider the admissible interval $\tilde{J}'$ on level $k_\ell+1$ with this same right endpoint. By maximality of $k_\ell$, we must have that $J_\ell \nsubseteq \tilde{J}'$. This implies that $J_\ell$ must contain the admissible point on level $k_\ell+1$ that occurs when $\tilde{J}_\ell$ is decomposed. Therefore, $M(J_\ell) \geq \frac{1}{2} M(\tilde{J}_\ell)$. We define the subinterval $J_r$ consisting of elements of $J$ that lie to the right of $i_J$, and we can similarly find an admissible $\tilde{J}_r$ such that $J_r \subseteq \tilde{J}_r$ and $M(\tilde{J}_r) \leq 2M(J_r)$.
We then have $J \subseteq \tilde{J}:= \tilde{J}_\ell \cup i_J \cup \tilde{J}_r$ and $M(\tilde{J})\leq 2M(J)$ follows from: $$M(\tilde{J}) = M(\tilde{J}_\ell) + M(i_J) + M(\tilde{J}_r) \leq 2(M(J_\ell) + M(i_J) + M(J_r)) = 2M(J).$$ Defining $\tilde{J}_\ell$, $\tilde{J}_r$, and $i_J$ with respect to $J$ as in the lemma, we observe that: $$\label{SquareDecomp} |S_{J}|^2 \ll |\tilde{S}_{\tilde{J}_{\ell}}|^2 + |\tilde{S}_{\tilde{J}_{r}}|^2 + |S_{i_{J}}|^2.$$ Here, $|\tilde{S}_{\tilde{J}}|$ is the maximal partial sum over all subintervals contained in $\tilde{J}$. Also, if $\mathcal{P}$ is a partition of $[N]$, then the admissible intervals and points ($\tilde{J}_{\ell}$, $\tilde{J}_{r}$, and $i_{J}$) associated to an element $J$ of the partition will only reoccur for a bounded number of elements of the partition (i.e. a particular admissible interval/point will only appear among $\tilde{J}_\ell, \tilde{J}_r, i_J$ for a constant number of $J \in \mathcal{P}$). This is because the $J$’s in $\mathcal{P}$ are disjoint, so $i_J \in J$ for only one $J \in \mathcal{P}$, and $M(J\cap \tilde{J}_\ell) \geq \frac{1}{2} M(\tilde{J}_\ell)$ implies $\tilde{J}_\ell$ can appear for at most two $J$’s in $\mathcal{P}$. Now we will prove Theorem \[varRad\]. We let $\Omega$ denote the probability space for $X_1, \ldots, X_N$ (each $\omega$ in $\Omega$ is associated to a sequence of $N$ real numbers). For each $\omega \in \Omega$, we let $\mathcal{P}_{\omega}$ denote a maximizing partition. We define $\mathcal{P}_{\omega,\ell}$ (resp. $\mathcal{P}_{\omega,r}$) to be the set of $\tilde{J}_{\ell}$ (resp. $\tilde{J}_{r}$) associated to $J \in \mathcal{P}_{\omega}$. We note that the same interval could appear as $\tilde{J}_\ell$ or $\tilde{J}_r$ for up to two different $J$’s in $\mathcal{P}_\omega$. We fix a large constant $B$ which will be specified later.
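The recursive mass decomposition and the bound (\[eq:mass\]) can be sketched numerically as follows. This is an illustrative sketch with our own helper names; the list `mass` plays the role of the values $a_n^2$, normalized so that $M([N]) = 1$.

```python
import random


def split(mass, lo, hi):
    """Split the interval [lo, hi) into (left child, admissible point,
    right child) per the mass rule: the left child is the maximal prefix
    with strictly less than half the mass, the element after it is the
    admissible point, and the rest is the right child (whose mass is
    then automatically at most half)."""
    half = sum(mass[lo:hi]) / 2
    i, s = lo, 0.0
    while i < hi - 1 and s + mass[i] < half:
        s += mass[i]
        i += 1
    return (lo, i), i, (i + 1, hi)


def check_masses(mass, lo=0, hi=None, level=0):
    """Recursively verify M(I_{k,s}) <= 2^{-k} for every admissible
    interval (with a small float tolerance)."""
    if hi is None:
        hi = len(mass)
    if hi <= lo:
        return
    assert sum(mass[lo:hi]) <= 2.0 ** (-level) + 1e-12
    if hi - lo >= 2:
        (a, b), _, (c, d) = split(mass, lo, hi)
        check_masses(mass, a, b, level + 1)
        check_masses(mass, c, d, level + 1)


random.seed(1)
w = [random.random() for _ in range(200)]
total = sum(w)
check_masses([x / total for x in w])
```

Note that when the last element alone carries more than half the mass, the left child absorbs everything before it and the right child is empty, matching the remark that empty intervals are sometimes necessary.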
Now we split each set $\mathcal{P}_{\omega,\text{side}}$ (here $\text{side} \in \{\ell,r\}$) into two disjoint subsets $\mathcal{P}_{\omega,\text{side}}^{\text{good}}$ and $\mathcal{P}_{\omega,\text{side}}^{\text{bad}}$. We define $\mathcal{P}_{\omega,\text{side}}^{\text{good}}$ to be the set of $\tilde{J} \in \mathcal{P}_{\omega,\text{side}}$ such that $$\left|\tilde{S}_{\tilde{J}} \right|^2 \leq B M(\tilde{J}) \ln \ln (N).$$ We then define $\mathcal{P}_{\omega,\text{side}}^{\text{bad}}$ to be the complement of $\mathcal{P}_{\omega,\text{side}}^{\text{good}}$ inside $\mathcal{P}_{\omega,\text{side}}$. Our objective is to prove the estimate $$\mathbb{E} \left[ \sum_{J \in \mathcal{P}_{\omega}} |S_{J}|^2 \right] \ll \ln \ln (N) .$$ Using (\[SquareDecomp\]), we upper bound the left side as follows: $$\mathbb{E} \left[ \sum_{J \in \mathcal{P}_{\omega}} |S_{J}|^2 \right] \ll \mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\ell}^{\text{good}} } |\tilde{S}_{\tilde{J}}|^2 \right] + \mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,r}^{\text{good}} } |\tilde{S}_{\tilde{J}}|^2 \right]$$ $$+ \mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\ell}^{\text{bad}} } |\tilde{S}_{\tilde{J}}|^2 \right] + \mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,r}^{\text{bad}} } |\tilde{S}_{\tilde{J}}|^2 \right] + \mathbb{E} \left[ \sum_{J \in \mathcal{P}_{\omega}} |S_{i_J}|^2 \right].$$ We observe that $ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}^{\text{good}}} |\tilde{S}_{\tilde{J}}|^2 \ll \left( \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}} M(\tilde{J}) \right) \ln\ln (N) \ll \ln\ln (N) $. This holds because $\sum_{J \in \mathcal{P}} M(J) = 1$, and the total mass of the intervals $\tilde{J}_\ell, \tilde{J}_r, i_J$ used to cover each $J$ is at most $2M(J)$, thus $\sum_{\tilde{J} \in \mathcal{P}_{\omega, \text{side}}} M(\tilde{J}) \leq 2$. This shows that the terms involving the good admissible intervals are easily controlled.
The last term is also easily controlled as follows $$\mathbb{E} \left[ \sum_{J \in \mathcal{P}_{\omega}} |S_{i_J}|^2 \right] \ll \mathbb{E} \left[ \sum_{n \in [N]} |a_n X_{n}|^2 \right] \ll 1.$$ It remains to control the terms involving the bad admissible intervals. The argument is essentially the same for both the sums over $\mathcal{P}_{\omega,\ell}^{\text{bad}}$ and $\mathcal{P}_{\omega,r}^{\text{bad}}$, so we will work with the quantity $\mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad}} } |\tilde{S}_{\tilde{J}}|^2 \right]$ in what follows. We now partition $\mathcal{P}_{\omega,\text{side}}^{\text{bad}}$ into two disjoint sets $\mathcal{P}_{\omega,\text{side}}^{\text{bad},1}$ and $\mathcal{P}_{\omega,\text{side}}^{\text{bad},2}$. The set $\mathcal{P}_{\omega,\text{side}}^{\text{bad},1}$ consists of intervals $I_{k,s} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad}}$ such that $|I_{k,s}| \leq 2^{-k/2} N$ and $\mathcal{P}_{\omega,\text{side}}^{\text{bad},2}$ contains the complement set. For each $k$, we define $T_{k} \subseteq \{I_{k,s} : 1\leq s \leq 2^{k}\} $ as the collection of all intervals $I_{k,s}$ satisfying $|I_{k,s}| \geq 2^{-k/2} N$. Clearly, $|T_{k}| \leq 2^{k/2}$ for each $k$.
We then have: $$\mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad},2} } |\tilde{S}_{\tilde{J}}|^2 \right] \ll \mathbb{E} \left[ \sum_{k=1}^{\infty} \sum_{\tilde{J} \in T_{k}} |\tilde{S}_{\tilde{J}}|^2 \right] = \sum_{k=1}^{\infty} \sum_{\tilde{J} \in T_{k}} \mathbb{E} \left[ |\tilde{S}_{\tilde{J}}|^2 \right].$$ Using (\[eq:mass\]) and the fact that $ \mathbb{E} \left[ |\tilde{S}_{\tilde{J}}|^2\right] \ll \mathbb{E} \left[ |S_{\tilde{J}}|^2 \right]$ (by Lemma \[lem:Doob\]), we have $$\sum_{k=1}^{\infty} \sum_{\tilde{J} \in T_{k}} \mathbb{E} \left[ |\tilde{S}_{\tilde{J}}|^2 \right] \ll \sum_{k=1}^{\infty} \sum_{\tilde{J} \in T_{k}} \mathbb{E} \left[ |S_{\tilde{J}}|^2 \right] \ll \sum_{k=1}^{\infty} 2^{k/2} 2^{-k} \ll 1.$$ It now suffices to bound the more difficult term $\mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad},1} } |\tilde{S}_{\tilde{J}}|^2 \right]$. Now $|I_{k,s}| \leq 2^{-k/2}N$ if $I_{k,s} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad},1}$. For a fixed interval $J$, we let $B(J) \subseteq \Omega$ denote the event that $|\tilde{S}_{J}(\omega)|^2$ is bad. In other words, $\omega \in B(J)$ if $\left| \tilde{S}_{J}(\omega) \right|^2 \geq B M(J) \ln \ln (N)$. We let $T_{k}^{c}$ denote the complement of $T_{k}$. We now have that $$\mathbb{E} \left[ \sum_{\tilde{J} \in \mathcal{P}_{\omega,\text{side}}^{\text{bad},1} } |\tilde{S}_{\tilde{J}}|^2 \right] \ll \sum_{k=1}^{2 \ln(N) } \sum_{\tilde{J} \in T_{k}^{c}} \mathbb{E} \left[ 1_{B(\tilde{J})} |\tilde{S}_{\tilde{J}}|^2 \right] .$$ Here we have restricted the summation of $k$ to the range $1\leq k \leq 2 \ln(N)$ using the fact that $1 \leq |I_{k,s}| \leq 2^{-k/2}N$ implies $k \leq 2 \ln (N)$. We let $\gamma > 0$ denote a positive value to be specified later.
Letting $2p:=2+\gamma$ and applying Lemma \[lem:Rosenthal\] (Rosenthal’s inequality) we have that $$\left(\mathbb{E} \left[ | S_{\tilde{J}} |^{2p} \right] \right)^{1/p} =\left(\mathbb{E}\left[ \left| S_{\tilde{J}} \right|^{2+\gamma} \right]\right)^{\frac{2}{2+\gamma} } \ll \left( \mathbb{E}\left[ \left| \sum_{n\in \tilde{J}} a_n X_{n} \right|^{2+\gamma} \right]\right)^{\frac{2}{2+\gamma} }$$ $$\label{holder1} \ll \max \left\{ \left(\sum_{n\in \tilde{J}} |a_n|^{2+\gamma} \mathbb{E}\left[ |X_{n}| ^{2+\gamma}\right]\right)^{\frac{2}{2+\gamma}} , \left( \sum_{n\in \tilde{J}} |a_n|^2\right)\right\}\ll M(\tilde{J}).$$ The last inequality follows from the fact that the $\ell^{2}$ norm is greater than the $\ell^{2+\gamma}$ norm and $\mathbb{E}\left[|X_n|^{2+\gamma}\right] \leq C^{2+\gamma}$. We let $s:= |\tilde{J}|$, and we let $S_{\tilde{J}, \ell}$ denote the sum of $a_iX_i$ for the first $\ell$ indices $i$ in $\tilde{J}$. By definition of the event $B(\tilde{J})$, and since $|\tilde{S}_{\tilde{J}}| \leq 2 \max_{1\leq \ell \leq s} |S_{\tilde{J},\ell}|$, we have: $$\mathbb{E} \left[ 1_{B(\tilde{J})}\right] = \mathbb{P}\left[\left| \tilde{S}_{\tilde{J}} \right|^2 \geq B M(\tilde{J}) \ln \ln (N) \right] \leq \mathbb{P}\left[\max_{1\leq \ell \leq s} \left| S_{\tilde{J},\ell}\right|^2 \geq \frac{B}{4} M(\tilde{J}) \ln \ln (N)\right].$$ By Lemma \[lem:etemadi\], this is $$\ll \max_{1 \leq \ell \leq s} \mathbb{P}\left[ \left| S_{\tilde{J},\ell} \right|^2 \geq \frac{B}{36} M(\tilde{J})\ln \ln (N)\right].$$ By Lemma \[lem:Hoeffding\], this is: $$\ll \exp\left( - \frac{ B M(\tilde{J}) \ln \ln (N) }{ 72 C^2 M(\tilde{J}) } \right) = \exp \left( - \frac{B \ln \ln (N)}{72 C^2}\right).$$ By setting the value of $B$ to be sufficiently large with respect to the constant $C$ (i.e. $B > 288 C^2$), we have: $$\mathbb{E} \left[ 1_{B(\tilde{J})}\right]\ll \ln^{-4}(N).$$ We now define $q$ as a function of $p$ so that $\frac{1}{p} + \frac{1}{q} = 1$, i.e. $q = \frac{p}{p-1}$.
We then set $\gamma$ such that $$\label{holder2} \left( \mathbb{E} \left[ 1_{B(\tilde{J})} \right] \right)^{1/q} \ll \ln^{-2}(N)$$ for all $\tilde{J}$. (Recall that $p:= \frac{2+\gamma}{2}$. Since $\mathbb{E}\left[1_{B(\tilde{J})}\right] \ll \ln^{-4}(N)$, any $\gamma \geq 2$, which gives $q \leq 2$, suffices.) We now apply Hölder’s inequality with $p$ and $q$ to obtain: $$\sum_{k=1}^{2\ln (N) } \sum_{\tilde{J} \in T_k^c} \mathbb{E} \left[ 1_{B(\tilde{J})}\left|\tilde{S}_{\tilde{J}}\right|^2\right] \leq \sum_{k=1}^{2 \ln (N)} \sum_{\tilde{J} \in T_k^c} \left( \mathbb{E} \left[ \left| 1_{B(\tilde{J})}\right|^q \right] \right)^{\frac{1}{q}} \left( \mathbb{E} \left[ \left| \tilde{S}_{\tilde{J}}\right|^{2p}\right]\right)^{\frac{1}{p}}.$$ Using (\[holder1\]), (\[holder2\]) and Lemma \[lem:Doob\], we see this is: $$\ll \sum_{k=1}^{2\ln (N)} \sum_{\tilde{J} \in T_k^c} \ln^{-2}(N) M(\tilde{J}) \ll \sum_{k=1}^{2 \ln (N)} \ln^{-2}(N) \ll \frac{1}{\ln (N)}.$$ This completes the proof. Random Permutations =================== In this section, we will use probabilistic techniques to prove the following theorem: Let $\{ \phi_n \}_{n=1}^N$ be an orthonormal system such that $|\phi_n(x)| =1$ for all $n$ and all $x \in \T$, and $\{ a_n \}_{n=1}^N$ a choice of (complex) coefficients. Then there exists a permutation $\pi:[N] \rightarrow [N]$ such that $$\left|\left|\{ a_{\pi(n)}\phi_{\pi(n)}\}_{n=1}^N\right|\right|_{L^2(V^2)} \ll \sqrt{\ln\ln(N)} \left(\sum_{n=1}^{N} |a_n|^2 \right)^{1/2}.$$ We assume without loss of generality that $\sum_{n=1}^N |a_n|^2 = 1$. Then, for each $a_n$, there exists some non-negative integer $j$ such that $2^{-j-1} < |a_n|^2 \leq 2^{-j}$. For each fixed $j$, we let $A_j$ denote the set of $n \in [N]$ such that $2^{-j-1} < |a_n|^2 \leq 2^{-j}$. We define $A^* \subseteq [N]$ as $A^*:= \bigcup_{j=\lceil 2\ln N \rceil}^\infty A_j$.
We also define $$b_n = \left\{ \begin{array}{ll} a_n, & \hbox{$n \in A^*$} \\ 0, & \hbox{$n \notin A^*$.} \end{array} \right.$$ We then observe, for any permutation $\pi:[N]\rightarrow [N]$ and any $x \in \T$, $$\left|\left| \{b_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^N\right|\right|_{V^2} \ll \sum_{n=1}^N \left| b_n \phi_n(x)\right| \ll \frac{1}{N}\cdot N \ll 1.$$ Applying the triangle inequality for the $||\cdot ||_{V^2}$ norm, this allows us to ignore the contribution of all terms $a_n$ where $n \in A^*$. We consider the class of permutations $\pi: [N] \rightarrow [N]$ such that $\pi^{-1}(A_j)$ is an interval for each $j$. In other words, these are permutations which group the elements of each $A_j$ together. We allow arbitrary orderings within each group and an arbitrary ordering of the groups. For a fixed permutation $\pi$, we let $B_j$ denote the preimage of $A_j$ under $\pi$ (so $B_j$ is an interval). We will refer to the intervals $B_j$ as “blocks". From this point onward, we will only consider permutations belonging to this class, and we will only consider the contribution of terms for $A_1$ up to $A_{\lfloor 2 \ln (N) \rfloor}$. We let $N' := |A_1| + \cdots + |A_{\lfloor 2 \ln(N) \rfloor}|$. For notational convenience, we assume that $\pi$ maps $[N']$ bijectively to $\bigcup_{j=1}^{\lfloor 2\ln (N)\rfloor} A_j$. (This is without loss of generality, since we have seen that we can treat the set $A^*$ separately.) For each fixed permutation $\pi:[N]\rightarrow [N]$ in this class and each fixed $x \in \T$, we consider the quantity $$\label{allpartitions} \left| \left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2} = \sum_{I \in \mathcal{P}} \left|\sum_{n \in I} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2,$$ where $\mathcal{P}$ denotes the maximizing partition of $[N']$. We now define two additional operators, $V^2_L$ and $V^2_S$.
The value of $\left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_L}$ is defined as $$\left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'} \right| \right|^2_{V^2_L} := \sum_{I \in \mathcal{P}_L} \left|\sum_{n\in I} a_{\pi(n)} \phi_{\pi(n)}(x) \right|^2,$$ where $\mathcal{P}_L$ is the maximizing partition among the subset of partitions of $[N']$ that use only intervals which are unions of the $B_j$’s. The value of $\left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_S}$ is defined as $$\left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_S} := \sum_{I \in \mathcal{P}_S} \left|\sum_{n\in I} a_{\pi(n)} \phi_{\pi(n)}(x) \right|^2,$$ where $\mathcal{P}_S$ is the maximizing partition among the subset of partitions of $[N']$ that use only intervals $I$ that are contained in some $B_j$. This can be alternatively described as taking the maximizing partition of each $B_j$ and then taking the union of these to form $\mathcal{P}_S$. We now claim: $$\label{longshort} \left| \left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2} \ll \left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_L} + \left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_S}.$$ To see this, consider the maximizing partition $\mathcal{P}$ in (\[allpartitions\]). Each $I \in \mathcal{P}$ can be expressed as the union of three disjoint intervals, $I_{S_\ell}$, $I_L$, and $I_{S_r}$, where $I_{S_\ell}$ and $I_{S_r}$ are each contained in some $B_i$, and $I_L$ is a union of $B_i$’s. More precisely, $I_L$ is the union of all the intervals $B_j$ that are contained in $I$, $I_{S_\ell}$ goes from the left endpoint of $I$ until the left endpoint of $I_L$, and $I_{S_r}$ goes from the right endpoint of $I_L$ until the right endpoint of $I$. By construction, each of $I_{S_\ell}$ and $I_{S_r}$ is contained in some $B_j$. (Some of $I_L, I_{S_r}, I_{S_\ell}$ may be empty.)
Thus, $$\left|\sum_{n \in I} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2 \ll \left| \sum_{n\in I_L} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2 + \left|\sum_{n \in I_{S_\ell}} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2 + \left|\sum_{n \in I_{S_r}} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2.$$ Now, if we consider the set of intervals $I_L$ corresponding to $I \in \mathcal{P}$, we get a disjoint set of intervals that can occur as part of a partition considered by the operator $V^2_L$. Similarly, if we consider the set of intervals $I_{S_\ell}, I_{S_r}$ corresponding to $I \in \mathcal{P}$, we get a disjoint set of intervals that can occur as part of a partition considered by the operator $V^2_S$. Therefore, $$\sum_{I \in \mathcal{P}} \left|\sum_{n \in I} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2 \ll \sum_{I \in \mathcal{P}_L} \left|\sum_{n \in I} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2 + \sum_{I \in \mathcal{P}_S} \left|\sum_{n \in I} a_{\pi(n)} \phi_{\pi(n)}(x)\right|^2.$$ The inequality (\[longshort\]) then follows. We first bound the contribution of the $V^2_L$ operator. For each $B_j$, we define the function $f_j: \T \rightarrow \mathbb{C}$ as: $$\label{blockfunction} f_j(x):= \sum_{n \in B_j} a_{\pi(n)} \phi_{\pi(n)}(x).$$ Since the sets $B_j$ are disjoint, we note that the functions $f_j$ are orthogonal to each other, but they may not be uniformly bounded. We need to show that there exists a permutation $\sigma: [\lfloor 2 \ln (N) \rfloor]\rightarrow [\lfloor 2\ln(N) \rfloor]$ of the $f_j$ values such that $$\label{blockgoal} \left|\left| \{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N)\rfloor}\right|\right|_{L^2(V^2)} \ll \sqrt{\ln \ln(N)} \left( \sum_{n=1}^N |a_n|^2\right)^{1/2}.$$ This would imply that there is some ordering of the blocks for which the contribution of the $V^2_L$ operator is suitably bounded. To show (\[blockgoal\]), we will use the following inequality of Garsia for real numbers: \[lem:garsia\](See Theorem 3.6.15 in [@Garsia].) Let $x_1, \ldots, x_M \in \mathbb{R}$. 
We consider choosing a permutation $\psi$ of $[M]$ uniformly at random. Then: $$\mathbb{E}\left[ \max_{1 \leq k \leq M} \left( x_{\psi(1)} + \cdots + x_{\psi(k)}\right)^2\right] \ll \left( \sum_{k=1}^M x_k\right)^2 + \sum_{k=1}^M x_k^2.$$ We derive the following corollary: \[cor:garsia\] Let $x_1, \ldots, x_M \in \mathbb{R}$. Let $L$ be a positive integer, $1 \leq L \leq M$. Let $\mathcal{P}$ denote the partition of $[M]$ into intervals of size $L$ (starting with $[L]$), except that the last interval may be of smaller size (when $L$ does not divide $M$). We consider choosing a permutation $\psi$ of $[M]$ uniformly at random. Then: $$\mathbb{E} \left[ \sum_{I \in \mathcal{P}} \max_{I' \subseteq I} \left( \sum_{j\in I'} x_{\psi(j)}\right)^2 \right] \ll {M-1 \choose L-1}^{-1} \sum_{\stackrel{S\subseteq [M]}{|S|=L}} \left(\left(\sum_{j \in S} x_j\right)^2+\sum_{j \in S} x_j^2\right).$$ We note here that $S$ ranges over all subsets of $[M]$ of size $L$. By linearity of expectation, we first observe: $$\mathbb{E} \left[ \sum_{I \in \mathcal{P}} \max_{I' \subseteq I} \left( \sum_{j\in I'} x_{\psi(j)}\right)^2 \right] = \sum_{I \in \mathcal{P}} \mathbb{E} \left[ \max_{I' \subseteq I} \left(\sum_{j \in I'} x_{\psi(j)}\right)^2\right].$$ This quantity is then $$\ll \frac{M}{L} \; \mathbb{E}\left[ \max_{I' \subset I} \left(\sum_{j \in I'} x_{\psi(j)}\right)^2\right],$$ where $I$ is any fixed interval of size $L$ (without loss of generality, we may take $I$ to be $[L]$). For any subset $S \subseteq [M]$ of size $L$, the probability that $\psi$ maps $I$ to $S$ is ${M \choose L}^{-1}$. Conditioned on this event, the action of $\psi$ on $I$ acts as a random permutation of the values $x_j$ for $j \in S$. Applying Lemma \[lem:garsia\], we then have that the expectation (still conditioned on $\psi$ mapping $I$ to $S$) is $\ll \left(\sum_{j \in S} x_j\right)^2 + \sum_{j \in S} x_j^2$.
(Note that the maximum over all subintervals $I'$ of $I$ is bounded by a constant times the maximum over subintervals starting at the left endpoint of $I$, as in the lemma.) Thus, $$\mathbb{E}\left[ \max_{I' \subseteq I} \left(\sum_{j \in I'} x_{\psi(j)}\right)^2\right] \ll {M \choose L}^{-1} \sum_{\stackrel{S \subseteq [M]}{|S|=L}} \left( \left( \sum_{j \in S} x_j\right)^2 + \sum_{j \in S} x_j^2\right).$$ Since $\frac{M}{L} {M \choose L}^{-1} = {M-1 \choose L-1}^{-1}$, the corollary follows. We now decompose $[\lfloor 2\ln (N) \rfloor]$ into a family of dyadic intervals. More precisely, we consider all dyadic intervals of the form $$((c-1)2^\ell, c2^\ell], \; \ell \in \{0,1, \ldots, \lceil \ln (2\ln N) \rceil\}, \; c \in \left\{1, \ldots, 2^{\lceil \ln \ln (N)+\ln 2 \rceil -\ell}\right\}$$ (Some of these intervals may go beyond $M := \lfloor 2\ln (N) \rfloor$. For these, we consider their intersection with $[M]$.) The exponent $\ell$ of an interval here defines its ``level''. In other words, we say an interval $((c-1)2^\ell, c2^\ell]$ is on level $\ell$. We let $\mathcal{F}$ denote the set of all intervals of this form. We then have that for *any* interval $I' \subseteq [M]$, there are (at most) two adjacent intervals $I_l, I_r \in \mathcal{F}$ such that $I' \subseteq I_l \cup I_r$, and $|I_l \cup I_r|\leq 4 |I'|$ (when only one interval is needed, one of $I_l, I_r$ can be substituted by $\emptyset$). To see this, consider the smallest positive integer $k$ such that $|I'| < 2^k$. Then either $I'$ is contained in some dyadic interval of length $2^k$, or it contains exactly one right endpoint of such an interval. We then take $I_l$ to be the interval on level $k$ with this right endpoint, and take $I_r$ to be the next interval (with this as its open left endpoint).
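As an aside, the covering construction just described is concrete enough to check mechanically. The sketch below is illustrative only (not part of the argument): intervals are encoded as half-open integer pairs $(a,b]$, and the two claimed properties, coverage by at most two adjacent dyadic intervals and $|I_l \cup I_r| \leq 4|I'|$, are verified for every subinterval of a small example range.

```python
def dyadic_cover(a, b):
    # Cover I' = (a, b] by at most two adjacent dyadic intervals
    # ((c-1)*2^k, c*2^k], where k is minimal with |I'| < 2^k.
    length = b - a
    k = length.bit_length()          # smallest k with length < 2^k
    size = 2 ** k
    c_lo = -((a + 1) // -size)       # ceil((a+1)/size): block holding the first point
    c_hi = -(b // -size)             # ceil(b/size): block holding the last point
    return ((c_lo - 1) * size, c_lo * size), ((c_hi - 1) * size, c_hi * size)

# Check the two claimed properties on every subinterval of [40].
for a in range(40):
    for b in range(a + 1, 41):
        (l0, l1), (r0, r1) = dyadic_cover(a, b)
        cover = set(range(l0 + 1, l1 + 1)) | set(range(r0 + 1, r1 + 1))
        assert set(range(a + 1, b + 1)) <= cover      # I' is covered
        assert len(cover) <= 4 * (b - a)              # |I_l U I_r| <= 4 |I'|
        assert (l0, l1) == (r0, r1) or l1 == r0       # one interval, or two adjacent
```

Since $2^{k-1} \leq |I'| < 2^k$, the union of the two level-$k$ blocks has length at most $2^{k+1} \leq 4|I'|$, which is what the last two assertions confirm.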
This implies the following upper bound for each permutation $\sigma$ and each $x \in \T$: $$\label{dyadicsum} \left|\left| \{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N)\rfloor} \right|\right|^2_{V^2} \ll \sum_{I \in \mathcal{F}} \max_{I' \subseteq I} \left| \sum_{j \in I'} f_{\sigma(j)}(x)\right|^2.$$ This holds because for each interval $J$ in the maximizing partition, $J \subseteq I_l \cup I_r$ for some $I_l, I_r \in \mathcal{F}$ with $|I_l \cup I_r| \leq 4|J|$. Each $I \in \mathcal{F}$ will correspond to at most a constant number of $J$’s (it can only be $I_l$ for one $J$ when $I_r$ is non-empty, $I_r$ for one $J$ when $I_l$ is non-empty, and it can contain at most 3 corresponding $J$’s), and this constant factor is absorbed by the $\ll$ notation. We consider choosing $\sigma$ uniformly at random. We observe by Fubini’s theorem: $$\mathbb{E} \left[ \int_{\T} \left|\left|\{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2} dx\right] = \int_{\T} \mathbb{E}\left[\left|\left|\{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2}\right] dx.$$ Using the triangle inequality for the $\left|\left| \cdot \right|\right|_{V^2}$ norm and linearity of expectation, we can split each $f_j(x)$ into real and imaginary parts, $f_j(x) = f^r_j(x) + i f^i_j(x)$, where $f^r_j$ and $f^i_j$ are both real valued. We then have that the above quantity is: $$\ll \int_{\T} \mathbb{E}\left[\left|\left|\{f^r_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2}\right] dx + \int_{\T} \mathbb{E}\left[\left|\left|\{f^i_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2}\right] dx.$$ For each $\ell$ from 0 to $\lceil \ln (2 \ln N)\rceil$, we let $\mathcal{F}_\ell$ denote the intervals in $\mathcal{F}$ on level $\ell$. On each level, these intervals are disjoint.
Applying (\[dyadicsum\]) to the quantity above for $f^r$ (the argument for $f^i$ is identical), we can express the result as: $$\int_{\T} \mathbb{E}\left[\left|\left|\{f^r_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2}\right] dx \ll \int_{\T} \mathbb{E} \left[ \sum_{\ell=0}^{\lceil \ln (2\ln N) \rceil} \sum_{I \in \mathcal{F}_\ell} \max_{I' \subseteq I} \left| \sum_{j \in I'} f^r_{\sigma(j)}(x)\right|^2\right] dx.$$ By linearity of expectation, this is: $$= \int_{\T} \sum_{\ell=0}^{\lceil \ln (2 \ln N) \rceil} \mathbb{E}\left[\sum_{I \in \mathcal{F}_\ell} \max_{I' \subseteq I} \left| \sum_{j \in I'} f^r_{\sigma(j)}(x)\right|^2\right] dx.$$ Now, for each $\ell$, we apply Corollary \[cor:garsia\] to the dyadic intervals on level $\ell$. As a result, we see that the above quantity is $$\label{aftercor} \ll \sum_{\ell=0}^{\lceil \ln (2 \ln N) \rceil} {\lfloor 2\ln(N)\rfloor -1 \choose 2^{\ell}-1}^{-1} \sum_{\stackrel{S \subseteq [\lfloor 2\ln(N)\rfloor]}{|S| = 2^\ell}} \left( \int_{\T} \left(\sum_{j \in S} f^r_j(x)\right)^2 dx + \sum_{j \in S} \int_{\T} f^r_j(x)^2 dx\right).$$ Combining this with the same result for the imaginary parts, we have: $$\int_{\T} \mathbb{E}\left[\left|\left|\{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2}\right] dx \ll \sum_{\ell=0}^{\lceil \ln (2 \ln N) \rceil} {\lfloor 2\ln(N)\rfloor -1 \choose 2^{\ell}-1}^{-1} \times$$ $$\label{backtogether} \sum_{\stackrel{S \subseteq [\lfloor 2\ln(N)\rfloor]}{|S| = 2^\ell}} \left( \int_{\T} \left(\sum_{j \in S} f^r_j(x)\right)^2 + \left(\sum_{j\in S} f^i_j(x)\right)^2dx + \sum_{j \in S} \int_{\T} f^r_j(x)^2 + f^i_j(x)^2 dx\right)$$ We consider the quantity $$\int_{\T} \left(\sum_{j \in S} f^r_j(x)\right)^2 + \left(\sum_{j\in S} f^i_j(x)\right)^2dx = \int_{\T} \sum_{j, j' \in S} f^r_j(x) f^r_{j'}(x) + f^i_{j}(x)f^i_{j'}(x) dx.$$ When $j \neq j'$, $$\int_{\T} f^r_j(x)f^r_{j'}(x) + f^i_j(x)f^i_{j'}(x) dx = 0,$$ since $f_j$ and $f_{j'}$ are orthogonal, and 
this is the real part of $\int_{\T} f_j(x)\overline{f_{j'}(x)} dx$. Thus, $$\int_{\T} \left(\sum_{j \in S} f^r_j(x)\right)^2 + \left(\sum_{j\in S} f^i_j(x)\right)^2dx \ll \sum_{j \in S} \int_{\T} f^r_j(x)^2 + f^i_j(x)^2 dx.$$ We then have: $$\mathbb{E} \left[ \int_{\T} \left|\left|\{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2} dx\right] \ll \sum_{\ell=0}^{\lceil \ln (2 \ln N) \rceil} {\lfloor 2\ln(N)\rfloor -1 \choose 2^{\ell}-1}^{-1} \sum_{\stackrel{S \subseteq [\lfloor 2\ln(N)\rfloor]}{|S| = 2^\ell}} \sum_{j \in S} \int_{\T} |f_j(x)|^2 dx.$$ By Parseval’s identity, $\int_{\T} |f_j(x)|^2 dx = \sum_{n \in B_j} |a_n|^2$. Since each $j$ occurs in exactly ${\lfloor 2\ln(N)\rfloor -1 \choose 2^{\ell}-1}$ sets of size $2^{\ell}$ for each $\ell$, the above quantity is: $$\ll \ln \ln (N) \sum_{n=1}^N |a_n|^2.$$ This implies that there exists some permutation $\sigma$ such that $$\int_{\T} \left|\left|\{f_{\sigma(j)}(x)\}_{j=1}^{\lfloor 2\ln(N) \rfloor} \right|\right|^2_{V^2} dx \ll \ln \ln (N) \sum_{n=1}^N |a_n|^2.$$ Taking the square root of both sides establishes (\[blockgoal\]), as desired. This concludes our analysis of the $V^2_L$ operator. We now bound the contribution of the $V^2_S$ operator. \[lem:shortop\] For some $\pi$ in our class of permutations, $$\int_{\T} \left|\left| \{a_{\pi(n)} \phi_{\pi(n)}(x)\}_{n=1}^{N'}\right|\right|^2_{V^2_S} dx \ll \ln \ln (N) \sum_{n=1}^N |a_n|^2.$$ We first observe that it suffices to prove the following inequality for each $A_j$. We let $\Pi_j$ denote the set of permutations of $A_j$, i.e. each $\pi_j \in \Pi_j$ is a bijective map from $[|A_j|] \rightarrow A_j$. We consider choosing such a permutation uniformly at random.
Then if we have $$\label{expect} \mathop{\mathbb{E}}_{\pi_j \in \Pi_j} \left[ \int_{\T} \left|\left| \{ a_{\pi_j(n)} \phi_{\pi_j(n)}(x)\}_{n=1}^{|A_j|}\right|\right|^2_{V^2} dx\right] \ll \ln \ln (N) \sum_{n \in A_j} |a_n|^2$$ for each $j$, this means that there exists a permutation $\pi_j$ of each $A_j$ satisfying $$\int_{\T} \left|\left| \{ a_{\pi_j(n)} \phi_{\pi_j(n)}(x)\}_{n=1}^{|A_j|}\right|\right|^2_{V^2} dx \ll \ln \ln (N) \sum_{n \in A_j} |a_n|^2,$$ and these permutations can be put together to form a permutation $\pi$ as required for Lemma \[lem:shortop\]. We note that it does not matter how we concatenate the $\pi_j$’s: by definition of the $V^2_S$ operator, it only matters how each $A_j$ is permuted, not the order the $A_j$’s are placed in. We now fix a $j$ and we will prove (\[expect\]). By Fubini’s theorem, we can interchange the order of the integral and the expectation and instead work with the quantity $$\int_{\T} \mathop{\mathbb{E}}_{\pi_j \in \Pi_j} \left[ \left|\left| \{ a_{\pi_j(n)} \phi_{\pi_j(n)}(x)\}_{n=1}^{|A_j|}\right|\right|^2_{V^2}\right] dx.$$ For each fixed $x$, we define the set of complex numbers $\mathcal{C}$ to be the set of values $a_n \phi_n(x)$ for $n \in A_j$. Then, these complex numbers $c \in \mathcal{C}$ all satisfy $2^{-j-1} < |c|^2 \leq 2^{-j}$ (recall that $|\phi_n(x)| = 1$). We let $N_j:= |A_j|$, and we let random variables $Z_1, \ldots, Z_{N_j}$ denote random samples from $\mathcal{C}$ taken *without* replacement. We then see that it suffices to show: $$\label{woreplace} \mathbb{E}\left[ \left|\left| \{Z_n\}_{n=1}^{N_j}\right|\right|_{V^2}^2\right] \ll \ln \ln (N) \sum_{c \in \mathcal{C}} |c|^2 + \left|\sum_{c \in \mathcal{C}}c\right|^2.$$ To show this, we will need the following lemma: \[lem:complicated\] Let $X_1, \ldots, X_{N_j}$ denote uniformly random samples from $\mathcal{C}$ **with** replacement. For each $k$ from 1 to $N_j$, we let $S_k:= \sum_{i =1}^k X_i$.
For a subinterval $I \subseteq [N_j]$, we let $S_I := \sum_{i \in I} X_i$. Then for any $k$ and any $p >2$: $$\mathbb{E}\left[ \max_{I \subseteq [k]} |S_I - \mathbb{E}[S_I]|^p\right] \ll C^{p} k^{\frac{p}{2}} p^{\frac{p}{2}} 2^{-jp/2},$$ where $C$ is a positive constant. We rely on Hoeffding’s inequality [@Hoeffding], which implies that $$\label{hoeffding} \mathbb{P}\left[ \max_{I \subseteq [k]} \left|Re[S_I] - \mathbb{E}[Re[S_I]]\right| > t\right] \ll \exp\left( \frac{-ct^2}{k 2^{-j}}\right),$$ for some positive constant $c$, where $Re[S_I]$ denotes the real part of $S_I$. (More precisely, Hoeffding’s inequality is applied with the maximum over $S_m$ for $1 \leq m \leq k$. However, moving to a maximum over arbitrary subintervals only results in a change of the constant $c$.) The same holds analogously for the imaginary part of $S_I$. We note that $$\label{integral} \mathbb{E}\left[ \max_{I \subseteq [k]} \left|Re[S_I] - \mathbb{E}[Re[S_I]]\right|^p\right] = p \int_{0}^\infty t^{p-1}\mathbb{P}\left[ \max_{I \subseteq [k]} \left|Re[S_I] - \mathbb{E}[Re[S_I]]\right| > t\right] dt.$$ Applying (\[hoeffding\]), this is $$\ll p \int_{0}^\infty t^{p-1} \exp\left( \frac{-ct^2}{k 2^{-j}}\right) dt.$$ We now perform the change of variable $t = \lambda^{\frac{1}{p}}$, so $dt = \frac{1}{p} \lambda^{\frac{1}{p}-1} d\lambda$. We obtain: $$= \int_{0}^\infty \exp\left( \frac{-c \lambda^{2/p}}{k 2^{-j}}\right) d\lambda.$$ We recall that $\Gamma(z):= \int_{0}^\infty t^{z-1} e^{-t} dt$.
Performing the change of variable $t = s^{\frac{2}{p}}$, we have $$\Gamma(z):= \frac{2}{p} \int_0^\infty s^{\frac{2}{p}-1} s^{\frac{2}{p} (z-1)} e^{-s^{2/p}} ds = \frac{2}{p} \int_0^\infty s^{\frac{2}{p} z -1}e^{-s^{2/p}} ds.$$ We now see that $$\int_{0}^\infty e^{-t^{\frac{2}{p}}}dt = \frac{p}{2}\; \Gamma \left(\frac{p}{2}\right).$$ We then set $s:= \left(\frac{c}{k 2^{-j}}\right)^{p/2} \lambda$, and we have: $$\int_{0}^\infty \exp\left( \frac{-c \lambda^{2/p}}{k 2^{-j}}\right) d\lambda = \left(\frac{c}{k 2^{-j}}\right)^{-p/2} \int_{0}^\infty e^{-s^{\frac{2}{p}}} ds = \left(\frac{c}{k 2^{-j}}\right)^{-p/2} \frac{p}{2} \; \Gamma \left(\frac{p}{2}\right) .$$ This yields $$\mathbb{E}\left[ \max_{I \subseteq [k]} \left|Re[S_I] - \mathbb{E}[Re[S_I]]\right|^p\right] \ll \frac{p}{2} k^{p/2} c^{-p/2} 2^{-jp/2} \; \Gamma \left(\frac{p}{2}\right) .$$ By Stirling’s formula, $\Gamma (z) \ll \sqrt{\frac{2\pi}{z}} \left( \frac{z}{e}\right)^z$. Thus, $\Gamma \left(\frac{p}{2}\right) \ll \sqrt{\frac{4\pi}{p}}\left(\frac{p}{2e}\right)^{\frac{p}{2}}$. By arguing analogously for the imaginary parts, we obtain: $$\mathbb{E}\left[ \max_{I \subseteq [k]} |S_I - \mathbb{E}[S_I]|^p\right] \ll C^{p} k^{\frac{p}{2}} p^{\frac{p}{2}} 2^{-jp/2},$$ where $C$ is a positive constant. Using the above lemma, we estimate $\mathbb{E}\left[ \left|\left| \{Z_n\}_{n=1}^{N_j}\right|\right|_{V^2}^2\right]$ as follows. We let $N'_j = 2^{m}$ be the smallest power of $2$ which is $\geq N_j$. We then decompose $[N'_j]$ into a family of dyadic intervals. More precisely, we define $\mathcal{F}$ to be the family of intervals of the form $$((d-1)2^\ell, d2^\ell], \; \ell \in \{0, 1, \ldots, m\}, \; d \in \{1, \ldots, 2^{m-\ell}\}.$$ Now, for any interval $I'$, there are (at most) two intervals $I_l, I_r \in \mathcal{F}$ such that $I' \subseteq I_l \cup I_r$ and $|I_l \cup I_r| < 4|I'|$.
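In passing, the $\Gamma$-function identity derived above, $\int_0^\infty e^{-t^{2/p}}\,dt = \frac{p}{2}\Gamma(\frac{p}{2})$, is easy to confirm numerically. The sketch below is illustrative only; the truncation point and step size of the midpoint rule are arbitrary choices.

```python
import math

def gamma_identity_error(p, T=200.0, h=1e-3):
    # Midpoint-rule approximation of the left-hand side,
    # compared against (p/2) * Gamma(p/2).
    n = int(T / h)
    integral = h * sum(math.exp(-((i + 0.5) * h) ** (2.0 / p)) for i in range(n))
    return abs(integral - (p / 2.0) * math.gamma(p / 2.0))

# The identity holds for the range of exponents p > 2 used in the proof.
for p in (2.5, 3.0, 4.0):
    assert gamma_identity_error(p) < 1e-2
```

For $p = 2$ the identity reduces to $\int_0^\infty e^{-t}\,dt = \Gamma(1) = 1$, which the same function also reproduces.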
Moreover, for any partition $\mathcal{P}$ of $[N_j]$, the number of times an $I \in \mathcal{F}$ is associated to an $I' \in \mathcal{P}$ is upper bounded by a constant. (This is as we have argued previously.) We let $\Omega$ denote our probability space ($\omega \in \Omega$ corresponds to a specified value for each $Z_n$). Now, for a fixed $\omega \in \Omega$, we say an interval $I \in \mathcal{F}$ is *good* if: $$\max_{I' \subseteq I} |S_{I'} - \mathbb{E}[S_{I'}]|^2 \leq D 2^{-j} |I| \ln \ln (N),$$ where $D$ is a positive constant whose value we will specify later. Otherwise, we say $I$ is *bad*. We let $\mathcal{P}$ denote the maximizing partition (which depends on $\omega$). For each interval $I' \in \mathcal{P}$, we have (at most two) covering intervals $I_r, I_l \in \mathcal{F}$. We let $\mathcal{F}_{\mathcal{P}}$ denote the set of intervals in $\mathcal{F}$ which correspond to intervals in $\mathcal{P}$ (each $I \in \mathcal{F}$ corresponds to at most a constant number of intervals $I' \in \mathcal{P}$). We have: $$\sum_{I' \in \mathcal{P}} \left| \sum_{n \in I'} Z_n\right|^2 \ll \sum_{I \in \mathcal{F}_{\mathcal{P}}} \max_{I' \subseteq I} \left| \sum_{n \in I'} Z_n\right|^2.$$ We observe that $$\sum_{\stackrel{I \in \mathcal{F}_{\mathcal{P}}}{I \text{ is good}}} \max_{I' \subseteq I} \left|\sum_{n \in I'} Z_n\right|^2 \ll \left|\sum_{c \in \mathcal{C}}c \right|^2 + D2^{-j} N_j \ln \ln (N) \ll \ln \ln (N) \sum_{c \in \mathcal{C}} |c|^2 + \left|\sum_{c \in \mathcal{C}}c \right|^2,$$ since each $|c|^2$ is between $2^{-j-1}$ and $2^{-j}$, and $|\mathcal{C}| = N_j$. To see this, note that for each $I'$, $|S_{I'}|^2 \ll |S_{I'} - \mathbb{E}[S_{I'}]|^2 + |\mathbb{E}[S_{I'}]|^2$, and $|\mathbb{E}[S_{I'}]|^2 = \left| \frac{|I'|}{N_j} \sum_{c \in \mathcal{C}}c\right|^2$. It only remains to bound the contribution of the intervals that are not good. For this, we first prove the following lemma.
For each interval $I \in \mathcal{F}$, we let $B(I)$ denote the event that $I$ is *bad* (i.e. not good), and we let $1_{B(I)}$ denote its indicator function. \[lem:probbad\] For each $I \in \mathcal{F}$, $$\mathbb{P}\left[ 1_{B(I)}\right] \ll \frac{1}{\ln (N)^4},$$ when $D$ is chosen to be a sufficiently large constant. By Chebyshev’s inequality, for any $p>2$ we have $$\label{chebyshevresult} \mathbb{P}\left[ 1_{B(I)}\right] = \mathbb{P}\left[ \max_{I' \subseteq I} |S_{I'} - \mathbb{E}[S_{I'}]|^2 > D 2^{-j} |I| \ln \ln (N)\right] \ll \frac{\mathbb{E}\left[ \max_{I'\subseteq I} |S_{I'} - \mathbb{E}[S_{I'}]|^p\right]}{\left(D 2^{-j} |I| \ln \ln (N)\right)^{p/2}}.$$ We now rely on the following result of Rosén [@Rosen]. \[lem:rosen\](Theorem 4 in [@Rosen]) Let $X_1, \ldots, X_k$ be samples drawn from a finite set of real numbers with replacement, and let $Z_1, \ldots, Z_k$ be samples drawn without replacement. Let $1 \leq n_1 < n_2 < \cdots < n_m$. For every convex, monotone function $\phi: \R \rightarrow \R$, we have $$\mathbb{E}\left[ \max \left( \phi\left(\sum_{n=1}^{n_1} Z_n\right), \ldots, \phi\left(\sum_{n=1}^{n_m} Z_n\right)\right) \right] \leq \mathbb{E}\left[ \max \left( \phi\left(\sum_{n=1}^{n_1} X_n\right), \ldots, \phi\left(\sum_{n=1}^{n_m} X_n\right)\right) \right].$$ We want to apply this lemma to the function $f(x) := |x|^p$, but this is not monotone. Instead we define monotone, convex functions $f_1, f_2$ such that $|x|^p = f_1 (x) + f_2(x)$, namely setting $f_1(x) = (-x)^p$ for $x <0$ and equal to 0 otherwise, and $f_2(x) = x^p$ for $x >0$ and equal to 0 otherwise. We note that $|x|^p \geq f_1(x), f_2(x)$ always holds. Without loss of generality, we consider $I$ equal to the interval of length $|I|$ starting at 1. 
Then, for some constant $H$, we have: $$\mathbb{E}\left[ \max_{I'\subseteq I} |S_{I'} - \mathbb{E}[S_{I'}]|^p\right] \ll H^p \; \mathbb{E}\left[ \max_{1 \leq n \leq |I|} f_1\left(Re\left( S_n - \mathbb{E}[S_n] \right)\right)\right] +$$ $$\cdots + H^p \; \mathbb{E}\left[\max_{1 \leq n \leq |I|} f_2\left(Im\left(S_n - \mathbb{E}[S_n]\right)\right)\right].$$ Here, $S_n$ denotes the partial sum of $Z_1 + Z_2 + \cdots + Z_n$, $Re$ denotes the real part, $Im$ denotes the imaginary part, and there are four terms in this sum: one for each combination of $f_1,f_2$ and real and imaginary parts. We can apply Lemma \[lem:rosen\] to each of these four terms to replace the samples $Z_1, \ldots, Z_{|I|}$ taken without replacement with samples $X_1, \ldots, X_{|I|}$ taken with replacement. Now applying Lemma \[lem:complicated\], we have $$\mathbb{P}\left[ 1_{B(I)}\right] \ll \frac{\tilde{H}^p |I|^{\frac{p}{2}} p^{\frac{p}{2}} 2^{-jp/2}}{ \sqrt{D}^p (\ln \ln (N))^{\frac{p}{2}} |I|^{\frac{p}{2}} 2^{-jp/2}} = \left(\frac{\tilde{H}}{\sqrt{D}}\right)^p p^{\frac{p}{2}} (\ln \ln (N))^{-\frac{p}{2}},$$ for some constant $\tilde{H}$. Now, setting $p := \ln \ln (N)/e$, this is: $$= \left( \frac{\tilde{H}}{\sqrt{D}}\right)^{\frac{\ln \ln (N)}{e}} \ln (N)^{-\frac{1}{2e}}.$$ We can then set $D$ large enough so that $\frac{\tilde{H}}{\sqrt{D}} < e^{-4e}$, and the lemma follows. We observe that the contribution of the bad intervals is upper bounded by $$\label{badsum} \ll \sum_{ I \in \mathcal{F}} \mathbb{E}\left[ 1_{B(I)} \max_{I' \subseteq I} |S_{I'}|^2\right].$$ We next apply Hölder’s inequality with $q,r$ fixed to be constants such that $\frac{1}{r}+ \frac{1}{q} = 1$ and $\frac{4}{q} >2, r >1$. 
We then have that the above quantity is: $$\ll \sum_{I \in \mathcal{F}} \left( \mathbb{E}[1_{B(I)}]\right)^{\frac{1}{q}} \left( \mathbb{E}\left[ \max_{I' \subseteq I} |S_{I'}|^{2r}\right]\right)^{\frac{1}{r}}.$$ By Lemma \[lem:probbad\], we know that $$\left( \mathbb{E}[1_{B(I)}]\right)^{\frac{1}{q}} \ll (\ln (N))^{-2}.$$ We also know that for each $I'$, $|\mathbb{E}[S_{I'}]|^2 \ll \left(\frac{|I'|}{N_j}\right)^2 \left|\sum_{c \in \mathcal{C}} c\right|^2 \ll \frac{|I'|}{N_j} \left|\sum_{c \in \mathcal{C}} c\right|^2$. When we sum these up over all $I \in \mathcal{F}$, we obtain $\ll \ln (N) \left| \sum_{c\in \mathcal{C}} c \right|^2$. Now multiplying by $ \ln(N)^{-2}$, we obtain a contribution which is $o\left( \left|\sum_{c\in \mathcal{C}} c\right|^2\right)$. Thus, it only remains to bound $$(\ln (N))^{-2}\sum_{I \in \mathcal{F}} \left( \mathbb{E}\left[ \max_{I' \subseteq I} |S_{I'}-\mathbb{E}[S_{I'}]|^{2r}\right]\right)^{\frac{1}{r}}.$$ Similarly to our above arguments, we define convex, monotone functions $f_1, f_2: \R \rightarrow \R$ such that $f_1(x) + f_2(x) = |x|^{2r}$. More precisely, we set $f_1(x) = (-x)^{2r}$ when $x <0$ and equal to 0 otherwise, while we set $f_2(x) = x^{2r}$ when $x >0$ and equal to 0 otherwise. Now, again applying Lemma \[lem:rosen\], it suffices to bound e.g. $$\sum_{I \in \mathcal{F}} \left(\mathbb{E}\left[\max_{1 \leq n \leq |I|} f_1(Re(S_n - \mathbb{E}[S_n]))\right]\right)^{\frac{1}{r}},$$ where $S_n$ is now the partial sum $X_1 + \cdots + X_n$, where each $X_k$ is a sample from $\mathcal{C}$ taken *with* replacement. (We must also bound the analogous quantities for other combinations of $f_1, f_2$ and $Re, Im$, but these will follow via the same argument.)
We now apply Lemma \[lem:Doob\] to obtain that the above quantity is $$\ll \sum_{I \in \mathcal{F}} \left(\mathbb{E}\left[ \max_{1 \leq n \leq |I|}|Re(S_n-\mathbb{E}[S_n])|^{2r}\right]\right)^{\frac{1}{r}} \ll \sum_{I \in \mathcal{F}} \left( \mathbb{E}[|Re(S_{I}- \mathbb{E}[S_I])|^{2r}]\right)^{\frac{1}{r}}.$$ Next applying Lemma \[lem:Rosenthal\], we see that this is $$\ll \sum_{I \in \mathcal{F}} \max \left\{ \left( \sum_{n=1}^{|I|} \mathbb{E}[|\tilde{X}_n|^{2r}]\right)^{\frac{1}{r}}, \sum_{n=1}^{|I|} \mathbb{E}[|\tilde{X}_n|^2]\right\},$$ where $\tilde{X}_n$ is defined to be an (independent, uniform) sample from $\mathcal{C}$ with replacement, recentered to be mean zero. In other words, $\tilde{X}_n = X_n - \mathbb{E}[X_n]$. Now, since $r >1$, both of the quantities in this maximum are $\ll |I| 2^{-j}$. Hence, we have: $$\ll \sum_{ I \in \mathcal{F}} |I| 2^{-j} \ll \ln (N) \sum_{c \in \mathcal{C}} |c|^2.$$ Multiplying this by our bound $(\ln (N))^{-2}$ for the probability of each $I$ being bad, we see that this is $o\left( \sum_{c \in \mathcal{C}} |c|^2\right)$. This completes the proof of Lemma \[lem:shortop\]. Combining Lemma \[lem:shortop\] with (\[blockgoal\]), we obtain Theorem \[mod1Perm\]. Refinements of Theorem \[main\] for Certain Structured ONS {#sec7} ========================================================== In this section, we briefly outline how Theorem \[main\] can be improved for more restrictive classes of ONS, using the methods employed in proving Theorem \[varRad\]. We consider an ONS such that for $f$ in the span of the system, we have $||f||_{L^p} \leq C_p ||f||_{L^2}$ for $p>2$, where $C_p$ is a constant depending only on $p$. Such systems arise naturally, for example, as the restriction of the trigonometric system to certain arithmetic subsets ($\Lambda(p)$ sets). We will use the fact that a maximal form of this hypothesis can be obtained from a very general theorem of Christ and Kiselev [@ChristKiselev].
Let $\{ \phi_n \}_{n=1}^{\infty}$ be an ONS such that for $f$ in the span of the system, we have $||f||_{L^p} \leq C_p ||f||_{L^2}$ for some $p>2$. Then $$||\mathcal{M}f||_{L^p} \ll_{\delta} C_{p} ||f||_{L^2}$$ as long as $p>\delta>2$. This last condition implies that the implicit constant is uniform for large $p$. Using this and the arguments in the proof of Theorem \[varRad\], one can obtain the following: Let $\{ \phi_n \}_{n=1}^{\infty}$ be an ONS such that if $f$ is in the span of the system, then $||f||_{L^p} \ll C_{p} ||f||_{L^2}$ for some $p>2$. We then have that $$||f||_{L^2(V^2)} \ll_{p} \ln^{1/p}(|A|)||f||_{L^2},$$ where the coefficients of $f$ are supported on a finite index set $A$. We briefly sketch the proof. We note that if $||\mathcal{M}f||_{L^2} \ll ||f||_{L^2}$ holds, then this theorem follows for $p=2$. However, this is in general not true and by the sharpness of Theorem \[main\], the best one can hope for in the general case is a factor of $\ln(|A|)$ in place of $\ln^{1/2}(|A|)$. The proof follows the same setup as the proof of Theorem \[varRad\]. We define a bad event for some interval $J$ to be the event that $ |\tilde{S}_J| \gg \ln^{1/p}(|A|) (M(J))^{1/2}$ (here $M(J)$ is defined to be the sum of $a_n^2$ over $n \in J$, where the $a_n$’s are the coefficients of $\phi_n$ in the expansion of $f$). It is easy to see that the contribution from the good events is of an acceptable order and it suffices to bound the bad events. The argument is essentially the same as the proof of Theorem \[varRad\], with the exception that we use the following estimate: $$\int_{\T} |1_{B(\tilde{J})} \tilde{S}_{\tilde{J}}|^2 \leq \left(\int_{\T} 1_{B(\tilde{J})} \right)^{1/(p/2)'} \left( \int_{\T} |\tilde{S}_{\tilde{J}}|^p \right)^{(2/p)}.$$ (Here, $(p/2)'$ denotes the conjugate exponent of $p/2$.) We now estimate $\int_{\T}|\tilde{S}_{\tilde{J}}|^p \ll C_{p}^p \; \left(\int_{\T} |S_{\tilde{J}}|^2 \right)^{p/2} \ll C_{p}^p \; (M(\tilde{J}))^{p/2}$.
Hence $ \left(\int_{\T} |\tilde{S}_{\tilde{J}}|^p \right)^{(2/p)} \ll C_{p}^2 M(\tilde{J})$. Next, by Chebyshev’s inequality, $$\int_{\T} 1_{B(\tilde{J})} \leq \frac{\int_{\T} |\tilde{S}_{\tilde{J}}|^p }{ \left( \ln^{1/p} (|A|) (M(\tilde{J}))^{1/2} \right)^{p} } \leq \frac{ C_{p}^p}{\ln(|A|)}.$$ Hence (using $1/(p/2)'= \frac{p-2}{p}$), we have $\left(\int_{\T} 1_{B(\tilde{J})} \right)^{(p-2)/p} \ll \frac{ C_{p}^{p-2} }{\ln^{(p-2)/p}(|A|)}$. This yields $$\int_{\T} |1_{B(\tilde{J})} \tilde{S}_{\tilde{J}}|^2 \leq \left(\int_{\T} 1_{B(\tilde{J})} \right)^{1/(p/2)'} \left(\int_{\T} |\tilde{S}_{\tilde{J}}|^p \right)^{(2/p)} \ll \frac{C_{p}^{p} M(\tilde{J})}{\ln^{(p-2)/p} (|A|)}.$$ Now we sum this quantity over the $\ln(|A|)$ levels; on each level, the masses $M(\tilde{J})$ sum to $1$. Hence the contribution from the bad events to the quantity we wish to estimate is $O( \ln^{2/p}(|A|))$. This is exactly the order we wish to show. Finally, we observe that: Let $\{ \phi_n \}_{n=1}^{\infty}$ be an ONS such that if $f$ is in the span of the system, then $||f||_{L^p} \ll \sqrt{p}||f||_{L^2}$ (for all $p>2$). Then $$||f||_{L^2(V^2)} \ll \sqrt{\ln\ln(|A|)}||f||_{L^2},$$ where the coefficients of $f$ are supported on the index set $A$. This is proved using the same arguments sketched for the previous theorem, however now we have the freedom to optimize over the choice of $p$ we use. The optimum occurs with a choice of $p$ about $c e^{-1}\ln\ln(|A|)$. Essentially the same argument is given in detail in the proof of Theorem \[mod1Perm\] for random permutations (see the proof of Lemma \[lem:probbad\]). Here it is important that the constants in the Christ-Kiselev theorem are uniformly bounded for large $p$. The above theorem can be applied to systems formed by Sidon subsets of the trigonometric system, since the hypothesis of this theorem characterizes Sidon sets (when applied to subsets of the trigonometric system) by a theorem of Pisier [@Pisier] (see also [@Rudin]).
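The Chebyshev-plus-Hölder device that drives this section is elementary to check numerically. The sketch below is illustrative only, using synthetic Gaussian data (the threshold, exponent, and sample size are arbitrary choices); the exponent bookkeeping mirrors the displays above, with $1/(p/2)' = (p-2)/p$.

```python
import random

random.seed(0)
p = 3.0                                   # any exponent p > 2
n = 10000
S = [random.gauss(0.0, 1.0) for _ in range(n)]
thresh = 1.5
B = [abs(s) > thresh for s in S]          # the "bad" event

# Discrete analogues of the integrals (uniform measure on sample points).
lhs = sum(s * s for s, b in zip(S, B) if b) / n     # "int 1_B |S|^2"
meas = sum(B) / n                                   # "int 1_B"
pth = sum(abs(s) ** p for s in S) / n               # "int |S|^p"

# Hoelder with exponents p/2 and (p/2)':
#   int 1_B |S|^2  <=  (int 1_B)^{(p-2)/p} (int |S|^p)^{2/p}
assert lhs <= meas ** ((p - 2.0) / p) * pth ** (2.0 / p) + 1e-12

# Chebyshev:  int 1_B  <=  thresh^{-p} int |S|^p
assert meas <= pth / thresh ** p + 1e-12
```

Both inequalities hold for any data set; the assertions merely confirm the exponent arithmetic used in the estimates above.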
Variational Estimates for the $V^p$ Operator ============================================ Notation -------- Let $\Gamma: \R \rightarrow \R^{+}$ be a convex symmetric function, increasing on $\R^{+}$ and tending to infinity at infinity such that $\Gamma(0)=0$. Then the Orlicz space norm associated to $\Gamma$ is defined as $$||f||_{\Gamma}:= \min \left\{\lambda : \int_{\T} \Gamma \left(\frac{f(x)}{\lambda} \right) dx \leq 1 \right\}.$$ The fact that this norm satisfies the triangle inequality is an easy exercise using Jensen’s inequality. We refer the reader to [@KR] for the general theory of these spaces. Following [@Bour], we will be interested in $\Gamma := \Gamma_{K}$ defined as follows: $$\Gamma_{K}(t):= \left\{ \begin{array}{ll} |t|^{5/2}, & \hbox{$|t|\leq K$} \\ \frac{5}{4}K^{1/2}t^2 - \frac{1}{4}K^{5/2}, & \hbox{$|t| \geq K$} \end{array} \right. .$$ Later we will also use $$\gamma_{K}(t) :=\left\{ \begin{array}{ll} |t|^{1/2}, & \hbox{$|t|\leq K$} \\ K^{1/2}, & \hbox{$|t| \geq K$} \end{array} \right. .$$ We note that $t^2 \gamma_K(t) \leq \Gamma_K(t)$ for all $t$. We state some other basic properties that we will need. \[lem:pconvex\] Let $p = 2$. Then $||\cdot||_{\Gamma_K}$ is $p$-convex. That is, for any functions $f_1, \ldots, f_k$ from $\T$ to $\R$, $$\left|\left| \left(\sum_{i=1}^k |f_{i}|^p\right)^{1/p} \right|\right|_{\Gamma_K} \leq \left(\sum_{i=1}^k || f_i ||_{\Gamma_{K}}^{p}\right)^{1/p}.$$ Let $\Gamma_{K,1/p}(t):= \Gamma_{K}(t^{1/p})$, which we observe is still convex (we have used that $p = 2$ here). Since $\Gamma_{K,1/p}(t)$ is convex, we can use it to form an Orlicz space norm.
We observe that $$\left|\left| \left(\sum_{i=1}^k |f_{i}|^p\right)^{1/p} \right|\right|_{\Gamma_K} = \min \left\{\lambda : \int_{\T} \Gamma_{K} \left(\frac{\left(\sum_{i=1}^k |f_{i}(x)|^p\right)^{1/p} }{\lambda} \right) dx \leq 1 \right\}$$ $$= \min \left\{\lambda : \int_{\T} \Gamma_{K,1/p} \left(\frac{ \sum_{i=1}^k |f_{i}(x)|^p }{\lambda^p} \right) dx \leq 1 \right\} = \left|\left| \sum_{i=1}^k |f_{i}|^p \right|\right|_{\Gamma_{K,1/p}}^{1/p}$$ $$\leq \left(\sum_{i=1}^k \left|\left| |f_{i}|^p \right|\right|_{\Gamma_{K,1/p}} \right)^{1/p} = \left(\sum_{i=1}^k || f_i ||_{\Gamma_{K}}^{p}\right)^{1/p}.$$ The inequality here follows from the triangle inequality for $||\cdot||_{\Gamma_{K,1/p}}$. Proof of Theorem \[varVp\] -------------------------- We now prove: Let $p>2$ and $\{\phi_{n}\}_{n=1}^{N}$ be an orthonormal system such that $||\phi_{n}||_{L^\infty}\leq C$ for all $n$. There exists a permutation $\pi:[N]\rightarrow [N]$ such that the orthonormal system $\{\psi_n := \phi_{\pi(n)}\}_{n=1}^{N}$ satisfies $$\label{varperm} ||f||_{L^{2}(V^{p})} \ll_{C,p} \ln\ln(N)||f||_{L^2}$$ for all $f= \sum_{n=1}^{N} a_n\psi_n(x)$. Our starting point is the inequality (3.21) of [@Bour]: \[bourgain\] Let $\{\phi_n\}_{n=1}^N$ be an orthonormal system with $||\phi_n||_{L^\infty} \leq C$ for all $n$. Then there exists a permutation $\pi: [N] \rightarrow [N]$ such that for all subintervals $I$ of $[N]$ and all real values $a_1, \ldots, a_N$, the orthonormal system $\{\psi_n := \phi_{\pi(n)}\}_{n=1}^N$ satisfies: $$\label{permIneq} \left|\left| \sum_{n \in I} a_{n}\psi_{n} \right|\right|_{\Gamma_{N/|I|}} \ll_C \ln^{3/4}(N) \left( \sum_{n \in I} a_n^2\right)^{1/2}.$$ We will need a variational form of this inequality. This is easily achieved using a Rademacher-Menshov argument.
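As an aside, Lemma \[lem:pconvex\] lends itself to a numerical sanity check: the Luxemburg norm $||\cdot||_{\Gamma_K}$ can be computed by bisection on $\lambda$. The sketch below is illustrative only, with the uniform measure on a finite set of sample points standing in for $\T$; the data and the value of $K$ are arbitrary.

```python
import random

def Gamma_K(t, K):
    # The Orlicz function from the text: |t|^{5/2} for |t| <= K,
    # (5/4) K^{1/2} t^2 - (1/4) K^{5/2} for |t| >= K.
    t = abs(t)
    return t ** 2.5 if t <= K else 1.25 * K ** 0.5 * t * t - 0.25 * K ** 2.5

def orlicz_norm(f, K):
    # Luxemburg norm: min { lam > 0 : mean of Gamma_K(f/lam) <= 1 }, by bisection.
    mean = lambda lam: sum(Gamma_K(x / lam, K) for x in f) / len(f)
    hi = 1.0
    while mean(hi) > 1.0:
        hi *= 2.0
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mean(mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

random.seed(1)
K = 2.0
fs = [[random.gauss(0.0, 1.0) for _ in range(64)] for _ in range(3)]
# 2-convexity: || (sum_i |f_i|^2)^{1/2} ||_{Gamma_K} <= (sum_i ||f_i||_{Gamma_K}^2)^{1/2}
lhs = orlicz_norm([sum(c * c for c in col) ** 0.5 for col in zip(*fs)], K)
rhs = sum(orlicz_norm(f, K) ** 2 for f in fs) ** 0.5
assert lhs <= rhs + 1e-6
```

The bisection keeps `hi` feasible throughout, so it returns a value at least the true norm, which only makes the asserted inequality harder to satisfy.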
\[lem:perm\] With the notation as above, we have that $$\label{permIneqVar} \bigg| \bigg| || \{a_{n}\psi_{n}\}_{n \in I} ||_{V^2} \bigg|\bigg|_{\Gamma_{N/|I|}}\ll_C \ln^{7/4}(N) \left( \sum_{n \in I} a_n^2\right)^{1/2}$$ for all $I \subseteq [N]$ and all real sequences $a_1, \ldots, a_N$. As in Section \[sec:rad\], we assume (without loss of generality) that $I = [2^{\ell}]$ for some $\ell$ and we define the intervals $I_{k,i} := (k2^{i}, (k+1)2^i]$ for $0 \leq i \leq \ell$ and $0 \leq k \leq 2^{\ell-i}-1$. For each $J \subseteq I$, we can express $J$ as a disjoint union of intervals $I_{k,i}$, where the union contains at most two intervals of each size. As in (\[pointwise\]), we then observe for each $x \in \T$: $$|| \{a_{n}\psi_{n}\}_{n \in I} ||_{V^2}(x) \ll \sum_{i=0}^\ell \sqrt{\sum_{k=0}^{2^{\ell-i}-1} \left( \sum_{n \in I_{k,i}} a_n \psi_n(x)\right)^2}.$$ By the triangle inequality for the Orlicz norm, we then have $$\bigg| \bigg| || \{a_{n}\psi_{n}\}_{n \in I} ||_{V^2} \bigg|\bigg|_{\Gamma_{N/|I|}} \ll \sum_{i=0}^\ell \left|\left| \sqrt{\sum_{k=0}^{2^{\ell-i}-1} \left( \sum_{n \in I_{k,i}} a_n \psi_n(x)\right)^2}\right|\right|_{\Gamma_{N/|I|}}.$$ Applying Lemma \[lem:pconvex\], this is $$\leq \sum_{i=0}^\ell \sqrt{ \sum_{k=0}^{2^{\ell-i}-1} \left|\left|\sum_{n \in I_{k,i}} a_n \psi_n(x) \right|\right|^2_{\Gamma_{N/|I|}}}.$$ By Theorem \[bourgain\], we obtain $$\ll_C \ln^{3/4} (N)\sum_{i=0}^\ell \sqrt{ \sum_{k=0}^{2^{\ell-i}-1} \sum_{n \in I_{k,i}} a_n^2} = \ln^{3/4} (N) \sum_{i=0}^\ell \sqrt{\sum_{n \in I} a_n^2} \ll \ln^{7/4} (N) \sqrt{\sum_{n \in I} a_n^2}.$$ We now prove Theorem \[varVp\]. We assume (without loss of generality) that $\sum_{n=1}^N a_n^2 =1$. As in Section \[sec:probability\], we consider decomposing $[N]$ into a family of subintervals according to mass, defined with respect to the $a_n$’s. We recall that the mass of an arbitrary subinterval $I$ is defined to be $M(I):= \sum_{n \in I} a_n^2$.
We define the intervals $I_{k,s}$ for $1 \leq s \leq 2^k$ and points $i_{k,s}$ as in Section \[sec:probability\]. We refer to the intervals $I_{k,s}$ for $1 \leq s \leq 2^k$ as the admissible intervals on level $k$, and the points $i_{k,s}$ (as $s$ ranges) as the admissible points on level $k$. We note that any interval $I \subseteq [N]$ can be expressed as a union of intervals of the form $I_{k,s}$ and points $i_{k,s}$, where there are at most two intervals and two points for each value of $k$ (this follows analogously to the proof of Lemma \[lem:binarydecomp\]). This decomposition is obtained by first taking the intervals $I_{k,s}$ and points $i_{k,s}$ contained in $I$ with the smallest value of $k$. (There are at most 2 of each, otherwise $I$ would contain an admissible interval or point for a smaller $k$ value.) These ``components'' of $I$ on level $k$ form an interval, and when we remove this from $I$, we are left with a left part and a right part. Each part can then be decomposed as a union of intervals $I_{k,s}$ and points $i_{k,s}$ for higher values of $k$, and each of the two unions contains at most one interval and one point on each level. We let $\pi:[N] \rightarrow [N]$ be the permutation as in Lemma \[lem:perm\], and $\psi_n := \phi_{\pi(n)}$. We fix an $x \in \T$. The value of $$\left| \left| \{a_n \psi_n(x)\}_{n=1}^N \right| \right|_{V^p}$$ is achieved by some partition $\mathcal{P}$ of $[N]$. Each $I \in \mathcal{P}$ can be expressed as a union of intervals of the form $I_{k,s}$ and points $i_{k,s}$, and we denote the set of these intervals and points by $T_I$ and $t_I$ respectively. We recall that each of $T_I$ and $t_I$ will have at most two intervals or points (respectively) on each level. We also note that each admissible interval will appear in this union for at most one $I \in \mathcal{P}$.
We fix a positive constant $c$ (depending on $p$) such that $c > \max \{\frac{35}{4}\left(\frac{1}{2} - \frac{1}{p}\right)^{-1}, 9\}$ (this is possible because $p >2$). We define $k^* :=c \ln \ln (N)$ (more precisely, $k^*$ is the nearest integer greater than $c \ln \ln (N)$). Now, for each $I \in \mathcal{P}$, all of the intervals in $T_I$ and points in $t_I$ on levels greater than $k^*$ are contained in the two intervals $I_{k^*,s_\ell}$ and $I_{k^*,s_r}$ on level $k^*$, where $s_\ell$ is one less than the $s$ value for the leftmost interval $I_{k^*,s}$ in $T_I$, and $s_r$ is one more than the $s$ value for the rightmost interval $I_{k^*,s}$ in $T_I$. We will use $k^*$ as a cutoff threshold: we handle the intervals and points at levels $\leq k^*$ directly and handle the intervals and points at levels $> k^*$ using the fact that they are contained in $I_{k^*,s_\ell}, I_{k^*,s_r}$. We define $T'_I$ to be the subset of intervals in $T_I$ on levels $\leq k^*$ and $t'_I$ to be the subset of points in $t_I$ on levels $\leq k^*$. 
Now, $\left| \left| \{a_n \psi_n(x)\}_{n=1}^N \right| \right|_{V^p}$ is equal to: $$\left(\sum_{I \in \mathcal{P}} \left( \sum_{n \in I} a_n \psi_n(x)\right)^p\right)^{1/p} =$$$$\left(\sum_{I \in \mathcal{P}} \left( \sum_{J \in T'_I} \sum_{n \in J} a_n \psi_n(x) + \sum_{J \in T_I\backslash T'_I} \sum_{n \in J} a_n \psi_n(x) +\sum_{n \in t'_I} a_n \psi_n(x) + \sum_{n \in t_I \backslash t'_I} a_n \psi_n(x)\right)^p\right)^{1/p}.$$ Applying the triangle inequality for the $\ell_p$-norm, this is: $$\begin{aligned} \label{decomp1} \nonumber &\leq& \left( \sum_{I \in \mathcal{P}} \left( \sum_{J \in T'_I} \sum_{n \in J} a_n \psi_n(x)\right)^p\right)^{1/p} + \left(\sum_{I \in \mathcal{P}} \left( \sum_{n \in t'_I} a_n \psi_n(x)\right)^p\right)^{1/p} \\ &+ & \left(\sum_{I \in \mathcal{P}} \left( \sum_{J \in T_I\backslash T'_I} \sum_{n \in J} a_n \psi_n(x) + \sum_{n \in t_I\backslash t'_I} a_n \psi_n(x)\right)^p \right)^{1/p}\end{aligned}$$ We consider the second of these three terms. Since $p \geq 2$, we have $$\left(\sum_{I \in \mathcal{P}} \left( \sum_{n \in t'_I} a_n \psi_n(x)\right)^p\right)^{1/p} \leq \left(\sum_{I \in \mathcal{P}} \left(\sum_{n \in t'_I} a_n \psi_n(x)\right)^2\right)^{1/2}.$$ For each $k \leq k^*$, we let $\ell_k$ denote the set of admissible points on level $k$. Since each $t'_I$ contains at most 2 points in each $\ell_k$, we can apply the triangle inequality to obtain $$\left(\sum_{I \in \mathcal{P}} \left(\sum_{n \in t'_I} a_n \psi_n(x)\right)^2\right)^{1/2} \ll \sum_{k=0}^{k^*}\left(\sum_{n \in \ell_k} (a_n\psi_n(x))^2\right)^{1/2}.$$ Now, by the triangle inequality for the $L^2$ norm and the fact that $\int_{\T} a_n^2 \psi_n^2(x) dx= a_n^2$ for all $n$, we have $$\left| \left| \sum_{k=0}^{k^*} \left(\sum_{n \in \ell_k} (a_n\psi_n(x))^2\right)^{1/2}\right|\right|_{L^2} \ll_p \ln \ln (N).$$ To see this, recall that $\sum_{n=1}^N a_n^2 = 1$, so $\sum_{n \in \ell_k} a_n^2 \leq 1$ for each $k$, and $k^* \ll_p \ln \ln (N)$. 
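The step $\left(\sum_i |x_i|^p\right)^{1/p} \leq \left(\sum_i x_i^2\right)^{1/2}$ for $p \geq 2$, used here and again below, is just the monotonicity of $\ell^p$ norms in $p$. A quick numerical illustration (not part of the proof):

```python
def lp_norm(xs, p):
    """The l^p norm of a finite real sequence: (sum |x_i|^p)^(1/p).
    For p >= 2 this never exceeds the l^2 norm of the same sequence."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)
```

For instance, `lp_norm([1.0, 2.0, 3.0], 3.0)` is about 3.30, below the $\ell^2$ value $\sqrt{14} \approx 3.74$.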
It remains to bound the first and third terms in (\[decomp1\]). We consider the first term. For each $k$, we let $\mathcal{L}_k$ denote the set of admissible intervals $I_{k,s}$ as $s$ ranges from 1 to $2^k$ (i.e. the admissible intervals on level $k$). Then, by the triangle inequality for the $\ell^2$ norm and the fact that $p \geq 2$, $$\left( \sum_{I \in \mathcal{P}} \left( \sum_{J \in T'_I} \sum_{n \in J} a_n \psi_n(x)\right)^p\right)^{1/p} \leq \left( \sum_{I \in \mathcal{P}} \left( \sum_{J \in T'_I} \sum_{n \in J} a_n \psi_n(x)\right)^2\right)^{1/2}$$ $$\leq \sum_{k=0}^{k^*} \left(\sum_{I \in \mathcal{P}} \left (\sum_{J \in T'_I \cap \mathcal{L}_k} \sum_{n \in J} a_n\psi_n(x)\right)^2\right)^{1/2}$$ $$\ll \sum_{k=0}^{k^*} \left( \sum_{J \in \mathcal{L}_k} \left( \sum_{n\in J} a_n \psi_n(x)\right)^2\right)^{1/2},$$ where the last step uses that each $T'_I \cap \mathcal{L}_k$ contains at most two intervals and that each admissible interval appears for at most one $I \in \mathcal{P}$. Now, using the triangle inequality for the $||\cdot ||_{L^2}$ norm, we have: $$\left|\left| \sum_{k=0}^{k^*} \left( \sum_{J \in \mathcal{L}_k} \left( \sum_{n\in J} a_n \psi_n(x)\right)^2\right)^{1/2} \right| \right|_{L^2} \leq \sum_{k=0}^{k^*} \left|\left| \left( \sum_{J \in \mathcal{L}_k} \left( \sum_{n\in J} a_n \psi_n(x)\right)^2\right)^{1/2} \right|\right|_{L^2}$$ $$= \sum_{k=0}^{k^*} \left( \sum_{J \in \mathcal{L}_k} \int_{\T}\left( \sum_{n \in J} a_n \psi_n(x)\right)^2 dx \right)^{1/2}$$ $$= \sum_{k=0}^{k^*} \left( \sum_{J \in \mathcal{L}_k} M(J)\right)^{1/2} \ll_p \ln \ln (N),$$ since $\sum_{J \in \mathcal{L}_k} M(J) \leq 1$ for each $k$, and $k^* \ll_p \ln \ln (N)$. We are thus left with the third term of (\[decomp1\]). For each $I \in \mathcal{P}$, we consider the union of the intervals and points in $T_I \backslash T'_I$ and $t_I \backslash t'_I$. This can alternatively be described as a union of at most two intervals $J_\ell$ and $J_r$, where each of $J_\ell, J_r$ is a subinterval of $I_{k^*,s}$ for some $s$.
To see this, recall that $I$ is decomposed into a union of admissible intervals and points by taking the admissible intervals and points contained in $I$ for the earliest level where this set is non-empty. The remaining left and right parts of $I$ are then decomposed separately. If the minimal $k$ is $\leq k^*$, then $J_\ell$ is the union of the intervals/points in the decomposition of the left part that fall beyond level $k^*$, and $J_r$ is the same for the right part. If the minimal $k$ is $> k^*$, then in fact all of $I$ is contained in some admissible interval on level $k^*$, and we can take $J_\ell$ to be this interval and $J_r$ to be empty. We then rewrite the quantity we wish to bound as: $$\left( \sum_{I \in \mathcal{P}} \left( \sum_{n \in J_\ell} a_n \psi_n(x) + \sum_{n \in J_r} a_n\psi_n(x)\right)^p \right)^{1/p}.$$ Applying the simple fact that $(a+b)^p \leq 2^p(a^p + b^p)$ for all non-negative real numbers $a$ and $b$, we see this is $$\ll \left( \sum_{I \in \mathcal{P}} \left( \sum_{n \in J_\ell} a_n \psi_n(x)\right)^p + \left(\sum_{n \in J_r} a_n \psi_n(x)\right)^p \right)^{1/p}.$$ Now we observe that we are summing the values $a_n \psi_n(x)$ over disjoint intervals, each of which is contained in $I_{k^*, s}$ for some $s$. Thus, this quantity is upper bounded by: $$\leq \left( \sum_{1 \leq s \leq 2^{k^*}} \left| \left| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|^p_{V^p}\right)^{1/p}.$$ Therefore, it suffices to bound $$\left|\left| \left( \sum_{1 \leq s \leq 2^{k^*}} \left| \left| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|^p_{V^p}\right)^{1/p}\right|\right|_{L^2}.$$ For each $s$ from 1 to $2^{k^*}$, we define disjoint sets $G_s, B_s$ such that $G_s \cup B_s = \mathbb{T}$. We define $G_s$ to be $x \in \mathbb{T}$ such that $||\{a_n\psi_n(x)\}_{n \in I_{k^*,s}}||_{V^p} \leq 2^{- c\ln\ln(N) /p}$ and $B_s$ to be the complement. 
By two applications of the triangle inequality (first in the $\ell^p$ norm and then in the $L^2$ norm), we have $$\left| \left| \left( \sum_{s=1}^{2^{k^*}} ||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}} ||_{V^{p}}^p \right)^{1/p} \right|\right|_{L^2} \ll \left| \left| \left( \sum_{s=1}^{2^{k^*}} 1_{G_s} ||\{a_n\psi_n(x)\}_{n \in I_{k^*,s}} ||_{V^{p}}^p \right)^{1/p} \right|\right|_{L^2}$$$$+ \left| \left| \left( \sum_{s=1}^{2^{k^*}} 1_{B_s} ||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}}||_{V^{p}}^p \right)^{1/p} \right|\right|_{L^2}.$$ Using that $||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}}||_{V^p}^{p} \ll 2^{-c\ln\ln(N)}$ for $x \in G_s$, we have that the first term is $O(1)$ (from the fact that there are at most $2^{c\ln \ln(N)}$ terms in the sum). We now estimate $$\left| \left| \left( \sum_{s=1}^{2^{k^*}} 1_{B_s}(x) ||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}}||_{V^{p}}^p \right)^{1/p} \right|\right|_{L^2} \ll \left| \left| \left( \sum_{s=1}^{2^{k^*}} 1_{\tilde{B}_s}(x) ||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}} ||_{V^{2}}^2 \right)^{1/2} \right|\right|_{L^2}$$ $$\label{V2lastterm} \ll \left( \sum_{s=1}^{2^{k^*}} ||1_{\tilde{B}_s}(x) || \{a_n \psi_n(x)\}_{n\in I_{k^*,s}}||_{V^2} ||_{L^2}^2 \right)^{1/2},$$ where $\tilde{B}_s$ is the set of $x \in \T$ such that $||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}}||_{V^2} \geq 2^{-c\ln\ln(N) /p}$, and we have used the fact that $B_s \subseteq \tilde{B}_s$. We now consider two cases. First, we consider the set $S_{\text{big}}$ of $s$ values where $|I_{k^*,s}| \geq N2^{-7\ln \ln (N)}$. Clearly, there can be at most $2^{7 \ln \ln (N)}$ such intervals. 
Now we bound the contribution to (\[V2lastterm\]) above from these big intervals as $$\left( \sum_{s \in S_{\text{big}}} ||1_{\tilde{B}_s}(x) || \{a_n \psi_n(x)\}_{n\in I_{k^*,s}}||_{V^2} ||_{L^2}^2 \right)^{1/2} \ll \left( \sum_{s \in S_{\text{big}}} || \{a_n \psi_n(x)\}_{n\in I_{k^*,s}}||_{L^2(V^2)}^2 \right)^{1/2}.$$ Recalling that $|| \{a_n \psi_n(x)\}_{n\in I_{k^*,s}}||_{L^2(V^2)}^2 \ll \ln^{2}(N)2^{-c \ln\ln(N)}$ (from Lemma \[varRM1\], since $M(I_{k^*,s}) \leq 2^{-k^*}$ for all $s$) and that there are at most $2^{7 \ln \ln (N)}$ values of $s \in S_{\text{big}}$, we have that the above is $$\ll \left( 2^{7\ln \ln (N)} \ln^{2}(N)2^{-c \ln\ln(N)} \right)^{1/2} \ll 1.$$ Here we have used that $9 \leq c$. It now suffices to consider the values of $s$ such that $|I_{k^*,s}| \leq N 2^{-7\ln \ln (N)}$. We define $\gamma_{*}=\gamma_{2^{7\ln\ln(N)}}$. For any real numbers $\epsilon >0$, $\lambda>1$, and $a \geq \epsilon$, we have $\frac{\gamma_{*}(\lambda^{-1}a) }{\gamma_{*}(\lambda^{-1}\epsilon) } \geq 1$. We set $\epsilon := 2^{-c\ln \ln (N)/p}$. Now, for all $x \in \tilde{B}_s$, we have: $$\label{easy} \left| \left| \{a_n\psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|_{V^2}^2 \leq \left| \left| \{a_n\psi_n (x)\}_{n \in I_{k^*,s}} \right|\right|_{V^2}^2 \frac{\gamma_{*}(\lambda^{-1} ||\{a_n\psi_n(x)\}_{n \in I_{k^*,s}}||_{V^2}) }{\gamma_{*}(\lambda^{-1}\epsilon) } .$$ We recall that $M(I_{k^*,s}) \leq 2^{-c\ln\ln(N)}$ for each $s$. Analogously to $\gamma_{*}$, we define $\Gamma_{*} := \Gamma_{2^{7\ln \ln (N)}}$. Now, for any $\lambda > 1$: $$\int_{\tilde{B}_{s}}\left| \left| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|_{V^p}^2 dx \leq \lambda^2 \int_{\tilde{B}_{s}} \gamma_{*}\left(\frac{\epsilon}{\lambda} \right)^{-1} \Gamma_{*}(\lambda^{-1}||\{a_n \psi_n(x)\}_{n \in I_{k^*,s}}||_{V^2}) dx.$$ This follows from (\[easy\]) and the definitions of $\gamma_{*}$ and $\Gamma_{*}$ (recall also that $t^2 \gamma_{*}(t) \leq \Gamma_{*}(t)$ for all $t$).
Since $\frac{N}{|I_{k^*,s}|} \geq 2^{7\ln \ln (N)}$ and the value of $||\cdot ||_{\Gamma_K}$ increases as $K$ increases, we can apply Lemma \[lem:perm\] to obtain $$\left| \left| | | \{a_{n}\psi_{n}\}_{n \in I_{k^*,s}} ||_{V^2} \right|\right|_{\Gamma_{*}}\leq D \ln^{7/4}(N) \left( \sum_{n \in I_{k^*,s}} a_n^2\right)^{1/2}$$ for all $s$ such that $|I_{k^*,s}| \leq N2^{-7\ln\ln (N)}$, where $D$ is some fixed constant (depending on $C$). We see that for $\lambda := D \ln^{7/4}(N) 2^{- \frac{c \ln\ln(N)}{2}}$, we have $\int_{\mathbb{T}} \Gamma_{*}(\lambda^{-1}|| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} ||_{V^2}) dx \ll 1$. Therefore: $$\label{mark} \int_{\tilde{B}_{s}}\left| \left| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|_{V^p}^2 dx \ll \ln^{7/2}(N) 2^{- c \ln\ln(N)} \gamma_{*}\left(\frac{\epsilon}{\lambda} \right)^{-1}.$$ We consider the quantity $\gamma_{*}\left(\frac{\epsilon}{\lambda}\right)^{-1}$. We observe: $$\label{quantity} \frac{\epsilon}{\lambda} = (D^{-1})2^{\ln \ln (N) \left(-c/p +c/2-7/4\right)}.$$ Now, if (\[quantity\]) is $\geq 2^{7 \ln \ln (N)}$, we will have $$\gamma_{*}\left(\frac{\epsilon}{\lambda}\right)^{-1} = 2^{-\frac{7}{2} \ln \ln (N)}.$$ If (\[quantity\]) is $< 2^{7 \ln \ln (N)}$, we will have $$\gamma_{*} \left(\frac{\epsilon}{\lambda}\right)^{-1} = D^{1/2}2^{\ln \ln (N)(7/8-c/4+c/2p)}.$$ We note that $\frac{7}{8} - \frac{c}{4} + \frac{c}{2p} \leq -\frac{7}{2}$: rearranging, this is equivalent to $\frac{c}{4} - \frac{c}{2p} \geq \frac{35}{8}$, i.e. to $c\left(\frac{1}{2} - \frac{1}{p}\right)\geq \frac{35}{4}$, which holds by our choice of $c$. Thus, in either case, $$\gamma_{*} \left(\frac{\epsilon}{\lambda}\right)^{-1} \ll_C 2^{-\frac{7}{2}\ln \ln (N)}.$$ Inserting this into (\[mark\]), we find that $$\int_{\tilde{B}_{s}}\left| \left| \{a_n \psi_n(x)\}_{n \in I_{k^*,s}} \right|\right|_{V^p}^2 dx \ll_C \ln^{7/2}(N) 2^{-c\ln \ln (N)} 2^{-\frac{7}{2} \ln \ln (N)} \ll_C 2^{-c \ln \ln (N)}.$$ Now to bound (\[V2lastterm\]), we apply this to each of the $\leq 2^{c \ln \ln (N)}$ terms, yielding $O(1)$, completing the proof.
Acknowledgements ================ We thank Mark Rothlisberger for help with translation of related literature. [10]{} J. Bourgain, On Kolmogorov's rearrangement problem for orthogonal systems and Garsia's conjecture. Geometric aspects of functional analysis (1987–88), Lecture Notes in Math., 1376, Springer, Berlin, (1989) 209–250. M. Christ, A. Kiselev, Maximal functions associated to filtrations. J. Funct. Anal. 179 (2001), no. 2, 409–425. J. Doob, Stochastic processes. Reprint of the 1953 original. Wiley Classics Library. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, (1990). N. Etemadi, On some classical results in probability theory. Sankhya Ser. A 47 (1985), 215–221. G. Folland, Real analysis. Modern techniques and their applications. Second edition. Pure and Applied Mathematics (New York). A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, (1999). A. Garsia, Topics in almost everywhere convergence. Wadworth (1970). A. Garsia, Existence of almost everywhere convergent rearrangements for Fourier series of $L_{2}$ functions. Ann. of Math. (2) 79 (1964) 623–629. W. Hoeffding, Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58 (1963) 13–30. R. Jones, G. Wang, Variation inequalities for the Fejer and Poisson kernels. Trans. Amer. Math. Soc. 356 (2004), no. 11, 4493–4518. M. Krasnoselskii, J. Rutickii, Convex functions and Orlicz spaces. Translated from the first Russian edition by Leo F. Boron. P. Noordhoff Ltd., Groningen (1961). A. Lewko, M. Lewko, An Exact Asymptotic for the Square Variation of Partial Sum Processes, Preprint (arxiv.org). R. Oberlin, A. Seeger, T. Tao, C. Thiele, J. Wright, A variation norm Carleson theorem, Preprint. A. Olevskii, Fourier series with respect to general orthogonal systems. Translated from the Russian by B. P. Marshall and H. J. Christoffers. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 86.
Springer-Verlag, New York-Heidelberg, (1975). V. Petrov, Sums of independent random variables. Translated from the Russian by A. A. Brown. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82. Springer-Verlag, New York-Heidelberg, (1975). G. Pisier, Ensembles de Sidon et processus gaussiens. C. R. Acad. Sci. Paris Ser. A-B 286 (1978), no. 15, A671–A674. J. Qian, The $p$-Variation of Partial Sum Processes and the Empirical Process. The Annals of Probability, 26 (1998), no. 3, 1370–1383. B. Rosén, On an inequality of Hoeffding. Ann. Math. Statist. 38 (2) (1967), 382–392. H. Rosenthal, On the subspaces of $L^p$ ($p > 2$) spanned by sequences of independent random variables. Israel J. Math. 8 (1970), 273–303. W. Rudin, Trigonometric series with gaps. J. Math. Mech. 9 (1960) 203–227. `A. Lewko, Department of Computer Science, The University of Texas at Austin` *alewko@cs.utexas.edu* `M. Lewko, Department of Mathematics, The University of Texas at Austin` *mlewko@math.utexas.edu* [^1]: Supported by a National Defense Science and Engineering Graduate Fellowship.
Introduction {#secIntro} ============ The subject of investigation in the present work is the structure and energy of a stationary and cylindrically symmetric quantized vortex in an interacting multi–fluid mixture, which may consist of charged and uncharged superfluids and of normal fluids. This analysis has initially been motivated by the superfluid mixture commonly found in neutron star models, namely in the outer core region, where superfluid neutrons, superconducting protons and normal electrons are generally thought to coexist. However, due to the generality of the present approach, it is equally well applicable to superfluid and superconducting systems found in more common laboratory contexts, some of which will be discussed briefly in the concluding section \[secDiscussion\]. The study of superfluid mixtures has a long history, beginning with the pioneering work of Khalatnikov [@khal57], later followed by the analysis of Andreev and Bashkin [@ab75], who incorporated allowance for a (nondissipative) interaction between the superfluids. This effect is called “entrainment” (sometimes also “drag”) and plays a central role in the study of such fluid mixtures. The model has been further extended by Vardanian and Sedrakian [@vs81] to include charged fluids, and later a Hamiltonian formulation in the Newtonian framework has been developed by Mendell and Lindblom [@ml91]. The problem of vortices in such mixtures has been considered especially in the context of neutron stars, namely by Sedrakian and Shahabasian [@ss80], Alpar, Langer, Sauls [@als84], Mendell [@men91] and others. The covariant vortex solution in a single uncharged superfluid has been analyzed by Carter and Langlois [@cl95a], who have also considered the modifications due to the compressibility of the superfluid.
The present work is on the one hand a generalization of this analysis to arbitrary fluid mixtures, including charged ones and their coupling to electromagnetic fields, but on the other hand is restricted (for technical reasons) to the case of a “stiff” equation of state. This “stiff” case is characterized by the speed(s) of sound being equal to the speed of light, and is, within the limits of causality, the closest analogue to the common Newtonian incompressible models. Compressibility effects will be the subject of future work. Finally, we mention the previously found result [@cpl00] for a Newtonian vortex in a rotating superconductor, that the (hydrodynamic) vortex energy is strictly independent of the rotating “normal fluid” of positively charged ions, a result that will be found here to hold under much more general conditions. In the present work we will consider only stationary situations, which has two major advantages. First, it restricts the normal fluids to be in a state of [*rigid*]{} motion, and moreover in the [*same*]{} state of rigid motion, because normal fluids always possess some nonvanishing amount of viscosity and mutual friction. This even allows one to describe a solid component in the present framework as a “normal fluid”, because in the rigid state of motion the anisotropic effects of viscosity and elasticity become irrelevant. So we can for example conveniently describe a conventional laboratory superconductor as a superconducting–normal fluid mixture, consisting of superconducting electrons and a “normal” lattice of ions, as will briefly be discussed in the concluding section. The second and even more powerful consequence of stationarity is that we can use a [*conservative*]{} model based on a Lagrangian formalism that has been developed in recent years [@c87; @cl98] in a generally covariant language.
The use of a generally covariant instead of simply Newtonian description was also initially motivated by the prospect of application to neutron stars, where relativistic effects inevitably come into play, but this approach turns out to be generally more flexible and convenient for the hydrodynamic description of such systems, even if relativistic effects are not important. The plan of this work is as follows. In Sec. \[secDescription\] we introduce the relevant notions and equations of the covariant multi–fluid formalism on which the present analysis is based. In Sec. \[secSuperfluid\] we discuss the description of superfluids in this framework and the topology of the vortex–type configurations. Sec. \[secMongrel\] introduces what we call the “mongrel” representation of superfluid–normal mixtures, which consists of choosing the [*superfluid momenta*]{} and the [*normal currents*]{} as the basic variables of the description, and which will be particularly convenient for the present problem. In Sec. \[secVortex\] we specify the class of cylindrically symmetric and stationary vortex configurations and obtain the first integrals of motion for these solutions. Sec. \[secReference\] is devoted to the specification and the properties of the reference state, needed to separate the quantities attributed to the vortex from the fluid background. Finally, the relevant vortex stress–energy coefficients are integrated in Sec. \[secEnergy\], using the most general hydrodynamic modelization for the vortex core, and we find that the “rotation energy cancellation lemma” of [@cpl00] still holds under the more general conditions of the present work. In the concluding section \[secDiscussion\], we briefly illustrate the application of the foregoing results to some of the well known examples of superfluid and superconducting systems.
Covariant description of perfect fluid mixtures {#secDescription} =============================================== The general class of (non–dissipative) mixtures of charged or neutral perfect fluids has been shown by Carter [@c87] to be describable by an elegant covariant action principle. In this section we will briefly introduce the part of the formalism and notations that will be relevant to the present work. In the absence of electromagnetic effects, a mixture of perfect fluids can be described by a Lagrangian density $\LM$ that depends only on the particle number currents $n^\a_\X$, where late Latin indices, $\X$, $\Y$ etc., enumerate the different fluid constituents. Variation of $\LM$ with respect to the currents, $$\mu^\X_\a \equiv \frac{\partial \LM}{\partial n^\a_\X},$$ \[equDefMomentum\] defines the [*dynamical*]{} momenta per particle $\mu_\a^\X$ as the conjugate variables of the currents $n^\a_\X$ with respect to $\LM$. Here and in the following we use implicit summation (except otherwise stated) over identical spacetime as well as constituent indices. Legendre transformation with respect to the currents, i.e. $$\P \equiv \LM - n^\a_\X\, \mu_\a^\X,$$ defines the “Hamiltonian density” $\P$ as a function of the dynamic momenta $\mu_\a^\X$. This function only exists for nondegenerate systems, that is, if the functions $\mu_\a^\X(n^\b_\Y)$ defined in (\[equDefMomentum\]) are invertible. The conjugate relations can then be written as $$n^\a_\X = -\frac{\partial \P}{\partial \mu_\a^\X}.$$ \[equDefCurrent\] Furthermore, the form of these relations is constrained by the requirement of covariance, namely $\P$ (as well as $\LM$) has to be a [*scalar*]{} density, and can therefore only depend on scalars, i.e. on $\mu^\X_\a \mu^{\Y\a}$. This restricts relation (\[equDefCurrent\]) to be of the form $$n^\a_\X = K_{\X\Y}\, \mu^{\Y\a},$$ \[equEntrainment\] where the (necessarily symmetric) matrix $K_{\X\Y}$ is defined as $$K_{\X\Y} \equiv -2\frac{\partial \P}{\partial(\mu^\X_\a \mu^{\Y\a})}.$$ \[equDefK\] The condition of a non–degenerate system is equivalent to $\det(K_{\X\Y}) \neq 0$, and so we can write the inverse relation $$\mu_\a^\X = K^{\X\Y}\, n_{\Y\a}, \qquad K^{\X\Y} K_{\Y\Z} \equiv \delta^\X_\Z.$$
\[equEntrainment2\] In the case of noninteracting fluids, the Hamiltonian $\P$ would not depend on the crossed scalars $\mu^\X_\a \mu^{\Y\a}$ with $\X \neq \Y$, but only on the diagonal terms $\mu^\X_\a \mu^{\X\a}$. In this case the matrix $K_{\X\Y}$ would be diagonal, and each current would be aligned with the respective momentum, similar to the case of a single perfect fluid, but any interaction terms between different fluid constituents in the Hamiltonian will lead to nondiagonal components of $K_{\X\Y}$, and therefore the currents will become linear combinations (in each point) of the respective momenta. This (nondissipative) effect is called “entrainment” and has first been considered for superfluid mixtures of $^3$He and $^4$He by Andreev and Bashkin [@ab75]. Before we come to the equations of motion, we need to extend our description to include the electromagnetic field and its coupling to charged fluids. This is done via the standard “minimal coupling” prescription that consists of defining the [*total*]{} Lagrangian density $\L$ as $$\L \equiv \LM + j^\a A_\a + \frac{1}{16\pi}\, F_{\a\b} F^{\a\b},$$ \[equDefL\] where we are using units with $c = 1$. The electric current $j^\a$ is defined as $$j^\a \equiv e^\X\, n^\a_\X,$$ \[equElCurrent\] with $e^\X$ being the charge per particle of the constituent $\X$. The electromagnetic 2–form $F_{\a\b}$ is defined as the exterior derivative of the gauge 1–form $A_\a$, i.e. $$F_{\a\b} \equiv 2 \nabla_{[\a}A_{\b]},$$ \[equDefF\] where square brackets indicate (averaged) index antisymmetrization. The symbol $\nabla_\a$ denotes the usual covariant derivative, but we note that because of the antisymmetrization, exterior derivatives are [*independent*]{} of the affine connection, so we could as well replace $\nabla_\a$ by the partial derivative $\partial_\a$. The conjugate variables of the currents $n_\X^\a$ with respect to the [*total*]{} Lagrangian $\L$ are the [*canonical*]{} momenta $\pi^\X_\a$, defined as $$\pi^\X_\a \equiv \frac{\partial \L}{\partial n^\a_\X},$$ which can be seen from (\[equDefMomentum\]) and (\[equDefL\]) to be directly related to the dynamical momenta $\mu^\X_\a$, namely $$\pi^\X_\a = \mu^\X_\a + e^\X A_\a.$$
\[equPiMu\] The equations of motion are to be derived from the total Lagrangian $\L$ via an appropriate variational principle. Imposing invariance of the action under free (infinitesimal) variations of the gauge field $A_\a$ leads to the Maxwell source equation, $$\nabla_\b F^{\a\b} = 4\pi j^\a.$$ \[equMaxwell\] However, the equations of motion for the fluids cannot be derived via free variations of the currents $n_\X^\a$, as this would simply lead to the trivial equations $\pi^\X_\a = 0$. This is because free variations of the currents contain too many degrees of freedom, which results in overdetermined equations of motion, therefore the variations have to be [*constrained*]{}. It has been shown in [@c87] that variations with the correct number of degrees of freedom are generated by infinitesimal displacements of the worldlines of fluid particles. These worldline variations satisfy the physical constraint of conserving the number of particles, and they result in the correct equations of motion for the fluids. Without entering into the technical details of this procedure (see [@c87; @cl98]), the resulting equation of motion for each fluid $\X$ is found as (no sum over $\X$) $$2\, n_\X^\b\, \nabla_{[\b}\pi^\X_{\a]} + \pi^\X_\a\, \nabla_\b n_\X^\b = 0,$$ and by contracting this equation with $n_\X^\a$, we see that it implies that the currents are conserved, i.e. $\nabla_\a n^\a_\X = 0$, so the equations of motion reduce to the simple form of a vorticity conserving flow, namely (no sum over $\X$) $$n_\X^\b\, w^\X_{\b\a} = 0,$$ \[equEOM\] where the (canonical) vorticity 2–form $w^\X_{\a\b}$ is defined as the exterior derivative of the canonical momentum $\pi^\X_\a$, i.e. $$w^\X_{\a\b} \equiv 2\nabla_{[\a}\pi^\X_{\b]}.$$ The very compact form (\[equEOM\]) of the equation of motion can be seen to “reduce” in the nonrelativistic limit to the (much less compact) Euler equation of a charged fluid in electromagnetic fields, and possibly subject to further potential forces.
This is an example that shows the advantage and convenience of the covariant formalism, especially for more complex applications like interacting mixtures of possibly charged fluids in electromagnetic fields, as considered in the present analysis. And finally, the stress–energy tensor $T^{\a\b}$ is found [@cl98] in the form $$T^\a{}_\b = n_\X^\a\, \mu^\X_\b + \P\, g^\a{}_\b + \frac{1}{4\pi}\left( F^{\a\gamma} F_{\b\gamma} - \frac{1}{4}\, F^{\gamma\delta}F_{\gamma\delta}\, g^\a{}_\b\right),$$ \[equStressEnergy\] which (in the absence of external forces) satisfies the equation of (pseudo) conservation, $\nabla_\a T^\a{}_\b = 0$. From the form of the stress–energy tensor (\[equStressEnergy\]) we see that $\P$ plays the role of a generalized pressure, which reduces to the ordinary pressure in the case of a single fluid. Properties of superfluids and topology of vortex solutions {#secSuperfluid} ========================================================== We want to allow for some of the fluids to be superfluid or superconducting, and we will denote these constituents by capital Greek indices $\sX$, $\sY$ etc. For “normal” fluids (i.e. not superfluid or superconducting), we will use early Latin capital indices $\A,\,\B$ etc., so a sum over all fluids (indexed by $\X,\,\Y$ etc.) can be written as $\sum_\X = \sum_\sX + \sum_\A$. Apart from the electric charge there seems to be no fundamental difference between superfluids and superconductors, and therefore we will in the following refer to them as “uncharged” and “charged superfluids” respectively. We note that the present treatment considers superfluids as a subclass of perfect fluids, and therefore entails some restrictions regarding its application to strongly anisotropic superfluid phases like those found in $^3$He [@LesHouches99], which are governed by additional “internal” degrees of freedom like the spin and angular momentum of the Cooper pairs.
But at least for situations where these additional degrees of freedom of the order parameter can be considered as “frozen” and the dynamics mainly governed by the superfluid “phase” to be discussed in the following, the present approach should still represent an acceptable approximation. We distinguish the (connected) spacetime domain $\D^\sX$ occupied by the superfluid constituent $\sX$ from the subset of its respective “superfluid domain” $\SD^\sX \subseteq \D^\sX$, which corresponds to what is sometimes called the “bulk”. In the superfluid domain $\SD^\sX$ the canonical momentum $\pi^\sX_\a$ always obeys the constraint $$\pi^\sX_\a = \hbar\, \nabla_\a \varphi^\sX,$$ \[equQuantisation\] where the “phase” $\varphi^\sX$ is a continuously differentiable scalar on $\SD^\sX$, that can be multi–valued, but the differences between values in the same point are restricted to be integer multiples of $2\pi$. This is reminiscent of an angle variable and reflects the role of $\varphi^\sX$ as a quantum phase $e^{i\varphi}$. In addition to the property of (quantized) potential flow (\[equQuantisation\]), the superfluid $\sX$ in its superfluid domain $\SD^\sX$ is [*perfectly inviscid*]{}. In that sense a superfluid is probably the best representation of a perfect fluid in nature. On the other hand, outside its superfluid domain, i.e. in $\D^\sX \backslash \SD^\sX$, the superfluid is not constrained to potential flow (\[equQuantisation\]) and can also possess some viscosity like a “normal” fluid. The property (\[equQuantisation\]) implies that the canonical vorticity $w^\sX_{\a\b}$ vanishes on the whole superfluid domain $\SD^\sX$, i.e. $$w^\sX_{\a\b} = 2\nabla_{[\a}\pi^\sX_{\b]} = 0,$$ \[equIrrot\] which states that the superfluid is irrotational, and implies that the equation of motion (\[equEOM\]) is automatically satisfied on $\SD^\sX$.
Irrotational flow is of course not restricted to superfluids, and the vortex–type configurations to be discussed later have been known long before the discovery of superfluids; familiar examples are tornados or the water flowing out of the drain of a bath tub. But the multi–valuedness of the “phase” of a perfect fluid in a state of potential flow is [*not*]{} subject to a “quantization” condition of integer multiples of $2\pi$, and a perfect fluid only exists as an idealization of a “real” fluid with some nonvanishing amount of viscosity, contrary to the completely inviscid superfluids in the superfluid domain. Furthermore there is an important energy gain associated with the superfluid domain $\SD^\sX$, the so–called “condensation energy”. Superfluids consequently try to maximize their superfluid domain $\SD^\sX$ (and thereby to satisfy (\[equQuantisation\])) as far as possible within the limits of the fluid domain $\D^\sX$. One of the most important consequences of (\[equQuantisation\]) is that it allows for the topologically stable flow configurations known as vortices, which are characterized by the property that different values of the (multi–valued) phase $\varphi^\sX$ in the same point can be connected by closed paths $\Gamma$ that lie entirely in the superfluid domain $\SD^\sX$. As stated above, the difference can only be of the form $2\pi N^\sX$, where the integer $N^\sX$ is called the “winding number”. The winding number $N^\sX$ of a closed path can be written as $$N^\sX = \frac{1}{2\pi}\oint_\Gamma \nabla_\a \varphi^\sX\, ds^\a, \qquad \Gamma \subset \SD^\sX.$$ \[equWinding\] It is evident that $N^\sX$ does not change for continuously deformed paths $\Gamma$, and $N^\sX$ is therefore a topological constant for each equivalence class of closed paths in $\SD^\sX$.
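The winding integral (\[equWinding\]) has a direct discrete analogue: sum the branch-adjusted increments of a sampled phase around a closed path and divide by $2\pi$. The sketch below is a minimal illustration of this counting, not part of the formalism; it assumes the path is sampled finely enough that the true phase increment between successive points stays below $\pi$:

```python
import math

def winding_number(phases):
    """Discrete analogue of N = (1/2*pi) times the closed-loop integral
    of grad(phi) ds: sum the phase increments around a closed path,
    mapping each raw jump to the branch in (-pi, pi].  Each entry of
    `phases` is the phase at one path point, known only modulo 2*pi."""
    total = 0.0
    for a, b in zip(phases, phases[1:] + phases[:1]):
        d = (b - a) % (2.0 * math.pi)   # raw jump, in [0, 2*pi)
        if d > math.pi:                 # pick the short branch
            d -= 2.0 * math.pi
        total += d
    return round(total / (2.0 * math.pi))
```

A single vortex sampled at $n$ points, $\varphi_k = 2\pi k/n \bmod 2\pi$, gives winding number 1; a constant phase gives 0; reversing the orientation flips the sign, mirroring the topological invariance of $N^\sX$ under continuous deformation.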
A nonvanishing $N^\sX$ implies that the path cannot be continuously contracted to a point, because it would necessarily have to cross at least one point $P\not\in \SD^\sX$ where the phase $\varphi^\sX$ is not defined, and therefore $\SD^\sX$ is necessarily multiply connected if there are nonvanishing winding numbers $N^\sX$. The “mongrel” representation of superfluid–normal mixtures {#secMongrel} ========================================================== In the previous section we have seen that a superfluid on its superfluid domain is generally characterized by a constraint (\[equQuantisation\]) on the (canonical) superfluid momentum, while “normal” fluids are generally more easily described in terms of their particle number currents. For this reason it will turn out to be extremely convenient to pass from the “pure” type of representation used in (\[equEntrainment\]), which expresses all the currents in terms of all the momenta (or vice–versa), to a “mongrel” representation where the [*superfluid currents*]{} and [*normal momenta*]{} are expressed in terms of the [*superfluid momenta*]{} and [*normal currents*]{}. This type of representation has for example been used tacitly as the basis of Landau's two–fluid model for superfluid $^4$He [@Landau], which was formulated in terms of a “superfluid velocity”, representing in fact the irrotational superfluid momentum of (\[equQuantisation\]) (divided by a fixed mass), and of a “normal fluid” velocity, which represents the real mean velocity of the viscous gas of excitations. This will be seen in some more detail in the discussion of the two–fluid model in the concluding section \[secDiscussion\].
In order to pass to this mongrel representation, we decompose the entrainment matrix $K_{\X\Y}$ into a purely superfluid symmetric matrix $S_{\sX\sY}$, a symmetric matrix $V_{\A\B}$ of purely normal (“viscous”) fluids and a “mixed” superfluid–normal matrix $M_{\sX\A}$, so (\[equEntrainment\]) can be written in this decomposition as $$\begin{aligned} \cvec{n}_\sX &=& S_{\sX\sY}\; \cvec{\mu}^\sY + M_{\sX\A}\; \cvec{\mu}^\A\,,\nonumber\\ \cvec{n}_\A &=& M_{\sY\A}\; \cvec{\mu}^\sY + V_{\A\B}\; \cvec{\mu}^\B\,, \label{equPure2}\end{aligned}$$ where the indices $\sX,\,\sY$ label superfluid constituents and $\A,\,\B$ label normal constituents. For clarity we use in this section [**bold**]{} typeset for denoting spacetime vectors and covectors, as the spacetime indices are not important here and can be put in any consistent way. Applying the inverse matrix $V^{-1}$ to (\[equPure2\]), we can easily rewrite these relations in the “mongrel” form $$\begin{aligned} \cvec{\mu}^\A &=& -\V^\A{}_\sX\; \cvec{\mu}^\sX + \V^{\A\B}\; \cvec{n}_\B\,, \label{equLC1}\\ \cvec{n}_\sX &=& \S_{\sX\sY}\; \cvec{\mu}^\sY + \V^\A{}_\sX\; \cvec{n}_\A\,, \label{equLC2}\end{aligned}$$ where we defined the new matrices $$\begin{aligned} \V^{\A\B} &\equiv& \left(V^{-1}\right)^{\A\B}\,, \qquad \V^\A{}_\sX \equiv \V^{\A\B}\, M_{\sX\B}\,,\nonumber\\ \S_{\sX\sY} &\equiv& S_{\sX\sY} - M_{\sX\A}\, \V^{\A\B}\, M_{\sY\B}\,. \label{equMatrices}\end{aligned}$$ In this representation it is easy to see that terms of the form $\cvec{n}_\X\, \cvec{\mu}^\X$, e.g. in the stress–energy tensor (\[equStressEnergy\]), can be written in the “quasi separated” form $$\cvec{n}_\X\, \cvec{\mu}^\X = \cvec{\mu}^\sX\, \S_{\sX\sY}\; \cvec{\mu}^\sY + \cvec{n}_\A\; \V^{\A\B}\; \cvec{n}_\B\,, \label{equCross}$$ where the effect of “mixed” entrainment between superfluids and normal fluids is hidden in the use of the matrix $\S$. As we consistently wrote lower constituent indices for currents and upper constituent indices for momenta, we can now use this convention to introduce a very convenient and suggestive notation, namely to use $\S_{\sX\sY}$ to [*lower*]{} superfluid indices $\sX,\,\sY$ etc., and $\V^{\A\B}$ to [*raise*]{} normal fluid indices $\A,\,\B$ etc. This can formally be understood as choosing $\S$ and $\V$ as the [*metric tensors*]{} in the respective constituent vector spaces of the superfluids and the normal fluids, but can also just be seen as a shorthand notation for $$\cvec{\mu}_\sX \equiv \S_{\sX\sY}\; \cvec{\mu}^\sY\,, \qquad \cvec{n}^\A \equiv \V^{\A\B}\; \cvec{n}_\B\,. \label{equMetrics}$$
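The pure-to-mongrel change of variables can be checked numerically in the simplest case of one superfluid and one normal fluid, where $S$, $M$, $V$ are scalars. This is a minimal sketch with illustrative numbers (not a physical equation of state); it verifies that the mongrel relations reproduce the pure ones.

```python
# One superfluid (s) and one normal fluid (n); the entrainment coefficients
# S, M, V are illustrative numbers, not taken from any real equation of state.
S, M, V = 2.0, 0.3, 1.5

# "Pure" representation: currents from momenta (per spacetime component).
def pure(mu_s, mu_n):
    n_s = S * mu_s + M * mu_n
    n_n = M * mu_s + V * mu_n
    return n_s, n_n

# "Mongrel" representation: superfluid current and normal momentum
# from superfluid momentum and normal current.
calV = 1.0 / V            # V^{-1}
calV_mix = calV * M       # V^{-1} M
calS = S - M * calV * M   # script-S = S - M V^{-1} M

def mongrel(mu_s, n_n):
    mu_n = -calV_mix * mu_s + calV * n_n
    n_s = calS * mu_s + calV_mix * n_n
    return n_s, mu_n

mu_s, mu_n = 0.7, -1.2
n_s, n_n = pure(mu_s, mu_n)
n_s2, mu_n2 = mongrel(mu_s, n_n)
print(abs(n_s - n_s2) < 1e-12 and abs(mu_n - mu_n2) < 1e-12)  # -> True
```

The same block structure carries over to any number of constituents, with $\S = S - M V^{-1} M^T$ playing the role of a Schur complement of the normal-fluid block.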
In this notation, stress–energy contributions take the simple and concise form $$\cvec{n}_\X\, \cvec{\mu}^\X = \cvec{\mu}^\sX\, \cvec{\mu}_\sX + \cvec{n}^\A\, \cvec{n}_\A\,, \label{equCross2}$$ where all the information about entrainment has been encoded in the respective metrics of the superfluid and normal constituent spaces. We note that the superfluid constraint (\[equQuantisation\]) generally applies to the [*canonical*]{} momenta $\cvec{\pi}^\sX$, which only in the case of uncharged superfluids coincide with the dynamical momenta $\cvec{\mu}^\sX$. This implies a qualitative difference between charged and uncharged superfluids, and it will be useful to separate the superfluid constituent space into the two orthogonal subspaces that are naturally defined by the superfluid “charge vector” with components $e^\sX$. The respective subspaces are defined by parallel and orthogonal projection via the projection tensors $$\para^\sX{}_\sY \equiv {e^\sX\, e_\sY \over e^\sZ\, e_\sZ}\,, \qquad \ortho^\sX{}_\sY \equiv \delta^\sX{}_\sY - \para^\sX{}_\sY\,, \label{equProjections}$$ where again we have used the notation $e_\sX \equiv \S_{\sX\sY}\, e^\sY$. Now we can decompose constituent vectors, e.g. the superfluid momenta as $\cvec{\mu}^\sX = \cvec{\mu}^\sX_\para + \cvec{\mu}^\sX_\ortho$, where $$\cvec{\mu}^\sX_\para \equiv \para^\sX{}_\sY\; \cvec{\mu}^\sY\,, \qquad \cvec{\mu}^\sX_\ortho \equiv \ortho^\sX{}_\sY\; \cvec{\mu}^\sY\,.$$ The subtlety of this notation is that even though a “parallel” constituent vector $\cvec{\mu}^\sX_\para$ only has nonvanishing components for charged superfluid constituents, and respectively $\cvec{\mu}^\sX_\ortho$ only for uncharged superfluids, the [*values*]{} of the respective components may depend on all the other superfluids [*and*]{} normal fluids, as the projection tensors contain the entrainment matrix $\S$. The stationary cylindrical vortex configuration {#secVortex} =============================================== In this work we will consider the simplest, because maximally symmetric type of vortex configuration, which is characterized by both stationarity and cylindrical symmetry.
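The defining algebra of these charge projectors (idempotence, complementarity, mutual annihilation) can be verified directly for a small numeric example. The two-constituent metric and charge vector below are illustrative values only.

```python
# Two superfluid constituents; the symmetric matrix calS plays the role of
# the constituent-space metric (illustrative numbers, one charged constituent).
calS = [[1.5, 0.4], [0.4, 0.8]]
e_up = [2.0, 0.0]                                                        # e^X
e_dn = [sum(calS[i][j] * e_up[j] for j in range(2)) for i in range(2)]   # e_X
ee = sum(e_up[i] * e_dn[i] for i in range(2))                            # e^X e_X

# Parallel and orthogonal projection tensors in constituent space.
para = [[e_up[i] * e_dn[j] / ee for j in range(2)] for i in range(2)]
orth = [[(1.0 if i == j else 0.0) - para[i][j] for j in range(2)]
        for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Projection-tensor algebra: idempotent and mutually annihilating.
assert all(abs(matmul(para, para)[i][j] - para[i][j]) < 1e-12
           for i in range(2) for j in range(2))
assert all(abs(matmul(orth, orth)[i][j] - orth[i][j]) < 1e-12
           for i in range(2) for j in range(2))
assert all(abs(matmul(para, orth)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("projections OK")
```

Note that the projectors are not symmetric matrices: they carry one upper and one lower constituent index, and the entrainment metric enters through the lowered charge components $e_\sX$.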
This means that there are three independent, commuting (in the sense of Lie brackets) symmetry generators $k^\a$, $l^\a$ and $m^\a$, which can be taken to correspond to time translations, longitudinal space translations (along the vortex axis) and axial rotations, respectively. The geometric picture of the symmetry surfaces generated by $k^\a$, $l^\a$ and $m^\a$ are cylindrical hypersurfaces that build a well behaved foliation of spacetime, and can therefore be parametrized by a “radial” coordinate $r$. Let us introduce the corresponding cylindrical coordinates $\{t,\,z,\,\varphi,\,r\}$, adapted to these symmetries, i.e. $$k^\a = \{1,\,0,\,0,\,0\}\,, \qquad l^\a = \{0,\,1,\,0,\,0\}\,, \qquad m^\a = \{0,\,0,\,1,\,0\}\,.$$ The symmetry requirements and the property of conserved currents (\[equEOM\]), i.e. $\nabla_\a n_\X^\a = 0$, restrict the flow to be purely helical, i.e., to have no radial components. Therefore the currents are confined to timelike hypersurfaces generated by the symmetry vectors and can be written as $$n_\X^\a = \{ n_\X^t(r),\; n_\X^z(r),\; n_\X^\varphi(r),\; 0 \}\,. \label{equCurrents}$$ A further consequence of the symmetry is that any physically well defined quantity $Q$ of the flow must be invariant under symmetry translations, which means that the corresponding Lie derivatives must vanish, i.e. ${\cal L}_\xi\, Q = 0$, for $\xi^\a$ being any linear combination (with constant coefficients) of the symmetry vectors $k^\a$, $m^\a$ and $l^\a$. This also holds for gauge dependent quantities like the canonical momentum $\pi^\X_\a$, provided we fix the gauge in a way that respects the same symmetries, i.e. when ${\cal L}_\xi\, A_\a = 0$. Such a gauge choice is given by $$A_\a = \{ A_t(r),\; A_z(r),\; A_\varphi(r),\; 0 \}\,. \label{equGaugeChoice}$$ The components $A_t$ and $A_z$ are still subject to the residual gauge freedom of an additive constant, i.e. $$A_t \rightarrow A_t + {\cal G}_t\,, \qquad A_z \rightarrow A_z + {\cal G}_z\,, \label{equGaugeFreedom}$$ but because $\varphi$ is an angle variable, corresponding to a compact dimension, the gauge of the axial component $A_\varphi$ is completely fixed by (\[equGaugeChoice\]).
This is most easily seen by applying Stokes’ theorem to a $\{r,\,\varphi\}$–surface integral over $F_{\a\b}$, i.e. $\oint_\Gamma A_\a\, {\rm d}x^\a = \int F_{\a\b}\, {\rm d}S^{\a\b}$, which in this trivial symmetric case just reduces to $2\pi\, A_\varphi(r) = \int F_{\a\b}\, {\rm d}S^{\a\b}$, and so the gauge is fixed as $$A_\varphi(0) = 0\,. \label{equPhiGauge}$$ With the gauge choice (\[equGaugeChoice\]), the symmetry condition for $\pi^\X_\a$ reads $$\left({\cal L}_\xi\, \pi^\X\right)_\a = 0\,, \label{equSymmetry}$$ where $\xi^\a$ can be any linear combination of the three symmetry generators. The well known Cartan formula for the Lie derivative of a p–form $w_{\a\b\gamma\ldots}$, namely $$\left({\cal L}_\xi\, w\right)_{\a\b\ldots} = (p+1)\, \xi^\sigma \nabla_{[\sigma}\, w_{\a\b\ldots]} + p\, \nabla_{[\a}\!\left(\xi^\sigma w_{|\sigma|\b\ldots]}\right)\,,$$ can be applied to the 1–form $\pi^\X_\a$ in (\[equSymmetry\]), and so we obtain the explicit symmetry condition, $$2\, \xi^\sigma \nabla_{[\sigma}\, \pi^\X_{\a]} + \nabla_\a\!\left(\xi^\sigma \pi^\X_\sigma\right) = 0\,. \label{equSymmetry2}$$ For superfluids (in the superfluid domain), the first term vanishes because of the irrotationality property (\[equIrrot\]), and so the second term provides us with three independent integrals of motion, corresponding to the three symmetry generators, namely $$-E^\sX \equiv k^\a\, \pi^\sX_\a\,, \qquad L^\sX \equiv l^\a\, \pi^\sX_\a\,, \qquad M^\sX \equiv m^\a\, \pi^\sX_\a\,, \label{equConstants}$$ interpretable respectively as the [*energy*]{}, (canonical) [*longitudinal momentum*]{}, and (canonical) [*angular momentum*]{} per particle. While $E^\sX$ and $L^\sX$ are generally subject to the residual gauge freedom (\[equGaugeFreedom\]) of an additive constant (except in the uncharged cases $e^\sX = 0$), the axial constant $M^\sX$ is [*not*]{}, because there is no gauge freedom for $A_\varphi$. In order to calculate the winding numbers $N^\sX$ of the vortex by (\[equWinding\]), we have to choose a path $\Gamma$ enclosing the vortex axis. Such a path can always be continuously deformed into a path generated by $m^\a$ alone, and so by (\[equConstants\]) the integration simply yields $$N^\sX = {M^\sX \over \hbar}\,. \label{equWinding2}$$
Therefore the constant (canonical) angular momentum per particle, $M^\sX$, is an integer multiple of $\hbar$, the fundamental quantum of angular momentum, and the corresponding angular momentum “quantum number” is just the winding number $N^\sX$. The superfluid canonical momenta are thereby completely determined (in the superfluid domain) by the integrals of motion (\[equConstants\]) (modulo the gauge freedom (\[equGaugeFreedom\])), namely $$\pi^\sX_\a = \{-E^\sX,\; L^\sX,\; \hbar\, N^\sX,\; 0\}\,, \qquad N^\sX \in {\mathbb Z}\,, \label{equSuperfluidPi}$$ where the vanishing of the radial component $\pi^\sX_r$ follows from the helical direction (\[equCurrents\]) of the currents $n_\X^\a$, and the entrainment relation (\[equEntrainment\]) together with (\[equPiMu\]) and the gauge choice (\[equGaugeChoice\]). In a more realistic treatment, the normal fluids are expected to have some amount of viscosity, in which case the condition of stationarity, which excludes all dissipative motion, restricts all the normal currents to be comoving with the same uniform rotation $\Omega$, i.e. $$n_\A^\a = n_\A^t\, v^\a\,, \qquad v^\a \equiv k^\a + \Omega\, m^\a = \{ 1,\; 0,\; \Omega,\; 0\}\,. \label{equNormalCurrent}$$ We could also have allowed for a constant longitudinal velocity along $l^\a$, but this is trivially annihilated by a Lorentz boost, and so we have chosen our reference frame at rest with respect to the longitudinal motion of the normal fluids. The symmetry condition (\[equSymmetry\]) along the flowlines of the normal fluids, i.e. with $\xi^\a \propto v^\a$, together with the equation of motion (\[equEOM\]) yields one integral of motion for each normal fluid, namely $$-\bar{E}^\A \equiv v^\a\, \pi^\A_\a\,.$$ With the given restrictions on the currents (\[equCurrents\]) and (\[equNormalCurrent\]), the integrals of motion $E^\sX$, $L^\sX$, $N^\sX$, $\bar{E}^\A$ and $\Omega$ are sufficient for the equations of motion (\[equEOM\]) to be satisfied.
But in order to actually integrate these differential equations, one is still left with the generally nontrivial problem of solving Einstein’s equations for the spacetime metric $g_{\a\b}$, together with Maxwell’s equation (\[equMaxwell\]) for the gauge field $A_\a$. However, for most vortex applications of practical interest (including those in neutron stars), the gravitational self–interaction of the vortex can be completely neglected, so the background metric can in any case be considered as given in advance. Furthermore, as the radial dimensions of vortices are generally much smaller than the lengthscale of gravitational curvature, the local spacetime metric of the vortex can safely be considered as flat, and so in cylindrical coordinates we can write it as $$ds^2 \equiv g_{\a\b}\, dx^\a dx^\b = -dt^2 + dz^2 + r^2 d\varphi^2 + dr^2\,. \label{equMetric}$$ The remaining differential equation to be solved is (\[equMaxwell\]) for the electromagnetic gauge field $A_\a$. The necessary coefficients of the metric connection can easily be calculated for the flat metric (\[equMetric\]), and we find the explicit Maxwell equations for the gauge field $A_\a$ in the form $$\left( r\, A_t'\right)' = 4\pi\, r\, j^t\,, \qquad -\left(r\, A_z'\right)' = 4\pi\, r\, j^z\,, \label{equMax1}$$ $$-\left({A_\varphi' \over r}\right)' = 4\pi\, r\, j^\varphi\,, \label{equMaxwell2}$$ where the prime denotes differentiation with respect to $r$. Equations (\[equMax1\]) describe a radial electric field $A_t'$ created by the charge distribution $j^t$, and an azimuthal magnetic field $A_z'$ around a longitudinal current $j^z$. These equations will result in exponentially “screened” solutions, typical of charged superfluids. As we saw in section \[secSuperfluid\], the vortex is characterized by nonvanishing winding numbers $N^\sX$, which by (\[equPiMu\]) and (\[equEntrainment\]) are seen to be directly related to the axial components $j^\varphi$ and will result in a screened longitudinal magnetic field $B_z$, which is conventionally defined as $$B_z \equiv {A_\varphi' \over r}\,. \label{equDefB}$$
Reference state and vortex properties {#secReference} ===================================== The reference state ------------------- In the previous section we have completely specified the fluid configuration containing a vortex, but in order to separate the quantities attributed to the vortex from the fluid “background”, we first have to specify this reference “background” state, which will be denoted by the subscript $\bg$. For any quantity $Q$, the part attributed to the vortex is defined as the difference with respect to the corresponding reference value $Q_\bg$, i.e. $$\delta Q \equiv Q - Q_\bg\,.$$ The reference state should respect at least the same symmetries as the vortex state, and can therefore, by the reasoning in Sec. \[secVortex\], be characterized completely by constants $E^\sX_\bg$, $L^\sX_\bg$, $N^\sX_\bg$, $\bar{E}^\A_\bg$ and $\Omega_\bg$. Furthermore, we naturally want the reference background to be “vortex free”, which means that the topological constants characterizing a vortex have to vanish, i.e. $N^\sX_\bg = 0$. Another natural prescription is that the uniform rotation of the normal fluids should be the same in the reference state as in the vortex state, i.e. $\Omega_\bg = \Omega$. However, there is no such “natural” choice for the remaining constants $E^\sX_\bg$, $L^\sX_\bg$ and $\bar{E}^\A_\bg$, if one allows for compressibility of the fluids. The compressibility is described by the fact that the entrainment matrix (\[equDefK\]) is in general a function of the momentum scalars $\mu^\X_\a \mu^{\Y\a}$, and therefore, if the momenta differ from their reference values, this generally entails that $K_{\X\Y} \not= K^\bg_{\X\Y}$. Now, if we consider for example the $t$ component of the relation (\[equEntrainment\]) between currents and momenta, and if for illustration we suppose for a moment that there are no normal fluids, then $n^t_\sX = K_{\sX\sY}\, \mu^{\sY t}$, and the particle densities depend on the momenta both explicitly and through $K_{\sX\sY}$. Choosing for example the straightforward reference constants $E^\sX_\bg = E^\sX$ and $L^\sX_\bg = L^\sX$, leads to changed particle densities $n^t_\sX \not= n^{t\,\bg}_\sX$, and especially changed [*mean*]{} particle number densities (in the region of integration with the upper cutoff radius $\ru$), i.e. $\langle n^t_\sX \rangle \not= \langle n^{t\,\bg}_\sX \rangle$.
We see that with this choice of reference constants, we compare a vortex state with a reference state that does not have the same number of particles in the region of integration. Another physically interesting choice of reference state would therefore rather consist in readjusting the reference constants $E^\sX_\bg$ in such a way as to obtain the same [*mean*]{} particle number densities (and therefore total number of particles in the region of integration) in the reference state. These different choices have been analyzed and properly accounted for in [@cl95b] for the case of a vortex in an uncharged superfluid, and are found to be inequivalent to each other, even in the limit $\ru \rightarrow \infty$. Due to the additional complications of multiple entrainment and charged fluids in the present analysis, we will postpone this problem of compressibility effects to future work, and restrict our attention here to the simpler case of a “stiff” equation of state that is characterized by a constant entrainment matrix, i.e. $$\delta K_{\X\Y} = 0 \quad\Longrightarrow\quad K^\bg_{\X\Y} = K_{\X\Y}\,. \label{equStiff}$$ In this “stiff” case, the most natural reference state is unambiguously characterized just by choosing the longitudinal superfluid momentum components $E^\sX_\bg$, $L^\sX_\bg$ to be the same as in the vortex state, i.e. $$\pi^{\sX\,\bg}_\a \equiv \{-E^\sX,\; L^\sX,\; 0,\; 0\}\,, \label{equRefSuperfluid}$$ while the constants $\bar{E}^\A$ can be fixed by taking the normal particle densities to be unchanged with respect to the vortex state, i.e. $$n^{\bg\,\a}_\A \equiv n_\A^t\, v_\bg^\a\,, \qquad v_\bg^\a = \{1,\; 0,\; \Omega,\; 0\}\,. \label{equRefNormal}$$ Due to the assumption of a stiff equation of state (\[equStiff\]), all longitudinal current components $n^t_\X$ and $n^z_\X$ remain unchanged in the reference state. Furthermore we will assume the electric current to vanish in the reference state, i.e.
$$j^\a_\bg = 0\,, \label{equRefElectric}$$ which implies that the longitudinal electric current also vanishes in the vortex state, $$j^t = j^t_\bg = 0\,, \qquad j^z = j^z_\bg = 0\,,$$ and so we also have from (\[equMax1\]) (in an appropriate gauge) $$A_t = A_t^\bg = 0\,, \qquad A_z = A_z^\bg = 0\,. \label{equRefA}$$ The reference state is now completely fixed by the properties (\[equRefSuperfluid\]), (\[equRefNormal\]) and (\[equRefElectric\]). The vortex modifies only the $\varphi$ components of currents and momenta, so it will be convenient to introduce for covectors $Q_\a$ the short notation $\proper{Q}$ for the part of the $Q_\varphi$ that is due to the vortex, and $Q_\bg$ for the part that is still present in the reference state, e.g. $$\pi^\sX_\varphi = \proper{\pi}^\sX + \pi^\sX_\bg\,, \qquad A_\varphi = \proper{A} + A_\bg\,.$$ From (\[equSuperfluidPi\]) and (\[equRefSuperfluid\]) it is easy to see that $$\proper{\mu}^\sX = \hbar\, N^\sX - e^\sX\, \proper{A}\,, \qquad \mu^\sX_\bg = -e^\sX A_\bg\,. \label{equMus}$$ The London field ---------------- Contrary to the longitudinal components $A^\bg_t$ and $A^\bg_z$, the axial gauge field $A_\bg$ in the reference state will not be trivial, due to the uniform rotation of the charged normal fluids. The Maxwell equation (\[equMaxwell2\]) for the $\varphi$ component in the reference state, i.e. $(A_\bg'/r)' = 0$, allows for a uniform magnetic field $B_\bg$ in $z$ direction (defined as in (\[equDefB\])), namely by integration and using (\[equPhiGauge\]) one gets, $$B_\bg = {2 \over r^2}\; A_\bg = {\rm const}\,, \label{equLondon}$$ where $B_\bg$ is in fact the well known uniform London field of rotating superconductors. An explicit expression for the London gauge field $A_\bg$ can be obtained simply from the reference property $j^\varphi_\bg = 0$, together with the “mongrel” entrainment expression (\[equLC2\]), and relation (\[equMus\]), which yields $$A_\bg = r^2 \left(e^\sX e_\sX\right)^{-1} \left(e^\A + e^\sX\, \V^\A{}_\sX\right) n_\A^\varphi\,,$$ and after using (\[equNormalCurrent\]) to write $n^\varphi_\A = \Omega\, n^t_\A$, we get the London field $B_\bg$ as $$B_\bg = 2\,\Omega\, \left(e^\sX e_\sX\right)^{-1} \left(e^\A + e^\sX\, \V^\A{}_\sX\right) n^t_\A\,. \label{equLondon1}$$ The London field $B_\bg$ is seen to be proportional to the uniform rotation $\Omega$ of the normal fluids.
If we now use the additional property of the vanishing charge density (\[equRefElectric\]) in the reference state, i.e. $e^\sX n^t_\sX + e^\A n^t_\A = 0$, then we can finally obtain the very simple expression for the London field, $$B_\bg = -2\,\Omega\; {e^\sX \S_{\sX\sY}\, E^\sY \over e^\sX \S_{\sX\sY}\, e^\sY} = -2\,\Omega\; {e_\sX E^\sX \over e_\sX e^\sX}\,, \label{equLondon2}$$ where we have used the notation of lowering and raising constituent indices via the matrix $\S$ introduced in Sec. \[secMongrel\]. If we consider in particular the case of a single charged superfluid with mass per particle $m$ and charge per particle $e$, this expression in the Newtonian limit, where $E \rightarrow m$, reduces to the well known expression $B_\bg = -2\,\Omega\, m/e$. The question of whether $m$ in this formula should represent the bare mass or some “effective” mass per particle will be discussed briefly in the concluding section \[secDiscussion\]. The Magnetic field of the vortex -------------------------------- The reference state properties (\[equRefSuperfluid\]) and (\[equRefNormal\]) further allow us to rewrite the axial current $j^\varphi$ in terms of the vortex contributions $\proper{\mu}^\sX$ alone, and with (\[equLC2\]) we obtain the compact form $$j^\varphi = {1\over r^2}\; e^\sX \S_{\sX\sY}\; \proper{\mu}^\sY = {1\over r^2}\; e_\sX\; \proper{\mu}^\sX\,.$$ Inserting this into the corresponding Maxwell equation (\[equMaxwell2\]) gives $$e^\sX \S_{\sX\sY}\; \proper{\mu}^\sY = -{r\over4\pi}\; \proper{B}'\,, \label{equBessel1}$$ which can be written more explicitly as a differential equation for $\proper{A}$, containing the winding numbers $N^\sX$ as parameters, namely $$\left(e^\sX e_\sX\right) \proper{A} = \hbar \left(e_\sX N^\sX\right) + {r\over4\pi}\; \proper{B}'\,, \label{equBessel2}$$ where the longitudinal magnetic field of the vortex, $\proper{B}$, is defined following (\[equDefB\]) as $\proper{B} \equiv \proper{A}'/r$. This second order differential equation for $\proper{A}$ (or $\proper{B}$) is of the modified Bessel type, and the asymptotic behavior of the solutions in the limit $r\rightarrow\infty$ can be derived directly from this equation, namely (where “$\sim$” means asymptotically proportional) $$\proper{B} \sim \proper{B}' \sim e^{-r/\l}\,, \qquad \lim_{r\to\infty} \proper{A} = \hbar\; {e_\sX N^\sX \over e^\sX e_\sX}\,, \label{equAsymptInfty}$$ where $\l$ is the so–called London penetration depth, which is given by the expression $$\l^{-2} \equiv 4\pi\; e^\sX \S_{\sX\sY}\, e^\sY = 4\pi\; e^\sX e_\sX\,. \label{equPenetration}$$
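For orientation on the magnitudes involved, the Newtonian single-constituent limits of the London field and penetration depth can be evaluated numerically. This is an order-of-magnitude sketch in SI units with textbook constants; the rotation rate and electron density are assumed illustrative values, not taken from the paper.

```python
import math

# Newtonian London field of a rotating superconductor, |B| = 2 (m/e) * Omega.
# For Cooper pairs m/e = (2 m_e)/(2 e_charge) = m_e/e_charge, so the bare
# electron charge-to-mass ratio sets the scale.
m_e = 9.109e-31        # electron mass [kg]
q_e = 1.602e-19        # elementary charge [C]
omega = 2.0 * math.pi  # rotation rate: one turn per second [rad/s] (assumed)

B_london = 2.0 * (m_e / q_e) * omega
print(B_london)        # ~7e-11 T, i.e. a tiny (sub-microgauss) field

# Newtonian London penetration depth, lambda = sqrt(m / (mu0 n e^2)),
# for an assumed conduction-electron density n ~ 1e28 m^-3.
mu0 = 4.0e-7 * math.pi
n = 1.0e28
lam = math.sqrt(m_e / (mu0 * n * q_e**2))
print(lam)             # a few tens of nanometers
```

Both numbers have the familiar laboratory scales: London fields are minute even for fast rotation, while penetration depths are tens of nanometers for typical metallic densities.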
In the Newtonian limit of a single superfluid with charge per particle $e$, mass per particle $m$ and a particle number density $n$, the matrix $\S$ reduces to $n/m$, and (\[equPenetration\]) reduces to the standard expression $\l^{-2} = 4\pi\, n\, e^2/m$. The total electromagnetic flux of the vortex, $\Phi \equiv 2\pi \lim_{r\to\infty}\proper{A}$, for a circuit at sufficiently large radial distance, is easily seen from (\[equAsymptInfty\]) to be given as $$\Phi = 2\pi\hbar\; {e_\sX N^\sX \over e^\sX e_\sX}\,, \label{equFlux}$$ which again reduces to the standard expression $\Phi = 2\pi\hbar\, N/e$ in the Newtonian limit of a single charged superfluid with charge per particle $e$. The explicit solution of equation (\[equBessel2\]) is expressible in terms of the (modified) Bessel functions $K_0$ and $K_1$, namely $$\begin{aligned} \proper{B}(r) &=& C_0\; K_0(r/\l)\,,\nonumber\\ \proper{A}(r) &=& {\Phi\over2\pi} - C_0\; \l\, r\, K_1(r/\l)\,. \label{equSolution}\end{aligned}$$ This solution is only valid in the “common superfluid domain”, i.e. in the intersection of the superfluid domains $\SD^\sX$, where all the constant winding numbers $N^\sX$ are defined. From the divergence of $\proper{B}(r)$ on the axis it is evident that the common superfluid domain must have a finite separation, $\xi$ say, from the axis, which can be used to define what is usually called the “vortex core”, with $\xi$ being the “core radius”. The constant of integration $C_0$ is to be determined from the matching of (\[equSolution\]) with the “inner” vortex solution, i.e. for $r \le \xi$. By integrating (\[equSolution\]) for $r > \xi$, we get the vortex flux outside the core, i.e. $\Phi - \Phi_{\rm core} = 2\pi\, C_0\, \l^2\, x_0\, K_1(x_0)$, and so $C_0$ can be expressed in terms of the quantities $\xi$ and the core magnetic flux $\Phi_{\rm core}$, namely $$C_0 = {\Phi - \Phi_{\rm core} \over 2\pi\; \l^2\; x_0\, K_1(x_0)}\,, \label{equC0}$$ where $x_0$ is the rescaled core radius, $x_0 \equiv \xi/\l$, which corresponds to the inverse of the Ginzburg–Landau parameter of the Ginzburg–Landau model. The limit of an extreme type–II superconductor is characterized by $x_0 \ll 1$, i.e. $\xi \ll \l$, $\Phi_{\rm core} \rightarrow 0$, so the core structure becomes negligible, $x_0 K_1(x_0) \rightarrow 1$, and we get $$C_0 = 4\pi\hbar\; e_\sX N^\sX\,, \qquad \xi \ll \l\,.$$
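The internal consistency of the Bessel-function solution — that the field profile $\proper{B}(r) = C_0 K_0(r/\l)$ is indeed the radial derivative $\proper{A}'/r$ of the potential profile — can be checked numerically, which also makes a useful regression test when implementing these profiles. The sketch below evaluates $K_0$, $K_1$ from their integral representation using only the standard library; $\l$, $C_0$, $\Phi$ are arbitrary illustrative parameters (the identity holds for any values).

```python
import math

def K(n, x, T=12.0, steps=4000):
    """Modified Bessel function K_n via the integral representation
    K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt  (truncated trapezoid)."""
    h = T / steps
    s = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return s * h

lam, C0, Phi = 1.0, 0.8, 2.0 * math.pi   # illustrative parameters

B = lambda r: C0 * K(0, r / lam)                                # field profile
A = lambda r: Phi / (2.0 * math.pi) - C0 * lam * r * K(1, r / lam)  # potential

# The vortex field must satisfy B = A'/r (the definition of B_z):
r, h = 1.3, 1e-4
dA = (A(r + h) - A(r - h)) / (2.0 * h)
print(abs(B(r) - dA / r) < 1e-5)   # -> True
```

The check rests on the Bessel identity $(x K_1(x))' = -x K_0(x)$; the exponential screening over the scale $\l$ is also visible directly, since $K_0$ decays like $e^{-r/\l}$ at large radius.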
The Vortex energy {#secEnergy} ================= In this section we will consider the “macroscopic” properties of the vortex, namely its total energy per unit length and the tension of the vortex line. These quantities are obtained by integrating the local stress–energy tensor of the vortex, $\dbg{T^\a}_\b$, over the spatial section orthogonal to the (“longitudinal”) vortex symmetry axes, whose coordinates are the subset $\{r,\,\varphi\}$, while the “longitudinal” coordinates are $x^i = \{t,\,z\}$. The local stress–energy coefficients of the vortex are seen from (\[equStressEnergy\]) to have the form $$\begin{aligned} \dbg{T^\a}_\b &=& \delta\!\left( n_\X^\a\, \pi^\X_\b\right) + {1\over4\pi}\; \delta\!\left( F^{\a\sigma} F_{\b\sigma} - {1\over4}\,\delta^\a_\b\, F^{\sigma\nu}F_{\sigma\nu}\right)\nonumber\\ && +\; \delta^\a_\b\; \delta\P\,. \label{equRecall}\end{aligned}$$ The “sectional” $\{r,\,\varphi\}$–integral is only meaningful for quantities that are scalars with respect to the sectional coordinates $r$ and $\varphi$, and so we have to consider only the “longitudinally” projected tensor $\dbg {T^i}_j$. Another “sectional” scalar of the stress–energy tensor is the trace of the orthogonally projected components, which defines the local lateral pressure $\Pi$ of the vortex, $$\Pi \equiv {1\over2}\; \delta\!\left({T^\a}_\a - {T^i}_i\right)\,. \label{equLateral}$$ In the case of a “stiff” equation of state (\[equStiff\]), the Taylor expansion of $\P(\mu^\X_\a\mu^{\Y\a})$ around the reference state value has only two terms (using (\[equDefK\])), namely $$\P\!\left(\mu^\X_\a\mu^{\Y\a}\right) = \P_\bg - {1\over2}\; K_{\X\Y}\; \delta\!\left(\mu^\X_\a\mu^{\Y\a}\right)\,.$$ The mongrel representation (Sec. \[secMongrel\]) is particularly convenient to evaluate contributions of this type, because by the reference property (\[equRefNormal\]) we have $\proper{n}_\A = 0$, and so we find, using (\[equEntrainment\]) and (\[equCross\]), $$K_{\X\Y}\; \delta\!\left(\cvec{\mu}^\X \cvec{\mu}^\Y\right) = \delta\!\left(\cvec{n}_\X\, \cvec{\mu}^\X\right) = \S_{\sX\sY}\; \delta\!\left(\cvec{\mu}^\sX \cvec{\mu}^\sY\right)\,.$$ The relevant contributions (\[equRecall\]) of $\dbg{T^\a}_\b$ are now straightforward to evaluate, and are found to be given by $$\delta\!\left( n_\X^i\, \pi^\X_j \right) = 0\,, \qquad \delta\!\left(F^{i\sigma}F_{j\sigma}\right) = 0\,,$$ $$\delta\!\left(F^{\sigma\nu} F_{\sigma\nu}\right) = 2\, \proper{B}^2 + 4\, \proper{B}\, B_\bg\,,$$ $$-\,\delta\!\left(n_\X^\a\, \mu^\X_\a\right) = 2\,\delta\P\,, \qquad \delta\P = -{1\over2 r^2}\; \S_{\sX\sY} \left( \proper{\mu}^\sX \proper{\mu}^\sY + 2\; \proper{\mu}^\sX\, \mu_\bg^\sY\right)\,.$$
Putting these results into the expression for the vortex stress–energy tensor (\[equRecall\]), we find that the longitudinally projected tensor is proportional to the unit tensor, i.e. $$\dbg{T^i}_j = -\,\proper{T}\; \delta^i_j\,, \label{equ69}$$ with $$\proper{T} = {1\over16\pi}\; \delta\!\left(F^{\sigma\nu}F_{\sigma\nu}\right) - \delta\P\,,$$ and so the vortex energy density, $\dbg T^{00}$, is equal to the (local) longitudinal tension of the vortex, $-\dbg T^{zz}$, a property that is characteristic of the stiff equation of state (\[equStiff\]). The vortex energy per unit length $U$ is defined as the sectional integral $$U \equiv -2\pi \int_0^\ru {\rm d}r\; r\; \dbg{T^0}_0 = 2\pi \int_0^\ru {\rm d}r\; r\; \proper{T}\,. \label{equVortexEnergy}$$ The energy density $\proper{T}$ can be decomposed into two parts, $$\proper{T} = \proper{T}_\vort + \proper{T}_\rot\,, \label{equ70}$$ where $\proper{T}_\vort$ is the part that is independent of the rotation $\Omega$ of the normal fluids, $$\proper{T}_\vort = {1\over2 r^2}\; \proper{\mu}_\sX\, \proper{\mu}^\sX + {1\over8\pi}\; \proper{B}^2\,, \label{equTvort}$$ while $\proper{T}_\rot$ is proportional to $\Omega$ (via $B_\bg$, see equ. (\[equLondon2\])), $$\proper{T}_\rot = B_\bg \left( {1\over4\pi}\; \proper{B} - {1\over2}\; e_\sX\, \proper{\mu}^\sX \right)\,,$$ and the lateral pressure $\Pi$, defined in (\[equLateral\]), is found to be given by $$\Pi = {1\over8\pi}\; \proper{B}^2\,.$$ Expression (\[equTvort\]) for $\proper{T}_\vort$ can be transformed using Maxwell’s equation (\[equBessel1\]) into the “nearly integrated” form $$\begin{aligned} \proper{T}_\vort &=& {\hbar^2 \over 2 r^2} \left( N_\sX N^\sX - {\left(e_\sX N^\sX\right)^2 \over e_\sY e^\sY}\right)\nonumber\\ && -\; {e_\sX N^\sX \over e_\sY e^\sY}\; {\hbar \over 8\pi r}\; \proper{B}' + {1\over8\pi r} \left( \proper{A}\, \proper{B}\right)'\,. \label{equTV}\end{aligned}$$ The easiest way to see this is to first expand only one $\proper{\mu}^\sX$ in (\[equTvort\]) using (\[equMus\]) and apply (\[equBessel1\]), then expand the remaining $\proper{\mu}^\sY$ and use the second form of Maxwell’s equation (\[equBessel2\]) for $\proper{A}$. In order to regroup the derivatives, one also has to expand one $\proper{B}$ as $\proper{A}'/r$ in the last term of (\[equTvort\]). In a similar way, $\proper{T}_\rot$ can be reduced to $$\proper{T}_\rot = {B_\bg \over 8\pi r}\; \left( r^2\; \proper{B} \right)'\,. \label{equTrot}$$ As anticipated from the divergence of the magnetic field (\[equSolution\]) on the vortex axis, we encounter the same problem in the energy density (\[equTV\]).
This well known fact is due to the constant superfluid (canonical) angular momentum per particle, $\pi^\sX_\varphi = \hbar\, N^\sX$, in the superfluid domain $\SD^\sX$. Therefore each superfluid with a nonvanishing winding number $N^\sX \not= 0$ must have some finite “core” region separating the respective superfluid domain $\SD^\sX$ from the vortex axis. The actual size of the respective core region is determined by a trade–off between the loss of condensation energy associated with the core region, and the diverging energy density (\[equTV\]) in the superfluid domain. The detailed description of this superfluid–normal transition would ask for either a microscopic theory, or at least some phenomenological, e.g. Ginzburg–Landau type description of the involved superfluids. However, such detailed descriptions turn out to be unnecessary for our present purpose, as we can proceed on the basis of a very general hydrodynamic description of the vortex core, based only on the necessary “minimal assumptions” needed to avoid the energy divergence. Namely, as the superfluid constraint (\[equQuantisation\]) no longer applies in the respective “core” regions, the (canonical) angular momentum $\pi^\sX_\varphi$ there is not quantized, and is allowed to depend on the radial variable $r$. The winding number $N^\sX$ is strictly speaking not defined in the core region, but we can keep the same symbol as a shorthand notation for $N^\sX(r) \equiv \pi^\sX_\varphi(r)/\hbar$, so we cast our general description of the core region in the simple form $$N^\sX(r) = \left\{ \begin{array}{ll} N^\sX & \quad r > \xi^\sX\,,\\ \N^{\,\sX}(r) & \quad r \le \xi^\sX\,, \end{array}\right. \label{equCore}$$ where $\N^{\,\sX}(r)$ is a continuous, monotonic function, which has to ensure the vortex energy density $\proper{T}$ to remain finite on the vortex axis, i.e. in the limit $r \rightarrow 0$. Note that the “core radius” $\xi$ is defined, as in Sec. \[secVortex\], as the radial distance of the “common superfluid domain” from the vortex axis, and is therefore the maximum core radius of the individual superfluids.
This obviously does not restrict the generality of the core description (\[equCore\]), as the $\N^\sX(r)$ are allowed to remain constant until some smaller radius $\xi^\sX \le \xi$. In order to have a regular behavior of the energy density $\proper{T}$ near the axis, it is sufficient to demand that $\N(r)$ and $\proper{A}(r)$ vanish on the vortex axis [*at least*]{} as $$\N(r) \sim r\,, \qquad \proper{A} \sim r^2 \qquad {\rm for}\quad r \rightarrow 0\,, \label{equAsymptAxis}$$ where by “$\sim$” we mean “asymptotically proportional” (and not necessarily equal). This phenomenological description is based on only two parameters, the “core radius” $\xi$ and the core condensation energy per unit length $U_\con$. These two phenomenological parameters would have to be determined either from experiment or from a microscopic theory, but the model is now sufficiently determined to allow the integration of the vortex energy, without the need of further assumptions concerning the underlying physical processes of superfluidity. The total vortex energy per unit length is $$U = U_\vort + U_\rot + U_\con\,, \label{equTotalEnergy}$$ where according to (\[equVortexEnergy\]) and (\[equ70\]) we have defined $$U_\vort \equiv 2\pi \int_0^\ru {\rm d}r\; r\; \proper{T}_\vort\,, \qquad U_\rot \equiv 2\pi \int_0^\ru {\rm d}r\; r\; \proper{T}_\rot\,. \label{equDefUs}$$ The energy contribution $U_\rot$, which is proportional to the rotation $\Omega$ of the normal fluids, is found from (\[equTrot\]) to be $$U_\rot = {B_\bg \over 4}\; \left.\left( r^2\; \proper{B} \right)\right|_0^\ru = 0\,, \label{equRotCancel}$$ where the vanishing of the integral follows from the asymptotic properties (\[equAsymptInfty\]) and (\[equAsymptAxis\]) of the magnetic field $\proper{B}$. In the Newtonian description of a rotating superconductor, the vortex energy was already found [@cpl00] to be unchanged by the rotating charged background, and this lemma is seen here to still hold under quite general conditions:\ [**Rotation energy cancellation lemma:**]{} [ *The “hydrodynamic” energy per unit length (i.e.
excluding the core condensation energy $U_\con$) of a cylindrically symmetric and stationary vortex in a “stiff” mixture of interacting superfluids, superconductors and normal fluids (\[equStiff\]) is independent of the uniform rotation rate $\Omega$ of the normal fluids, despite the fact that the radial distribution of the hydrodynamic energy density is modified by $\Omega$, as seen in (\[equTrot\]).*]{}\ The vortex energy contribution $U_\vort$ in (\[equDefUs\]) is found by integrating (\[equTV\]), which yields $$\begin{aligned} U_\vort &=& \pi\hbar^2 \left( N_\sX N^\sX - {\left(e_\sX N^\sX\right)^2 \over e_\sY e^\sY}\right) \ln{\ru \over \zeta} + {\Phi\; \proper{B}(\eta) \over 8\pi}\,,\nonumber\\ && 0 < \zeta\,,\; \eta \,\le\, \xi\,, \label{equResult1}\end{aligned}$$ where we used the asymptotic properties (\[equAsymptInfty\]), (\[equAsymptAxis\]), and the (first) mean value theorem of integration with the intermediate values $\zeta$ and $\eta$, after a partial integration in the core region. We recognize two qualitatively different energy contributions; the first one from a “global” vortex, diverging logarithmically with the upper cutoff radius $\ru$, which is characteristic for vortices in uncharged superfluids, and the second one from a “local” vortex, whose energy contribution has the standard “axis field” form $\Phi\,\proper{B}(0)/8\pi$, which is typical for vortices in charged superfluids. Using the decomposition into charged and uncharged superfluid subspaces via the charge projection tensors defined in (\[equProjections\]), we can rewrite the first term in brackets in the form $$N_\sX N^\sX - {\left(e_\sX N^\sX\right)^2 \over e_\sY e^\sY} = N^\ortho_\sX\, N_\ortho^\sX\,.$$ Concerning the second term in (\[equResult1\]), if the magnetic field $\proper{B}(r)$ is slowly varying inside the vortex core, then we can approximately replace $\proper{B}(\eta) \approx \proper{B}(\xi)$, and use the explicit expression (\[equSolution\]) with (\[equC0\]) and (\[equFlux\]) to write $$\Phi\; \proper{B}(\xi) = 4\pi\hbar \left(e_\sX N^\sX\right) \Phi \left( 1 - {\Phi_{\rm core} \over \Phi} \right) {K_0(x_0) \over x_0\, K_1(x_0)}\,, \label{equVortexB}$$ where $x_0 \equiv \xi/\l$. In the extreme type–II limit, where the core structure becomes negligible, i.e. in the limit $x_0 \rightarrow 0$, where $\Phi_{\rm core} \rightarrow 0$, $x_0 K_1(x_0) \rightarrow 1$, and $K_0(x_0) \rightarrow \ln(\l/\xi)$, equation (\[equResult1\]) with (\[equVortexB\]) finally gives the simple expression for the vortex energy $$U_\vort = \pi\hbar^2\; \S_{\sX\sY} \left( N_\ortho^\sX N_\ortho^\sY\; \ln{\ru\over\xi} + N_\para^\sX N_\para^\sY\; \ln{\l\over\xi} \right)\,. \label{equResultII}$$
This “quasi separated” form clearly shows the respective contributions from a global vortex and a local vortex, but as mentioned above, even for vortices which have nonvanishing winding numbers only in either charged or uncharged constituents, there will generally be contributions from [*both*]{} terms, due to the entrainment matrix $\S$ involved in the projections. Discussion of some Applications {#secDiscussion} =============================== In order to illustrate the foregoing general results, we will in this section discuss some applications to well known standard examples of “realistic” superfluid systems, ordered by increasing complexity. Single uncharged superfluid --------------------------- Probably the simplest case is that of a single, uncharged (isotropic) superfluid like $^4$He. We note that vortices in $^3$He show a much richer structure than in $^4$He (e.g. see [@LesHouches99]), due to the anisotropic type of the microscopic Cooper pairing responsible for the superfluidity of $^3$He. But the present approach should still be a good approximation at least for the $^3$He–B superfluid [@volovik2000], because sufficiently far from the vortex core the additional (anisotropic) degrees of freedom of the order parameter are “frozen” and the dynamics is again mainly governed by the phase $\varphi^\sX$.\ [**a) at $\bbox{T=0}$:**]{} In the case of a single superfluid at zero temperature, the “entrainment matrix” $K_{\X\Y}$ of (\[equEntrainment\]) reduces to a single scalar $K$, with $\cvec{n} = K\,\cvec{\mu}$, where $n^\a$ is the particle current and $\mu_\a$ the momentum per particle of the superfluid. There are no normal fluids, so $\S$ of (\[equMatrices\]) is given trivially by $\S = K$. The charge vector vanishes, $e = 0$, and the charge projection tensors are trivial, so $N_\para = 0$ and $N_\ortho = N$. The vortex energy (\[equResult1\]) then simply reduces to $$U_\vort = \pi\, N^2 \hbar^2\; {n^0 \over \mu^0}\; \ln{\ru\over\xi}\,,$$ which is the same expression as found in [@cl95a] for the single superfluid.
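An order-of-magnitude evaluation of this single-superfluid vortex energy in the nonrelativistic limit, $U \approx \pi\hbar^2 N^2 (n/m)\,\ln(\ru/\xi)$, for superfluid $^4$He gives the familiar laboratory scale. The cutoff radii below are illustrative assumptions (container scale for $\ru$, an angström-scale core for $\xi$), not values from the paper.

```python
import math

# Hydrodynamic vortex energy per unit length for an N = 1 vortex in He II,
# U = pi * hbar^2 * N^2 * (n/m) * ln(r_u/xi), nonrelativistic T = 0 limit.
hbar = 1.055e-34        # [J s]
m4 = 6.646e-27          # helium-4 atomic mass [kg]
n = 2.18e28             # number density of liquid helium [m^-3]
N_wind = 1
r_u, xi = 1e-3, 1e-10   # outer cutoff and core radius [m] (assumed values)

U = math.pi * hbar**2 * N_wind**2 * (n / m4) * math.log(r_u / xi)
print(U)                # ~2e-12 J per meter of vortex line
```

The $N^2$ scaling in the prefactor is why multiply quantized vortices are energetically disfavoured compared with $N$ singly quantized ones, whose energies add only linearly.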
In the nonrelativistic limit, where $\mu^0 \rightarrow m$ and $n^0 \rightarrow n$ (where $m$ is the rest mass of the superfluid particles, and $n$ their number density), we recover the usual expression $U_\vort = \pi\hbar^2 N^2\, (n/m)\, \ln(\ru/\xi)$ for the (hydrodynamic) superfluid vortex energy in the zero temperature limit (e.g. see [@Tilley]).\ [**b) at $\bbox{T\not=0}$:**]{} In the case of a finite temperature, the system can be described as an effective superfluid–normal fluid mixture, where the normal fluid consists of the viscous gas of excitations in the superfluid. The superfluid and normal particle currents are $\cvec{n}_\sf$ and $\cvec{n}_\nf$, and their respective momenta per particle $\cvec{\mu}^\sf$ and $\cvec{\mu}^\nf$, say. There are no charged fluids, so $N^\sf_\para = 0$ and $N^\sf_\ortho = N$. The entrainment matrix (\[equDefK\]) reads $$K_{\X\Y} = \left( \begin{array}{cc} K_{\sf\sf} & K_{\sf\nf}\\ K_{\nf\sf} & K_{\nf\nf} \end{array} \right)\,,$$ and is decomposed in the mongrel representation of Sec. \[secMongrel\] as $\V=1/K_{\nf\nf}$, and $\S = K_{\sf\sf} - K_{\sf\nf}^2/K_{\nf\nf}$, so the vortex energy would simply be given by inserting this expression for $\S$ into equation (\[equResult1\]). However, in order to compare this result to the usual expression for the vortex energy in superfluids at $T\not=0$, we have to link the present entrainment formalism to the more common language of Landau’s two–fluid model [@Landau] that is expressed in terms of a “superfluid density” $\rho_\sf$ and a “normal density” $\rho_\nf$. This “translation” has been achieved in a rigorous and extensive manner by Carter and Khalatnikov [@ck94], but for the present purpose of an illustrative example, the following very simple argument should show in a sufficiently convincing way how to translate between the respective quantities. Namely, consider the total (spatial) momentum density $T^{0i}$ (with $i = 1,\,2,\,3$) of the fluid mixture, for which from (\[equStressEnergy\]) we have $T^{0i} = n^0_\sf\, \mu^{\sf\, i} + n^0_\nf\, \mu^{\nf\, i}$. Using the mongrel relations (\[equLC1\]) and (\[equLC2\]), this can be rewritten as $T^{0i} = \S\; \mu^{\sf\,0}\, \mu^{\sf\, i} + \V\; n^0_\nf\, n^i_\nf$.
Now we introduce the normal velocity $v_\nf^i = n_\nf^i/n_\nf^0$, which is the real mean velocity of the excitations, and the superfluid “pseudo–velocity” $V_\sf^i = \mu_\sf^i/\mu^{\sf0}$, which is not a “real” velocity in the sense of a particle transport. In the nonrelativistic limit, where $\mu^{\sf0}$ tends to the constant rest mass of the superfluid particles, , the irrotationality property of superfluids (\[equIrrot\]) implies , in other words . In these variables the total momentum density now reads $$p^i = (\mu^{\sf0})^2\, \S\, V_\sf^i + (n^0_\nf)^2\, \V\, v_\nf^i.$$ Comparing this to the orthodox expression $$p^i = \rho_\sf\, v_\sf^i + \rho_\nf\, v_\nf^i,$$ we can identify $$\rho_\sf = (\mu^{\sf0})^2\, \S, \qquad \rho_\nf = (n^0_\nf)^2\, \V.$$ \[equMatchLandau\] This is consistent with the additivity postulate , namely using (\[equCross\]) we obtain the expression , which effectively reduces to the total mass density in the Newtonian limit. In the present case we have for the superfluid, while , as the normal fluid is identified with the gas of excitations, so the total mass density reduces to . In the nonrelativistic limit, expression (\[equMatchLandau\]) yields , and so the equation (\[equResult1\]) for the vortex energy can explicitly be written as U\_= \^2 N\^2 [\_m\_\^2]{}, in agreement with the well known result in Landau’s two–fluid model (e.g. see [@Tilley]). Two uncharged superfluids ------------------------- In the next step, let us consider a vortex in a mixture of two uncharged superfluids, as first considered by Andreev and Bashkin [@ab75] for a mixture of $^3$He and $^4$He. Again, at $T=0$ there are no normal fluids, so we have $$\S_{\X\Y} = K_{\X\Y} = \left( \begin{array}{cc} K_{33} & K_{34}\\ K_{43} & K_{44} \end{array} \right).$$ The charge vector vanishes, $e^\X = 0$, and so $\Pi^\para$ vanishes and $\Pi^\ortho$ is the identity. The expression (\[equResult1\]) for the vortex energy in this case explicitly reads U\_= \^2 . We see that there is a purely hydrodynamic interaction energy due to entrainment (i.e. not related to the condensation energy in the core) from the last term in brackets, which is either attractive or repulsive depending on the sign of the entrainment coefficient $K_{34}$.
Conventional Superconductors ---------------------------- When we consider cases with charged superfluids, the simplest example is already a two–constituent system, because a second charged component is necessary to allow for global charge neutrality. This picture applies for example to conventional laboratory superconductors, where the charged superfluid (charge $e^-$ and particle density $n_-$) consists of Cooper paired conduction electrons, while the second component is the “normal” background of positively charged ions (charge $e^+$ and particle density $n_+$). In the maximally symmetric and stationary situations considered in the present work, “normal” components are naturally restricted to uniform rotation (\[equNormalCurrent\]), and therefore it makes no difference whether the normal component is actually a real “fluid” or a solid lattice like in the present example. Because of the Cooper pairing mechanism, the fundamental superfluid charge carriers have to be considered as electron pairs, and therefore the charge per superfluid particle $e^-$ should be twice the electron charge, i.e. , and consequently the rest mass is $m^- = 2 m_\e$, where $m_\e$ is the electron rest mass. The entrainment matrix $K_{\X\Y}$, defined in (\[equDefK\]), can be written as $$K_{\X\Y} = \left( \begin{array}{cc} K_{--} & K_{-+}\\ K_{+-} & K_{++} \end{array} \right),$$ and the transformation into the mongrel representation of Sec. \[secMongrel\] yields $\V = 1/K_{++}$ and $\S = K_{--} - K_{-+}^2/K_{++}$. The charge vector is just , and so and .\ [**The London field:**]{} In the simple case of a vortex–free state, i.e. with $N=0$, there is nevertheless a nonvanishing uniform London field $B_\bg$ if the superconductor is rotating (rotation rate $\Omega$). Equation (\[equLondon2\]) for the London field immediately yields for this simple case , where $E$ is the energy per superfluid particle, i.e. . If we choose a reference frame with , i.e. comoving with the superconductor in $z$ direction, then $E$ can be identified with the (relativistic) chemical potential $\mu^-$.
In the Newtonian limit, where $\mu^-_{\rm chem} \ll m^-$, the conventional Newtonian chemical potential $\mu^-_{\rm chem}$ is related to the relativistic chemical potential $\mu^-$ as $$\mu^- = m^- \left( 1 + \frac{\mu^-_{\rm chem}}{m^-} + {\cal O}\left( \varepsilon^2 \right) \right),$$ where $\varepsilon = \mu^-_{\rm chem}/m^-$. The London field for a rotating superconductor can therefore be written in the form $$B_\bg = -2\,\Omega\, \frac{m^-}{e^-} \left( 1 + \frac{\mu^-_{\rm chem}}{m^-} + {\cal O}(\varepsilon^2) \right).$$ \[equLondon99\] It is well known that the “entrainment” formalism for interacting constituents can equivalently be expressed in the more conventional (albeit sometimes less convenient) language of “effective masses” [@ab75]. We see that in the case of two–constituent superconductors, the effect of entrainment (i.e. effective masses) cancels out in the expression (\[equLondon99\]) for the London field, which therefore depends quite naturally on the “bare” electron rest mass to charge ratio $m^-/e^-$, including a “relativistic” correction due to the finite chemical potential $\mu^-_{\rm chem}$ of the electrons. We note that this cancellation only occurs for systems with a single superfluid constituent, where $\S$ is consequently a scalar and cancels out in (\[equLondon2\]). As soon as there is a second (interacting) superfluid constituent involved, as in the following example of neutron star matter, the London field [*does*]{} depend on the effective masses of the constituents. We further note that the present covariant treatment is intrinsically frame–independent, and contrary to the analysis of [@liu98], we find that $B_\bg$ does [*not*]{} depend on the chemical potential $\mu^+$ of the “normal” component of positively charged ions. A very crude estimate of the relativistic correction term for a Nb superconductor at $T=0$, taking $\mu^-_{\rm chem}$ simply to be the Fermi energy of a free electron gas, yields a (positive) correction of the order $10^{-4}$. This is in qualitative and nearly quantitative agreement with precision measurements performed on a rotating Nb superconductor [@cabrera89].
But in order to effectively compare expression (\[equLondon99\]) with experimental results, a more careful estimation of $\mu^-_{\rm chem}$ would be necessary.\ [**Vortices:**]{} Now let us consider a vortex configuration, i.e. with . We see that a similar cancellation of the entrainment effect as for the London field (\[equLondon99\]) arises for the total flux of the vortex, which is seen by (\[equFlux\]) to give the usual $$\Phi = N\, \phi_0, \qquad \phi_0 = \frac{2\pi}{e^-},$$ while the London penetration depth (\[equPenetration\]) [*is*]{} modified by entrainment, namely . To write this more explicitly, we note that $\S$ can be written in the absence of entrainment as $\S = n_-/\mu^-$, and further $\S = \frac{n_-}{m^-}\left(1 - \varepsilon_{\rm rel}\right)$, where $\varepsilon_{\rm rel} = \mu^-_{\rm chem}/m^-$ is the same relativistic correction factor encountered in the expression for the London field (\[equLondon99\]). A nonvanishing entrainment interaction between the constituents will add an additional correction term $\delta_{\rm entr}$ proportional to the matrix element $K_{+-}$, so that $\S$ can be written as $\S = \frac{n_-}{m^-}\left(1 + \delta_{\rm entr} - \varepsilon_{\rm rel}\right)$, and so the London penetration depth reads $$\lambda^{-2} = 4\pi\, (e^-)^2\, \frac{n_-}{m^-}\, \left( 1 + \delta_{\rm entr} - \varepsilon_{\rm rel} \right).$$ The vortex energy is given by the “magnetic” term in (\[equResult1\]) alone, due to and , so we recover the usual “axis–field” expression U\_ , which is seen in the more explicit form (\[equResultII\]) (for the type–II limit, for simplicity) to depend on the effect of entrainment, namely U\_= N\^2 \^2 [n\_-m\^-]{}(1+\_[entr]{} - \_[rel]{}) , but as the total vortex energy also depends on the largely unknown condensation energy of the core, the relativistic and entrainment corrections in this expression seem unlikely to be of observable interest. Neutron star matter (Outer core) -------------------------------- In this last example we consider the case of a (cold) degenerate plasma consisting of neutrons, protons and electrons in $\beta$ equilibrium, as relevant for the outer core of neutron stars (i.e. at densities $\gtrsim$ nuclear density).
In this case one usually assumes that there is an important entrainment between neutrons and protons due to their strong interactions, while the entrainment with electrons is generally supposed to be negligible. We will follow this assumption and denote the entrainment matrix as $$K_{\X\Y} = \left( \begin{array}{ccc} K_{\n\n} & K_{\n\p} & 0\\ K_{\p\n} & K_{\p\p} & 0\\ 0 & 0 & K_{\e\e} \end{array} \right).$$ The calculations of the superfluid gaps for this neutron star matter generally suggest (see for example [@bcjl92]) that the protons will be superconducting and the neutrons superfluid, while the electrons remain “normal”, so this system would represent a superconducting–superfluid–normal mixture. The matrices of the mongrel representation of Sec. \[secMongrel\] for this system read $\V = 1/K_{\e\e}$, and $$\S_{\X\Y} = \left( \begin{array}{cc} K_{\n\n} & K_{\n\p}\\ K_{\p\n} & K_{\p\p} \end{array} \right),$$ and we define an “entrainment coefficient” $\alpha = K_{\n\p}/K_{\p\p}$. For this system the charge vectors and projections are nontrivial, namely $$e^\X = \left( \begin{array}{c} 0\\ q \end{array} \right), \qquad e_\X = q\, K_{\p\p}\, ( \alpha\, ,\ 1 ),$$ $$\Pi^\para_{\X\Y} = \left( \begin{array}{cc} 0 & 0\\ \alpha & 1 \end{array} \right), \qquad \Pi^\ortho_{\X\Y} = \left( \begin{array}{cc} 1 & 0\\ -\alpha & 0 \end{array} \right),$$ where $q$ is the charge of a proton Cooper pair, i.e. $q = 2e$, and we further have $$N^\X_\para = (N^\p + \alpha N^\n) \left( \begin{array}{c} 0\\ 1 \end{array} \right), \qquad N^\X_\ortho = N^\n \left( \begin{array}{c} 1\\ -\alpha \end{array} \right).$$ The London penetration depth (\[equPenetration\]) is $$\lambda^{-2} = 4\pi\, q^2\, K_{\p\p},$$ and the vortex flux (\[equFlux\]) is found as $$\Phi = (N^\p + \alpha N^\n)\, \phi_0, \qquad \phi_0 = \frac{2\pi}{q},$$ in agreement with earlier results in the literature [@ss80; @vs81; @als84]. The vortex energy in the type–II limit (\[equResultII\]) reads U\_&= &\^2 (N\^)\^2\ & & + \^2 (N\^)\^2 K\_ \[equNPVortex\]\ & & + 2 \^2 N\^N\^K\_ . Similar to the case of a mixture of two uncharged superfluids, we see that the total vortex energy consists of a pure n–vortex term, a pure p–vortex term (each of which is modified by the entrainment), while the last term represents an attractive or repulsive (depending on the sign of $K_{\n\p}$) interaction term with respect to infinite separation.
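As a simple consistency check (with the constituent ordering $(\n,\p)$, and taking $\alpha$ to denote the ratio $K_{\n\p}/K_{\p\p}$, as the charge covector suggests), one can verify that the two charge projections are complementary and that the parallel and orthogonal winding numbers recombine to the full winding numbers:

```latex
\Pi^{\para} + \Pi^{\ortho}
  = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix}
  + \begin{pmatrix} 1 & 0 \\ -\alpha & 0 \end{pmatrix}
  = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
N^{\X}_{\para} + N^{\X}_{\ortho}
  = (N^{\p} + \alpha N^{\n}) \begin{pmatrix} 0 \\ 1 \end{pmatrix}
  + N^{\n} \begin{pmatrix} 1 \\ -\alpha \end{pmatrix}
  = \begin{pmatrix} N^{\n} \\ N^{\p} \end{pmatrix}.
```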
It has been suggested [@sed95] that the effect of entrainment between neutrons and protons could energetically favor a “vortex cluster” structure (i.e. a neutron vortex surrounded by a dense lattice of proton vortices) with respect to a single neutron vortex. This question can strictly speaking not be addressed in the present framework of perfectly axially symmetric configurations, and will be subject of future investigation, but the energy of a single n–vortex ($N^\p=0$, $N^\n\not=0$) is seen from expression (\[equNPVortex\]) to be of the same order of magnitude if not smaller than in the absence of entrainment ($\alpha\rightarrow 0$), i.e. . Any configuration containing more vortices is therefore rather expected to have a higher energy, but the possibly attractive interaction term in (\[equNPVortex\]) could lead to an effective “clustering” of already present vortices, namely a n–vortex that “accretes” p–vortices until saturation. The author thanks B. Carter and D. Langlois for many instructive discussions and helpful advice. [99]{} I.M. Khalatnikov, Sov. Phys. JETP [**5**]{} (4), 542-545 (1957). A.F. Andreev, E.P. Bashkin, Sov. Phys. JETP [**42 (1)**]{}, 164 (1975). G.A. Vardanian, D.M. Sedrakian, Sov. Phys. JETP [**54**]{} (5), 919-921 (1981). G. Mendell, L. Lindblom, Ann. Phys. [**205**]{}, 110 (1991). D.M. Sedrakian, K.M. Shahabasian, Astrophysics [**16**]{}, 417-422 (1980). M.A. Alpar, S.A. Langer, J.A. Sauls, Astrophys. J. [**282**]{}, 533 (1984). G. Mendell, Astrophys. J. [**380**]{}, 515 (1991). B. Carter, D. Langlois, Phys. Rev. [**D52**]{}, 4640 (1995). B. Carter, R. Prix, D. Langlois, ([cond-mat/9910240]{}). B. Carter, in [*Relativistic Fluid Dynamics (Noto, 1987)*]{}, Lecture Notes in Mathematics [**1385**]{}, ed. A. Anile and M. Choquet–Bruhat (Springer–Verlag, Heidelberg 1989), pp. 1–64. B. Carter, D. Langlois, Nucl. Phys. [**B 531**]{}, 478 (1998). V.B. Eltsov, M. 
Krusius, in [*Topological Defects and the Non–Equilibrium Dynamics of Symmetry Breaking Phase Transitions*]{}, proceedings of the NATO Advanced Study Institute, Les Houches, 1999, edited by Y.M. Bunkov and H. Godfrin, (Kluwer Academic Publishers, Dordrecht, 2000), pp. 325–344. L.D. Landau and E.M. Lifshitz, [*Course of Theoretical Physics*]{}, Vol. 6, [*Fluid Mechanics*]{}, (Pergamon, Oxford, 1959). B. Carter, D. Langlois, Phys. Rev. [**D51**]{}, 5855 (1995). G.E. Volovik, private communication. D.R. Tilley and J. Tilley, [*Superfluidity and Superconductivity*]{}, (I.O.P, Bristol, 1990). B. Carter, I.M. Khalatnikov, Rev. Math. Phys. [**6**]{} (2), 277-304 (1994). M. Liu, Phys. Rev. Lett. [**81**]{}, 3223 (1998). J. Tate, B. Cabrera, S.B. Felch, J.T. Anderson, Phys. Rev. Lett. [**62**]{}, 845 (1989). M. Baldo, J. Cugnon, A. Lejeune, U. Lombardo, Nucl. Phys. [**A 536**]{}, 349 (1992). A.D. Sedrakian, D.M. Sedrakian, Astrophys. J. [**447**]{}, 305-323 (1995).
--- abstract: | The work explores the fundamental limits of coded caching in heterogeneous networks where multiple ($N_0$) senders/antennas, serve different users which are associated (linked) to shared caches, where each such cache helps an arbitrary number of users. Under the assumption of uncoded cache placement, the work derives the exact optimal worst-case delay and DoF, for a broad range of user-to-cache association profiles where each such profile describes how many users are helped by each cache. This is achieved by presenting an information-theoretic converse based on index coding that succinctly captures the impact of the user-to-cache association, as well as by presenting a coded caching scheme that optimally adapts to the association profile by exploiting the benefits of encoding across users that share the same cache. The work reveals a powerful interplay between shared caches and multiple senders/antennas, where we can now draw the striking conclusion that, as long as each cache serves at least $N_0$ users, adding a single degree of cache-redundancy can yield a DoF increase equal to $N_0$, while at the same time — irrespective of the profile — going from 1 to $N_0$ antennas reduces the delivery time by a factor of $N_0$. Finally some conclusions are also drawn for the related problem of coded caching with multiple file requests. author: - 'Emanuele Parrinello, Ayşe Ünsal and Petros Elia[^1] [^2]' bibliography: - 'final\_refs2.bib' nocite: - '[@MN14; @WanTP15; @YuMA16]' - '[@lampiris2018lowCSIT; @LampirisZE17]' title: Fundamental Limits of Caching in Heterogeneous Networks with Uncoded Prefetching --- Caching networks, coded caching, shared caches, delivery rate, uncoded cache placement, index coding, MISO broadcast channel, multiple file requests, network coding. 
Introduction \[sec:intro\] ========================== In the context of communication networks, the emergence of predictable content has brought to the fore the use of caching as a fundamental ingredient for handling the exponential growth in data volumes. A recent information theoretic exposition of the cache-aided communication problem [@MN14] has revealed the potential of caching in allowing for the elusive scaling of networks, where a limited amount of (bandwidth and time) resources can conceivably suffice to serve an ever increasing number of users. Coded Caching ------------- This exposition in [@MN14] considered a shared-link broadcast channel (BC) scenario where a single-antenna transmitter has access to a library of $N$ files, and serves (via a single bottleneck link) $K$ receivers, each having a cache of size equal to the size of $M$ files. In a normalized setting where the link has capacity 1 file per unit of time, the work in [@MN14] showed that any set of $K$ simultaneous requests (one file requested per user) can be served with a normalized delay (worst-case completion time) which is at most $T = \frac{K(1-\gamma)}{1+K\gamma}$ where $\gamma {\triangleq}\frac{M}{N} $ denotes the normalized cache size. This implied an ability to treat $K\gamma+1$ users at a time; a number that is often referred to as the cache-aided sum *degrees of freedom* (DoF) $d_{\Sigma} {\triangleq}\frac{K(1-\gamma)}{T}$, corresponding to a caching gain of $K\gamma$ additional served users due to caching. For this same shared-link setting, this performance was shown to be approximately optimal (cf. [@MN14]), and under the basic assumption of uncoded cache placement where caches store uncoded content from the library, it was shown to be exactly optimal (cf. [@WanTP15] as well as [@YuMA16]).
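The two quantities above are straightforward to evaluate; the following minimal sketch (function names are ours, not from the literature) computes the delay and sum-DoF at integer values of $K\gamma$:

```python
from fractions import Fraction

def mn_delay(K, gamma):
    """Worst-case delivery time T = K(1 - gamma) / (1 + K*gamma) of the
    shared-link coded caching scheme, at integer values of K*gamma."""
    return (K * (1 - gamma)) / (1 + K * gamma)

def sum_dof(K, gamma):
    """Cache-aided sum-DoF d = K(1 - gamma) / T: users served at a time."""
    return (K * (1 - gamma)) / mn_delay(K, gamma)

# Example: K = 20 users, normalized cache size gamma = M/N = 1/10,
# so K*gamma = 2 and the scheme serves K*gamma + 1 = 3 users at a time.
K, gamma = 20, Fraction(1, 10)
print(mn_delay(K, gamma))   # 6: delivery of all 20 requests takes 6 units
print(sum_dof(K, gamma))    # 3 = K*gamma + 1
```

Exact rational arithmetic (`Fraction`) is used so the caching gain $K\gamma$ appears without floating-point noise.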
Such high coded caching gains have been shown to persist in a variety of settings that include uneven popularity distributions [@JiTLC14; @NiesenMtit17Popularity; @ZhangLW:18], uneven topologies [@BidokhtiWT16isit; @ZhangE16b], a variety of channels such as erasure channels [@GhorbelKY:16], MIMO broadcast channels with fading [@ZE:17tit], a variety of networks such as D2D networks [@JiCM16D2D], coded caching under secrecy constraints [@RPK+:16], and in other settings as well [@SenguptaTS15; @CaoTXL16; @RoigTG17a; @CaoTaoMultiAntenna18; @PiovClerckISIT18; @BayatMC:18arxiv]. Recently some progress has also been made in ameliorating the well known subpacketization bottleneck; for this see for example [@ShanmugamJTLD16it; @JiSVLTC15; @YanCTC:17tit; @TangR:17isit; @ShangguanZG:18tit; @ShanmugamTD:17isit; @LampirisEliaJsac18]. ![Shared-link or multi-antenna broadcast channel with shared caches.](drawing_groupedcaching2_lambda_modified2.pdf "fig:"){width="0.4\linewidth"} ![Shared-link or multi-antenna broadcast channel with shared caches.](system_example.jpg "fig:"){width="0.4\linewidth"} \[system\_pic\] Cache-aided Heterogeneous Networks: Coded Caching with Shared Caches -------------------------------------------------------------------- Another step in further exploiting the use of caches was to explore coded caching in the context of the so-called heterogeneous networks which better capture aspects of more realistic settings such as larger wireless networks. Here the term *heterogeneous* refers to scenarios where one or more (typically) multi-antenna transmitters (base-stations) communicate to a set of users, with the assistance of smaller nodes. In our setting, these smaller helper nodes will serve as caches that will be shared among the users.
This cache-aided heterogeneous topology nicely captures an evolution into denser networks where many wireless access points work in conjunction with bigger base stations, in order to better handle interference and to alleviate the backhaul load by replacing backhaul capacity with storage capacity at the communicating nodes. The use of caching in such networks was famously explored in the *Femtocaching* work in [@GSDMC:12], where wireless receivers are assisted by helper nodes of a limited cache size, whose main role is to bring content closer to the users. A transition to coded caching can be found in [@Diggavi_IT] which considered a shared-cache heterogeneous network similar to the one here, where each receiving user can have access to a main single-antenna base station (single-sender) and to different helper caches. In this context, under a uniform user-to-cache association where each cache serves an equal number of users, [@Diggavi_IT] proposes a coded caching scheme which was shown to perform to within a certain constant factor from the optimal. This uniform setting is also addressed in [@MND13], again for the single antenna case. Interesting work can also be found in [@XuGW:18sharedArxiv] which explores the single-stream (shared-link) coded caching scenario with shared caches, where the uniformity condition is lifted, and where emphasis is placed on designing schemes with centralized coded prefetching with small sum-size caches where the total cache size is smaller than the library size (i.e., where $KM < N$).
Somewhat related work also appears in [@ZhangWXWL16; @KaramchandaniNMD16IT] in the context of hierarchical coded caching. Recent progress can also be found in [@WeUlukus17] which establishes the exact optimal worst-case delay — under uncoded cache placement, for the shared-link case — for the uniform case where each user requests an equal number of files[^3]. As a byproduct of our results here, in the context of worst-case demands, we establish the exact optimal performance of the multiple file requests problem for any (not necessarily uniform) user-to-file association profile. #### Current work {#current-work .unnumbered} In the heterogeneous setting with shared caches, we here explore the effect of user-to-cache association profiles and their non-uniformity, and we characterize how this effect is scaled in the presence of multiple antennas or multiple senders. Such considerations are motivated by realistic constraints in assigning users to caches, where these constraints may be due to topology, cache capacity or other factors. As it turns out, there is an interesting interplay between all these aspects, which is crisply revealed here as a result of a new scheme and an outer bound that jointly provide exact optimality results. Throughout the paper, emphasis will be placed on the shared-cache scenario, but some of the results will be translated directly to the multiple file request problem which will be described later on. Notation -------- For $n$ being a positive integer, $[n]$ refers to the following set $[n]\triangleq \{1,2,\dots,n\}$, and $2^{[n]}$ denotes the power set of $[n]$. The expression $\alpha | \beta$ denotes that integer $\alpha$ divides integer $\beta$. Permutation and binomial coefficients are denoted and defined by $P(n,k){\triangleq}\frac{n!}{(n-k)!}$ and $\binom{n}{k}{\triangleq}\frac{n!}{(n-k)!k!}$, respectively. For a set $\mathcal{A}$, $|\mathcal{A}|$ denotes its cardinality. $\mathbb{N}$ represents the natural numbers. 
We denote the lower convex envelope of the points $\{(i, f(i)) | i \in [n]\cup \{0\}\}$ for some $n\in \mathbb{N}$ by $Conv(f(i))$. The concatenation of a vector $\vv$ with itself $N$ times is denoted by $(\vv \Vert \vv)_{N}$. For $n\in \mathbb{N}$, we denote the symmetric group of all permutations of $[n]$ by $S_n$. To simplify notation, we will also use such permutations $\pi\in S_n$ on vectors $\vv \in \mathbb{R}^n$, where $\pi(\vv)$ will now represent the action of the permutation matrix defined by $\pi$, meaning that the first element of $\pi(\vv)$ is $\vv_{\pi(1)}$ (the $\pi(1)$ entry of $\vv$), the second is $\vv_{\pi(2)}$, and so on. Similarly $\pi^{-1}(\cdot )$ will represent the inverse such function and $\pi_s(\vv)$ will denote the sorted version of a real vector $\vv$ in descending order. Paper Outline ------------- In Section \[sec:systemModel\] we give a detailed description of the system model and the problem definition, followed by the main results in Section \[sec:results\], first for the shared-link setting with shared caches[^4], and then for the multi-antenna/multi-sender setting. In Section \[sec:scheme\], we introduce the scheme for a broad range of parameters. The scheme is further explained with an example in this section. We present the information theoretic converse along with an explanatory example for constructing the lower bound in Section \[sec:converse\]. Lastly, in Section \[sec:discussion\] we draw some basic conclusions, while in the Appendix Section \[sec:Appendix\] we present some proof details. 
System Model\[sec:systemModel\] =============================== We consider a basic broadcast configuration with a transmitting server having $N_0$ transmitting antennas and access to a library of $N$ files $W^{1},W^{2},\dots ,W^{N}$, each of size equal to one unit of ‘file’, where this transmitter is connected via a broadcast link to $K$ receiving users and to $\Lambda\leq K$ helper nodes that will serve as caches which store content from the library[^5]. The communication process is split into $a)$ the cache-placement phase, $b)$ the user-to-cache assignment phase during which each user is assigned to a single cache, and $c)$ the delivery phase where each user requests a single file independently and during which the transmitter aims to deliver these requested files, taking into consideration the cached content and the user-to-cache association. #### Cache placement phase During this phase, helper nodes store content from the library without having knowledge of the users’ requests. Each helper cache has size $M\leq N$ units of file, and no coding is applied to the content stored at the helper caches; this corresponds to the common case of *uncoded cache placement*. We will denote by $\mathcal{Z}_{\lambda}$ the content stored by helper node $\lambda$ during this phase. The cache-placement algorithm is oblivious of the subsequent user-to-cache association $\mathcal{U}$. #### User-to-cache association After the caches are filled, each user is assigned to exactly *one* helper node/cache, from which it can download content at zero cost. Specifically, each cache $ \lambda = 1,2,\dots,\Lambda$, is assigned to a set of users $\mathcal{U}_\lambda$, and all these disjoint sets $$\mathcal{U}{\stackrel{\triangle}{=}}\{\mathcal{U}_1,\mathcal{U}_2,\dots ,\mathcal{U}_\Lambda\}$$ form the partition of the set of users $\{1,2,\dots,K\}$, describing the overall association of the users to the caches. 
This cache assignment is independent of the cache content and independent of the file requests to follow. We here consider any arbitrary user-to-cache association $\mathcal{U}$, thus allowing the results to reflect both an ability to choose/design the association, as well as to reflect possible association restrictions due to randomness or topology. Similarly, having the user-to-cache association be independent of the requested files is meant to reflect the fact that such associations may not be able to vary as quickly as a user changes the requested content. #### Content delivery The delivery phase commences when each user $k = 1,\dots,K$ requests from the transmitter any *one* file $W^{d_{k}}$, $d_{k}\in\{1,\dots,N\}$ out of the $N$ library files. Upon notification of the entire *demand vector* $\dv=(d_1,d_2,\dots,d_{K})\in\{1,\dots,N\}^K$, the transmitter aims to deliver the requested files, each to their intended receiver, and the objective is to design a *caching and delivery scheme $\chi$* that does so with limited (delivery phase) duration $T$, where the delivery algorithm has full knowledge of the user-to-cache association $\mathcal{U}$. For each transmission, the received signal at user $k$ takes the form $$\begin{aligned} y_{k}=\hv_{k}^{T} \xv + w_{k}, ~~ k = 1, \dots, K\end{aligned}$$ where $\xv\in\mathbb{C}^{N_0\times 1}$ denotes the transmitted vector satisfying a power constraint $\E(||\xv||^2)\leq P$, $\hv_{k}\in\mathbb{C}^{N_0\times 1}$ denotes the channel of user $k$, and $w_{k}$ represents unit-power AWGN noise at receiver $k$. We will assume that the allowable power $P$ is high (i.e., we will assume high signal-to-noise ratio (SNR)), that there exists perfect channel state information throughout the (active) nodes, that fading is statistically symmetric, and that each link (one antenna to one receiver) has ergodic capacity $\log(SNR)+o(\log(SNR))$.
#### User-to-cache association profiles, and performance measure As one can imagine, some user-to-cache association instances $\mathcal{U}$ may allow for higher performance than others; for instance, one can suspect that more uniform profiles may be preferable. Part of the objective of this work is to explore the effect of such associations on the overall performance. Toward this, for any given $\mathcal{U}$, we define the association *profile* (sorted histogram) $$\Lc=(\mathcal{L}_{1},\dots,\mathcal{L}_{\Lambda})$$ where $\mathcal{L}_{\lambda}$ is the number of users assigned to the $\lambda$-th *most populated* helper node/cache[^6]. Naturally, $\sum_{\lambda=1}^\Lambda \mathcal{L}_{\lambda} = K$. Each profile $\Lc$ defines a *class* $\mathcal{U}_{\Lc}$ comprising all the user-to-cache associations $\mathcal{U}$ that share the same[^7] profile $\Lc$. As in [@MN14], the measure of interest $T$ is the number of time slots, per file served per user, needed to complete delivery of any file-request vector[^8] $\dv$. We use $T(\mathcal{U},\dv,\chi)$ to define the delay required by some generic caching-and-delivery scheme $\chi$ to satisfy demand $\dv$ in the presence of a user-to-cache association described by $\mathcal{U}$. To capture the effect of the user-to-cache association, we will characterize the optimal worst-case delivery time $$T^*(\Lc){\triangleq}\min_{\chi} \max_{(\mathcal{U},\dv) \in (\mathcal{U}_{\Lc},\{1,\dots,N\}^K)} T(\mathcal{U},\dv,\chi) \label{eq:T*_def}$$ for each class. Our interest is in the regime of $N\geq K$ where there are more files than users. Main Results \[sec:results\] ============================ We first describe the main results for the single antenna case[^9] (shared-link BC), and then generalize to the multi-antenna/multi-sender case. 
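The profile of a given association is just a sorted histogram; the following small sketch (with our own, hypothetical helper names) makes the class notion concrete:

```python
def profile(assoc):
    """Sorted profile L = (L_1, ..., L_Lambda): L_r is the number of users
    assigned to the r-th most populated cache. `assoc` maps each cache
    index to its (disjoint) set of users."""
    return tuple(sorted((len(users) for users in assoc.values()), reverse=True))

# K = 7 users, Lambda = 3 caches: two different associations, same class
U1 = {1: {1, 2, 3, 4}, 2: {5, 6}, 3: {7}}
U2 = {1: {7}, 2: {1, 3, 5, 6}, 3: {2, 4}}
assert profile(U1) == profile(U2) == (4, 2, 1)  # same profile => same class
assert sum(profile(U1)) == 7                    # the L_r always sum to K
```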
Shared-Link Coded Caching with Shared Caches\[subsec:single\_antenna\_lower\] ----------------------------------------------------------------------------- The following theorem presents the main result for the shared-link case ($N_0 = 1$). \[thm:PerClassSingleAntenna\] In the $K$-user shared-link broadcast channel with $\Lambda$ shared caches of normalized size $\gamma$, the optimal delivery time within any class/profile $\Lc$ is $$\label{eq:TS_L} T^*(\Lc)=Conv\bigg(\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_r{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}\bigg)$$ at points $\gamma\in \{\frac{1}{\Lambda},\frac{2}{\Lambda},\dots,1\}$. *Proof.* The achievability part of the proof is given in Section \[sec:scheme\], and the converse is proved in Section \[sec:converse\] after setting $N_0 = 1$. We note that the converse that supports Theorem \[thm:PerClassSingleAntenna\], encompasses the class of all caching-and-delivery schemes $\chi$ that employ uncoded cache placement under a general sum cache constraint $\frac{1}{\Lambda}\sum_{\lambda=1}^\Lambda |\mathcal{Z}_\lambda | = M$ which does not *necessarily* impose an individual cache size constraint. The converse also encompasses all scenarios that involve a library of size $\sum_{n\in[N]}|W^{n}| = N$ but where the files may be of different size. In the end, even though the designed optimal scheme will consider an individual cache size $M$ and equal file sizes, the converse guarantees that there cannot exist a scheme (even in settings with uneven cache sizes or uneven file sizes) that exceeds the optimal performance identified here. From Theorem \[thm:PerClassSingleAntenna\], we see that in the uniform case[^10] where $\Lc=(\frac{K}{\Lambda},\frac{K}{\Lambda},\dots,\frac{K}{\Lambda})$, the expression in  reduces to $$T^*(\Lc)=\frac{K(1-\gamma)}{\Lambda\gamma+1}$$ matching the achievable delay presented in [@MND13]. 
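The expression of Theorem \[thm:PerClassSingleAntenna\] is straightforward to evaluate at the integer points $\gamma = t/\Lambda$; the following sketch (our own code) also checks the uniform-profile reduction quoted above:

```python
from math import comb

def t_star(L, t):
    """Delay of eq. (TS_L) at integer t = Lambda*gamma, for a sorted
    profile L = (L_1 >= ... >= L_Lambda)."""
    Lam = len(L)
    num = sum(L[r - 1] * comb(Lam - r, t) for r in range(1, Lam - t + 1))
    return num / comb(Lam, t)

K, Lam, t = 30, 6, 2                # gamma = t / Lambda = 1/3
uniform = (K // Lam,) * Lam         # (5, 5, 5, 5, 5, 5)
skewed = (25, 1, 1, 1, 1, 1)
# Uniform profile recovers T = K(1 - gamma) / (Lambda*gamma + 1) = 20/3:
assert abs(t_star(uniform, t) - K * (1 - t / Lam) / (t + 1)) < 1e-12
assert t_star(skewed, t) > t_star(uniform, t)   # skew increases the delay
```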
It also matches the recent result by [@WeUlukus17] which proved that this performance — in the context of the multiple file request problem — is optimal under the assumption of uncoded cache placement. The following corollary relates to this uniform case. \[cor:ressym\] In the uniform user-to-cache association case where $\Lc=(\frac{K}{\Lambda},\frac{K}{\Lambda},\dots,\frac{K}{\Lambda})$, the aforementioned optimal delay $T^*(\Lc)=\frac{K(1-\gamma)}{\Lambda\gamma+1}$ is smaller than the corresponding delay $T^*(\Lc)$ for any other non-uniform class. The proof that the uniform profile results in the smallest delay among all profiles follows directly from the fact that in , both $\mathcal{L}_r$ and ${\Lambda-r\choose \Lambda\gamma}$ are non-increasing with $r$. Multi-antenna/Multi-sender Coded Caching with Shared Caches ----------------------------------------------------------- The following extends Theorem \[thm:PerClassSingleAntenna\] to the case where the transmitter is equipped with multiple ($N_0>1$) antennas. The results hold for any $\Lc$ as long as any nonzero $\mathcal{L}_\lambda$ satisfies $\mathcal{L}_\lambda\geq N_0, ~\forall\lambda\in[\Lambda]$. \[thm:resmultiant\] In the $N_0$-antenna $K$-user broadcast channel with $\Lambda$ shared caches of normalized size $\gamma$, the optimal delivery time within any class/profile $\Lc$ is $$\label{eq:multi_delay} T^*(\Lc,N_0)=\frac{1}{N_0}Conv\bigg(\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_{r}{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}\bigg)$$ for $\gamma\in \left\{\frac{1}{\Lambda},\frac{2}{\Lambda},\dots,1\right\}$. This reveals a multiplicative gain of $N_0$ with respect to the single antenna case. *Proof.* The scheme that achieves (\[eq:multi\_delay\]) is presented in Section \[sec:scheme\], and the converse is presented in Section \[sec:converse\].
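Corollary \[cor:ressym\] can also be verified exhaustively for small instances; the brute-force sketch below (our own code, not from the paper) enumerates all sorted profiles of a given sum and confirms that the uniform one minimizes the delay:

```python
from math import comb

def t_star(L, t):
    """Delay of Theorem 1 at integer t = Lambda*gamma, sorted profile L."""
    Lam = len(L)
    return sum(L[r - 1] * comb(Lam - r, t)
               for r in range(1, Lam - t + 1)) / comb(Lam, t)

def profiles(K, Lam, cap=None):
    """Yield all sorted profiles (L_1 >= ... >= L_Lambda >= 0) summing to K."""
    if Lam == 1:
        if cap is None or K <= cap:
            yield (K,)
        return
    hi = K if cap is None else min(K, cap)
    for first in range(-(-K // Lam), hi + 1):   # first >= ceil(K / Lam)
        for rest in profiles(K - first, Lam - 1, cap=first):
            yield (first,) + rest

K, Lam, t = 12, 4, 2
best = min(profiles(K, Lam), key=lambda L: t_star(L, t))
assert best == (3, 3, 3, 3)   # the uniform profile minimizes the delay
```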
The following extends Corollary \[cor:ressym\] to the multi-antenna case, and the proof is direct from Theorem \[thm:resmultiant\]. \[cor:ressymMulti\] In the uniform user-to-cache association case of $\Lc=\left(\frac{K}{\Lambda},\frac{K}{\Lambda},\dots,\frac{K}{\Lambda}\right)$ where $N_0\leq \frac{K}{\Lambda}$, the optimal delay is $$\label{delay_unif_multi} T^*(\Lc)=\frac{K(1-\gamma)}{N_0(\Lambda\gamma+1)}$$ and it is smaller than the corresponding delay $T^*(\Lc)$ for any other non-uniform class. \[rem:multipleFilerequstsResult\] In the error-free shared-link case ($N_0 = 1$), with file-independence and worst-case demand assumptions, the shared-cache problem here is closely related to the coded caching problem with multiple file requests per user, where now $\Lambda$ users, each with their own cache, request in total $K\geq \Lambda$ files. In particular, slightly changing the format, each demand vector $\dv = (d_1,d_2,\dots,d_K)$ would now represent the vector of the indices of the $K$ requested files, and each user $\lambda \in \{1,2,\dots,\Lambda\}$ would request those files from this vector $\dv$ whose indices[^11] form the set $\mathcal{U}_\lambda \subset [K]$. At this point, the problem is defined by the user-to-file association $\mathcal{U} = \{\mathcal{U}_1,\mathcal{U}_2,\dots ,\mathcal{U}_\Lambda\}$ which describes — given a fixed demand vector $\dv$ — the files requested by any user. From this point on, the equivalence with the original shared cache problem is complete. As before, each such $\mathcal{U}$ again has a corresponding (sorted) profile $\Lc=(\mathcal{L}_{1},\mathcal{L}_{2},\dots,\mathcal{L}_{\Lambda})$, and belongs to a class $\mathcal{U}_{\Lc}$ with all other associations $\mathcal{U}$ that share the same profile $\Lc$.
As we quickly show in the Appendix Section \[sec:AppendixMultipleFileRequests\], our scheme and converse can be adapted to the multiple file request problem, and thus directly from Theorem \[thm:PerClassSingleAntenna\] we conclude that for this multiple file request problem, the optimal delay $T^*(\Lc){\triangleq}\min_{\chi} \max_{(\mathcal{U},\dv) \in (\mathcal{U}_{\Lc},\{1,\dots,N\}^K)} T(\mathcal{U},\dv,\chi)$ corresponding to any user-to-file association profile $\Lc$, takes the form $T^*(\Lc)= Conv\bigg(\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_{r}{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}\bigg)$. At this point we close the parenthesis regarding multiple file requests, and refocus exclusively on the problem of shared caches. Interpretation of Results ------------------------- ### Capturing the effect of the user-to-cache association profile In a nutshell, Theorems \[thm:PerClassSingleAntenna\] and \[thm:resmultiant\] quantify how profile non-uniformities bring about increased delays: the more skewed the profile, the larger the delay. This is reflected in Figure \[fig:performance\], which shows — for a setting with $K=30$ users and $\Lambda=6$ caches — the memory-delay trade-off curves for different user-to-cache association profiles. As expected, Figure \[fig:performance\] demonstrates that when all users are connected to the same helper cache, the only gain arising from caching is the well-known *local caching gain*. On the other hand, when users are assigned uniformly among the caches (i.e., when $\mathcal{L}_{\lambda}=\frac{K}{\Lambda},\forall\lambda\in[\Lambda]$) the caching gain is maximized and the delay is minimized.
![Optimal delay for different user-to-cache association profiles $\Lc$, for $K=30$ users and $\Lambda=6$ caches.[]{data-label="fig:performance"}](converse_thin.pdf){width="0.95\linewidth"} ### A multiplicative reduction in delay Theorem \[thm:resmultiant\] states that, as long as each cache is associated to at least $N_0$ users, we can achieve a delay $T(\Lc,N_0)= \frac{1}{N_0}\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_{r}{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}$. The resulting reduction $$\label{eq:ratioT} \frac{T(\Lc,N_0=1)}{T(\Lc,N_0) }= N_0$$ as compared to the single-stream case, comes in strong contrast to the case of $\Lambda = K$ where, as we know from [@ShariatpanahiMK16it], this same reduction takes the form $$\label{eq:ratioTold} \frac{T(\Lambda = K,N_0=1)}{T(\Lambda = K,N_0)} = \frac{\frac{K(1-\gamma)}{1+\Lambda\gamma}}{\frac{K(1-\gamma)}{N_0+\Lambda\gamma}} = \frac{N_0+\Lambda\gamma}{1+\Lambda\gamma}$$ which approaches $N_0$ only when $\gamma \rightarrow 0$, and which decreases as $\gamma$ increases. In the uniform case ($\mathcal{L}_\lambda = \frac{K}{\Lambda}$) with $\Lambda \leq \frac{K}{N_0}$, Corollary \[cor:ressymMulti\] implies a sum-DoF $$d_\Sigma(\gamma)= \frac{K(1-\gamma)}{T} = N_0(1+\Lambda \gamma)$$ which reveals that every time we add a single degree of cache-redundancy (i.e., every time we increase $\Lambda\gamma$ by one), we gain $N_0$ degrees of freedom. This is in direct contrast to the case of $\Lambda = K$ (for which case we recall from [@ShariatpanahiMK16it] that the DoF is $N_0+\Lambda\gamma$) where the same unit increase in the cache redundancy yields only one additional DoF. 
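The contrast between the two multiplexing behaviors can also be tabulated directly. The short Python sketch below (our own illustration; names are not from the paper) checks that the shared-cache ratio in (\[eq:ratioT\]) equals $N_0$ for every redundancy level, while the dedicated-cache ratio in (\[eq:ratioTold\]) equals $N_0$ only at $\gamma = 0$ and shrinks as $\Lambda\gamma$ grows.

```python
from fractions import Fraction

def gain_shared(N0, Lg):
    # Ratio T(N0=1)/T(N0) for shared caches: independent of Lg = Lambda*gamma
    return Fraction(N0)

def gain_dedicated(N0, Lg):
    # Ratio (N0 + Lambda*gamma)/(1 + Lambda*gamma) for the Lambda = K setting
    return Fraction(N0 + Lg, 1 + Lg)

N0 = 4
for Lg in range(0, 11):
    # the dedicated-cache gain never exceeds the shared-cache gain
    assert gain_dedicated(N0, Lg) <= gain_shared(N0, Lg)
assert gain_dedicated(N0, 0) == N0                 # equal only as gamma -> 0
assert gain_dedicated(N0, 10) == Fraction(14, 11)  # decreases with gamma
```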
### Impact of encoding over users that share the same cache As we know, both the MN algorithm in [@MN14] and the multi-antenna algorithm in [@ShariatpanahiMK16it] are designed for users with different caches, so — in the uniform case where $\mathcal{L}_\lambda = K/\Lambda$ — one conceivable treatment of the shared-cache problem would have been to apply these algorithms over $\Lambda$ users at a time, all with different caches[^12]. In the single antenna case, this implementation would treat $1+\Lambda\gamma$ users at a time, thus yielding a delay of $T = \frac{K(1-\gamma)}{1+\Lambda\gamma}$, while in the multi-antenna case, this implementation would treat $N_0+\Lambda\gamma$ users at a time (see [@ShariatpanahiMK16it]), thus yielding a delay of $T = \frac{K(1-\gamma)}{N_0+\Lambda\gamma}$. While this direct implementation is optimal in the single antenna case (see [@WeUlukus17]; see also Corollary \[cor:ressym\]) — and it is indeed what we also do here in the uniform-profile case — in the multi-antenna case this same approach can have an unbounded performance gap $$\label{eq:gapNaive} \frac{\frac{K(1-\gamma)}{N_0+\Lambda\gamma}}{\frac{K(1-\gamma)}{N_0(1+\Lambda\gamma)}} = \frac{N_0(1+\Lambda\gamma)}{N_0+\Lambda\gamma}$$ from the derived optimal performance of Corollary \[cor:ressymMulti\]. These conclusions also apply when the user-to-cache association profiles are not uniform; again there would be a direct implementation of existing multi-antenna coded caching algorithms, which would, though, again have an unbounded performance gap from the optimal performance achieved here. Coded Caching Scheme\[sec:scheme\] ================================== This section is dedicated to the description of the placement-and-delivery scheme achieving the performance presented in the general Theorem \[thm:resmultiant\] (and hence also in Theorem \[thm:PerClassSingleAntenna\] and the corollaries).
The formal description of the optimal scheme in the upcoming subsection will be followed by a clarifying example in Section \[subsec:example\_scheme\] that demonstrates the main idea behind the design. Description of the General Scheme --------------------------------- The placement phase, which uses exactly the algorithm developed in [@MN14] for the case of $(\Lambda=K,M,N)$, is independent of $\mathcal{U},\Lc$, while the delivery phase is designed for any given $\Uc$, and will achieve the optimal worst-case delivery time stated in (\[eq:TS\_L\]) and (\[eq:multi\_delay\]). As mentioned, we will assume that any nonzero $\mathcal{L}_{\lambda}$ satisfies $\mathcal{L}_{\lambda}\geq N_0, \forall \lambda\in[\Lambda]$. ### Cache Placement Phase \[sec:SchemePlacement\] The placement phase employs the original cache-placement algorithm of [@MN14] corresponding to the scenario of having only $\Lambda$ users, each with their own cache. Hence — recalling from [@MN14] — first each file $W^n$ is split into $\Lambda \choose \Lambda\gamma$ disjoint subfiles $W^n_\Tau$, for each $\Tau \subset [\Lambda]$, $|\Tau|=\Lambda\gamma$, and then each cache stores a fraction $\gamma$ of each file, as follows $$\Zc_\lambda=\{W^n_\Tau :\Tau\ni\lambda,~ \forall n\in[N]\}.$$ ### Delivery Phase\[sec:SchemeDelivery\] For the purpose of the scheme description only, we will assume without loss of generality that $|\mathcal{U}_1| \geq |\mathcal{U}_2| \geq \dots \geq |\mathcal{U}_{\Lambda}|$ (any other case can be handled by simple relabeling of the caches), and we will use the notation $\mathcal{L}_\lambda {\triangleq}|\Uc_\lambda|$. Furthermore, in a slight abuse of notation, we will consider here each $\mathcal{U}_\lambda$ to be an *ordered vector* describing, in order, the users associated to cache $\lambda$. We will also use $$\label{eq:Alambda} \boldsymbol{s_{\lambda}} = (\mathcal{U}_\lambda \Vert \mathcal{U}_\lambda )_{N_0}, \lambda\in[\Lambda]$$ to denote the $N_0$-fold concatenation of each $\mathcal{U}_\lambda$.
Each such $N_0\mathcal{L}_{\lambda}$-length vector $\boldsymbol{s_{\lambda}}$ can be seen as the concatenation of $\mathcal{L}_\lambda$ different $N_0$-tuples $\boldsymbol{s_{\lambda,j}}$, $j=1,2,\dots, \mathcal{L}_\lambda$, i.e., each $\boldsymbol{s_{\lambda}}$ takes the form[^13] $$\boldsymbol{s_{\lambda}} = \boldsymbol{s_{\lambda,1}} \Vert \boldsymbol{s_{\lambda,2}} \Vert \dots \Vert \underbrace{\boldsymbol{s_{\lambda,\mathcal{L}_\lambda}}}_{N_0-\text{length}}.$$ The delivery phase commences with the demand vector $\dv$ being revealed to the server. Delivery will consist of $\mathcal{L}_1$ rounds, where each round $j\in[\mathcal{L}_1]$ serves users $$\label{eq:UsersPerRound} \mathcal{R}_j=\bigcup_{\lambda\in[\Lambda]} \big( \boldsymbol{s_{\lambda,j}}:\mathcal{L}_\lambda \geq j \big).$$ #### Transmission scheme {#transmission-scheme .unnumbered} Once the demand vector $\dv$ is revealed to the transmitter, each requested subfile $W^{n}_{\Tau}$ (for any $n$ found in $\dv$) is further split into $N_0$ mini-files $\{W^{n}_{\Tau,l}\}_{l\in[N_0]}$. During round $j$, serving users in $\mathcal{R}_j$, we create $\Lambda \choose \Lambda\gamma+1$ sets $\mathcal{Q}\subseteq [\Lambda]$ of size $|\mathcal{Q}|=\Lambda\gamma+1$, and for each set $\mathcal{Q}$, we pick the set of users $$\label{eq:UsersServedPerXOR} \chi_\mathcal{Q}=\bigcup_{\lambda\in \mathcal{Q}}\big( \boldsymbol{s_{\lambda,j}}:\mathcal{L}_\lambda \geq j \big).$$ If $\chi_\mathcal{Q} = \emptyset$, then there is no transmission, and we move to the next $\mathcal{Q}$.
If $\chi_\mathcal{Q}\neq \emptyset$, the server — during this round $j$ — transmits the following vector[^14] $$\label{eq:TransmitSignalGeneral} \xv_{\chi_{\mathcal{Q}}}=\!\!\!\!\sum_{\lambda\in \mathcal{Q}:\mathcal{L}_\lambda \geq j}\!\!\!\!\mathbf{H}^{-1}_{\boldsymbol{s_{\lambda,j}}}\cdot \begin{bmatrix} W^{d_{\boldsymbol{s_{\lambda,j}}(1)}}_{\mathcal{Q}\backslash{\{\lambda\}},l} &\dots & W^{d_{\boldsymbol{s_{\lambda,j}}(N_0)}}_{\mathcal{Q}\backslash{\{\lambda\}},l} \end{bmatrix}^T$$ where $W^{d_{\boldsymbol{s_{\lambda,j}}(k)}}_{\mathcal{Q}\backslash{\{\lambda\}},l}$ is a mini-file intended for user $\boldsymbol{s_{\lambda,j}}(k)$, i.e., for the user labelled by the $k$th entry of vector $\boldsymbol{s_{\lambda,j}}$ . The choice of $l$ is sequential, guaranteeing that no subfile $W^{d_{\boldsymbol{s_{\lambda,j}}(k)}}_{\mathcal{Q}\backslash{\{\lambda\}},l}$ is transmitted twice. Since each user appears in $\boldsymbol{s_{\lambda}}$ (and consequently in $\bigcup_{j\in[\mathcal{L}_1]} \mathcal{R}_j$) exactly $N_0$ times, at the end of the $\mathcal{L}_1$ rounds, all the $N_0$ mini-files $W^{d_{\boldsymbol{s_{\lambda,j}}(k)}}_{\mathcal{Q}\backslash{\{\lambda\}},l}$, $l\in[N_0]$ will be sent once. In the above, $\mathbf{H}^{-1}_{\boldsymbol{s_{\lambda,j}}}$ denotes the inverse of the channel matrix between the $N_0$ transmit antennas and the users in vector $\boldsymbol{s_{\lambda,j}}$. 
#### Decoding {#decoding .unnumbered} Directly from (\[eq:TransmitSignalGeneral\]), we see that each receiver $\boldsymbol{s_{\lambda,j}}(k)$ obtains a received signal whose noiseless version takes the form $$y_{\boldsymbol{s_{\lambda,j}}(k)}=W^{d_{\boldsymbol{s_{\lambda,j}}(k)}}_{\mathcal{Q}\backslash{\{\lambda\},l}} + \iota_{\boldsymbol{s_{\lambda,j}}(k)}$$ where $\iota_{\boldsymbol{s_{\lambda,j}}(k)}$ is the $k$th entry of the interference vector $$\label{eq:received48} \sum_{\lambda'\in \mathcal{Q}\setminus{\{\lambda\}}:\mathcal{L}_{\lambda'}\geq j}\!\!\!\mathbf{H}^{-1}_{\boldsymbol{s_{\lambda',j}}}\cdot \begin{bmatrix} W^{d_{\boldsymbol{s_{\lambda',j}}(1)}}_{\mathcal{Q}\backslash{\{\lambda'\},l}} &\dots & W^{d_{\boldsymbol{s_{\lambda',j}}(N_0)}}_{\mathcal{Q}\backslash{\{\lambda'\},l}} \end{bmatrix}^T .$$ In the above, we see that the entire interference term $\iota_{\boldsymbol{s_{\lambda,j}}(k)}$ experienced by receiver $\boldsymbol{s_{\lambda,j}}(k)$, can be removed (cached-out) because all appearing subfiles $W^{d_{\boldsymbol{s_{\lambda',j}}(1)}}_{\mathcal{Q}\backslash{\{\lambda'\},l}}, \dots, W^{d_{\boldsymbol{s_{\lambda',j}}(N_0)}}_{\mathcal{Q}\backslash{\{\lambda'\},l}}$, for all $\lambda'\in \mathcal{Q}\setminus{\{\lambda\}}, \mathcal{L}_{\lambda'}\geq j$, can be found in cache $\lambda$ associated to this user, simply because $\lambda\in \mathcal{Q}\backslash\{\lambda'\}$. This completes the proof of the scheme for the multi-antenna case.
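To make the round structure concrete, the following Python sketch (function and variable names are ours, not part of the paper) builds the concatenated vectors $\boldsymbol{s_\lambda}$ and the per-round user groups $\mathcal{R}_j$ of (\[eq:UsersPerRound\]); it reproduces the groups obtained in the worked example later in this section, with profile $(8,5,2)$ and $N_0=2$.

```python
def rounds(U, N0):
    """Per-round user groups R_j of the delivery phase.

    U  : list of user lists, ordered so len(U[0]) >= len(U[1]) >= ...
    N0 : number of transmit antennas.
    Each s_lambda is the N0-fold concatenation of U[lambda], read off
    in N0-tuples; round j serves the j-th tuple of every cache lambda
    with L_lambda >= j."""
    s = [u * N0 for u in U]                        # N0-fold concatenation
    R = []
    for j in range(len(U[0])):                     # j = 0 .. L_1 - 1
        Rj = []
        for lam, u in enumerate(U):
            if len(u) > j:                         # i.e. L_lambda >= j+1
                Rj += s[lam][j * N0:(j + 1) * N0]  # tuple s_{lambda,j}
        R.append(Rj)
    return R

# Profile (8,5,2) with N0 = 2, as in the worked example
U = [[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13], [14, 15]]
R = rounds(U, 2)
assert R[0] == [1, 2, 9, 10, 14, 15]   # R_1
assert R[2] == [5, 6, 13, 9]           # R_3
assert R[7] == [7, 8]                  # R_8
```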
### Small modification for the single antenna case\[sec:schemeSingleAntenna\] For the single-antenna case, the only difference is that now $\boldsymbol{s_{\lambda}} = \mathcal{U}_\lambda$, and that each transmitted vector in (\[eq:TransmitSignalGeneral\]) during round $j$ becomes a scalar of the form[^15] $$\label{eq:TransmitSignalSingleAntenna} x_{\chi_{\mathcal{Q}}}=\!\!\!\!\bigoplus_{\lambda\in \mathcal{Q}:\mathcal{L}_\lambda \geq j} W^{d_{\boldsymbol{s_{\lambda,j}}}}_{\mathcal{Q}\backslash{\{\lambda\}},1}.$$ The rest of the details from the general scheme, as well as the subsequent calculation of the delay, follow directly. Calculation of Delay -------------------- To first calculate the delay needed to serve the users in $\mathcal{R}_j$ during round $j$, we recall that there are $\Lambda \choose \Lambda\gamma+1$ sets $$\chi_\mathcal{Q}=\bigcup_{\lambda\in \mathcal{Q}}\big( \boldsymbol{s_{\lambda,j}}:\mathcal{L}_\lambda \geq j \big), \mathcal{Q}\subseteq [\Lambda]$$ of users, and we recall that $|\mathcal{U}_1| \geq |\mathcal{U}_2| \geq \dots \geq |\mathcal{U}_{\Lambda}|$. For each such non-empty set, there is a transmission. Furthermore, we see that for $a_j{\stackrel{\triangle}{=}}\Lambda - \frac{|\mathcal{R}_j|}{N_0}$, there are ${a_j \choose \Lambda\gamma+1}$ such sets $\chi_\mathcal{Q}$ which are empty, which means that round $j$ consists of $$\label{eq:totsubround} {\Lambda \choose \Lambda\gamma+1}-{a_j \choose \Lambda\gamma+1}$$ transmissions.
Since each file is split into ${\Lambda\choose \Lambda\gamma}N_0$ subfiles, the duration of each such transmission is $$\label{eq:dureachtransmission} \frac{1}{{\Lambda\choose \Lambda\gamma}N_0}$$ and thus summing over all $\mathcal{L}_1$ rounds, the total delay takes the form $$\label{eq:totdelay1} T=\frac{\sum_{j=1}^{\mathcal{L}_1}{{\Lambda \choose \Lambda\gamma+1}-{a_j \choose \Lambda\gamma+1}}}{{\Lambda\choose \Lambda\gamma}{N_0}}$$ which, after some basic algebraic manipulation (see Appendix \[sec:BinomialChangeProof\] for the details), takes the final form $$\label{eq:totdelay2} T=\frac{1}{N_0}\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_r{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}$$ which concludes the achievability part of the proof. Scheme Example: $K=N=15$, $\Lambda=3$, $N_0=2$ and $\Lc=(8,5,2)$\[subsec:example\_scheme\] ------------------------------------------------------------------------------------------ Consider a scenario with $K=15$ users $\{1,2,\dots,15\}$, a server equipped with $N_0=2$ transmitting antennas that stores a library of $N=15$ equally-sized files $W^1,W^2,\dots,W^{15}$, and consider $\Lambda=3$ helper caches, each of size equal to $M=5$ units of file. In the cache placement phase, we split each file $W^n$ into $3$ equally-sized disjoint subfiles denoted by $W^n_{1},W^n_{2},W^n_{3}$ and as in [@MN14], each cache $\lambda$ stores $W^n_{\lambda}, \forall n\in [15]$. We assume that in the subsequent cache assignment, users $\mathcal{U}_1=(1,2,3,4,5,6,7,8)$ are assigned to helper node $1$, users $\mathcal{U}_2=(9,10,11,12,13)$ to helper node $2$ and users $\mathcal{U}_3=(14,15)$ to helper node $3$. This corresponds to a profile $\Lc=(8,5,2)$. We also assume without loss of generality that the demand vector is $\dv=(1,2,\dots,15)$. 
Delivery takes place in $|\mathcal{U}_1|=8$ rounds, and each round will serve either $N_0=2$ users or no users from each of the following three ordered user groups $$\begin{aligned} \boldsymbol{s_1}&=\mathcal{U}_1 || \mathcal{U}_1 = (1,2,\dots,7,8,1,2,\dots,7,8),\\ \boldsymbol{s_2}&=\mathcal{U}_2 || \mathcal{U}_2 = (9,10,11,12,13,9,10,11,12,13),\\ \boldsymbol{s_3}&=\mathcal{U}_3 || \mathcal{U}_3 = (14,15,14,15).\end{aligned}$$ Specifically, rounds 1 through 8, will respectively serve the following sets of users $$\begin{aligned} \mathcal{R}_1&=\{1,2,9,10,14,15\}\\ \mathcal{R}_2&=\{3,4,11,12,14,15\}\\ \mathcal{R}_3&=\{5,6,13,9\}\\ \mathcal{R}_4&=\{7,8,10,11\}\\ \mathcal{R}_5&=\{1,2,12,13\}\\ \mathcal{R}_6&=\{3,4\}\\ \mathcal{R}_7&=\{5,6\}\\ \mathcal{R}_8&=\{7,8\}.\end{aligned}$$ Before transmission, each requested subfile $W^n_\Tau$ is further split into $N_0=2$ mini-files $W^n_{\Tau,1}$ and $W^n_{\Tau,2}$. As noted in the general description of the scheme, the transmitted vector structure within each round, draws from [@LampirisEliaJsac18] as it employs the linear combination of ZF-precoded vectors. 
In the first round, the server transmits, one after the other, the following $3$ vectors $$\begin{aligned} \label{eq:ex1} \xv_{\{1,2,9,10\}}=&\mathbf{H}^{-1}_{\{1,2\}} \begin{bmatrix} W^1_{2,1}\\ W^2_{2,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{9,10\}} \begin{bmatrix} W^{9}_{1,1}\\ W^{10}_{1,1} \end{bmatrix}\\ \label{eq:ex2} \xv_{\{1,2,14,15\}}=&\mathbf{H}^{-1}_{\{1,2\}} \begin{bmatrix} W^1_{3,1}\\ W^2_{3,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{14,15\}} \begin{bmatrix} W^{14}_{1,1}\\ W^{15}_{1,1} \end{bmatrix}\\ \label{eq:ex3} \xv_{\{9,10,14,15\}}=&\mathbf{H}^{-1}_{\{9,10\}} \begin{bmatrix} W^9_{3,1}\\ W^{10}_{3,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{14,15\}} \begin{bmatrix} W^{14}_{2,1}\\ W^{15}_{2,1} \end{bmatrix}\end{aligned}$$ where $\mathbf{H}^{-1}_{\{i,j\}}$ is the zero-forcing (ZF) precoder[^16] that inverts the channel $\mathbf{H}_{\{i,j\}}=[\mathbf{h}_i^T \mathbf{h}_j^T]$ from the transmitter to users $i$ and $j$. To see how decoding takes place, let us first focus on users 1 and 2 during the transmission of $\xv_{\{1,2,9,10\}}$, where we see that, due to ZF precoding, the users’ respective received signals take the form $$\begin{aligned} &y_1=W^1_{2,1}+\underbrace{\mathbf{h}_1^T\mathbf{H}^{-1}_{\{9,10\}} \begin{bmatrix} W^{9}_{1,1}\\ W^{10}_{1,1} \end{bmatrix}}_{\text{interference}}+w_1\\ &y_2=W^2_{2,1}+\underbrace{\mathbf{h}_2^T\mathbf{H}^{-1}_{\{9,10\}} \begin{bmatrix} W^{9}_{1,1}\\ W^{10}_{1,1} \end{bmatrix}}_{\text{interference}}+w_2.\end{aligned}$$ Users 1 and 2 use their cached content in cache node 1, to remove files $W^9_{1,1},W^{10}_{1,1}$, and can thus directly decode their own desired subfiles. The same procedure is applied to the remaining users served in the first round. 
Similarly, in the second round, we have $$\begin{aligned} \xv_{\{3,4,11,12\}}=&\mathbf{H}^{-1}_{\{3,4\}} \begin{bmatrix} W^3_{2,1}\\ W^4_{2,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{11,12\}} \begin{bmatrix} W^{11}_{1,1}\\ W^{12}_{1,1} \end{bmatrix}\\ \xv_{\{3,4,14,15\}}=&\mathbf{H}^{-1}_{\{3,4\}} \begin{bmatrix} W^3_{3,1}\\ W^4_{3,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{14,15\}} \begin{bmatrix} W^{14}_{1,2}\\ W^{15}_{1,2} \end{bmatrix}\\ \xv_{\{11,12,14,15\}}=&\mathbf{H}^{-1}_{\{11,12\}} \begin{bmatrix} W^{11}_{3,1}\\ W^{12}_{3,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{14,15\}} \begin{bmatrix} W^{14}_{2,2}\\ W^{15}_{2,2} \end{bmatrix}\end{aligned}$$ and again in each round, each pair of users can cache-out some of the files, and then decode their own file due to the ZF precoder. The next three transmissions, corresponding to the third round, are as follows $$\begin{aligned} &\xv_{\{5,6,13,9\}}=\mathbf{H}^{-1}_{\{5,6\}} \begin{bmatrix} W^5_{2,1}\\ W^6_{2,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{13,9\}} \begin{bmatrix} W^{13}_{1,1}\\ W^9_{1,2} \end{bmatrix} \\[3pt] &\xv_{\{5,6\}}=\mathbf{H}^{-1}_{\{5,6\}} \begin{bmatrix} W^5_{3,1}\\ W^6_{3,1} \end{bmatrix} \ \ \xv_{\{13,9\}}=\mathbf{H}^{-1}_{\{13,9\}} \begin{bmatrix} W^{13}_{3,1}\\ W^{9}_{3,2} \end{bmatrix}\end{aligned}$$ where the transmitted vectors $\xv_{\{5,6\}}$ and $\xv_{\{13,9\}}$ simply use zero-forcing. 
Similarly round 4 serves the users in $\mathcal{R}_4$ by sequentially sending $$\begin{aligned} &\xv_{\{7,8,10,11\}}=\mathbf{H}^{-1}_{\{7,8\}} \begin{bmatrix} W^7_{2,1}\\ W^8_{2,1} \end{bmatrix} + \mathbf{H}^{-1}_{\{10,11\}} \begin{bmatrix} W^{10}_{1,2}\\ W^{11}_{1,2} \end{bmatrix} \\[3pt] &\xv_{\{7,8\}}=\mathbf{H}^{-1}_{\{7,8\}} \begin{bmatrix} W^7_{3,1}\\ W^8_{3,1} \end{bmatrix} \ \ \xv_{\{10,11\}}=\mathbf{H}^{-1}_{\{10,11\}} \begin{bmatrix} W^{10}_{3,2}\\ W^{11}_{3,2} \end{bmatrix}\end{aligned}$$ and round 5 serves the users in $\mathcal{R}_5$ by sequentially sending $$\begin{aligned} &\xv_{\{1,2,12,13\}}=\mathbf{H}^{-1}_{\{1,2\}} \begin{bmatrix} W^1_{2,2}\\ W^2_{2,2} \end{bmatrix} + \mathbf{H}^{-1}_{\{12,13\}} \begin{bmatrix} W^{12}_{1,2}\\ W^{13}_{1,2} \end{bmatrix} \\[3pt] &\xv_{\{1,2\}}=\mathbf{H}^{-1}_{\{1,2\}} \begin{bmatrix} W^1_{3,2}\\ W^2_{3,2} \end{bmatrix} \ \ \xv_{\{12,13\}}=\mathbf{H}^{-1}_{\{12,13\}} \begin{bmatrix} W^{12}_{3,2}\\ W^{13}_{3,2} \end{bmatrix}.\end{aligned}$$ Finally, for the remaining rounds $6,7,8$ which respectively involve user sets $\mathcal{R}_6,\mathcal{R}_7$ and $\mathcal{R}_8$ that are connected to the same helper cache 1, data is delivered using the following standard ZF-precoded transmissions $$\begin{aligned} &\xv_{\{3,4\}}=\mathbf{H}^{-1}_{\{3,4\}} \begin{bmatrix} W^3_{2,2}||W^3_{3,2}\\ W^4_{2,2}||W^4_{3,2} \end{bmatrix}\\[3pt] &\xv_{\{5,6\}}=\mathbf{H}^{-1}_{\{5,6\}} \begin{bmatrix} W^5_{2,2}||W^5_{3,2}\\ W^6_{2,2}||W^6_{3,2} \end{bmatrix}\\ &\xv_{\{7,8\}}=\mathbf{H}^{-1}_{\{7,8\}} \begin{bmatrix} W^{7}_{2,2}||W^{7}_{3,2}\\ W^{8}_{2,2}||W^{8}_{3,2} \end{bmatrix}.\end{aligned}$$ The overall delivery time required to serve all users is $$T=\frac{1}{6}\cdot 15+\frac{1}{3}\cdot 3 =\frac{21}{6}$$ where the first summand is for rounds 1 through 5, and the second summand is for rounds 6 through 8. 
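The example's delay can be double-checked against the general delay calculation given earlier: counting transmissions round by round via (\[eq:totsubround\]), and comparing with the closed form (\[eq:totdelay2\]). The sketch below (our own, with $t = \Lambda\gamma$) recovers $T = 21/6$ both ways.

```python
from math import comb

Lam, t, N0 = 3, 1, 2     # Lambda = 3, Lambda*gamma = 1, two antennas
L = [8, 5, 2]            # profile of the example
F = comb(Lam, t) * N0    # number of mini-files per file; each tx lasts 1/F

# Round-by-round count: a_j = Lambda - |R_j|/N0
total_tx = 0
for j in range(1, L[0] + 1):
    Rj = sum(N0 for Ll in L if Ll >= j)   # |R_j|
    a_j = Lam - Rj // N0
    total_tx += comb(Lam, t + 1) - comb(a_j, t + 1)
T_rounds = total_tx / F

# Closed form: sum_r L_r * C(Lam - r, t) / (C(Lam, t) * N0)
T_closed = sum(L[r - 1] * comb(Lam - r, t)
               for r in range(1, Lam - t + 1)) / F

assert T_rounds == T_closed == 21 / 6     # = 3.5, as in the example
```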
It is easy to see that this delay remains the same — given again worst-case demand vectors — for any user-to-cache association $\mathcal{U}$ with the same profile $\Lc=(8,5,2)$. In every such case, the delay matches the converse bound $$\begin{aligned} T^*((8,5,2))&\geq \frac{\sum_{r=1}^{2}\mathcal{L}_r{{3-r}\choose 1}}{2{3\choose 1}}= \frac{8\cdot 2+5\cdot 1}{6}=\frac{21}{6}\end{aligned}$$ of Theorem \[thm:resmultiant\]. Information Theoretic Converse\[sec:converse\] ============================================== Toward proving Theorems \[thm:PerClassSingleAntenna\] and \[thm:resmultiant\], we develop a lower bound on the normalized delivery time in (\[eq:T\*\_def\]) for each given user-to-cache association profile $\Lc$. The proof technique is based on the breakthrough in [@WanTP15] which — for the case of $\Lambda = K$, where each user has their own cache — employed index coding to bound the performance of coded caching. Part of the challenge here will be to account for having shared caches, and mainly to adapt the index coding approach to reflect non-uniform user-to-cache association classes. We begin by lower bounding the normalized delivery time $T(\mathcal{U},\dv,\chi)$, for any user-to-cache association $\mathcal{U}$, demand vector $\dv$ and a generic caching-delivery strategy $\chi$. #### Identifying the distinct problems {#identifying-the-distinct-problems .unnumbered} The caching problem is defined when the user-to-cache association $\mathcal{U}=\{\mathcal{U}_\lambda \}_{\lambda=1}^\Lambda$ and demand vector $\dv$ are revealed. What we can easily see is that there are many combinations of $\{\mathcal{U}_\lambda \}_{\lambda=1}^\Lambda$ and $\dv$ that jointly result in the same coded caching problem. After all, any permutation of the file indices requested by users assigned to the same cache will effectively result in the same coded caching problem.
As one can see, every *distinct* coded caching problem is fully defined by $\{\dvlambda\}_{\lambda=1}^\Lambda$, where $\dvlambda$ denotes the vector of file indices requested by the users in $\mathcal{U}_\lambda$, i.e., requested by the $|\mathcal{U}_\lambda|$ users associated to cache $\lambda$. The analysis is facilitated by reordering the demand vector $\dv$ to take the form $$\label{eq:OrderDemand2} \dv(\Uc){\stackrel{\triangle}{=}}(\boldsymbol{d_1}, \dots, \boldsymbol{d_\Lambda}).$$ Based on this, we define the set of worst-case demands associated to a given profile $\Lc$ to be $$\mathcal{D}_{\Lc} = \{\dv(\mathcal{U}): \dv\in \mathcal{D}_{wc}, \mathcal{U} \in \mathcal{U}_{\Lc} \}$$ where $\mathcal{D}_{wc}$ is the set of demand vectors $\dv$ whose $K$ entries are all different (i.e., where $d_i \neq d_j, ~i,j\in[K],~i\neq j$, corresponding to the case where all users request different files). We will convert each such coded caching problem into an index coding problem. #### The corresponding index coding problem {#the-corresponding-index-coding-problem .unnumbered} To make the transition to the index coding problem, each requested file $W^{\dvlambda(j)}$ is split into $2^\Lambda$ disjoint subfiles $W^{\dvlambda(j)}_\Tau,\Tau\in 2^{[\Lambda]}$ where $\Tau\subset[\Lambda]$ indicates the set of helper nodes in which $W^{\dvlambda(j)}_\Tau$ is cached[^17]. Then — in the context of index coding — each subfile $W^{\dvlambda(j)}_\Tau$ can be seen as being requested by a different user that has as side information all the content $\Zc_\lambda$ of the same helper node $\lambda$. Naturally, no subfile of the form $W^{\dvlambda(j)}_\Tau, \; ~\Tau \ni\lambda$ is requested, because helper node $\lambda$ already has this subfile.
Therefore the corresponding index coding problem is defined by $K2^{\Lambda-1}$ requested subfiles, and it is fully represented by the side-information graph $\mathcal{G}=(\mathcal{V}_{\mathcal{G}},\mathcal{E}_{\mathcal{G}})$, where $\mathcal{V}_{\mathcal{G}}$ is the set of vertices (each vertex/node representing a different subfile $W^{\dvlambda(j)}_\Tau, \Tau\not\ni\lambda$) and $\mathcal{E}_{\mathcal{G}}$ is the set of directed edges of the graph. Following standard practice in index coding, a directed edge from node $W^{\dvlambda(j)}_\Tau$ to $W^{\boldsymbol{d_{\lambda'}}(j')}_{\Tau'}$ exists if and only if $\lambda'\in\Tau$. For any given $\mathcal{U}$, $\dv$ (and of course, for any scheme $\chi$) the total delay $T$ required for this index coding problem is the completion time for the corresponding coded caching problem. #### Lower bounding $T(\Uc,\dv,\chi)$ {#lower-bounding-tucdvchi .unnumbered} We are interested in lower bounding $T(\Uc,\dv,\chi)$ which represents the total delay required to serve the users for the index coding problem corresponding to the side-information graph $\mathcal{G}_{\Uc,\dv}$ defined by $\Uc,\dv,\chi$ or equivalently by $\dv(\Uc),\chi$. In the next lemma, we remind the reader — in the context of our setting — the useful index-coding converse from [@li2017cooperative].
(Cut-set-type converse [@li2017cooperative])\[cor\_dof\] For a given $\Uc,\dv,\chi$, in the corresponding side information graph $\mathcal{G}_{\Uc,\dv}=(\mathcal{V}_{\mathcal{G}},\mathcal{E}_{\mathcal{G}})$ of the $N_0$-antenna MISO broadcast channel with $\mathcal{V}_{\mathcal{G}}$ vertices/nodes and $\mathcal{E}_{\mathcal{G}}$ edges, the following inequality holds $$\label{eq:indexbound}T\geq \frac{1}{N_0}\sum_{v\in \mathcal{V}_{\mathcal{J}}}|v|$$ for every acyclic induced subgraph $\mathcal{J}$ of $\mathcal{G}_{\Uc,\dv}$, where $\mathcal{V}_{\mathcal{J}}$ denotes the set of nodes of the subgraph $\mathcal{J}$, and where $|v|$ is the size of the message/subfile/node $v$. *Proof.* The above lemma draws from [@li2017cooperative Corollary 1] (see also [@Sadeghi:16 Corollary 2] for a simplified version), and is easily proved in the Appendix Section \[proof:cor\_dof\]. #### Creating large acyclic subgraphs {#creating-large-acyclic-subgraphs .unnumbered} Lemma \[cor\_dof\] suggests the need to create (preferably large) acyclic subgraphs of $\mathcal{G}_{\mathcal{U},\dv}$. The following lemma describes how to properly choose a set of nodes to form a large acyclic subgraph.
\[lem:cons\_acyclic\] An acyclic subgraph $\mathcal{J}$ of $\mathcal{G}_{\Uc,\dv}$ corresponding to the index coding problem defined by $\Uc,\dv,\chi$ for any $\Uc$ with profile $\Lc$, is designed here to consist of all subfiles $W^{\boldsymbol{d_{\sigma_{s}(\lambda)}}(j)}_{\Tau_{\lambda}},~\forall j\in [\mathcal{L}_{\lambda}],~\forall \lambda\in [\Lambda]$ for all $\Tau_{\lambda}\subseteq [\Lambda]\setminus \{\sigma_s(1),\dots,\sigma_s(\lambda)\}$ where $\sigma_s\in S_{\Lambda}$ is the permutation such that $|\mathcal{U}_{\sigma_s(1)}|\geq |\mathcal{U}_{\sigma_s(2)}|\geq\dots\geq |\mathcal{U}_{\sigma_s(\Lambda)}|$. *Proof.* The proof, which can be found in the Appendix Section \[proof:cons\_acyclic\], is an adaptation of [@WanTP15 Lemma 1] to the current setting. The choice of the permutation $\sigma_s$ is critical for the development of a tight converse. Any other choice $\sigma\in S_\Lambda$ may result — in some crucial cases — in an acyclic subgraph with a smaller number of nodes and therefore a looser bound. This approach here deviates from the original approach in [@WanTP15 Lemma 1], which instead considered — for each $\dv,\chi$, for the uniform user-to-cache association case of $K = \Lambda$ — the set of *all* possible permutations, that jointly resulted in a certain symmetry that is crucial to that proof. Here in our case, such symmetry would not serve the same purpose as it would dilute the non-uniformity in $\Lc$ that we are trying to capture. Our choice of a single carefully chosen permutation, allows for a bound which — as it turns out — is tight even in non-uniform cases. The reader is also referred to Section \[subsec:example\] for an explanatory example. 
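The acyclicity of the chosen subgraph can also be verified mechanically: since each $\Tau_\lambda$ excludes $\sigma_s(1),\dots,\sigma_s(\lambda)$, every edge points to a strictly later cache in the chosen ordering. The following sketch (our own construction; it takes $\sigma_s$ to be the identity on an already-sorted profile, and 1-indexes the caches) builds the subgraph nodes for a toy profile and runs a DFS cycle check.

```python
from itertools import chain, combinations

Lam = 3
L = [3, 2, 1]   # sorted profile, so sigma_s is the identity

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Subgraph nodes: subfiles W^{d_lambda(j)}_T with T a subset of
# [Lambda] \ {1, ..., lambda}, one node per requested file of cache lambda
nodes = [(lam, j, T)
         for lam in range(1, Lam + 1)
         for j in range(L[lam - 1])
         for T in subsets(range(lam + 1, Lam + 1))]

def has_cycle(nodes):
    """DFS cycle check on the induced index-coding edges
    (lam, j, T) -> (lam', j', T'), which exist iff lam' is in T."""
    adj = {a: [b for b in nodes if b[0] in a[2]] for a in nodes}
    state = {a: 0 for a in nodes}       # 0 = new, 1 = open, 2 = done
    def dfs(a):
        state[a] = 1
        for b in adj[a]:
            if state[b] == 1 or (state[b] == 0 and dfs(b)):
                return True
        state[a] = 2
        return False
    return any(state[a] == 0 and dfs(a) for a in nodes)

assert not has_cycle(nodes)   # the chosen subgraph is indeed acyclic
```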
Having chosen an acyclic subgraph according to Lemma \[lem:cons\_acyclic\], we return to Lemma \[cor\_dof\] and form — by adding the sizes of all subfiles associated to the chosen acyclic graph — the following lower bound $$T(\Uc,\dv,\chi)\geq T^{LB}(\Uc,\dv,\chi)$$ where $$\begin{aligned} &T^{LB}(\Uc,\dv,\chi) {\triangleq}\frac{1}{N_0}\Bigg( \sum_{j=1}^{\mathcal{L}_{1}}\sum_{\Tau_{1}\subseteq [\Lambda]\setminus \{\sigma_s(1)\}}|W^{\boldsymbol{d_{\sigma_s(1)}}(j)}_{\Tau_{1}}|\nonumber \\ &+ \sum_{j=1}^{\mathcal{L}_{2}}\sum_{\Tau_{2}\subseteq [\Lambda]\setminus \{\sigma_s(1),\sigma_s(2)\}}|W^{\boldsymbol{d_{\sigma_s(2)}}(j)}_{\Tau_{2}}|+\dots \nonumber \\ &+ \sum_{j=1}^{\mathcal{L}_{\Lambda}}\sum_{\Tau_{\Lambda}\subseteq [\Lambda]\setminus \{\sigma_s(1),\dots,\sigma_s(\Lambda)\}}|W^{\boldsymbol{d_{\sigma_s(\Lambda)}}(j)}_{\Tau_{\Lambda}}| \Bigg). \label{eq:TLB}\end{aligned}$$ Our interest lies in a lower bound for the worst-case delivery time/delay associated to profile $\Lc$. Such a worst-case naturally corresponds to the scenario where all users request different files, i.e., where all the entries of the demand vector $\dv(\mathcal{U})$ are different. The corresponding lower bound can be developed by averaging over worst-case demands. Recalling our set $\mathcal{D}_{\Lc}$, the worst-case delivery time can thus be written as $$\begin{aligned} T^*(\Lc)&{\triangleq}\min_{\chi} \max_{(\mathcal{U},\dv) \in (\mathcal{U}_{\Lc},[N]^K)} T(\mathcal{U},\dv,\chi)\\ &\overset{(a)}{\geq} \min_{\chi} \frac{1}{|\mathcal{D}_{\Lc}|} \sum_{\dv(\mathcal{U}) \in \mathcal{D}_{\Lc}} T(\dv(\mathcal{U}),\chi)\label{eq:alternativedefinitionofT}\end{aligned}$$ where in step (a), we used the following change of notation $T(\dv(\mathcal{U}),\chi){\stackrel{\triangle}{=}}T(\mathcal{U},\dv,\chi)$ and averaged over worst-case demands. 
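For a feel of how tight (\[eq:TLB\]) is, one may evaluate it under the symmetric placement in which every subfile $W^n_\Tau$ with $|\Tau|=\Lambda\gamma$ has size $1/{\Lambda\choose \Lambda\gamma}$ and all other subfiles are empty. The sketch below (our own illustration, not part of the formal proof) shows that for the profile $(8,5,2)$ with $N_0=2$, the resulting value matches the $21/6$ achieved in the scheme example.

```python
from math import comb

def T_LB_symmetric(L, Lam, t, N0):
    """Evaluate the lower bound when every subfile with |T| = t has size
    1/C(Lam, t) and all other subfiles are empty; sigma_s sorts L in
    decreasing order.  For cache sigma_s(lam) there remain
    C(Lam - lam, t) admissible subsets T_lam per requested file."""
    L = sorted(L, reverse=True)
    return sum(L[lam - 1] * comb(Lam - lam, t)
               for lam in range(1, Lam + 1)) / (comb(Lam, t) * N0)

# Matches the 21/6 of the scheme example (profile (8,5,2), N0 = 2)
assert T_LB_symmetric([8, 5, 2], 3, 1, 2) == 21 / 6
```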
With a given class/profile $\Lc$ in mind, in order to construct $\mathcal{D}_{\Lc}$ (so that we can then average over it), we will consider all demand vectors $\dv\in \mathcal{D}_{wc}$ for all permutations $\pi\in S_{\Lambda}$. Then for each $\dv$, we create the following set of $\Lambda$ vectors $$\begin{aligned} &\boldsymbol{d^{'}_1}= (d_1 : d_{\mathcal{L}_1}),\\ &\boldsymbol{d^{'}_2}= (d_{\mathcal{L}_1+1} : d_{\mathcal{L}_1+\mathcal{L}_2}),\\ & \vdots \\ &\boldsymbol{d^{'}_{\Lambda}}= (d_{\sum_{i=1}^{\Lambda-1}\mathcal{L}_{i}~+1} : d_{K})\end{aligned}$$ and for each permutation $\pi\in S_{\Lambda}$ applied to the set $\{1,2,\dots,\Lambda\}$, a demand vector $\dv(\mathcal{U})$ is constructed as follows $$\begin{aligned} \dv(\mathcal{U})&{\stackrel{\triangle}{=}}(\boldsymbol{d_1},\boldsymbol{d_2},\dots,\boldsymbol{d_\Lambda})\\ &=(\boldsymbol{d^{'}_{\pi^{-1}(1)}},\boldsymbol{d^{'}_{\pi^{-1}(2)}},\dots,\boldsymbol{d^{'}_{\pi^{-1}(\Lambda)}}).\end{aligned}$$ This procedure is repeated for all $\Lambda!$ permutations ${\pi\in S_{\Lambda}}$ and all $P(N,K)$ worst-case demands $\dv\in \mathcal{D}_{wc}$. This implies that the cardinality of $\mathcal{D}_{\Lc}$ is ${|\mathcal{D}_{\Lc}|=P(N,K)\cdot \Lambda!}$. Using this designed set $\mathcal{D}_{\Lc}$, now the optimal worst-case delivery time in (\[eq:alternativedefinitionofT\]) is bounded as $$\begin{aligned} T^{*}(\Lc) &= \min_{\chi}T(\Lc,\chi)\\ & \geq \min_{\chi} \frac{1}{P(N,K)\Lambda!} \sum_{\dv(\mathcal{U}) \in \mathcal{D}_{\Lc}} T^{LB}(\dv(\mathcal{U}),\chi) \label{eq:lowerboundcompact}\end{aligned}$$ where $T^{LB}(\dv(\mathcal{U}),\chi)$ is given by (\[eq:TLB\]) for each reordered demand vector $\dv(\mathcal{U})\in \mathcal{D}_{\Lc}$. 
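The construction of $\mathcal{D}_{\Lc}$ can be sketched in a few lines of Python (our illustration; the toy parameters are assumptions, not from the paper), confirming the stated cardinality $|\mathcal{D}_{\Lc}|=P(N,K)\cdot \Lambda!$.

```python
from itertools import permutations
from math import factorial, perm

def build_D_Lc(N, K, L):
    """Enumerate D_Lc: every worst-case demand is split into blocks of
    sizes L_1..L_Lambda, then reordered by every block permutation pi."""
    Lam = len(L)
    D = []
    for d in permutations(range(1, N + 1), K):   # P(N,K) worst-case demands
        blocks, pos = [], 0
        for l in L:
            blocks.append(d[pos:pos + l]); pos += l
        for pi in permutations(range(Lam)):      # Lambda! reorderings
            # cache lambda is associated to block pi^{-1}(lambda)
            inv = sorted(range(Lam), key=lambda x: pi[x])
            D.append(tuple(blocks[inv[lam]] for lam in range(Lam)))
    return D

N = K = 3
L = [2, 1]                                       # toy profile Lc
D = build_D_Lc(N, K, L)
assert len(D) == perm(N, K) * factorial(len(L))  # |D_Lc| = P(N,K) * Lambda!
```

Each element of the returned family is a tuple of blocks, so the (varying) user-to-cache association is carried along with the reordered demand.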
Rewriting the summation in (\[eq:lowerboundcompact\]), we get $$\begin{aligned} \label{eq:longinequality} &\sum_{\dv(\mathcal{U})\in \mathcal{D}_{\Lc}} T^{LB}(\dv(\mathcal{U}),\chi)= \nonumber \\ &\frac{1}{N_0}\sum_{i=0}^{\Lambda}\sum_{n\in[N]}\sum_{\Tau\subseteq[\Lambda]:|\Tau|=i} |W^n_{\Tau}| \cdot \underbrace{\sum_{\dv(\mathcal{U})\in \mathcal{D}_{\Lc}} \mathds{1}_{\mathcal{V}_{\mathcal{J}_s^{\dv(\mathcal{U})}}}(W^n_{\Tau})}_{{\triangleq}Q_{i}(W^n_\Tau)}\end{aligned}$$ where $\mathcal{V}_{\mathcal{J}_s^{\dv(\mathcal{U})}}$ is the set of vertices in the acyclic subgraph chosen according to Lemma \[lem:cons\_acyclic\] for a given $\dv(\mathcal{U})$. In the above, $\mathds{1}_{\mathcal{V}_{\mathcal{J}_s^{\dv(\mathcal{U})}}}(W^n_{\Tau})$ denotes the indicator function which takes the value 1 only if $W^n_{\Tau} \in \mathcal{V}_{\mathcal{J}_s^{\dv(\mathcal{U})}}$, and is zero otherwise. A crucial step toward removing the dependence on $\Tau$ comes from the fact that $$\begin{aligned} \label{eq:Qi} Q_{i} &= Q_{i}(W^n_\Tau){\stackrel{\triangle}{=}}\sum_{\dv(\mathcal{U})\in \mathcal{D}_{\Lc}} \mathds{1}_{\mathcal{V}_{\mathcal{J}_s^{\dv(\Uc)}}}(W^n_{\Tau}) \nonumber\\ =&{N-1 \choose K-1}\sum_{r=1}^{\Lambda}P(\Lambda-i-1,r-1)(\Lambda-r)!\mathcal{L}_{r} \nonumber\\ &\times P(K-1,\mathcal{L}_{r}-1) (K-\mathcal{L}_{r})! (\Lambda-i)\end{aligned}$$ where we can see that the total number of times a specific subfile appears — in the summation in , over the set of all possible $\dv(\mathcal{U}) \in \mathcal{D}_{\Lc}$, and given our chosen permutation $\sigma_s$ — does not depend on the subfile itself but only on the number of caches $i=|\Tau|$ storing that subfile. The proof of can be found in Section \[proof:lemmaQi\].
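As a sanity check (ours, not from the paper), the closed form for $Q_i$ can be verified by brute force on a toy instance: enumerate $\mathcal{D}_{\Lc}$, build the acyclic subgraph of Lemma \[lem:cons\_acyclic\] for each reordered demand, and count the appearances of every subfile. The snippet assumes a profile with distinct entries so that $\sigma_s$ is unambiguous.

```python
from itertools import permutations, combinations
from math import comb, factorial

def perm0(n, k):
    """Falling factorial P(n, k), taken as 0 whenever it is undefined."""
    return 0 if n < 0 or k > n else factorial(n) // factorial(n - k)

def acyclic_nodes(blocks):
    """Nodes W^n_T of the chosen acyclic subgraph: caches visited in
    order of descending user count (the permutation sigma_s)."""
    Lam = len(blocks)
    order = sorted(range(Lam), key=lambda c: -len(blocks[c]))
    nodes, used = set(), set()
    for c in order:
        used.add(c)
        rest = [x for x in range(Lam) if x not in used]
        for size in range(len(rest) + 1):
            for T in combinations(rest, size):
                for n in blocks[c]:
                    nodes.add((n, frozenset(T)))
    return nodes

def Q_closed(i, N, K, L):
    Lam = len(L)
    return comb(N - 1, K - 1) * sum(
        perm0(Lam - i - 1, r - 1) * factorial(Lam - r) * L[r - 1]
        * perm0(K - 1, L[r - 1] - 1) * factorial(K - L[r - 1]) * (Lam - i)
        for r in range(1, Lam + 1))

N = K = 3; L = [2, 1]; Lam = len(L)
graphs = []
for d in permutations(range(1, N + 1), K):
    blocks, pos = [], 0
    for l in L:
        blocks.append(d[pos:pos + l]); pos += l
    for pi in permutations(range(Lam)):
        graphs.append(acyclic_nodes([blocks[pi[c]] for c in range(Lam)]))

for i in range(Lam + 1):
    for n in range(1, N + 1):
        for T in combinations(range(Lam), i):
            count = sum((n, frozenset(T)) in g for g in graphs)
            assert count == Q_closed(i, N, K, L)   # depends only on i = |T|
```

For this instance the check confirms that $Q_0=12$ and $Q_1=4$, identical for every subfile with the same $|\Tau|$.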
In the spirit of [@WanTP15], defining $$x_i{\stackrel{\triangle}{=}}\sum_{n\in[N]}\sum_{\Tau\subseteq[\Lambda]:|\Tau|=i}|W^n_{\Tau}|$$ to be the total amount of data stored in exactly $i$ helper nodes, we see that $$\label{eq:sumfiles} N=\sum_{i=0}^{\Lambda}x_i=\sum_{i=0}^{\Lambda}\sum_{n\in[N]}\sum_{\Tau\subseteq[\Lambda]:|\Tau|=i}|W^n_{\Tau}|$$ and that combining , and , gives $$\label{eq:compacteq} T(\Lc,\chi)\geq \frac{1}{N_{0}}\sum_{i=0}^{\Lambda}\frac{Q_{i}}{P(N,K)\Lambda!}x_{i}.$$ Now substituting into , after some algebraic manipulations, we get that $$\begin{aligned} T(\Lc,\chi)&\geq \frac{1}{N_0}\sum_{i=0}^{\Lambda}\frac{\sum_{r=1}^{\Lambda-i}\mathcal{L}_{r} {\Lambda-r\choose i}}{N{\Lambda\choose i}}x_{i} \label{eq:LBwithxi}\\ &=\frac{1}{N_0}\sum_{i=0}^{\Lambda}\frac{x_{i}}{N}c_{i} \label{eq:LBwithxi_2}\end{aligned}$$ where $c_{i}\triangleq \frac{\sum_{r=1}^{\Lambda-i}\mathcal{L}_r{\Lambda-r\choose i}}{{\Lambda\choose i}}$ decreases with $i\in \{0,1,\dots,\Lambda\}$. The proof of the transition from to , as well as the monotonicity proof for the sequence $\{c_i\}_{i\in [\Lambda]\cup \{0\}}$, are given in Appendix Sections \[proof:transition\] and \[sec:monotonicity\] respectively. Under the file-size constraint given in , and given the following cache-size constraint $$\sum_{i=0}^{\Lambda}i \cdot x_{i}\leq \Lambda M \label{eq:constr2}$$ the expression in  serves as a lower bound on the delay of any caching-and-delivery scheme $\chi$ whose caching policy implies a set of $\{x_i\}$. We then employ the Jensen's-inequality-based technique of [@YuMA16 Proof of Lemma 2] to minimize the expression in , over all admissible $\{x_i\}$. Hence, for any integer $\Lambda\gamma$, we have $$\label{eq:optimization1} T(\Lc,\chi)\geq \frac{1}{N_0}\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_r{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}}$$ whereas for all other values of $\Lambda\gamma$, this is extended to its convex lower envelope.
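The sequence $\{c_i\}$ can be spot-checked numerically (our sketch, with hand-picked sample profiles): the checks below confirm that $c_i$ is non-increasing, that it is convex at the integer points (consistent with the Jensen-based minimization), and that for a uniform profile it recovers the expression $K(1-\gamma)/(\Lambda\gamma+1)$ of the corollary (for $N_0=1$).

```python
from fractions import Fraction
from math import comb

def c(i, L):
    """c_i = sum_{r=1}^{Lambda-i} L_r * C(Lambda-r, i) / C(Lambda, i)."""
    Lam = len(L)
    return sum(Fraction(L[r - 1] * comb(Lam - r, i), comb(Lam, i))
               for r in range(1, Lam - i + 1))

for L in ([4, 3, 2], [2, 1], [5, 1], [1, 1, 1, 1]):
    seq = [c(i, L) for i in range(len(L) + 1)]
    # non-increasing in i, as claimed
    assert all(a >= b for a, b in zip(seq, seq[1:]))
    # convex at the integer points (second differences are non-negative)
    assert all(seq[i - 1] - 2 * seq[i] + seq[i + 1] >= 0
               for i in range(1, len(L)))

# Uniform profile: c_t equals K(1 - gamma)/(t + 1), matching the corollary
K, Lam = 9, 3
Lu = [K // Lam] * Lam
for t in range(Lam + 1):
    gamma = Fraction(t, Lam)
    assert c(t, Lu) == K * (1 - gamma) / (t + 1)
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point ambiguity in the comparisons.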
The detailed derivation of can again be found in Appendix Section \[lastproof\]. This concludes lower bounding $\max_{(\mathcal{U},\dv) \in (\mathcal{U}_{\Lc},[N]^K)} T(\mathcal{U},\dv,\chi)$, and thus — given that the right-hand side of is independent of $\chi$ — lower bounds the performance for any scheme $\chi$, which concludes the proof of the converse for Theorem \[thm:resmultiant\] (and consequently for Theorem \[thm:PerClassSingleAntenna\] after setting $N_0 = 1$). Proof of the Converse for Corollary \[cor:ressymMulti\] \[sec:ConverseUniform\] ------------------------------------------------------------------------------- For the uniform case of $\Lc = [\frac{K}{\Lambda},\frac{K}{\Lambda},\dots,\frac{K}{\Lambda}]$, the lower bound in becomes $$\begin{aligned} \frac{1}{N_0}\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}\mathcal{L}_r{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}} & =\frac{1}{N_0}\frac{K}{\Lambda}\frac{\sum_{r=1}^{\Lambda-\Lambda\gamma}{\Lambda-r\choose \Lambda\gamma}}{{\Lambda\choose \Lambda\gamma}} \\ &\overset{(a)}{=} \frac{1}{N_0}\frac{K}{\Lambda}\frac{{\Lambda \choose \Lambda\gamma+1}}{{\Lambda\choose \Lambda\gamma}} \\ & = \frac{K(1-\gamma)}{N_0(\Lambda\gamma+1)} \end{aligned}$$ where the equality in step (a) is due to the hockey-stick identity (an iterated application of Pascal's rule). Example for $N=K=9$, $N_{0}=2$ and $\Lc=(4,3,2)$\[subsec:example\] ------------------------------------------------------------------ We here give an example of deriving the converse for Theorem \[thm:resmultiant\], emphasizing how to convert the caching problem to the index-coding problem, and how to choose acyclic subgraphs. We consider the case of $K=9$ receiving users, and a transmitter with $N_{0}=2$ transmit antennas having access to a library of $N=9$ files of unit size. We also assume that there are $\Lambda=3$ caching nodes, of average normalized cache capacity $\gamma$.
We will focus on deriving the bound for user-to-cache association profile $\Lc=(4,3,2)$, meaning that we are interested in the setting where one cache is associated to $4$ users, one to $3$ users, and one to $2$ users. Each file $W^n$ is split into $2^{\Lambda}=8$ disjoint subfiles $W^{n}_{\Tau}, \Tau\in 2^{[3]}$ where each $\Tau$ describes the set of helper nodes in which $W^{n}_{\Tau}$ is cached. For instance, $W^1_{13}$ refers to the part of file $W^1$ that is stored in the first and third caching nodes. As a first step, we present the construction of the set $\mathcal{D}_{\Lc}$. To this end, let us start by considering the demand $\dv=(1,2,3,4,5,6,7,8,9)$ and one of the $6$ permutations $\pi\in S_{3}$; for example, $\pi(1)=2,\pi(2)=3,\pi(3)=1$. Toward reordering $\dv$ to reflect $\Lc$, we construct $$\begin{aligned} &\boldsymbol{d^{'}_1}=(1,2,3,4), ~~\boldsymbol{d^{'}_2}=(5,6,7), ~~\boldsymbol{d^{'}_3}=(8,9)\end{aligned}$$ to obtain the reordered demand vector $$\begin{aligned} \dv(\mathcal{U})&=(\boldsymbol{d^{'}_{\pi^{-1}(1)}},\boldsymbol{d^{'}_{\pi^{-1}(2)}},\boldsymbol{d^{'}_{\pi^{-1}(3)}})\\ &=(\boldsymbol{d^{'}_{3}},\boldsymbol{d^{'}_{1}},\boldsymbol{d^{'}_{2}})\end{aligned}$$ which in turn yields $\boldsymbol{d_1}=(8,9),\boldsymbol{d_2}=(1,2,3,4),\boldsymbol{d_3}=(5,6,7)$. Similarly, we can construct the remaining $5$ demands $\dv(\mathcal{U})$ associated to the other $5$ permutations $\pi\in S_{3}$. Finally, the procedure is repeated for all other worst-case demand vectors. These vectors are part of set $\mathcal{D}_{\Lc}$. With the users' demands $\dv(\mathcal{U})$ known to the server, the delivery problem is translated into an index coding problem with a side information graph of $K 2^{\Lambda-1}=9\cdot 2^{2}$ nodes. For each requested file $W^{\boldsymbol{d_\lambda}(j)}$, we write down the $4$ subfiles that the requesting user does not have in its assigned cache.
Hence, a given user of the caching problem requiring $4$ subfiles from the main server is replaced by $4$ different new users in the index coding problem. Each of these users requests a different subfile and is connected to the same cache $\lambda$ as the original user. $$\begin{array}{c@{}c@{}ccc} \boldsymbol{d_1}=(1,2,3,4),\boldsymbol{d_2}=(5,6,7), &~~& \boldsymbol{d_1}=(1,2,3,4),\boldsymbol{d_2}=(8,9), & \boldsymbol{d_1}=(5,6,7),\boldsymbol{d_2}=(1,2,3,4),\\ \boldsymbol{d_3}=(8,9)&~~&\boldsymbol{d_3}=(5,6,7)&\boldsymbol{d_3}=(8,9)\\ ~~&~~&~~\\ \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{2}} & \underline{W^{1}_{3}} & \underline{W^{1}_{23}}\\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{2}} & \underline{W^{2}_{3}} & \underline{W^{2}_{23}}\\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{2}} & \underline{W^{3}_{3}} & \underline{W^{3}_{23}}\\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{2}} & \underline{W^{4}_{3}} & \underline{W^{4}_{23}}\\ \underline{W^{5}_{\emptyset}} & W^{5}_{1} & \underline{W^{5}_{3}} & W^{5}_{13}\\ \underline{W^{6}_{\emptyset}} & W^{6}_{1} & \underline{W^{6}_{3}} & W^{6}_{13}\\ \underline{W^{7}_{\emptyset}} & W^{7}_{1} & \underline{W^{7}_{3}} & W^{7}_{13}\\ \underline{W^{8}_{\emptyset}} & W^{8}_{1} & W^{8}_{2} & W^{8}_{12}\\ \underline{W^{9}_{\emptyset}} & W^{9}_{1} & W^{9}_{2} & W^{9}_{12}\\ \end{array} &~~& \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{2}} & \underline{W^{1}_{3}} & \underline{W^{1}_{23}}\\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{2}} & \underline{W^{2}_{3}} & \underline{W^{2}_{23}}\\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{2}} & \underline{W^{3}_{3}} & \underline{W^{3}_{23}}\\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{2}} & \underline{W^{4}_{3}} & \underline{W^{4}_{23}}\\ \underline{W^{5}_{\emptyset}} & W^{5}_{1} & \underline{W^{5}_{2}} & W^{5}_{12}\\ \underline{W^{6}_{\emptyset}} & W^{6}_{1} & \underline{W^{6}_{2}} & W^{6}_{12}\\
\underline{W^{7}_{\emptyset}} & W^{7}_{1} & \underline{W^{7}_{2}} & W^{7}_{12}\\ \underline{W^{8}_{\emptyset}} & W^{8}_{1} & W^{8}_{3} & W^{8}_{13}\\ \underline{W^{9}_{\emptyset}} & W^{9}_{1} & W^{9}_{3} & W^{9}_{13}\\ \end{array} & \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{1}} & \underline{W^{1}_{3}} & \underline{W^{1}_{13}} \\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{1}} & \underline{W^{2}_{3}} & \underline{W^{2}_{13}} \\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{1}} & \underline{W^{3}_{3}} & \underline{W^{3}_{13}} \\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{1}} & \underline{W^{4}_{3}} & \underline{W^{4}_{13}} \\ \underline{W^{5}_{\emptyset}} & W^{5}_{2} & \underline{W^{5}_{3}} & W^{5}_{23} \\ \underline{W^{6}_{\emptyset}} & W^{6}_{2} & \underline{W^{6}_{3}} & W^{6}_{23} \\ \underline{W^{7}_{\emptyset}} & W^{7}_{2} & \underline{W^{7}_{3}} & W^{7}_{23} \\ \underline{W^{8}_{\emptyset}} & W^{8}_{1} & W^{8}_{2} & W^{8}_{12} \\ \underline{W^{9}_{\emptyset}} & W^{9}_{1} & W^{9}_{2} & W^{9}_{12} \\ \end{array} \\ ~~&~~&~~\\ \boldsymbol{d_1}=(5,6,7),\boldsymbol{d_2}=(8,9), &~~& \boldsymbol{d_1}=(8,9),\boldsymbol{d_2}=(1,2,3,4), & \boldsymbol{d_1}=(8,9),\boldsymbol{d_2}=(5,6,7),\\ \boldsymbol{d_3}=(1,2,3,4)&~~&\boldsymbol{d_3}=(5,6,7)&\boldsymbol{d_3}=(1,2,3,4)\\ ~~&~~&~~\\ \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{1}} & \underline{W^{1}_{2}} & \underline{W^{1}_{12}} \\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{1}} & \underline{W^{2}_{2}} & \underline{W^{2}_{12}} \\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{1}} & \underline{W^{3}_{2}} & \underline{W^{3}_{12}} \\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{1}} & \underline{W^{4}_{2}} & \underline{W^{4}_{12}} \\ \underline{W^{5}_{\emptyset}} & \underline{W^{5}_{2}} & W^{5}_{3} & W^{5}_{23} \\ \underline{W^{6}_{\emptyset}} & \underline{W^{6}_{2}} & W^{6}_{3} & W^{6}_{23} \\ \underline{W^{7}_{\emptyset}} &
\underline{W^{7}_{2}} & W^{7}_{3} & W^{7}_{23} \\ \underline{W^{8}_{\emptyset}} & W^{8}_{1} & W^{8}_{3} & W^{8}_{13} \\ \underline{W^{9}_{\emptyset}} & W^{9}_{1} & W^{9}_{3} & W^{9}_{13} \\ \end{array} &~~& \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{1}} & \underline{W^{1}_{3}} & \underline{W^{1}_{13}} \\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{1}} & \underline{W^{2}_{3}} & \underline{W^{2}_{13}} \\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{1}} & \underline{W^{3}_{3}} & \underline{W^{3}_{13}} \\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{1}} & \underline{W^{4}_{3}} & \underline{W^{4}_{13}} \\ \underline{W^{5}_{\emptyset}} & \underline{W^{5}_{1}} & W^{5}_{2} & W^{5}_{12} \\ \underline{W^{6}_{\emptyset}} & \underline{W^{6}_{1}} & W^{6}_{2} & W^{6}_{12} \\ \underline{W^{7}_{\emptyset}} & \underline{W^{7}_{1}} & W^{7}_{2} & W^{7}_{12} \\ \underline{W^{8}_{\emptyset}} & W^{8}_{2} & W^{8}_{3} & W^{8}_{23} \\ \underline{W^{9}_{\emptyset}} & W^{9}_{2} & W^{9}_{3} & W^{9}_{23} \\ \end{array} & \begin{array}{cccc} \underline{W^{1}_{\emptyset}} & \underline{W^{1}_{1}} & \underline{W^{1}_{2}} & \underline{W^{1}_{12}} \\ \underline{W^{2}_{\emptyset}} & \underline{W^{2}_{1}} & \underline{W^{2}_{2}} & \underline{W^{2}_{12}} \\ \underline{W^{3}_{\emptyset}} & \underline{W^{3}_{1}} & \underline{W^{3}_{2}} & \underline{W^{3}_{12}} \\ \underline{W^{4}_{\emptyset}} & \underline{W^{4}_{1}} & \underline{W^{4}_{2}} & \underline{W^{4}_{12}} \\ \underline{W^{5}_{\emptyset}} & \underline{W^{5}_{1}} & W^{5}_{3} & W^{5}_{13} \\ \underline{W^{6}_{\emptyset}} & \underline{W^{6}_{1}} & W^{6}_{3} & W^{6}_{13} \\ \underline{W^{7}_{\emptyset}} & \underline{W^{7}_{1}} & W^{7}_{3} & W^{7}_{13} \\ \underline{W^{8}_{\emptyset}} & W^{8}_{2} & W^{8}_{3} & W^{8}_{23} \\ \underline{W^{9}_{\emptyset}} & W^{9}_{2} & W^{9}_{3} & W^{9}_{23} \ \end{array} \\ \end{array}$$ The nodes of the $6$ side-information graphs corresponding to the aforementioned vectors 
$\dv(\mathcal{U})$ (one for each permutation $\pi\in S_{3}$) for demand $\dv=(1,2,3,4,5,6,7,8,9)$, are depicted in Figure \[fig:graphs\]. For each side-information graph, we develop a lower bound as in Lemma \[cor\_dof\]. We recall that the lemma applies to acyclic subgraphs, which we create as follows; for each permutation[^18] $\sigma\in S_3$, a set of nodes forming an acyclic subgraph is $$\begin{aligned} &\{W^{\boldsymbol{d_{\sigma(1)}}(j)}_{\Tau_{1}}\}_{j=1}^{|\mathcal{U}_{\sigma(1)}|}\;\text{ for all}\; \Tau_{1}\subseteq \{1,2,3\}\setminus{\{\sigma(1)\}},\\ &\{W^{\boldsymbol{d_{\sigma(2)}}(j)}_{\Tau_{2}}\}_{j=1}^{|\mathcal{U}_{\sigma(2)}|} \;\text{ for all}\; \Tau_{2}\subseteq \{1,2,3\}\setminus{\{\sigma(1),\sigma(2)\}},\\ &\{W^{\boldsymbol{d_{\sigma(3)}}(j)}_{\Tau_{3}}\}_{j=1}^{|\mathcal{U}_{\sigma(3)}|} \;\text{ for all}\; \Tau_{3}\subseteq \{1,2,3\}\setminus{\{\sigma(1),\sigma(2),\sigma(3)\}}.\end{aligned}$$ Based on this construction of acyclic graphs, our task now is to choose a permutation $\sigma_s\in S_3$ that forms the maximum-sized acyclic subgraph. For the case where $\boldsymbol{d_1}=(8,9),\boldsymbol{d_2}=(1,2,3,4)$ and $\boldsymbol{d_3}=(5,6,7)$, it can be easily verified that such a permutation $\sigma_s$ is the one with $\sigma_s(1)=2$,$\sigma_s(2)=3$ and $\sigma_s(3)=1$. In Figure \[fig:graphs\], for each of the six graphs, we underline the nodes corresponding to the acyclic subgraph that is formed by such permutation $\sigma_s$. The outer bound now involves adding the sizes of these chosen (underlined) nodes. 
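The two example steps above, reordering the demand and selecting the size-maximizing permutation $\sigma_s$, can be reproduced programmatically (our sketch; variable names are our own):

```python
from itertools import permutations

# Reordering step of the example: d = (1,...,9), profile Lc = (4,3,2),
# with the permutation pi(1)=2, pi(2)=3, pi(3)=1
d = tuple(range(1, 10))
L = [4, 3, 2]
blocks, pos = [], 0
for l in L:
    blocks.append(d[pos:pos + l]); pos += l          # d'_1, d'_2, d'_3
pi = {1: 2, 2: 3, 3: 1}
inv = {v: k for k, v in pi.items()}                  # pi^{-1}
dU = tuple(blocks[inv[lam] - 1] for lam in (1, 2, 3))
assert dU == ((8, 9), (1, 2, 3, 4), (5, 6, 7))

# Choosing sigma_s: cache sigma(r) contributes |U_sigma(r)| * 2^(3-r) nodes
sizes = {lam: len(dU[lam - 1]) for lam in (1, 2, 3)}  # {1: 2, 2: 4, 3: 3}
Lam = 3

def subgraph_size(sigma):
    return sum(sizes[sigma[r - 1]] * 2 ** (Lam - r) for r in range(1, Lam + 1))

best = max(permutations((1, 2, 3)), key=subgraph_size)
assert best == (2, 3, 1)            # the sigma_s of the example
assert subgraph_size(best) == 24    # the 24 underlined nodes
```

The count of $24$ matches the number of underlined subfiles in the lower-center graph of Figure \[fig:graphs\].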
For example, for the demand $\dv(\mathcal{U})=((8,9),(1,2,3,4),(5,6,7))$ (this corresponds to the lower center graph), the lower bound in  becomes $$\begin{aligned} T(\dv(\mathcal{U}))&\geq \frac{1}{2}\left(|W^{1}_{\emptyset}|+|W^{1}_{1}|+|W^{1}_{3}|+|W^{1}_{13}|+|W^{2}_{\emptyset}|\right.\nonumber\\ &+|W^{2}_{1}|+|W^{2}_{3}|+|W^{2}_{13}|+|W^{3}_{\emptyset}|+|W^{3}_{1}|\nonumber\\ &+|W^{3}_{3}|+|W^{3}_{13}|+|W^{4}_{\emptyset}|+|W^{4}_{1}|+|W^{4}_{3}|\nonumber\\ &+|W^{4}_{13}|+|W^{5}_{\emptyset}|+|W^{5}_1|+|W^6_{\emptyset}|+|W^6_1|\nonumber\\ &\left.+|W^7_{\emptyset}|+|W^7_1|+|W^{8}_{\emptyset}|+|W^{9}_{\emptyset}|\right).\end{aligned}$$ The lower bounds for the remaining $5$ vectors $\dv(\mathcal{U})$ for the same $\dv=(1,2,3,4,5,6,7,8,9)$, are given in a similar way, again by adding the (underlined) nodes of the corresponding acyclic subgraphs (again see Figure \[fig:graphs\]). Subsequently, the procedure is repeated for all $P(N,K)=K!=9!$ worst-case demand vectors $\dv\in \mathcal{D}_{wc}$. Finally, all the $P(N,K)\cdot\Lambda!=9!\cdot 3!$ bounds are averaged to get $$\begin{aligned} \label{eq:example_final} &T(\Lc,\chi) \geq \frac{1}{2}\frac{1}{9!\cdot3!}\nonumber \\ &\!\!\! 
\sum_{\dv(\mathcal{U})\in \mathcal{D}_{\Lc}}\sum_{\lambda\in[3]}\sum_{j=1}^{\mathcal{L}_{\lambda}}\sum_{\Tau_{\lambda}\subseteq [3]\setminus \{\sigma_s(1),\dots,\sigma_s(\lambda)\}}\!\!\!\!\!\!\!\!\!\!|W^{\boldsymbol{d_{\boldsymbol{\sigma_s(\lambda)}}}(j)}_{\Tau_{\lambda}}|\end{aligned}$$ which is rewritten as $$\begin{aligned} \label{eq:example_step1} &T(\Lc,\chi) \geq \frac{1}{2}\frac{1}{9!\cdot3!} \nonumber \\ &\sum_{i=0}^{3}\sum_{n\in[9]}\sum_{\Tau\subseteq[3]:|\Tau|=i} |W^n_{\Tau}| \cdot \underbrace{\sum_{\dv(\mathcal{U})\in \mathcal{D}_{\Lc}} \mathds{1}_{\mathcal{V}_{\mathcal{J}_s^{\dv(\mathcal{U})}}}(W^n_{\Tau})}_{Q_{i}(W^n_\Tau)}.\end{aligned}$$ After the evaluation of the term $Q_i(W^n_\Tau)$, the bound in can be written in a more compact form as $$\begin{aligned} \label{eq:exbound} T(\Lc,\chi)& \geq \frac{1}{2}\sum_{i=0}^{3}\frac{\sum_{r=1}^{3-i}\mathcal{L}_r{3-r\choose i}}{9{3\choose i}}x_{i}\\ &\geq Conv\Bigg( \frac{1}{2}\frac{\sum_{r=1}^{3-i}\mathcal{L}_r{3-r\choose i}}{{3\choose i}}\Bigg)\label{eq:exbound2}\end{aligned}$$ where the proof of the transition from (\[eq:example\_step1\]) to (\[eq:exbound\]) and from (\[eq:exbound\]) to (\[eq:exbound2\]) can be found in the general proof (Section \[sec:converse\]). Conclusions\[sec:discussion\] ============================= We have treated the multi-sender coded caching problem with shared caches which can be seen as an information-theoretically simplified representation of some instances of the so-called cache-aided heterogeneous networks, where one or more transmitters communicate to a set of users, with the assistance of smaller nodes that can serve as caches. The work is among the first — after the work in [@WanTP15] — to employ index coding as a means of providing (in this case, exact) outer bounds for more involved cache-aided network topologies that better capture aspects of cache-aided wireless networks, such as having shared caches and a variety of user-to-cache association profiles. 
Dealing with such non-uniform profiles raises interesting challenges in redesigning converse bounds, as well as in redesigning coded caching, which is known to generally thrive on uniformity. Our approach also applies to the related problem of coded caching with multiple file requests. In addition to crisply quantifying the (adverse) effects of user-to-cache association non-uniformity, the work also revealed a multiplicative relationship between multiplexing gain and cache redundancy, thus providing further evidence of the powerful impact of jointly introducing a modest number of antennas and a modest number of helper nodes that serve as caches. We believe that the result can also be useful in providing guiding principles on how to assign shared caches to different users, especially in the presence of multiple senders. Finally, we believe that the presented adaptation of the outer bound technique to non-uniform settings may also be useful in analyzing different applications like distributed computing [@LiAliAvestimehrComputIT18; @PLE:18a; @KonstantinidisRamamoorthyArxiv18; @YanYangWiggerArxiv18; @MingyueJiISIT18] or data shuffling [@AttiaTandon16; @AttiaTandonISIT18; @WanTuninettiShuffling18; @MohajerISIT18], which can naturally entail such non-uniformities. Appendix \[sec:Appendix\] ========================= Proof of Lemma \[cor\_dof\] \[proof:cor\_dof\] ---------------------------------------------- In the addressed problem, we consider a MISO broadcast channel with $N_0$ antennas at the transmitter serving $K$ receivers with some side information due to caches. In the high-SNR regime, this is equivalent to the wired distributed index coding problem with $N_0$ senders $J_1,\dots,J_{N_0}$, all having knowledge of the entire set of messages, and each being connected via an (independent) broadcast line link of capacity $C_{J_i}=1, i\in[N_0]$ to the $K$ receivers which hold side information.
This multi-sender index coding problem is addressed in [@li2017cooperative]. By adapting the achievable rate result in [@li2017cooperative Corollary1] to our problem, we get $$\sum_{{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }\in \mathcal{V}_{\mathcal{J}}}R_{{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }}\leq \sum_{i\in[N_0]}C_{J_i}$$ ($R_{{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }} = \frac{|{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }|}{T}$ is the rate for message ${ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }$), that yields $$\label{eq:lemma_1} \sum_{{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }\in \mathcal{V}_{\mathcal{J}}}\frac{|{ \mathchoice {{\scriptstyle\mathcal{V}}} {{\scriptstyle\mathcal{V}}} {{\scriptscriptstyle\mathcal{V}}} {\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}} }|}{T}\leq N_0$$ which, when inverted, gives the bound in Lemma \[cor\_dof\]. Proof of Lemma \[lem:cons\_acyclic\] \[proof:cons\_acyclic\] ------------------------------------------------------------ Consider a permutation $\sigma$ where the subfiles $W^{\boldsymbol{d_{\sigma(\lambda)}}(j)}_{\Tau_\lambda},\forall j\in\mathcal{U}_{\sigma(\lambda)}$ for all $\Tau_\lambda\subseteq[\Lambda]\setminus\{\sigma(1),\dots,\sigma(\lambda)\}$ are all placed in row $\lambda$ of a matrix whose rows are labeled by $\lambda = 1,2,\dots,\Lambda$. 
The index coding users corresponding to subfiles in row $\lambda$ only know (as side information) subfiles $W^{d_k}_{\Tau}, \ \Tau\ni \sigma(\lambda)$. Consequently, each user/node of row $\lambda$ does not know any of the subfiles in the same row[^19] or in the previous rows. As a result, the proposed set of subfiles chosen according to permutation $\sigma$ forms a subgraph that does not contain any cycle. A basic counting argument tells us that the number of subfiles — in the acyclic subgraph formed by any permutation $\sigma\in S_{\Lambda}$ — that are stored in exactly $i$ caches, is $$\label{eq:no_subfiles} \sum_{r=1}^{\Lambda-i}|\Uc_{\sigma(r)}|{\Lambda-r\choose i}.$$ This means that the total number of subfiles in the acyclic subgraph is simply $$\sum_{i=0}^{\Lambda}\sum_{r=1}^{\Lambda-i}|\Uc_{\sigma(r)}|{\Lambda-r\choose i}.$$ This number is maximized when the permutation $\sigma$ guarantees that the vector $(|\Uc_{\sigma(1)}|,|\Uc_{\sigma(2)}|,\dots,|\Uc_{\sigma(\Lambda)}|)$ is in descending order. This maximization is achieved with our choice of the ordering permutation $\sigma_s$ (as this was defined in the notation part) when constructing the acyclic graphs. Proof of Equation (\[eq:Qi\]) \[proof:lemmaQi\] ----------------------------------------------- Here, through a combinatorial argument, we derive $Q_i(W^n_\Tau)$, that is, the number of times that a subfile $W^n_\Tau$ with index size $|\Tau|=i$ appears in all the acyclic subgraphs chosen to develop the lower bound. There are ${N-1\choose K-1}$ subsets $\Upsilon_{m},m\in[{N-1\choose K-1}]$ out of the $N\choose K$ unordered subsets of $K$ files from the set $\{W^{j},j\in[N]\}$ that contain file $W^{n}$, and for each $\Upsilon_{m}$ there exist $K!$ different demand vectors $\dv'$. For each $\Upsilon_{m}$, among all possible demand vectors, a subfile $W^{n}_{\Tau}: |\Tau|=i$ appears in the side information graph an equal number of times.
For a fixed $\Upsilon_{m}$, file $W^{n}$ is requested by a user connected to some helper node of cardinality $\mathcal{L}_r$. By construction, $Q_{i}(W^n_\Tau)$ can be rewritten as $$\begin{aligned} Q_{i}(W^n_\Tau)&=\sum_{\dv\in \mathcal{D}_{wc}}\sum_{\pi\in S_{\Lambda}} \mathds{1}_{{\mathcal{V}_{\mathcal{J}_{s}^{\dv(\Uc)}}}}(W^{n}_{\Tau})\notag\\ &={N-1 \choose K-1}\sum_{r=1}^{\Lambda}\sum_{\dv'_{r}\in \mathcal{D}_{wc}}\sum_{\pi\in S_{\Lambda}} \mathds{1}_{{\mathcal{V}_{\mathcal{J}_{s}^{\dv'_r(\Uc)}}}}(W^{n}_{\Tau})\label{Qi} \notag\end{aligned}$$ where $\dv'_{r}$ ranges over the demand vectors drawn from $\Upsilon_{m}$ such that $n\in \boldsymbol{d_\lambda}:|\boldsymbol{d_\lambda}|=\mathcal{L}_r$. The number of chosen maximum acyclic subgraphs containing $W^{n}_{\Tau}$ that arise from all the demand vectors $\dv'_{r}(\Uc)$ is evaluated as follows. After fixing the demands such that $n\in\boldsymbol{d_\lambda}:|\boldsymbol{d_\lambda}|=\mathcal{L}_r$, the subfile $W^n_\Tau$ appears in the side information graph only if it is requested by a user connected to helper node $\lambda$ such that $\lambda\notin\Tau$, which corresponds to $(\Lambda-i)$ different *available* positions in the demand vector $\boldsymbol{d'}_{r}(\Uc)$, since $|\Tau|=i$. After fixing one of the $(\Lambda-i)$ positions occupied by $\boldsymbol{d_\lambda}:|\boldsymbol{d_\lambda}|=\mathcal{L}_r$, for the remaining demands ${\boldsymbol{d_\lambda}}:|\boldsymbol{d_\lambda}|=\mathcal{L}_j,\forall j\in[\Lambda]\setminus\{r\}$ there are $P(\Lambda-i-1,r-1)\cdot(\Lambda-r)!$ possible ways to be placed into $\dv$. After fixing the order of $\boldsymbol{d_\lambda},\forall \lambda\in [\Lambda]$ in $\dv$ and $n\in \boldsymbol{d_\lambda}:|\boldsymbol{d_\lambda}|=\mathcal{L}_r$, there are $\mathcal{L}_r$ different positions in which $n$ can be placed in $\boldsymbol{d_\lambda}:|\boldsymbol{d_\lambda}|=\mathcal{L}_r$.
This leaves $\mathcal{L}_{r}-1$ positions to be filled, in order, from the $K-1$ remaining numbers of the considered set $\Upsilon_{m}\setminus{\{n\}}$, and the remaining $K-\mathcal{L}_r$ positions in $\dv'_{r}$ are filled with the remaining $K-\mathcal{L}_r$ numbers. Therefore, there exist $\mathcal{L}_rP(K-1,\mathcal{L}_r-1)(K-\mathcal{L}_r)!$ different demand vectors where the subfile $W^{n}_{\Tau}$ will appear in the associated maximum acyclic subgraphs. Hence, the above arguments jointly tell us that $$\begin{aligned} &Q_{i}(W^n_\Tau)={N-1 \choose K-1}\sum_{r=1}^{\Lambda}P(\Lambda-i-1,r-1)\nonumber\\ &\times(\Lambda-r)!\mathcal{L}_rP(K-1,\mathcal{L}_r-1)(K-\mathcal{L}_r)!(\Lambda-i)\end{aligned}$$ which concludes the proof. Transition from Equation (\[eq:compacteq\]) to (\[eq:LBwithxi\]) \[proof:transition\] ------------------------------------------------------------------------------------- The coefficient of $x_i$ in equation (\[eq:compacteq\]) can be further simplified as follows $$\begin{aligned} \label{eq:finanumber} &\frac{Q_{i}}{\Lambda!P(N,K)} \\ =&\frac{(N-1)!(N-K)!}{(K-1)!(N-K)!\Lambda!N!}\sum_{r=1}^{\Lambda}\mathcal{L}_rP(K-1,\mathcal{L}_r-1) \\ &(K-\mathcal{L}_r)!(\Lambda-i)P(\Lambda-i-1,r-1)(\Lambda-r)!\nonumber\\ =&\frac{1}{(K-1)!\Lambda!N}\sum_{r=1}^{\Lambda}\mathcal{L}_r\nonumber\\ &\frac{(K-1)!(K-\mathcal{L}_r)!(\Lambda-i)(\Lambda-i-1)!(\Lambda-r)!}{(K-\mathcal{L}_r)!(\Lambda-i-r)!}\nonumber\\ =&\frac{1}{\Lambda!N}\sum_{r=1}^{\Lambda}\mathcal{L}_r\frac{(K-1)!(\Lambda-i)!(\Lambda-r)!}{(K-1)!(\Lambda-i-r)!}\nonumber\\ =&\frac{1}{N}\sum_{r=1}^{\Lambda}\mathcal{L}_{r}\frac{(\Lambda-i)!(\Lambda-r)!i!}{\Lambda!(\Lambda-i-r)!i!}\nonumber\\ =&\frac{1}{N}\sum_{r=1}^{\Lambda}\mathcal{L}_r\frac{{\Lambda-r\choose i}}{{\Lambda\choose i}}\end{aligned}$$ which concludes the proof.
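The transition just proved can also be double-checked with exact rational arithmetic (our sketch, on hand-picked instances): the coefficient $Q_i/(\Lambda!P(N,K))$ is compared against $\frac{1}{N}\sum_{r=1}^{\Lambda-i}\mathcal{L}_r{\Lambda-r\choose i}/{\Lambda\choose i}$ for every $i$.

```python
from fractions import Fraction
from math import comb, factorial

def perm0(n, k):
    """Falling factorial P(n, k), taken as 0 whenever it is undefined."""
    return 0 if n < 0 or k > n else factorial(n) // factorial(n - k)

def Q(i, N, K, L):
    """Closed form of Q_i from the previous appendix section."""
    Lam = len(L)
    return comb(N - 1, K - 1) * sum(
        perm0(Lam - i - 1, r - 1) * factorial(Lam - r) * L[r - 1]
        * perm0(K - 1, L[r - 1] - 1) * factorial(K - L[r - 1]) * (Lam - i)
        for r in range(1, Lam + 1))

for (N, K, L) in ((9, 9, [4, 3, 2]), (6, 5, [2, 2, 1]), (4, 3, [2, 1])):
    Lam = len(L)
    for i in range(Lam + 1):
        lhs = Fraction(Q(i, N, K, L), factorial(Lam) * perm0(N, K))
        rhs = sum(Fraction(L[r - 1] * comb(Lam - r, i), N * comb(Lam, i))
                  for r in range(1, Lam - i + 1))
        assert lhs == rhs    # Q_i/(Lambda! P(N,K)) = (1/N) sum_r L_r C(.)/C(.)
```

The equality holds term by term for each $i$, including the degenerate endpoint $i=\Lambda$ where both sides vanish.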
Monotonicity of $\{c_i\}$ \[sec:monotonicity\] ---------------------------------------------- Let us define the following sequences $$\begin{aligned} (a_n)_{n\in[\Lambda-i]}&{\stackrel{\triangle}{=}}& \bigg\{\frac{{{\Lambda-n}\choose i}}{{\Lambda \choose i}}, n\in [\Lambda-i]\bigg\} \\ (b_n)_{n\in[\Lambda-i-1]}&{\stackrel{\triangle}{=}}&\bigg\{\frac{{{\Lambda-n}\choose {i+1}}}{{\Lambda \choose {i+1}}}, n\in [\Lambda-i-1]\bigg\}\end{aligned}$$ adopting the convention $b_{\Lambda-i}{\stackrel{\triangle}{=}}0$. It is easy to verify that $a_n\geq b_n, \; \forall n\in [\Lambda-i]$. Consider now the set of scalar numbers $\{V_j, j\in[\Lambda-i], V_j\in \mathbb{N}\}$. The inequality $a^{*}_n\geq b^{*}_n, \; \forall n\in [\Lambda-i]$ holds for $${(a^{*}_n)_{n\in[\Lambda-i]}{\stackrel{\triangle}{=}}\big\{V_n\cdot a_n, n\in [\Lambda-i]\big\}}$$ and $${(b^{*}_n)_{n\in[\Lambda-i]}{\stackrel{\triangle}{=}}\big\{V_n\cdot b_n, n\in [\Lambda-i]\big\}}.$$ As a result, we have $$\sum_{n\in[\Lambda-i]}V_n\cdot a_n \geq \sum_{n\in[\Lambda-i]}V_n\cdot b_n \label{ineq:Lnan_Lnbn}$$ which, upon setting $V_n=\mathcal{L}_n$, proves that $c_i\geq c_{i+1}$. Proof of (\[eq:optimization1\]) \[lastproof\] --------------------------------------------- Through the respective change of variables $t{\stackrel{\triangle}{=}}\Lambda\frac{M}{N}$, $x'_{i}{\stackrel{\triangle}{=}}\frac{x_{i}}{N}$ and $c'_{i}{\stackrel{\triangle}{=}}\frac{c_i}{N_0}$, in equations (\[eq:LBwithxi\]), (\[eq:sumfiles\]) and (\[eq:constr2\]), we obtain $$\begin{aligned} T(\Lc,\chi)&\geq &\sum_{i=0}^{\Lambda}x'_{i}c'_i \label{eq:compact2}\\ \sum_{i=0}^{\Lambda}x'_{i}&=&1 \label{eq:constr111}\\ \sum_{i=0}^{\Lambda}ix'_{i}&\leq& t .\label{eq:constr22}\end{aligned}$$ Let $X$ denote a discrete integer-valued random variable with probability mass function $f_{X}(x)=\{x'_i ~\text{if}~x=i, \forall i\in \{0,1,\dots,\Lambda\}\}$, where the $x'_i$ are those that satisfy equation .
The value $c'_i$ can also be seen as the realization of a random variable $Y{\stackrel{\triangle}{=}}g(X)$, where $g(x)=\frac{\sum_{r=1}^{\Lambda-x}\mathcal{L}_r{\Lambda-r\choose x}}{N_0{\Lambda\choose x}}$, having the same probability mass function as $X$, i.e., $f_{Y}(y)=\{x'_i ~\text{if}~y=c'_i, \forall i\in \{0,1,\dots,\Lambda\}\}$. Due to the constraint in (\[eq:constr22\]), the expectation of $X$ is bounded as $\mathbb{E}[X]\leq t$. Similarly, (\[eq:compact2\]) is equivalent to $T(\Lc,\chi)\geq \mathbb{E}[Y]$. From Jensen's inequality, which applies in this direction since $g$ is convex, we have $ T(\Lc,\chi)\geq \mathbb{E}[Y]\geq g(\mathbb{E}[X]) $. Since the sequence $\{c'_i\}$ (and equivalently the function $g(x)$) is monotonically decreasing, the following lower bound holds $$T(\Lc,\chi)\geq g(\mathbb{E}[X])\geq g(t)=\frac{\sum_{r=1}^{\Lambda-t}\mathcal{L}_r{\Lambda-r\choose t}}{N_0{\Lambda\choose t}}.$$ This concludes the proof. Proof of Equation (\[eq:totdelay2\]) \[sec:BinomialChangeProof\] ---------------------------------------------------------------- We remind the reader that (for brevity of exposition, and without loss of generality) this part assumes that the $|\Uc_{\lambda}|$ are in decreasing order.
We define the following quantity $$b_\lambda{\stackrel{\triangle}{=}}|\Uc_{1}|-|\Uc_{\lambda}|$$ and rewrite the total number of transmissions using the above definition as $$\begin{aligned} &\sum_{j=1}^{|\Uc_{1}|}{{\Lambda \choose \Lambda\gamma+1}-{a_j \choose \Lambda\gamma+1}}\notag \allowbreak\\ &=|\Uc_{1}|{\Lambda \choose \Lambda\gamma+1}-\sum_{j=1}^{|\Uc_{1}|}{{a_j \choose \Lambda\gamma+1}}\notag \allowbreak\\ &=\sum_{i=1}^{\Lambda-\Lambda\gamma}{(|\Uc_{i}|+b_i){\Lambda-i \choose \Lambda\gamma}}-\sum_{j=1}^{|\Uc_{1}|}\sum_{i=\Lambda\gamma}^{a_j-1}{{i \choose \Lambda\gamma}}\allowbreak \notag \\ &=\sum_{i=1}^{\Lambda-\Lambda\gamma}{|\Uc_{i}|{\Lambda-i \choose \Lambda\gamma}}+{\sum_{i=\Lambda\gamma}^{\Lambda-1}{b_{\Lambda-i}{i \choose \Lambda\gamma}}}\allowbreak \notag\\ &-{\sum_{j=1}^{|\Uc_{1}|}\sum_{i=\Lambda\gamma}^{a_j-1}{{i \choose \Lambda\gamma}}}\allowbreak\notag\end{aligned}$$ $$\begin{aligned} &\overset{(a)}{=}\sum_{i=1}^{\Lambda-\Lambda\gamma}{|\Uc_{i}|{\Lambda-i \choose \Lambda\gamma}}+{\sum_{i=\Lambda\gamma}^{\Lambda-1}{\sum_{j:a_j\geq i+1}^{|\Uc_{1}|}{i \choose \Lambda\gamma}}}\notag\allowbreak\\ &-{\sum_{j:a_j-1\geq \Lambda\gamma}^{|\Uc_{1}|}\sum_{i=\Lambda\gamma}^{a_j-1}{{i \choose \Lambda\gamma}}}\notag\allowbreak\\ &\overset{(b)}{=}\sum_{i=1}^{\Lambda-\Lambda\gamma}{|\Uc_{i}|{\Lambda-i \choose \Lambda\gamma}}+\sum_{j:a_j\geq \Lambda\gamma+1}^{|\Uc_{1}|}{\sum_{i=\Lambda\gamma}^{a_j-1}{i \choose \Lambda\gamma}}\notag \allowbreak\\ &-\sum_{j:a_j-1\geq \Lambda\gamma}^{|\Uc_{1}|}\sum_{i=\Lambda\gamma}^{a_j-1}{{i \choose \Lambda\gamma}}\notag\\ &=\sum_{i=1}^{\Lambda-\Lambda\gamma}{|\Uc_{i}|{\Lambda-i \choose \Lambda\gamma}}\label{eq:finalform}\end{aligned}$$ where step $(a)$ uses the equality $b_{\Lambda-i}=\sum_{j:a_j\geq i+1}^{|\Uc_{1}|}{1}$, and where step $(b)$ follows by changing the counting order of the double summation in the second summand. 
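The counting identity derived above can be verified numerically. In the sketch below (Python; we assume, consistently with step $(a)$ and the definition of $b_\lambda$, that $a_j$ denotes the number of caches serving fewer than $j$ users, and that the profile is sorted in decreasing order):

```python
from math import comb
import random

def lhs(profile, t):
    """sum_j [C(Lam, t+1) - C(a_j, t+1)], with a_j = #caches with fewer than j users
    and t playing the role of Lambda * gamma."""
    Lam = len(profile)
    total = 0
    for j in range(1, profile[0] + 1):
        a_j = sum(1 for u in profile if u < j)
        total += comb(Lam, t + 1) - comb(a_j, t + 1)
    return total

def rhs(profile, t):
    """sum_{i=1}^{Lam-t} |U_i| * C(Lam - i, t), the final form above."""
    Lam = len(profile)
    return sum(profile[i - 1] * comb(Lam - i, t) for i in range(1, Lam - t + 1))

random.seed(1)
for _ in range(300):
    Lam = random.randint(2, 8)
    profile = sorted((random.randint(1, 6) for _ in range(Lam)), reverse=True)
    t = random.randint(0, Lam - 1)
    assert lhs(profile, t) == rhs(profile, t), (profile, t)
print("identity verified on 300 random sorted profiles")
```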
Substituting (\[eq:finalform\]) into the numerator of (\[eq:totdelay1\]) yields the overall delivery time given in (\[eq:totdelay2\]). The same performance holds for any $\Uc$ with the same profile $\Lc$. Transition to the Multiple File Request Problem\[sec:AppendixMultipleFileRequests\] ----------------------------------------------------------------------------------- We here briefly describe how the converse and the scheme presented for the shared-cache problem can be adapted to the multiple file request problem. #### Converse {#converse .unnumbered} In Remark \[rem:multipleFilerequstsResult\] we described the equivalence between the two problems. Based on this equivalence, we will describe how the proof of the converse in Section \[sec:converse\] holds in the multiple file request problem with $N_0=1$, where now simply some terms carry a different meaning. Firstly, each entry $\boldsymbol{d_{\lambda}}$ of the vector defined in equation now denotes the vector of file indices requested by user $\lambda$. Then we see that Lemma \[lem:cons\_acyclic\] (proved in Section \[proof:cons\_acyclic\]) directly applies to the equivalent index coding problem of the multiple file requests problem, where now, for a given permutation $\sigma$ (see Section \[proof:cons\_acyclic\]), all the subfiles placed in row $\lambda$ — i.e., subfiles $W^{\boldsymbol{d_{\sigma(\lambda)}}(j)}_{\Tau_\lambda},\forall j\in\mathcal{U}_{\sigma(\lambda)}$ for all $\Tau_\lambda\subseteq[\Lambda]\setminus\{\sigma(1),\dots,\sigma(\lambda)\}$ — are obtained from different files requested by the same user, and therefore any two of these subfiles/nodes are not connected by any edge in the side information graph. After these two considerations, the rest of the proof of Lemma \[lem:cons\_acyclic\] is exactly the same. The remainder of the converse consists only of mathematical manipulations, which remain unchanged and which yield the same lower bound expression. 
#### Scheme {#scheme .unnumbered} The cache placement phase is identical to the one described in Section \[sec:SchemePlacement\], where now each cache $\lambda$ is associated to the single user $\lambda$. In the delivery phase, the scheme now directly follows the steps in Section \[sec:SchemeDelivery\] applied to the shared-link (single antenna) setting, where now $\mathcal{A}_{\lambda}=\mathcal{U}_{\lambda}$ (cf. ). As in the case with shared caches, the scheme consists of $\mathcal{L}_1$ rounds, each serving users $$\label{eq:UsersServerPerRound2} \mathcal{R}_j=\bigcup_{\lambda\in[\Lambda]} \big( \mathcal{U}_{\lambda}(j):\mathcal{L}_\lambda \geq j \big)$$ where $\mathcal{U}_{\lambda}(j)$ is the $j$-th user in set $\mathcal{U}_{\lambda}$. The expression in  now means that the multiple files requested by each user are transmitted in a *time-sharing* manner, and at each round the transmitter serves at most one file per user. Next, equation is replaced by $$\chi_\mathcal{Q}=\bigcup_{\lambda\in \mathcal{Q}}\big( \mathcal{U}_{\lambda}(j):\mathcal{L}_\lambda \geq j \big)$$ and then each transmitted vector described in equation , is substituted by the scalar $$x_{\chi_{\mathcal{Q}}}=\!\!\!\!\bigoplus_{\lambda\in \mathcal{Q}:\mathcal{L}_\lambda \geq j} W^{d_{\mathcal{U}_{\lambda}(j)}}_{\mathcal{Q}\backslash{\{\lambda\}},1}.$$ Finally, decoding remains the same, and the calculation of delay follows directly. [^1]: The authors are with the Communication Systems Department at EURECOM, Sophia Antipolis, 06410, France (email: parrinel@eurecom.fr, unsal@eurecom.fr, elia@eurecom.fr). The work is supported by the European Research Council under the EU Horizon 2020 research and innovation program / ERC grant agreement no. 725929 (project DUALITY). [^2]: This work is to appear in part in the proceedings of ITW 2018. A preliminary version of this work, focusing only on the single-stream setting, can be found in [@PEU_arxiv_single]. 
[^3]: This work explores other cases as well, such as that where the performance measure is the average delivery delay, for which various bounds are presented. [^4]: We also introduce a very brief parenthetical note that translates these results to the multiple file request scenario. [^5]: We note that while the representation here is of a wireless model, the results apply directly to the standard wired multi-sender setting. In the high-SNR regime of interest, when $N_0=1$ and $\Lambda = K$ (where each cache is associated to one user), the setting matches identically the original single-stream shared-link setting in [@MN14]. In particular, the file size and log(SNR) are here scaled so that, as in [@MN14], each point-to-point link has (ergodic) capacity of 1 file per unit of time. When $N_0>1 $ and $\Lambda = K$ (again each cache is associated to one user), the setting matches the multi-server *wireline* setting of [@ShariatpanahiMK16it] with a fully connected linear network, which we now explore in the presence of fewer caches serving potentially different numbers of users. [^6]: Here $\Lc$ is simply the vector of the cardinalities of $\mathcal{U}_\lambda, ~\forall\lambda\in\{1,\dots,\Lambda\}$, sorted in descending order. For example, $\mathcal{L}_{1}=6$ states that the highest number of users served by a single cache, is $6$. [^7]: An example of a user-to-cache assignment could have that users $\mathcal{U}_1=(14,15)$ are assigned to helper node $1$, users $\mathcal{U}_2=(1,2,3,4,5,6,7,8)$ are assigned to helper node $2$, and users $\mathcal{U}_3=(9,10,11,12,13)$ to helper node $3$. This corresponds to a profile $\Lc=(8,5,2)$. The assignment $\mathcal{U}_1=(1,3,5,7,9,11,13,15)$, $\mathcal{U}_2=(2,4)$, $\mathcal{U}_3=(6,8,10,12,14)$ would have the same profile, and the two resulting $\mathcal{U}$ would belong to the same class labeled by $\Lc=(8,5,2)$. 
[^8]: The time scale is normalized such that one time slot corresponds to the optimal amount of time needed to send a single file from the transmitter to the receiver, had there been no caching and no interference. [^9]: This is also presented in the preliminary version of this work in [@PEU_arxiv_single]. [^10]: Here, this uniform case naturally implies that $\Lambda|K$. [^11]: For example, having $\mathcal{U}_2 = \{3,5,7\}$ means that user 2 has requested files $W^{d_{3}},W^{d_{5}},W^{d_{7}}$. [^12]: This would then require $\frac{K}{\Lambda}$ such rounds in order to cover all $K$ users. [^13]: Note also that having $\mathcal{L}_\lambda\geq N_0, \forall\lambda\in[\Lambda]$ guarantees that in any given $\boldsymbol{s_{\lambda,j}}, j\in[\mathcal{L}_\lambda]$, a user appears at most once. [^14]: The transmitted-vector structure below draws from the structure in [@LampirisEliaJsac18], in the sense that it involves the linear combination of one or more *Zero Forcing* precoded (ZF-precoded) vectors of subfiles that are labeled (as we see below) in the spirit of [@MN14]. [^15]: A similar transmission method can be found also in the work of [@JinCaireGlobecom16] for the setting of decentralized coded caching with reduced subpacketization. [^16]: Instead of ZF, one can naturally use a similar precoder with potentially better performance in different SNR ranges. [^17]: Notice that by considering a subpacketization based on the power set $2^{[\Lambda]}$, and by allowing for any possible size of these subfiles, the generality of the result is preserved. Naturally, this does not impose any sub-packetization related performance issues because this is done only for the purpose of creating a converse. [^18]: We caution the reader not to confuse the current permutations ($\sigma$) that are used to construct large-sized acyclic graphs, with the aforementioned permutations $\pi$ which are used to construct $\mathcal{D}_{\Lc}$. 
[^19]: Notice that the index coding users/nodes who are associated to the same cache, are not linked by any edge in the corresponding graph.
--- abstract: 'In this review, we present an overview of the main aspects related to the statistical evaluation of medical tests for diagnosis and prognosis. Measures of diagnostic performance for binary tests, such as sensitivity, specificity, and predictive values, are introduced, and extensions to the case of continuous-outcome tests are detailed. Special focus is placed on the receiver operating characteristic (ROC) curve and its estimation, with the topic of covariate adjustment receiving a great deal of attention. The extension to the case of time-dependent ROC curves for evaluating prognostic accuracy is also touched upon. We apply several of the approaches described to a dataset derived from a study aimed at evaluating the ability of HOMA-IR (homeostasis model assessment of insulin resistance) levels to identify individuals at high cardio-metabolic risk and how such discriminatory ability might be influenced by age and gender. We also outline software available for the implementation of the methods.' author: - 'Vanda Inácio,$^1$ María Xosé Rodríguez-Álvarez,$^2$ and Pilar Gayoso-Diz$^3$' bibliography: - 'references.bib' title: Statistical Evaluation of Medical Tests --- accuracy, classification, covariates, decision thresholds, diagnostic test, prognostic test, receiver operating characteristic curve INTRODUCTION ============ Evaluating and ranking the performance of medical tests for screening and diagnosing disease greatly contributes to the health promotion of individuals and communities. Throughout this article we will be using the term ‘diagnostic test’ to broadly include any continuous classifier, such as a single biological marker or a univariate composite score obtained from a combination of biomarkers. The primary goal of a diagnostic test is to distinguish between individuals with and without a well-defined condition (termed ‘disease’, with ‘nondisease’ used to indicate the absence of the condition). 
For some diseases, there might exist a gold standard test that perfectly classifies all individuals as diseased or nondiseased. However, gold standard tests (e.g., a biopsy) might not only be expensive, but also invasive and potentially harmful. Economic and/or ethical reasons may thus preclude the routine use of gold standard tests except when sufficient evidence is present. As a consequence, much effort has been devoted to developing new candidate tests that are less invasive, costly, or easier to apply than the gold standard counterpart. Nevertheless, new candidate tests are rarely perfect. Thus, a critical step prior to approving the use of a diagnostic test in clinical practice is to rigorously vet its ability to distinguish diseased from nondiseased individuals. Compared to the truth, i.e., to the diagnosis made by the gold standard test, which we assume to be available, interest lies in quantifying the misclassification errors made by the test under investigation and in deciding whether, even with such errors, the test may still be suitable for routine use. It is worth noting that although we focus on medical diagnosis, the problem of binary classification is such a broad one, finding applications in fields as diverse as finance (e.g., customers likely to default or not) and cyber security (e.g., email messages are spam or not), to name only two. The receiver operating characteristic (ROC) curve [@Metz78] is the most popular tool for evaluating the discriminatory ability of continuous-outcome tests, which are our focus. ROC curves thus receive a great deal of attention in this article. The ROC curve was developed during World War II to assess the ability of radar operators to differentiate signal (e.g., enemy aircraft) from noise (e.g., a flock of birds). Its expansion to other fields was prompt (e.g., psychology) and it was first extensively used in radiology to evaluate medical imaging devices [@Metz86]. 
Thanks to advancements in technology, with a vast array of ways to diagnose disease or to predict its progression available and with new diagnostic tests or biomarkers continuously being studied, the ROC curve is, nowadays, a key tool in medicine. ROC curves are also widely used in machine learning to evaluate classification algorithms. Quoting @Gneiting18 [p. 1] there has been an ‘*(...) astonishing rise in the use of ROC curves in the scientific literature. In 2017, nearly 8,000 papers were published that use ROC curves, up from less than 50 per year through 1990 and less than 1,000 papers annually through 2002.*’. The aim of this article is to present an overview of the main statistical concepts and methods for evaluating the accuracy of medical tests, with ROC curves naturally receiving the main emphasis. The reader is referred to the books by [@Pepe03], [@Krzanowski09], [@Zhou11], [@Broemeling16] and papers cited in this article for further coverage of the topic. The remainder of this article is structured as follows: In Section \[ilu\] we describe the HOMA-IR dataset, which is used as an illustrative example throughout the article. Measures of diagnostic accuracy, including the ROC curve and some methods for its estimation, are introduced in Section \[acc\_measures\]. The topic of covariate-adjustment in ROC curves is reviewed in Section \[covariateroc\], while in Section \[timeROC\] time-dependent ROC curves are discussed. In Section \[software\] we outline available software in `R` [@R20]. Finally, in Section \[discussion\], we offer some conclusions and thoughts on further topics. ILLUSTRATIVE EXAMPLE {#ilu} ==================== Insulin resistance (IR) is a feature of disorders such as type 2 diabetes mellitus and is implicated in obesity, hypertension, cancer, or autoimmune diseases. Also, IR is associated with cardiovascular diseases, and some studies have shown that IR may be an important predictor of cardiovascular disease risk. 
The HOmeostasis Model Assessment of IR (HOMA-IR) is widely used in epidemiological studies and in clinical practice to estimate IR and has proved to be a robust tool for the surrogate assessment of IR. We will exemplify some of the different measures and methods described in this paper when it comes to studying the capacity of HOMA-IR levels to detect patients with higher cardio-metabolic risk and to ascertaining the possible effect of both age and gender on the accuracy of this measure. The purpose here is merely illustrative, and we refer the interested reader to [@Gayoso13], where the objective was originally proposed and studied, for more details and references. In particular, as an accurate indicator of the presence of cardio-metabolic risk (i.e., presence of ‘disease’), we use a diagnosis of metabolic syndrome as defined by the International Diabetes Federation [@IDF20] criteria, under which metabolic syndrome is defined as the presence of central obesity (defined as waist circumference with ethnicity specific values) plus any two of the following four risk factors: (1) reduced HDL-cholesterol or specific treatment for this lipid abnormality, (2) raised systolic or diastolic blood pressure or treatment of previously diagnosed hypertension, (3) raised fasting plasma glucose or previously diagnosed type 2 diabetes, (4) raised triglycerides or specific treatment for this lipid abnormality. Regarding the study population, it corresponds to the individuals enrolled in the EPIRCE study (Estudio Epidemiológico de la Insuficiencia Renal en España) [@Otero05; @Otero10], which is an observational cross-sectional study that included a randomly selected sample of Spanish individuals aged $20$ years and older, stratified by age, gender, and residence. For the analyses shown here, $2212$ individuals out of $2459$ were selected (age range in years $20$–$92$). Subjects with diabetes ($247$, $10.0\%$ of the total sample) were excluded. 
Of the total of $2212$ subjects, $41.0\%$ were men ($769$ nondiseased and $135$ diseased) and $59.0\%$ women ($1194$ nondiseased and $114$ diseased). All participants were Caucasians. Table \[EPIRCEDataDescriptive\] presents some summary statistics of the HOMA-IR levels (log-transformed) for men and women, as well as, for different age strata. In turn, Figure \[estdensities\] depicts, separately for men and women, the estimated density functions of the $\log$ HOMA-IR levels in the diseased and nondiseased populations. As can be observed, both in men and women, individuals with metabolic syndrome tend to have higher HOMA-IR levels and these levels also vary with age.

|                   | **Diseased**         | **Nondiseased**      |
|-------------------|----------------------|----------------------|
| **Global sample** | $0.91\;(0.50, 1.25)$ | $0.51\;(0.13, 0.85)$ |
| **Gender**        |                      |                      |
| Women             | $0.89\;(0.49, 1.26)$ | $0.51\;(0.14, 0.82)$ |
| Men               | $0.92\;(0.52, 1.25)$ | $0.50\;(0.11, 0.89)$ |
| **Age**           |                      |                      |
| $\leq 35$         | $1.06\;(0.64, 1.34)$ | $0.53\;(0.18, 0.85)$ |
| $(35, 47]$        | $1.04\;(0.69, 1.35)$ | $0.47\;(0.09, 0.82)$ |
| $(47, 60]$        | $0.87\;(0.52, 1.19)$ | $0.47\;(0.08, 0.81)$ |
| $> 60$            | $0.82\;(0.42, 1.25)$ | $0.60\;(0.17, 0.92)$ |

: Median (interquartile range) of the (log) HOMA-IR levels in diseased and nondiseased populations, males and females, and for four age strata based on quartiles. \[EPIRCEDataDescriptive\]

![Estimated density functions of $\log$ HOMA-IR levels obtained by fitting a Dirichlet process mixture of normals model to each population and separately for men and women.[]{data-label="estdensities"}](././figures/densities_pROC_all_dpm_woci_logHOMA.pdf){width="13cm"}

POPULAR MEASURES OF ACCURACY {#acc_measures}
============================

Binary Tests
------------

Although our focus is on tests measured on a continuous scale, we start by defining measures of classification accuracy for binary tests as they provide the natural starting point for what comes next. 
A binary test is a test for which there are only two possible outcomes, usually denoted as positive or negative for the condition or disease of interest. Let $Y$ be a binary variable denoting the diagnostic test outcome, with $Y=1$ indicating a positive test result for disease, and $Y=0$ indicating a negative test result for disease. Further, let $D$ be the binary variable that denotes the true disease status, and let $D=1$ denote the presence of disease and $D=0$ indicate its absence. The accuracy of a test is defined as its ability to distinguish between diseased and nondiseased individuals and can be measured by its true positive and true negative fractions. The true positive fraction of a test, TPF, also known as sensitivity, is the probability that a diseased individual tests positive, that is, $\text{TPF}=\Pr(Y=1\mid D=1)$. The true negative fraction, TNF, also known as specificity, is the probability that a nondiseased subject tests negative, i.e., $\text{TNF}=\Pr(Y=0\mid D=0)$. The ideal test would correctly classify all diseased and nondiseased individuals, but the tests routinely used in practice are relatively inexpensive and classification errors do occur. Specifically, two types of misclassification are possible: a diseased individual can test negative and a nondiseased individual can test positive. The magnitude of such misclassification errors is measured through the false negative fraction (FNF) and the false positive fraction (FPF), which are defined as, $\text{FNF}=\Pr(Y=0\mid D=1)$ and $\text{FPF}=\Pr(Y=1\mid D=0)$. Clearly, $\text{FNF}=1-\text{TPF}$ and $\text{FPF}=1-\text{TNF}$. An ideal test is one for which the TPF and TNF are both equal to one or, equivalently, where the FNF and FPF are both equal to zero. Obviously, the closer such quantities are to these ideal values, the more the classification made by the test is to be trusted. Nevertheless, a test can be useful even when these quantities are smaller than the ideal values. 
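As a toy numerical illustration of these definitions (all counts invented for the example), the four fractions follow directly from a $2\times 2$ table of test result versus true status:

```python
# Hypothetical 2x2 counts (invented for illustration):
#            Y=1   Y=0
#   D=1       90    10
#   D=0       30   170
tp, fn = 90, 10   # diseased:    test positive / test negative
fp, tn = 30, 170  # nondiseased: test positive / test negative

tpf = tp / (tp + fn)   # sensitivity, Pr(Y=1 | D=1)
tnf = tn / (fp + tn)   # specificity, Pr(Y=0 | D=0)
fnf = 1 - tpf          # Pr(Y=0 | D=1)
fpf = 1 - tnf          # Pr(Y=1 | D=0)
print(f"TPF={tpf}, TNF={tnf}")   # TPF=0.9, TNF=0.85
```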
The criterion whereby the validity of a test is established in practice depends entirely on the context in which it is to be applied. For example, a false negative outcome can be life-threatening with diseased individuals failing to receive prompt treatment while, on the other hand, a false positive outcome may result in the physical, emotional, and financial burdens resulting from further testing or even unnecessary treatment. The true positive and negative fractions quantify how well the test performs among subjects with and without the condition, respectively, which is important for public health concerns. In the clinical setting, however, interest resides in the opposite question, i.e., how well the test outcome predicts the true disease status. The question of interest is: Given that an individual has a positive (negative) test outcome, what is the probability of being diseased (nondiseased)? This leads to the positive and negative predictive values (PPV and NPV, respectively) $$\begin{aligned} \text{PPV}&=\Pr(D=1\mid Y =1)=\frac{\pi\text{TPF}}{\pi\text{TPF}+(1-\pi)\text{FPF}}, \label{ppv}\\ \text{NPV}&=\Pr(D=0\mid Y =0)=\frac{(1-\pi)\text{TNF}}{(1-\pi)\text{TNF}+\pi\text{FNF}}, \label{npv}\end{aligned}$$ where $\pi=\Pr(D=1)$ is the prevalence of the disease in the source population. An ideal test has PPV and NPV both equal to 1, that is, it predicts disease status perfectly. On the other hand, for a noninformative test one has that $\text{PPV}=\pi$ and $\text{NPV}=1-\pi$, i.e., the test has no information about the true disease status or, in other words, information about the test outcome is independent of disease status. Since the predictive values depend on the prevalence of the disease, their interpretation must be cautious. For instance, a low PPV may be due to a low disease prevalence or to a test that poorly reflects the true disease status. 
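The dependence of the predictive values on the prevalence $\pi$ is easy to see numerically. The sketch below (Python; the TPF and TNF values are hypothetical) applies the two formulas above at three different prevalences:

```python
def predictive_values(tpf, tnf, prev):
    """PPV and NPV via Bayes' theorem, as in the PPV/NPV equations above."""
    fpf, fnf = 1 - tnf, 1 - tpf
    ppv = prev * tpf / (prev * tpf + (1 - prev) * fpf)
    npv = (1 - prev) * tnf / ((1 - prev) * tnf + prev * fnf)
    return ppv, npv

# Same (hypothetical) test, TPF = 0.90 and TNF = 0.85, at three prevalences:
for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.85, prev)
    print(f"prevalence={prev:.2f}: PPV={ppv:.3f}, NPV={npv:.3f}")
```

Note how a rare disease drives the PPV down (roughly $0.06$ at $\pi=0.01$) even though the test itself is unchanged, echoing the caution above about interpreting predictive values.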
It has been suggested (e.g., @Pepe03 [Chapter 2]) to use the TPF and TNF for quantifying the inherent accuracy of a test, as these classification probabilities quantify how well a given test reflects true disease status. Predictive values, in turn, quantify the clinical or practical value of the test, rather than its accuracy. That is, diagnostic accuracy must refer to the quality of the information yielded by the test (i.e., its TPF and TNF), something that has to be distinguished from the usefulness or practical utility of such information (quantified by the predictive values). It is worth mentioning at this stage that as the TPF and TNF are independent of the disease prevalence, they can be estimated from case-control studies. In contrast, estimation of the predictive values requires that the prevalence is known or that it can be estimated from the data. Continuous Tests {#continuous} ---------------- Although some tests are naturally dichotomous, such as commercial home pregnancy tests or bacterial cultures, many tests are continuous (e.g., HOMA-IR levels for predicting the presence of cardio-metabolic risk). The question arising is how to classify an individual as diseased or nondiseased based on his/her test result, which is now measured on a continuous scale. The simplest classification is based on a cutoff or threshold value, say $c$, such that a test result with $Y\geq c$ is considered positive for disease and if $Y<c$ the test is considered negative. 
Therefore, each threshold value chosen gives rise to a corresponding TPF and TNF, or equivalently, to a TPF and FPF, that is, $$\begin{aligned} \text{TPF}(c)&=\Pr(Y\geq c\mid D=1)=\Pr(Y_D\geq c)=1-F_D(c),\\ \text{FPF}(c)&=\Pr(Y\geq c\mid D=0)=\Pr(Y_{\bar{D}}\geq c)=1-F_{\bar{D}}(c),\end{aligned}$$ where we use the subscripts $D$ and $\bar{D}$ to index quantities related to the diseased ($D=1$) and nondiseased ($D=0$) populations, and with $F_D$ and $F_{\bar{D}}$ denoting the cumulative distribution functions of test results in the diseased and nondiseased populations, respectively. It is clear that there will be as many pairs of true and false positive fractions as there are threshold values, and comparing all of them would be impractical. This leads us to the popular ROC curve, which represents nothing more than the plot of the FPF versus the TPF as the threshold value used for defining a positive test result is varied, that is $$\{(\text{FPF}(c),\text{TPF}(c)):c\in\mathbb{R}\}=\{(1-F_{\bar{D}}(c),1-F_{D}(c)):c\in\mathbb{R}\}.$$ The ROC curve thus provides a visual description of the tradeoff between the FPF and TPF as the threshold $c$ changes. For $p=\text{FPF}(c)=1-F_{\bar{D}}(c)$, the ROC curve can be equivalently represented as $$\label{rocdef} \{(p,\text{ROC}(p)):p\in [0,1]\},\quad \text{with}\quad \text{ROC}(p)=1-F_{D}\{F_{\bar{D}}^{-1}(1-p)\}.$$ Further advantages afforded by the ROC curve as a measure of a test’s accuracy are that: (a) it is not dependent on disease prevalence, (b) it is independent of the units in which diagnostic test results are measured, thereby enabling ROC curves of different diagnostic tests, and thus their diagnostic accuracy, to be compared, and (c) it is invariant to strictly increasing transformations of the diagnostic test result $Y$. We shed some light on how ROC curves should be interpreted. 
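In practice, the simplest estimate replaces $F_D$ and $F_{\bar{D}}$ by the empirical distribution functions of the two samples, giving the empirical ROC points as the threshold sweeps over the observed values. A minimal Python sketch on invented data (ties across samples are not specially handled beyond the $\geq$ rule):

```python
def empirical_roc_points(y_nond, y_d):
    """(FPF(c), TPF(c)) pairs as the threshold c sweeps over all observed values."""
    thresholds = sorted(set(y_nond) | set(y_d), reverse=True)
    pts = [(0.0, 0.0)]
    for c in thresholds:
        fpf = sum(y >= c for y in y_nond) / len(y_nond)
        tpf = sum(y >= c for y in y_d) / len(y_d)
        pts.append((fpf, tpf))
    return pts

y_nond = [0.1, 0.4, 0.5, 0.7, 0.9]   # toy nondiseased outcomes
y_d = [0.6, 0.8, 0.95, 1.2]          # toy diseased outcomes
print(empirical_roc_points(y_nond, y_d))
```

The resulting points start at $(0,0)$, end at $(1,1)$, and are nondecreasing in both coordinates, as any ROC curve must be.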
ROC curves measure the amount of separation between the distribution of test outcomes in the diseased and nondiseased populations (see Figure \[densrocs\]). When the distributions of test results in the two populations completely overlap, then the ROC curve is the diagonal line of the unit square, with $\text{FPF}(c)=\text{TPF}(c)$ for all $c$, indicating a noninformative test. The more separated the distributions of test outcomes, the closer the ROC curve is to the point $(0,1)$ and, consequently, the better the diagnostic accuracy. A curve that reaches the point $(0,1)$ has $\text{FPF}(c)=0$ and $\text{TPF}(c)=1$ for some threshold $c$ and, hence, corresponds to a test that perfectly determines the true disease status. An ROC curve that lies below the diagonal line implies that the test is worse than useless, but this issue can be easily overcome by reversing the classification rule, i.e., by classifying an individual as diseased when his/her test outcome is below $c$ and as nondiseased otherwise. Related to the ROC curve is the notion of placement value [@Pepe04], which is simply a standardisation of test outcomes with respect to a reference population. Let $U_D=1-F_{\bar{D}}(Y_D)$ be the placement value of diseased individuals with respect to the nondiseased population. This variable $U_D$ quantifies the degree of separation between the diseased and nondiseased populations. Specifically, if test outcomes in the two populations are highly separated, the placement of most diseased individuals is at the upper tail of the nondiseased distribution and so most of them will have small $U_D$ values. In turn, if the two populations overlap substantially, $U_D$ will have a $\text{Uniform}(0,1)$ distribution. Interestingly, the ROC curve turns out to be the cumulative distribution function of $U_D$, that is, $\Pr(U_D\leq p)=\text{ROC}(p)$. 
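The identity $\Pr(U_D\leq p)=\text{ROC}(p)$ can be checked empirically: compute the placement values $\widehat{U}_D=1-\widehat{F}_{\bar{D}}(Y_D)$ and compare their empirical cdf with the empirical ROC. A Python sketch on invented, tie-free data (with empirical cdfs, the match is exact at the jump points of $\widehat{F}_{\bar{D}}$; ties across samples would break it):

```python
def ecdf(sample, x):
    return sum(s <= x for s in sample) / len(sample)

def emp_quantile(sample, q):
    """Generalized inverse: smallest observed value x with F_hat(x) >= q."""
    for x in sorted(sample):
        if ecdf(sample, x) >= q:
            return x
    return max(sample)

y_nond = [0.1, 0.4, 0.5, 0.7, 0.9]   # toy data
y_d = [0.6, 0.8, 0.95, 1.2]

u_d = [1 - ecdf(y_nond, y) for y in y_d]   # placement values of diseased outcomes

for p in [0.0, 0.2, 0.4, 0.6, 0.8]:
    roc_p = 1 - ecdf(y_d, emp_quantile(y_nond, 1 - p))   # empirical ROC(p)
    assert abs(ecdf(u_d, p) - roc_p) < 1e-12
print("ecdf of placement values matches empirical ROC at the checked points")
```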
![Hypothetical densities of test outcomes in the diseased (dotted line, orange) and nondiseased (solid line, blue) populations (top) along with the corresponding ROC curves (bottom).[]{data-label="densrocs"}](././figures/densroc.pdf){width="13cm"} A standard way to summarise the information provided by the ROC curve is to calculate the area under the ROC curve (AUC), which is defined as $$\text{AUC}=\int_{0}^{1}\text{ROC}(p)\text{d}p.$$ In addition to its geometric definition, the AUC also has a probabilistic interpretation [see, e.g., @Pepe03 p. 78] $$\text{AUC} = \Pr\left(Y_{D} \geq Y_{\bar{D}}\right), \label{AUC2}$$ that is, the AUC is the probability that the test outcome for a randomly chosen diseased subject exceeds the one exhibited by a randomly selected nondiseased individual. The AUC is equal to 1 for a perfect test and it is equal to $0.5$ for a test with no discriminatory power (see Figure \[densrocs\]). Another global summary measure of diagnostic accuracy is the Youden index (YI) [@Youden50], defined as $$\begin{aligned} \text{YI}&=\max_c\{\text{TPF}(c)+\text{TNF}(c) -1\} \nonumber \\ &=\max_c\{F_{\bar{D}}(c)-F_D(c)\}\label{YIcdf} \\ &=\max_p\{\text{ROC}(p)-p\}\label{YIroc}.\end{aligned}$$ When the distributions of test outcomes completely overlap $\text{YI}=0$, whereas when they are completely separated $\text{YI}=1$. A YI below 0 indicates that the classification rule for defining a positive test result must be reversed. It is worth mentioning that the YI is equivalent to the Kolmogorov–Smirnov measure of distance between the distributions of test outcomes in the diseased and nondiseased populations. Note that from Equation \[YIroc\], the YI can also be interpreted as the maximum vertical distance between the ROC curve and the chance diagonal. An appealing feature of the YI not present in the AUC is that it provides a criterion for choosing the threshold value to diagnose subjects in practice. 
The criterion is to choose the value $c^{*}$ that maximises Equation \[YIcdf\] or $c^{*} = F_{\bar{D}}^{-1}(1-p^{*})$, with $p^{*}$ being the value that maximises Equation \[YIroc\]. For further measures of diagnostic accuracy, such as partial areas under the ROC curve, where only a subset of FPFs or TPFs are considered, we refer the reader to @Pepe03 [Chapter 4]. We finish this section by highlighting that the ROC curve, as usually defined, measures the discriminatory capacity of a test under the particular classification rule that says that individuals with a test outcome larger than a pre-specified threshold are diseased, while those with a test outcome lower than the threshold are classified as nondiseased. The appropriateness of such a classification rule relies on the standard assumption that larger test outcomes are more indicative of disease. However, this is not always the case. For instance, not only high but also low test results might be associated with disease. An example is provided in [@Martinez2017]. Therefore, one should be aware that the classification rule on which the usual definition of the ROC curve is based might not be the ‘optimal’ one, in the sense that it might not be the classification rule based on $Y$ that provides the largest discriminatory capacity. We note that the optimality of the classification rule is directly related to the concavity of the resulting ROC curve and refer the reader to [@Fawcett2006], [@Gneiting18] and @Pepe03 [p. 71] for a more extensive account of the importance of concave (also denoted in the literature as proper) ROC curves. ROC Curve and Related Indices Estimation {#rocest} ---------------------------------------- In what follows, let $\{y_{\bar{D}i}\}_{i=1}^{n_{\bar{D}}}$ and $\{y_{Dj}\}_{j=1}^{n_D}$ be two independent random samples of test outcomes from the nondiseased and diseased populations of size $n_{\bar{D}}$ and $n_D$, respectively. 
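With such samples in hand, both summary indices have simple empirical counterparts: the proportion of concordant diseased–nondiseased pairs for the AUC (the Mann–Whitney statistic), and a maximisation over observed thresholds for the YI together with the optimal cutoff $c^{*}$. A Python sketch on invented, tie-free data (with ties, a $1/2$ correction is commonly applied to the AUC):

```python
def auc_mann_whitney(y_nond, y_d):
    """Empirical Pr(Y_D >= Y_nonD); with tie-free data, > and >= coincide."""
    pairs = [(yd, ynd) for yd in y_d for ynd in y_nond]
    return sum(yd > ynd for yd, ynd in pairs) / len(pairs)

def youden(y_nond, y_d):
    """Max over observed thresholds of TPF(c) + TNF(c) - 1, and the argmax c*."""
    best, best_c = 0.0, None   # best_c stays None if no threshold beats 0
    for c in sorted(set(y_nond) | set(y_d)):
        tpf = sum(y >= c for y in y_d) / len(y_d)
        tnf = sum(y < c for y in y_nond) / len(y_nond)
        if tpf + tnf - 1 > best:
            best, best_c = tpf + tnf - 1, c
    return best, best_c

y_nond = [0.1, 0.4, 0.5, 0.7, 0.9]
y_d = [0.6, 0.8, 0.95, 1.2]
print(auc_mann_whitney(y_nond, y_d))   # 17/20 = 0.85
print(youden(y_nond, y_d))
```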
Statistical methods for estimating ROC curves have received wide attention in the literature. Plenty of parametric, semiparametric, and nonparametric estimators have been proposed, both within the frequentist and Bayesian paradigms. It would be an impossible task to cover, or even mention, all methods available. We succinctly describe the main idea behind each class of methods; further details can be found in the references provided. We give slightly more details about the nonparametric methods, as they are more widely applicable. A fully parametric approach estimates the constituent distribution functions parametrically to arrive at the induced ROC curve estimate. Let $F_D$ and $F_{\bar{D}}$ be parametrised in terms of $\theta_D$ and $\theta_{\bar{D}}$, respectively, i.e., $F_{D}(y)=F_{D}(y\mid\theta_D)$ and $F_{\bar{D}}(y)=F_{\bar{D}}(y\mid\theta_{\bar{D}})$. Estimating the parameters on the basis of test outcomes from each corresponding group yields $\widehat{\theta}_D$ and $\widehat{\theta}_{\bar{D}}$, and the resultant ROC estimate is $$\widehat{\text{ROC}}(p)=1-F_{D}\{F_{\bar{D}}^{-1}(1-p\mid\widehat{\theta}_{\bar{D}})\mid\widehat{\theta}_D\}.$$ Typically, a normal distribution is assumed for both $F_D$ and $F_{\bar{D}}$, possibly after some transformation of the $Y_D$ and $Y_{\bar{D}}$ scales (e.g., the logarithmic one or a Box–Cox type of transformation). See [@Brownie86] and [@Goddard90] for examples of this approach. In a semiparametric setting, the most common approach for ROC curve estimation is to assume a fully parametric form for the ROC curve, while making no assumptions about the distributions of the test outcomes themselves. These types of approaches have also been termed parametric distribution-free [@Pepe00; @Alonzo02]. The most popular of these strategies is, perhaps, the binormal model, which postulates the existence of some unspecified strictly increasing transformation $H$, such that $H(Y_D)$ and $H(Y_{\bar{D}})$ follow a normal distribution.
Specifically, and without loss of generality, if $H$ is such that $H(Y_D)\sim\text{N}(\mu,\sigma^2)$ and $H(Y_{\bar{D}})\sim\text{N}(0,1)$, then the binormal ROC model is written as $$\text{ROC}(p)=\Phi\{a+b\Phi^{-1}(p)\}, \quad a=\frac{\mu}{\sigma},\quad b=\frac{1}{\sigma},$$ where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. The appropriateness of the binormality assumption was discussed, among others, by [@Swets86] and [@Hanley96], who concluded that it provides a good approximation to a large range of ROC curve shapes that occur in practice. Estimation of the binormal ROC curve reduces to the problem of estimating $a$ and $b$. The corresponding AUC has a closed-form expression given by $\Phi(a/\sqrt{1+b^2})$. Under the binormal model several estimation methods have been proposed. The earliest approach is due to [@Dorfman69], but it was only applicable to ordinal test results; later [@Metz98] adapted it to the case of continuous test results by using a strategy that relies on categorising the outcomes into a finite number of categories and then applying the Dorfman and Alf procedure. [@Pepe00] and [@Alonzo02] suggested estimating the ROC curve by using procedures for fitting generalised linear models to binary data (these procedures will be further detailed in Section \[covariateroc\]). [@Zou00] considered a method based on rank likelihood and [@Gu09] proposed a Bayesian approach that also uses a rank-based likelihood. We also mention the work of [@Cai04], who developed a profile maximum likelihood approach. Apart from parametric and semiparametric approaches, several authors have also devoted their attention to the development of nonparametric methods, which are more generally applicable. All nonparametric methods reviewed here rely on (flexibly) estimating $F_D$ and $F_{\bar{D}}$ and plugging such estimates into Equation \[rocdef\].
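Before turning to the nonparametric estimators, we note that the binormal ROC curve and its closed-form AUC are simple to evaluate once $a$ and $b$ are available. The sketch below (function names are ours; it assumes `scipy` and naive moment-based plug-in estimates computed after a normalising transformation, which is not any of the fitting procedures cited above):

```python
import numpy as np
from scipy.stats import norm

def binormal_roc(p, a, b):
    """ROC(p) = Phi(a + b * Phi^{-1}(p))."""
    return norm.cdf(a + b * norm.ppf(p))

def binormal_auc(a, b):
    """Closed-form AUC = Phi(a / sqrt(1 + b^2))."""
    return norm.cdf(a / np.sqrt(1.0 + b ** 2))

def ab_moment_estimates(y_d, y_nd):
    """Naive plug-in estimates: standardise the diseased outcomes by the
    nondiseased sample mean/SD, then a = mean/SD and b = 1/SD of the result."""
    y_d, y_nd = np.asarray(y_d, float), np.asarray(y_nd, float)
    z = (y_d - y_nd.mean()) / y_nd.std(ddof=1)
    return z.mean() / z.std(ddof=1), 1.0 / z.std(ddof=1)
```

With $a=0$ and $b=1$ the two populations coincide and the AUC equals $0.5$, as expected.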
The most popular and simplest nonparametric method, due to [@Hsieh96], is based on estimating $F_D$ and $F_{\bar{D}}$ by their corresponding empirical distribution functions, that is, $$\widehat{F}_{D}(y)=\frac{1}{n_D}\sum_{j=1}^{n_D}I(y_{Dj}\leq y),\quad \widehat{F}_{\bar{D}}(y)=\frac{1}{n_{\bar{D}}}\sum_{i=1}^{n_{\bar{D}}}I(y_{\bar{D}i}\leq y).$$ Interestingly, the area under the empirical ROC curve is equal to the Mann–Whitney U statistic [@Bamber1975] $$\widehat{\text{AUC}} = \frac{1}{n_D n_{\bar{D}}}\sum_{j = 1}^{n_D}\sum_{i = 1}^{n_{\bar{D}}}\left\{I\left(y_{Dj} > y_{\bar{D}i}\right)+\frac{1}{2}I\left(y_{Dj} = y_{\bar{D}i}\right)\right\}.$$ As is clear from its definition, the empirical ROC curve is an increasing step function, which can be quite jagged, especially for small sample sizes, and, as a consequence, might be unappealing in practice. To overcome the lack of smoothness of the empirical estimator, kernel-based methods for estimating the ROC curve have been developed. The earliest approach is due to [@Zou97], who suggested estimating the density function in each population using kernel density estimates. Specifically, $$\widehat{f}_{D}(y)=\frac{1}{n_Dh_D}\sum_{j=1}^{n_D}k\left(\frac{y-y_{Dj}}{h_D}\right),\quad \widehat{f}_{\bar{D}}(y)=\frac{1}{n_{\bar{D}}h_{\bar{D}}}\sum_{i=1}^{n_{\bar{D}}}k\left(\frac{y-y_{\bar{D}i}}{h_{\bar{D}}}\right),$$ where $f_D$ ($f_{\bar{D}}$) corresponds to the density associated to $F_D$ ($F_{\bar{D}}$), $k(\cdot)$ is the kernel function, and $h_D$ ($h_{\bar{D}}$) is the bandwidth or smoothing parameter. The kernel considered was the biweight, and the corresponding distribution function estimates, $\widehat{F}_D$ and $\widehat{F}_{\bar{D}}$, were obtained by numerical integration.
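The empirical ROC curve itself can be evaluated directly from the plug-in formula $\widehat{\text{ROC}}(p)=1-\widehat{F}_{D}\{\widehat{F}_{\bar{D}}^{-1}(1-p)\}$, using the left-continuous empirical quantile $\widehat{F}_{\bar{D}}^{-1}(q)=\inf\{y:\widehat{F}_{\bar{D}}(y)\geq q\}$. A minimal sketch (our own function names):

```python
import numpy as np

def emp_quantile(sample, q):
    """Left-continuous empirical quantile: F^{-1}(q) = inf{y : F_hat(y) >= q}."""
    s = np.sort(np.asarray(sample, float))
    idx = np.clip(np.ceil(np.atleast_1d(q) * len(s)).astype(int) - 1, 0, len(s) - 1)
    return s[idx]

def empirical_roc(p, y_d, y_nd):
    """ROC(p) = 1 - F_D_hat( F_ND_hat^{-1}(1 - p) ), evaluated at each p."""
    c = emp_quantile(y_nd, 1.0 - np.atleast_1d(np.asarray(p, float)))
    f_d = (np.asarray(y_d, float)[:, None] <= c[None, :]).mean(axis=0)
    return 1.0 - f_d
```

Evaluating this step function on a fine grid of $p$ values reproduces the jagged shape discussed above.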
In a follow-up work, [@Zou98] suggested the use of the normal kernel and, in such case, the estimates of the distribution functions can be written as $$\widehat{F}_{D}(y)=\frac{1}{n_D}\sum_{j=1}^{n_D}\Phi\left(\frac{y-y_{Dj}}{h_D}\right), \quad \widehat{F}_{\bar{D}}(y)=\frac{1}{n_{\bar{D}}}\sum_{i=1}^{n_{\bar{D}}}\Phi\left(\frac{y-y_{\bar{D}i}}{h_{\bar{D}}}\right).$$ Still for the normal kernel, [@Lloyd98] has shown that the resulting estimate of the AUC has the following form $$\widehat{\text{AUC}}=\frac{1}{n_{D}n_{\bar{D}}}\sum_{j=1}^{n_D}\sum_{i=1}^{n_{\bar{D}}}\Phi\left(\frac{y_{Dj}-y_{\bar{D}i}}{\sqrt{h_D^2+h_{\bar{D}}^2}}\right).$$ The bandwidth, which controls the amount of smoothing and whose selection is critical to the performance of the estimator, was based on Silverman’s rule of thumb [@Silverman86 p. 48], which is optimal for data that are approximately normally distributed. Alternatively, the bandwidth can also be selected by least squares cross-validation; although this has not been proposed by the authors, it works quite well in practice for density estimation. The fact that the bandwidth proposed by [@Zou97] is not optimal for the ROC curve, because the latter depends on the distribution functions, and optimality for estimating density functions does not carry over to the distribution functions, prompted [@Lloyd98] and [@Zhou02], among other authors, to improve the above estimator by obtaining asymptotically optimal estimates for $F_D$ and $F_{\bar{D}}$. To finish this section, we turn our attention to Bayesian approaches and start with the nonparametric method of [@Erkanli06], which models the distribution of test outcomes in each group via a Dirichlet process mixture of normal distributions [@Escobar95], that is, $$\label{dpm} F_{D}(y)=\int\Phi(y\mid\mu,\sigma^2)\text{d}G_{D}(\mu,\sigma^2),\quad G_D\sim\text{DP}(\alpha_D,G_{D}^{*}(\mu,\sigma^2)),$$ with the distribution function in the nondiseased group following analogously.
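Returning for a moment to the normal-kernel estimator, Lloyd's closed-form expression for the smoothed AUC is straightforward to implement. Below is a sketch combining it with Silverman's rule-of-thumb bandwidths (our naming; assumes `scipy`):

```python
import numpy as np
from scipy.stats import norm

def silverman_bw(y):
    """Silverman's rule of thumb: 0.9 * min(SD, IQR/1.34) * n^(-1/5)."""
    y = np.asarray(y, float)
    iqr = np.subtract(*np.percentile(y, [75, 25]))
    return 0.9 * min(y.std(ddof=1), iqr / 1.34) * len(y) ** (-0.2)

def kernel_auc(y_d, y_nd):
    """Lloyd's smoothed AUC: average of Phi((y_Dj - y_NDi)/sqrt(h_D^2 + h_ND^2))."""
    y_d, y_nd = np.asarray(y_d, float), np.asarray(y_nd, float)
    h = np.sqrt(silverman_bw(y_d) ** 2 + silverman_bw(y_nd) ** 2)
    return norm.cdf((y_d[:, None] - y_nd[None, :]) / h).mean()
```

Note that when the two samples coincide the pairwise differences are antisymmetric and the estimate is exactly $0.5$, regardless of the bandwidth.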
Here $G_D\sim\text{DP}(\alpha_D,G_{D}^{*})$ is used to denote that the mixing distribution $G_D$ follows a Dirichlet process (DP) prior [@Ferguson73] with centring distribution $G_D^{*}$, for which $E(G_D)=G_{D}^{*}$ and which encapsulates any prior knowledge about $G_D$, and precision parameter $\alpha_D$, which controls the variability of $G_D$ around $G_D^{*}$. Larger values of $\alpha_D$ result in realisations $G_D$ that are closer to $G_D^{*}$. Unarguably, the most useful definition of the DP is its constructive definition due to [@Sethuraman94], which postulates that $G_D$ can be written as $$G_D(\cdot)=\sum_{l=1}^{\infty}\omega_{Dl}\delta_{(\mu_{Dl},\sigma_{Dl}^2)}(\cdot),$$ where $\delta_a$ denotes a point mass at $a$, $(\mu_{Dl},\sigma_{Dl}^2)\overset{\text{iid}}\sim G_D^{*}(\mu,\sigma^2)$, and the weights follow the so-called stick-breaking construction: $\omega_{D1}=v_{D1}$, $\omega_{Dl}=v_{Dl}\prod_{m<l}(1-v_{Dm})$, for $l\geq 2$, and $v_{Dl}\sim\text{Beta}(1,\alpha_D)$, for $l\geq 1$. Under Sethuraman’s representation, the distribution function in Equation \[dpm\] can be written as an infinite location-scale mixture of normal distributions, i.e., $$\label{cdfsethu} F_{D}(y)=\sum_{l=1}^{\infty}\omega_{Dl}\Phi(y\mid\mu_{Dl},\sigma_{Dl}^2).$$ For the ease of posterior inference, a conjugate centring distribution is usually specified, i.e., $G_{D}^{*}\equiv\text{N}(\mu\mid m_D,S_D)\Gamma(\sigma^{-2}\mid a_D,b_D)$. A blocked Gibbs sampler [@Ishwaran02], which relies on truncating the infinite mixture in Equation \[cdfsethu\] to a finite number of components, say $L_{D}$, can then be used for conducting posterior inference, thus obtaining posterior samples of the weights, components’ means and variances. Note that $L_D$ is not the number of components one expects to observe in the data but an upper bound on it.
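The truncated stick-breaking construction and the resulting finite mixture CDF of Equation \[cdfsethu\] can be sketched as follows (our own names; `L` plays the role of the truncation level $L_D$, and the last stick is set to one so the weights sum to unity, as in the blocked Gibbs sampler):

```python
import numpy as np
from scipy.stats import norm

def stick_breaking_weights(alpha, L, rng):
    """Truncated stick-breaking: w_1 = v_1, w_l = v_l * prod_{m<l}(1 - v_m),
    with v_L set to 1 so that the L weights sum to one."""
    v = rng.beta(1.0, alpha, size=L)
    v[-1] = 1.0
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

def mixture_cdf(y, w, mu, sigma2):
    """F(y) = sum_l w_l * Phi(y | mu_l, sigma_l^2)."""
    y = np.atleast_1d(np.asarray(y, float))
    z = (y[:, None] - np.asarray(mu, float)[None, :]) / np.sqrt(np.asarray(sigma2, float))[None, :]
    return (np.asarray(w, float)[None, :] * norm.cdf(z)).sum(axis=1)
```

Smaller `alpha` concentrates mass on the first few components; with a single component the mixture reduces to a plain normal CDF.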
At iteration $s$ of the Gibbs sampler procedure, the ROC curve is computed as $$\begin{aligned} \text{ROC}^{(s)}(p)&=1-F_{D}^{(s)}\{F_{\bar{D}}^{-1(s)}(1-p)\},\qquad s=1,\ldots,S,\\ F_{D}^{(s)}(y)&=\sum_{l=1}^{L_D}\omega_{Dl}^{(s)}\Phi(y\mid \mu_{Dl}^{(s)},\sigma_{Dl}^{2(s)}),\quad F_{\bar{D}}^{(s)}(y)=\sum_{l=1}^{L_{\bar{D}}}\omega_{\bar{D}l}^{(s)}\Phi(y\mid \mu_{\bar{D}l}^{(s)},\sigma_{\bar{D}l}^{2(s)}).\end{aligned}$$ As shown by the authors, the AUC admits the following closed-form expression $$\label{aucdpm} \text{AUC}^{(s)}=\sum_{k=1}^{L_{\bar{D}}}\sum_{l=1}^{L_D}\omega_{\bar{D}k}^{(s)}\omega_{Dl}^{(s)}\Phi\left(\frac{a_{kl}^{(s)}}{\sqrt{1+b_{kl}^{2(s)}}}\right),\quad a_{kl}^{(s)}=\frac{\mu_{Dl}^{(s)}-\mu_{\bar{D}k}^{(s)}}{\sigma_{Dl}^{(s)}},\quad b_{kl}^{(s)}=\frac{\sigma_{\bar{D}k}^{(s)}}{\sigma_{Dl}^{(s)}}.$$ At the end of the sampling procedure an ensemble composed of $S$ ROC curves/AUCs is available. The average of the ensemble is used as a point estimate and the variation in the ensemble is used to construct credible bands/intervals. A somewhat related approach is the Bayesian bootstrap (BB) ROC curve estimation procedure developed by [@Gu08], which assumes that $F_D$ and $F_{\bar{D}}$ follow a Dirichlet process prior, rather than a Dirichlet process mixture as in the previous approach, i.e., $$\begin{aligned} \{y_{Dj}\}_{j=1}^{n_D}\mid F_D\sim F_D,\quad F_D\sim\text{DP}(\alpha_D,G_{D}^{*}),\\ \{y_{\bar{D}i}\}_{i=1}^{n_{\bar{D}}}\mid F_{\bar{D}}\sim F_{\bar{D}},\quad F_{\bar{D}}\sim\text{DP}(\alpha_{\bar{D}},G_{\bar{D}}^{*}),\end{aligned}$$ where, by a slight abuse of notation, we use the same symbols for the DP parameters.
From the conjugacy property of the DP [@Ferguson73], which ensures that $$\label{dpconj} F_D\mid\{y_{Dj}\}_{j=1}^{n_D}\sim\text{DP}\left(\alpha_D+n_D,\frac{\alpha_D}{\alpha_D+n_D}G_{D}^{*}+\frac{1}{\alpha_D+n_D}\sum_{j=1}^{n_D}\delta_{y_{Dj}}\right),$$ it is clear that considering the noninformative limit of the DP, by letting $\alpha_D\rightarrow 0$ and $\alpha_{\bar{D}}\rightarrow 0$, drastically simplifies the computational effort, as one does not even need to specify the centring distributions $G_{D}^{*}$ and $G_{\bar{D}}^{*}$ (an equivalent to Equation \[dpconj\] holds for the nondiseased population). All that is needed is to generate from the uniform distribution over the simplex, which is equivalent to generating from a Dirichlet distribution with all parameters equal to one. The BB estimator of the ROC curve relies on a two-step procedure that makes use of the representation of the ROC curve as the distribution function of the diseased placement variable $U_D$. Specifically, as shown by the authors, it is only needed to 1) impute the variable $U_D=1-F_{\bar{D}}(Y_D)$ by plugging in the survival function of $Y_{\bar{D}}$, generated from the BB resampling distribution given test outcomes $(y_{\bar{D}1},\ldots,y_{\bar{D}n_{\bar{D}}})$, and 2) compute the distribution function of $U_D$ based on the BB resampling distribution to form one (of, say, $S$) realisation of the ROC curve. In fact, Step 1 is as simple as computing $U_{Dj}^{(s)}=\sum_{i=1}^{n_{\bar{D}}}q_{1i}^{(s)}I(y_{\bar{D}i}\geq y_{Dj})$, $j=1,\ldots,n_D$, and where $(q_{11}^{(s)},\ldots,q_{1n_{\bar{D}}}^{(s)})\sim\text{Dirichlet}(n_{\bar{D}};1,\ldots,1)$. In Step 2, we only need to calculate $\text{ROC}^{(s)}(p)=\sum_{j=1}^{n_D}q_{2j}^{(s)}I(U_{Dj}^{(s)}\leq p)$, with $(q_{21}^{(s)},\ldots,q_{2n_{D}}^{(s)})\sim\text{Dirichlet}(n_{D};1,\ldots,1)$. The AUC can also be expressed in closed form as $\text{AUC}^{(s)}=1-\sum_{j=1}^{n_D}q_{2j}^{(s)}U_{Dj}^{(s)}$.
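The two BB steps admit a compact implementation. The sketch below (our naming) produces one posterior realisation of the ROC curve on a grid of $p$ values, together with the corresponding closed-form AUC:

```python
import numpy as np

def bb_roc_draw(p, y_d, y_nd, rng):
    """One Bayesian-bootstrap realisation of (ROC(p), AUC)."""
    y_d, y_nd = np.asarray(y_d, float), np.asarray(y_nd, float)
    q1 = rng.dirichlet(np.ones(len(y_nd)))   # flat Dirichlet weights, nondiseased
    q2 = rng.dirichlet(np.ones(len(y_d)))    # flat Dirichlet weights, diseased
    # Step 1: placement values U_Dj = weighted survival of Y_ND at each y_Dj
    u = (q1[None, :] * (y_nd[None, :] >= y_d[:, None])).sum(axis=1)
    # Step 2: weighted CDF of the placement values, evaluated at p
    roc = (q2[None, :] * (u[None, :] <= np.atleast_1d(p)[:, None])).sum(axis=1)
    auc = 1.0 - q2 @ u
    return roc, auc
```

Repeating the draw $S$ times yields the ensemble from which point estimates and credible bands are obtained, in the same spirit as for the DPM sampler above.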
Still within a Bayesian nonparametric framework, we mention the approach of [@Branscum08], which is based on a different nonparametric prior, namely, a mixture of finite Polya trees, for modelling $F_D$ and $F_{\bar{D}}$. Lastly, for an overview article entirely dedicated to ROC curve estimation, we refer to [@Goncalves2014]. Concerning the estimation of the Youden index and/or the associated optimal threshold, for all approaches that rely on estimating the distribution functions of test outcomes, estimates can be obtained by simply plugging the corresponding estimates of $F_D$ and $F_{\bar{D}}$ into Equation \[YIcdf\]. For the binormal model, where no explicit distribution is assumed for the test outcomes, Equation \[YIroc\] should instead be used. For a detailed comparison among different methods (namely, empirical, kernel, and a parametric one assuming normality on the original scale or after a Box–Cox transformation), we refer the reader to the article by [@Fluss05]. Illustration ------------ We now illustrate the methods described in the previous section with the HOMA-IR dataset. Recall that we seek to assess the accuracy of the HOMA-IR levels when predicting the presence of cardio-metabolic risk. Here we stratify the analysis by gender but disregard the age effect (i.e., HOMA-IR levels were pooled together regardless of the age of the individuals). As we will be using both the kernel-based approach and the Dirichlet process mixture model with a normal kernel, the logarithm of HOMA-IR levels was considered. Figure 1 in the Supplementary Materials shows the estimated densities, by gender and in each population (individuals with and without cardio-metabolic risk), under the Dirichlet process mixture of normals model and the (normal) kernel method with bandwidth selected by Silverman’s rule of thumb, and we can appreciate that both are very similar and follow the histograms of HOMA-IR levels quite closely.
The estimated ROC curves using the four nonparametric methods described in the previous section are depicted in Figure \[pooledROC\]. All methods produced very similar ROC curves. In Figure 2 of the Supplementary Materials we depict the same ROC curves but without the confidence/credible bands, so that the comparison between point estimates is clearer. The corresponding AUCs are reported in Table \[tabauc\] and they are, both for men and women, close to $0.70$, revealing a mild accuracy of HOMA-IR levels for predicting cardio-metabolic risk. This comes as no surprise, as Figure \[estdensities\] already evidenced a quite considerable overlap of HOMA-IR levels in the two populations. Table 1 of the Supplementary Materials presents the Youden index and corresponding optimal HOMA-IR threshold estimates that can be used to detect, in practice, individuals with higher cardio-metabolic risk. ![Estimated ROC curves. The continuous lines correspond to point estimates and the shaded regions to the $95\%$ pointwise credible/confidence bands. Here BB stands for the Bayesian bootstrap method [@Gu08] and DPM for the Dirichlet process mixture of normals model [@Erkanli06].[]{data-label="pooledROC"}](././figures/pROC_all_ci_logHOMA){width="14cm"}

  ----------- -------------------------- --------------------------
  Approach    Women                      Men
  Empirical   $0.691$ $(0.634, 0.736)$   $0.695$ $(0.647, 0.741)$
  Kernel      $0.683$ $(0.629, 0.728)$   $0.687$ $(0.641, 0.733)$
  DPM         $0.685$ $(0.631, 0.736)$   $0.691$ $(0.643, 0.736)$
  BB          $0.691$ $(0.635, 0.743)$   $0.695$ $(0.646, 0.740)$
  ----------- -------------------------- --------------------------

  : AUC point estimates and $95\%$ credible/confidence intervals.
Here BB stands for the Bayesian bootstrap method [@Gu08] and DPM for the Dirichlet process mixture of normals model [@Erkanli06].[]{data-label="tabauc"} ROC CURVES AND COVARIATES {#covariateroc} ========================= Motivation ---------- The definition of the ROC curve given in Equation \[rocdef\] implicitly assumes that both the diseased and nondiseased populations are homogeneous, at least with regard to the performance of the test. However, this is rarely the case in practice. For instance, coming back to our motivating example, Figure \[conditionaldensities\] shows the densities of $\log$ HOMA-IR levels conditional on the age and gender of the subjects. It can be noticed that, especially for women, the overlap between the two distributions of $\log$ HOMA-IR levels changes with age, and thus we expect the accuracy of $\log$ HOMA-IR levels to vary across age as well. This illustrates that, quite often, the distribution of test outcomes, either in the nondiseased or diseased population, or in both, is likely to vary with covariates. Examples of such covariates include subject-specific characteristics or different test settings. We note in passing that this does not necessarily mean that covariates affect the discriminatory capacity of the test. In particular, the distributions of test outcomes might experience a shift for different covariate values but their overlap might remain the same, in which case the accuracy of the test does not change, but still the thresholds used for defining a positive result will be covariate-specific (for further details we refer to @Pepe03 [Chapter 6], @Pardo2014, and @Inacio18).
![Estimated density functions, obtained using a single-weights dependent Dirichlet process mixture of normals model, of $\log$ HOMA-IR levels in the diseased (dotted line, orange) and nondiseased (solid line, blue) populations, conditional on age and gender.[]{data-label="conditionaldensities"}](././figures/cROC_bnp_den_logHOMA.pdf){width="14.5cm"} Notation and Definitions ------------------------ Let us now assume that along with $Y_D$ and $Y_{\bar{D}}$, covariate vectors $\mathbf{X}_{D}$ and $\mathbf{X}_{\bar{D}}$ are also available. For ease of notation, we assume that the covariates of interest are the same in both populations, although this is not always necessarily the case (e.g., disease stage is, obviously, a disease-specific covariate). As a natural extension of the ROC curve, the conditional or covariate-specific ROC curve, given a covariate value $\mathbf{x}$, is defined as $$\label{roccov} \text{ROC}(p\mid \mathbf{x})=1-F_{D}\{F_{\bar{D}}^{-1}(1-p\mid\mathbf{x})\mid\mathbf{x}\},\quad 0\leq p \leq 1,$$ where $F_{D}(y\mid\mathbf{x})=\Pr(Y_D\leq y\mid\mathbf{X}_{D}=\mathbf{x})$ denotes the conditional distribution function in the diseased group, with $F_{\bar{D}}(y\mid\mathbf{x})$ being defined similarly. The covariate-specific counterparts of the AUC and YI are given by $$\begin{aligned} \text{AUC}(\mathbf{x})&=\int_{0}^{1}\text{ROC}(p\mid\mathbf{x})\text{d}p \label{covauc} \\ \text{YI}(\mathbf{x}) &= \max_{c} \{ F_{\bar{D}}(c\mid\mathbf{x}) -F_{D}(c\mid\mathbf{x})\} \label{covyidf}\\ &=\max_p\{\text{ROC}(p\mid\mathbf{x})-p\}\label{covyiroc}.\end{aligned}$$ For each value of $\mathbf{x}$ we might obtain a different ROC curve (AUC and/or Youden index) and, therefore, possibly also a different accuracy. Understanding the influence of covariates on the accuracy of a diagnostic test will help in determining the optimal and suboptimal populations on which to perform the diagnostic tests.
Covariate-specific ROC curve estimation {#covroc} --------------------------------------- Approaches to estimation of the covariate-specific ROC curve can be broadly divided into two categories [@Pepe98]. Induced methodologies model the distribution of test outcomes in the diseased and nondiseased populations separately and then compute the induced ROC curve. On the other hand, direct methodologies assume a regression model directly on the covariate-specific ROC curve. In what follows, let $\{(\mathbf{x}_{\bar{D}i},y_{\bar{D}i})\}_{i=1}^{n_{\bar{D}}}$ and $\{(\mathbf{x}_{Dj},y_{Dj})\}_{j=1}^{n_D}$ be two independent random samples of covariates and test outcomes from the nondiseased and diseased populations of size $n_{\bar{D}}$ and $n_D$, respectively. Further, for all $i = 1,\ldots,n_{\bar{D}}$ and $j = 1,\ldots,n_D$, let $\mathbf{x}_{\bar{D}i}=(x_{\bar{D}i,1},\ldots, x_{\bar{D}i,q})^{\prime}$ and $\mathbf{x}_{Dj}=(x_{Dj,1},\ldots, x_{Dj,q})^{\prime}$ be $q-$dimensional vectors of covariates, which can be either continuous or categorical. ### Induced methodology For clarity in the presentation, within the induced methodology, we distinguish between two types of approaches. Both aim at estimating the constituent components of the covariate-specific ROC curve, i.e., the conditional distribution of test results in the diseased and nondiseased populations (see Equation \[roccov\]). However, whereas the first set of methods does so through the specification of a location-scale regression model for the test outcomes in each population, the second set focuses on directly modelling the conditional distributions. We start by presenting the first of these approaches.
Specifically, the relationship between covariates and test outcomes in each population is given by the location-scale regression models $$\label{locationscale} Y_{D}=\mu_{D}(\mathbf{X}_D)+\sigma_{D}(\mathbf{X}_D)\varepsilon_{D},\qquad Y_{\bar{D}}=\mu_{\bar{D}}(\mathbf{X}_{\bar{D}})+\sigma_{\bar{D}}(\mathbf{X}_{\bar{D}})\varepsilon_{\bar{D}},$$ where $\mu_{D}(\mathbf{x})=E(Y_D\mid\mathbf{X}_D=\mathbf{x})$ and $\sigma_{D}^{2}(\mathbf{x})=\text{var}(Y_D\mid\mathbf{X}_D=\mathbf{x})$ are, respectively, the conditional mean and variance of $Y_D$ given $\mathbf{X}_D=\mathbf{x}$, with $\mu_{\bar{D}}$ and $\sigma_{\bar{D}}^{2}$ being analogously defined. The error terms $\varepsilon_{D}$ and $\varepsilon_{\bar{D}}$ are assumed to be independent of each other and of the covariates, with zero mean, unit variance, and distribution functions $F_{\varepsilon_{D}}$ and $F_{\varepsilon_{\bar{D}}}$, respectively. Given the independence between the errors and the covariates in the location-scale regression models in Equation \[locationscale\], it is easy to show that $$F_{D}(y\mid\mathbf{x})=F_{\varepsilon_{D}}\left(\frac{y-\mu_{D}(\mathbf{x})}{\sigma_D(\mathbf{x})}\right),\qquad F_{\bar{D}}(y\mid\mathbf{x})=F_{\varepsilon_{\bar{D}}}\left(\frac{y-\mu_{\bar{D}}(\mathbf{x})}{\sigma_{\bar{D}}(\mathbf{x})}\right).$$ An analogous relationship can be established between the conditional quantile function of test outcomes given the covariates and the quantile function of the error terms, namely $$F_{D}^{-1}(p\mid\mathbf{x})=\mu_{D}(\mathbf{x})+\sigma_{D}(\mathbf{x})F_{\varepsilon_{D}}^{-1}(p),\quad F_{\bar{D}}^{-1}(p\mid\mathbf{x})=\mu_{\bar{D}}(\mathbf{x})+\sigma_{\bar{D}}(\mathbf{x})F_{\varepsilon_{\bar{D}}}^{-1}(p).$$ The covariate-specific ROC curve, for a given covariate value $\mathbf{x}$, can therefore be expressed as $$\begin{aligned}
\text{ROC}(p\mid\mathbf{x})=1-F_{\varepsilon_{D}}\left\{\frac{\mu_{\bar{D}}(\mathbf{x})-\mu_{D}(\mathbf{x})}{\sigma_D(\mathbf{x})}+\frac{\sigma_{\bar{D}}(\mathbf{x})}{\sigma_{D}(\mathbf{x})}F_{\varepsilon_{\bar{D}}}^{-1}(1-p)\right\},\quad 0\leq p \leq 1.\end{aligned}$$ This formulation allows expressing the covariate-specific ROC curve in terms of the distribution and quantile functions of the regression errors, which are not conditional, thus reducing the computational burden. Thus far we have described this type of induced ROC methodology in its most general form. Particular cases have been addressed in the literature. In particular, [@Faraggi03] assumed a normal linear homoscedastic model in each population, that is $$\mu_{D}(\mathbf{x})=\tilde{\mathbf{x}}^{\prime}\boldsymbol{\beta}_D, \quad \sigma_D(\mathbf{x})=\sigma_D,\quad F_{\varepsilon_{D}}(\cdot)=\Phi(\cdot),$$ with $\tilde{\mathbf{x}}^{\prime}=(1,\mathbf{x}^{\prime})$, where $\boldsymbol{\beta}_D=(\beta_{D0},\ldots,\beta_{Dq})^{\prime}$ is a $(q+1)$-dimensional vector of (unknown) regression coefficients. All quantities are analogously defined for the nondiseased population. Estimates of the regression coefficients $\boldsymbol{\beta}_{D}$ and $\boldsymbol{\beta}_{\bar{D}}$ are obtained by ordinary least squares on the basis of the samples $\{(\mathbf{x}_{Dj},y_{Dj})\}_{j=1}^{n_D}$ and $\{(\mathbf{x}_{\bar{D}i},y_{\bar{D}i})\}_{i=1}^{n_{\bar{D}}}$, respectively.
The variances are then straightforwardly estimated as $$\widehat{\sigma}_{D}^2=\frac{\sum_{j=1}^{n_D}(y_{Dj}-\tilde{\mathbf{x}}_{Dj}^{\prime}\widehat{\boldsymbol{\beta}}_D)^2}{n_D-q-1},\qquad \widehat{\sigma}_{\bar{D}}^2=\frac{\sum_{i=1}^{n_{\bar{D}}}(y_{\bar{D}i}-\tilde{\mathbf{x}}_{\bar{D}i}^{\prime}\widehat{\boldsymbol{\beta}}_{\bar{D}})^2}{n_{\bar{D}}-q-1}.$$ The corresponding covariate-specific ROC curve is given by $$\widehat{\text{ROC}}(p\mid\mathbf{x})=1-\Phi\{a(\mathbf{x})+b\Phi^{-1}(1-p)\},\quad a(\mathbf{x})=\tilde{\mathbf{x}}^{\prime}\frac{(\widehat{\boldsymbol{\beta}}_{\bar{D}}-\widehat{\boldsymbol{\beta}}_D)}{\widehat{\sigma}_D},\quad b=\frac{\widehat{\sigma}_{\bar{D}}}{\widehat{\sigma}_D}.$$ As for the binormal ROC curve in the no-covariate case, the AUC under this model is given by $\Phi(-a(\mathbf{x})/\sqrt{1+b^2})$. Alternatively, and less restrictively, [@Pepe98] suggests estimating the distribution function of the errors in each population by the corresponding empirical distribution function of the estimated standardised residuals. Note that in the original paper the same distribution was assumed in both populations, but we are presenting here the more general case in which each population has its own distribution, i.e., $$\widehat{F}_{\varepsilon_{D}}(y)=\frac{1}{n_D}\sum_{j=1}^{n_D}I(\widehat{\varepsilon}_{Dj}\leq y),\qquad \widehat{\varepsilon}_{Dj}=\frac{y_{Dj}-\tilde{\mathbf{x}}_{Dj}^{\prime}\widehat{\boldsymbol{\beta}}_D}{\widehat{\sigma}_D},$$ with $\widehat{F}_{\varepsilon_{\bar{D}}}(y)$ and $\widehat{\varepsilon}_{\bar{D}i}$, $i=1,\ldots,n_{\bar{D}}$, being defined in a similar fashion.
The covariate-specific ROC curve is finally computed in an analogous way as for the method of [@Faraggi03] as $$\widehat{\text{ROC}}(p\mid\mathbf{x})=1-\widehat{F}_{\varepsilon_{D}}\{a(\mathbf{x})+b\widehat{F}_{\varepsilon_{\bar{D}}}^{-1}(1-p)\},\qquad 0\leq p\leq 1.$$ The covariate-specific AUC also admits a closed-form expression, which can be regarded as a covariate-specific Mann–Whitney type of statistic, that is, $$\label{closedauc} \widehat{\text{AUC}}(\mathbf{x})=\frac{1}{n_{D}n_{\bar{D}}}\sum_{j=1}^{n_D}\sum_{i=1}^{n_{\bar{D}}}I\{\widehat{\mu}_{\bar{D}}(\mathbf{x})+\widehat{\sigma}_{\bar{D}}\widehat{\varepsilon}_{\bar{D}i}\leq \widehat{\mu}_{D}(\mathbf{x})+\widehat{\sigma}_{D}\widehat{\varepsilon}_{Dj}\}.$$ Still in a semiparametric context, [@Zheng04] proposed an estimator for the covariate-specific ROC curve in which the distribution of the error terms is unknown and allowed to depend on covariates (and so, strictly speaking, the underlying models for the test outcomes are no longer location-scale regression models) but, as in the previous two approaches, the effect of the covariates on the conditional means and variances is modelled parametrically. In a Bayesian context, [@Rodriguez14] proposed a semiparametric model, where the (marginal) error terms are assumed to follow a Student-$t$ distribution and the conditional mean and variance functions are modelled nonparametrically through Gaussian process priors. From a nonparametric frequentist perspective, [@Yao10], [@Gonzalez11], and [@Rodriguez11] all proposed a kernel-based approach to estimate the mean and variance functions in Equation \[locationscale\] but, as proposed by these authors, the method can only deal with one continuous covariate. Both the regression and the variance functions are estimated using local polynomial kernel smoothers [@Fan96].
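To fix ideas about the induced methodology, the linear homoscedastic fit of [@Faraggi03] combined with the empirical-residual covariate-specific AUC of Equation \[closedauc\] can be sketched as follows (function names are ours, not from any package):

```python
import numpy as np

def fit_group(X, y):
    """OLS fit of y on (1, X): returns (beta_hat, sigma_hat, standardised residuals),
    with the residual variance denominator n - q - 1."""
    y = np.asarray(y, float)
    X = np.asarray(X, float).reshape(len(y), -1)
    Xt = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xt, y, rcond=None)
    resid = y - Xt @ beta
    sigma = np.sqrt((resid ** 2).sum() / (len(y) - Xt.shape[1]))
    return beta, sigma, resid / sigma

def covariate_auc(x, fit_d, fit_nd):
    """Plug-in covariate-specific AUC: Mann-Whitney comparison of the two
    groups' fitted means shifted by their scaled standardised residuals."""
    beta_d, s_d, e_d = fit_d
    beta_nd, s_nd, e_nd = fit_nd
    xt = np.concatenate(([1.0], np.atleast_1d(x)))
    mu_d, mu_nd = xt @ beta_d, xt @ beta_nd
    return ((mu_nd + s_nd * e_nd[None, :]) <= (mu_d + s_d * e_d[:, None])).mean()
```

Under this homoscedastic linear model the residual distributions do not depend on $\mathbf{x}$, so only the fitted means $\widehat{\mu}_{D}(\mathbf{x})$ and $\widehat{\mu}_{\bar{D}}(\mathbf{x})$ change as the covariate value varies.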
Estimation proceeds in a sequential manner: 1) the regression functions in the diseased and nondiseased populations are estimated first on the basis of $\{(x_{Dj},y_{Dj})\}_{j=1}^{n_{D}}$ and $\{(x_{\bar{D}i},y_{\bar{D}i})\}_{i=1}^{n_{\bar{D}}}$, respectively, and 2) the variance function is estimated next on the basis of the samples $\{(x_{Dj},[y_{Dj}-\widehat{\mu}_{D}(x_{Dj})]^2)\}_{j=1}^{n_{D}}$ and $\{(x_{\bar{D}i},[y_{\bar{D}i}-\widehat{\mu}_{\bar{D}}(x_{\bar{D}i})]^2)\}_{i=1}^{n_{\bar{D}}}$. Both steps involve the selection of a smoothing parameter, which can be done, for instance, via least squares cross-validation. Once estimates of the mean and variance functions are available, the standardised residuals can be calculated and, as in Pepe’s method, their empirical distribution function is used to estimate the distribution of the error terms. The covariate-specific AUC can also be written in the form of Equation \[closedauc\], with the mean and variance functions replaced by their corresponding kernel-based counterparts. Because the estimator of the conditional ROC curve is based on the empirical distribution function (of the standardised residuals), the resulting estimator is not smooth and, in order to overcome this drawback, [@Gonzalez11] also proposed an estimator that makes use of a further bandwidth and performs the convolution with a continuous kernel, namely $$\widehat{\text{ROC}}_{h}(p\mid\mathbf{x})=1-\int\widehat{F}_{\varepsilon_{D}}\left(a(x)+\widehat{F}_{\varepsilon_{\bar{D}}}^{-1}(1-p+hu)b(x)\right)k(u)\text{d}u,$$ where $a(x)=\frac{\widehat{\mu}_{\bar{D}}(x)-\widehat{\mu}_{D}(x)}{\widehat{\sigma}_{D}(x)}$, $b(x)=\frac{\widehat{\sigma}_{\bar{D}}(x)}{\widehat{\sigma}_{D}(x)}$, and $k(\cdot)$ is a kernel function. Note that when $h=0$ the non-smooth estimator is recovered.
We now briefly detail the approach of [@Inacio13] which, in contrast to the previous approaches, is based on directly modelling the conditional distribution function of test outcomes in the diseased and nondiseased populations, allowing it to smoothly change as a function of the covariates. Specifically, the authors use a single-weights linear dependent Dirichlet process mixture of normals to model the conditional distribution in each population $$F_{D}(y\mid\mathbf{x})=\int \Phi(y\mid\mu(\mathbf{x},\boldsymbol{\beta}),\sigma^2)\text{d}G_{D}(\boldsymbol{\beta},\sigma^2),\quad G_D\sim\text{DP}(\alpha_D,G_{D}^{*}(\boldsymbol{\beta},\sigma^2)),$$ with the conditional distribution function in the nondiseased population following in an analogous manner. This model can be regarded as an extension to the conditional case of the method of [@Erkanli06]. As in the no-covariate case, using Sethuraman’s representation, the conditional distribution can be expressed as $$\label{ddp} F_{D}(y\mid\mathbf{x})=\sum_{l=1}^{\infty}\omega_{Dl}\Phi\left(y\mid\mu(\mathbf{x},\boldsymbol{\beta}_{Dl}),\sigma_{Dl}^2\right),$$ with the weights matching those from the stick-breaking construction as specified in Erkanli’s model. Notice that the only difference between Equations \[cdfsethu\] and \[ddp\] is that now the mean of each component depends on covariates. Regarding the specification of the components’ means, it has been recommended (see @Inacio18 for more details) to use a flexible formulation, so that a large number of (conditional) density shapes are well approximated.
In particular, cubic B-splines basis functions are used for continuous covariates and, as a result, we write $$\mu(\mathbf{x},\boldsymbol{\beta}_{Dl})=\mathbf{z}_{D}^{\prime}\boldsymbol{\beta}_{Dl}, \quad l\geq 1,\quad j =1,\ldots,n_D,$$ where $\mathbf{z}_{D}$ is the vector containing the intercept, the cubic B-splines basis representation of the continuous covariates, the categorical covariates (if any), and their interaction(s) with the smoothed continuous covariate(s) (if believed to exist). Also, $\boldsymbol{\beta}_{Dl}$ collects, for the $l$th component, the regression coefficients associated with the aforementioned covariate vector. The regression coefficients and variances associated with each component are sampled from a conjugate centring distribution $(\boldsymbol{\beta}_{Dl},\sigma_{Dl}^{-2})\overset{\text{iid}}\sim\text{N}(\mathbf{m}_D,\mathbf{S}_D)\Gamma(a_D,b_D)$ and, as in the unconditional case, the blocked Gibbs sampler is used to simulate draws from the posterior distribution. 
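To fix ideas, the cubic B-spline part of the design vector $\mathbf{z}_{D}$ for a single continuous covariate can be constructed as follows. The sketch uses SciPy's B-spline machinery with equally spaced interior knots; the number and placement of knots are illustrative choices, not a prescription of the method.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, lo, hi, n_interior=4, degree=3):
    """Cubic B-spline basis expansion of a continuous covariate on [lo, hi].

    Returns the matrix whose j-th row is the basis representation of x[j];
    an intercept, categorical covariates, and interactions would be
    appended to form the full design vector z.
    """
    interior = np.linspace(lo, hi, n_interior + 2)[1:-1]
    # clamped knot vector: boundary knots repeated degree+1 times
    t = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]
    return BSpline.design_matrix(np.asarray(x, float), t, degree).toarray()
```

A useful check is that, with a clamped knot vector, the basis functions form a partition of unity, so each row of the design matrix sums to one.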
At iteration $s$ of the Gibbs sampler procedure, the covariate-specific ROC curve is computed as $$\begin{aligned} \text{ROC}^{(s)}(p\mid\mathbf{x})&=1-F_{D}^{(s)}\{F_{\bar{D}}^{-1(s)}(1-p\mid\mathbf{x})\mid\mathbf{x}\},\qquad s=1,\ldots,S,\\ F_{D}^{(s)}(y\mid\mathbf{x})&=\sum_{l=1}^{L_D}\omega_{Dl}^{(s)}\Phi(y\mid \mathbf{z}_{D}^{\prime}\boldsymbol{\beta}_{Dl}^{(s)},\sigma_{Dl}^{2(s)}),\quad F_{\bar{D}}^{(s)}(y\mid\mathbf{x})=\sum_{l=1}^{L_{\bar{D}}}\omega_{\bar{D}l}^{(s)}\Phi(y\mid \mathbf{z}_{\bar{D}}^{\prime}\boldsymbol{\beta}_{\bar{D}l}^{(s)},\sigma_{\bar{D}l}^{2(s)}).\end{aligned}$$ The covariate-specific AUC admits exactly the same closed form expression as in Equation \[aucdpm\], with the obvious difference that the components’ means are covariate-dependent, i.e., we now have $$a_{kl}^{(s)}(\mathbf{x}) = \frac{\mu(\mathbf{x},\boldsymbol{\beta}_{Dl}^{(s)}) - \mu(\mathbf{x},\boldsymbol{\beta}_{\bar{D}k}^{(s)})}{\sigma_{Dl}^{(s)}}.$$ Point and interval estimates for the covariate-specific ROC curve and AUC can be obtained from the corresponding ensembles of posterior realisations. Another estimator for the conditional ROC curve that also directly models the conditional distribution of test outcomes, but based on kernel methods, was proposed by [@Lopez08]. Regarding estimation of the covariate-specific Youden index and/or the associated threshold, because all induced approaches provide, in a more or less direct way, an estimate of the conditional distribution function of the test outcomes in each population, these estimates can be plugged into the definition in Equation \[covyidf\], so that estimates of these quantities can be obtained. We also mention here the work by [@Xu14], where the authors propose an approach that directly estimates the covariate-specific YI and threshold value without the need to first estimate the conditional distribution functions.
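For a single posterior draw, computing the covariate-specific ROC curve only requires evaluating normal-mixture CDFs and numerically inverting the nondiseased one. The sketch below uses a grid-based inversion (the grid resolution is an implementation choice); in the single-component case it reduces to the binormal covariate-specific ROC curve, which provides a convenient sanity check.

```python
import numpy as np
from scipy.stats import norm

def mix_cdf(y, w, mu, sd):
    """CDF of a finite normal mixture evaluated at the points y."""
    return (w * norm.cdf(np.atleast_1d(y)[:, None], loc=mu, scale=sd)).sum(axis=1)

def roc_draw(p, zD, betaD, sdD, wD, zH, betaH, sdH, wH, grid_size=4000):
    """ROC(p|x) for one posterior draw; components' means are z'beta.

    'D' denotes the diseased population and 'H' the nondiseased one.
    """
    muD, muH = betaD @ zD, betaH @ zH          # covariate-dependent means
    lo = min(muD.min(), muH.min()) - 6 * max(sdD.max(), sdH.max())
    hi = max(muD.max(), muH.max()) + 6 * max(sdD.max(), sdH.max())
    grid = np.linspace(lo, hi, grid_size)
    FH = mix_cdf(grid, wH, muH, sdH)
    # grid-based inversion of F_H(.|x) at 1-p
    c = grid[np.searchsorted(FH, 1 - np.atleast_1d(p))]
    return 1 - mix_cdf(c, wD, muD, sdD)
```

The function and argument names are hypothetical; in practice this computation is repeated over the $S$ retained posterior draws to form the ensemble from which point and interval estimates are read off.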
### Direct methodology In contrast to the induced approach, in the direct methodology the effect of the covariates is directly evaluated on the ROC curve, with its general form given by the following regression model $$\label{rocglm} \text{ROC}(p\mid\mathbf{x})=g\{\mu(\mathbf{x})+h_0(p)\},\qquad 0\leq p \leq 1,$$ where $\mu(\mathbf{x})$ collects the effects of the covariates on the ROC curve, $h_0(p)$ is an unknown monotonic increasing function of the FPF related to the shape of the ROC curve and $g$ is the inverse of the link function. Unlike in standard regression analysis, the response variable of the model presented in Equation \[rocglm\] is not directly observable. However, note that the covariate-specific ROC curve can be re-expressed as $$\begin{aligned} \text{ROC}(p\mid\mathbf{x})&=1-F_{D}\{F_{\bar{D}}^{-1}(1-p\mid\mathbf{x})\mid\mathbf{x}\} \nonumber\\ &=1-\Pr\{Y_D\leq F_{\bar{D}}^{-1}(1-p\mid\mathbf{x})\mid\mathbf{X}_D=\mathbf{x}\}\nonumber\\ &=\Pr\{1-F_{\bar{D}}(Y_D\mid\mathbf{x})<p\mid\mathbf{X}_D=\mathbf{x}\} \nonumber\\ &=E[I(1-F_{\bar{D}}(Y_D\mid\mathbf{x})<p)\mid\mathbf{X}_D=\mathbf{x}],\label{rocdpv}\end{aligned}$$ and, in particular, as highlighted by Equation \[rocdpv\], it can be interpreted as the conditional expectation of the binary variable $I(1-F_{\bar{D}}(Y_D\mid\mathbf{x})<p)$ and, therefore, the ROC regression model in Equation \[rocglm\] can be viewed as a regression model for $I(1-F_{\bar{D}}(Y_D\mid\mathbf{x})<p)$. Note that $1-F_{\bar{D}}(Y_D\mid\mathbf{X}_{D}=\mathbf{x})$ is nothing more than the conditional diseased placement value, that is, a covariate-specific version of the $U_D$ variable introduced in Section \[acc\_measures\]. Different estimation proposals, which differ in the assumptions made about $g$, $\mu$, and $h_0$, have been suggested in the literature. 
In [@Pepe00] and [@Alonzo02], $g$ is assumed to be known (e.g., $g(\cdot)=\Phi(\cdot)$), the effect of the covariates on the conditional ROC curve is assumed to be linear, i.e., $\mu(\mathbf{x})=\mathbf{x}^{\prime}\boldsymbol{\beta}$, and the baseline function $h_0$ is assumed to have a parametric form given by $h_{0}(p)=\sum_{k=1}^{K}\alpha_k h_k(p)$, where $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_K)^{\prime}$ is a vector of unknown parameters and $h(p)=(h_1(p),\ldots,h_K(p))$ are known functions. Note that the binormal model for the (unconditional) ROC curve arises when no covariates are considered and for $g(\cdot)=\Phi(\cdot)$, $h_1(p)=1$, and $h_2(p)=\Phi^{-1}(p)$. [@Cai02] and [@Cai04glm] studied a more flexible model by leaving $h_0$ completely unspecified, but the function $\mu$ is still modelled in a linear way and $g$ is also considered to be known. In general, models like those in Equation \[rocglm\] with parametric specifications for $\mu$ define the so-called class of ROC-GLMs, due to their similarities with generalised linear models [@Pepe00]. In contrast to the previously cited works, [@Lin12] developed a semiparametric model where both the link and baseline functions are completely unknown and $\mu$ is assumed to have a parametric form. Finally, [@Rodriguez11new] assumes that $g$ is known but an additive smooth structure is assumed for $\mu(\mathbf{x})$, i.e., $\mu(\mathbf{x})=\beta+\sum_{k=1}^{q}f_k(x_k)$, where $f_1,\ldots,f_q$ are unknown nonparametric functions and $h_0$ also remains unspecified. Regardless of whether the specification in Equation \[rocglm\] involves a generalised linear or additive model structure, the estimation process is similar and can be described by the following steps. First, one must choose a set of FPFs, say $0\leq p_l\leq 1$ for $l=1,\ldots,n_P$, at which the covariate-specific ROC curve will be evaluated.
Second, an estimate of $F_{\bar{D}}(\cdot\mid\mathbf{x})$, say $\widehat{F}_{\bar{D}}(\cdot\mid\mathbf{x})$, must be obtained on the basis of the sample $\{(\mathbf{x}_{\bar{D}i},y_{\bar{D}i})\}_{i=1}^{n_{\bar{D}}}$. Third, one should calculate the estimated placement value for each diseased observation, $1-\widehat{F}_{\bar{D}}(y_{Dj}\mid\mathbf{x}_{Dj})$, for $j=1,\ldots,n_D$. The fourth step involves the calculation of the binary indicators $I(1- \widehat{F}_{\bar{D}}(y_{Dj}\mid\mathbf{x}_{Dj})\leq p_l)$, for $j=1,\ldots,n_D$ and $l=1,\ldots,n_P$. Fifth and lastly, the model $g(\mu(\mathbf{x})+h_0(p))$ is fitted as a regression model for binary data with the indicators $I(1-\widehat{F}_{\bar{D}}(y_{Dj}\mid\mathbf{x}_{Dj})\leq p_l)$ as the response variable and covariates $\mathbf{x}_{Dj}$ and $h(p_l)$ (when $h_0$ is modelled parametrically) or $p_l$ (when $h_0$ is left unspecified), for $j=1,\ldots,n_D$ and $l=1,\ldots,n_P$. We note that the above algorithm does not apply to the estimation of the proposals described in [@Cai02], [@Cai04glm], and [@Lin12]. For conciseness we do not present here the details of their approaches, but refer the readers to the respective articles. Regarding the estimation of the conditional AUC within the direct methodology, the obvious way is to simply plug in an estimate for the conditional ROC curve in Equation \[covauc\] and approximate the integral using numerical integration methods. However, this approach might not be the most efficient one, and several methods to *directly* estimate $\text{AUC}(\mathbf{x})$ have been proposed in the literature. We mention here the articles by [@Dodd03a; @Dodd03b] and [@Cai08], where semiparametric regression models for the conditional (partial) AUC are proposed. For the Youden index (and associated threshold value), to the best of our knowledge, no *direct* estimators have been proposed.
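The five steps above can be sketched for the simplest ROC-GLM: probit link, parametric baseline with $h_1(p)=1$ and $h_2(p)=\Phi^{-1}(p)$, and, for brevity, no covariates, in which case the model reduces to the binormal curve. The empirical CDF stands in for $\widehat{F}_{\bar{D}}(\cdot\mid\mathbf{x})$ and the probit fit uses a plain Newton–Raphson (IRLS) iteration; both are simplifying choices made for this sketch.

```python
import numpy as np
from scipy.stats import norm

def roc_glm(yD, yH, p_grid):
    """Fit ROC(p) = Phi(alpha1 + alpha2 * Phi^{-1}(p)) by the 5-step recipe."""
    # steps 2-3: empirical CDF of the nondiseased sample -> placement values
    pv = 1 - np.searchsorted(np.sort(yH), yD, side="right") / len(yH)
    # step 4: binary indicators over the chosen FPF grid (step 1)
    u = (pv[:, None] <= p_grid[None, :]).astype(float).ravel()
    X = np.column_stack([np.ones(u.size),
                         norm.ppf(np.tile(p_grid, len(pv)))])
    # step 5: probit regression for binary data via Newton-Raphson (IRLS)
    beta = np.zeros(2)
    for _ in range(25):
        eta = X @ beta
        mu = norm.cdf(eta).clip(1e-10, 1 - 1e-10)
        g = norm.pdf(eta).clip(1e-10)
        w = g ** 2 / (mu * (1 - mu))
        z = eta + (u - mu) / g
        beta = np.linalg.solve((X * w[:, None]).T @ X,
                               (X * w[:, None]).T @ z)
    return beta  # (alpha1, alpha2): intercept and slope of the binormal ROC
```

With normally distributed outcomes, the fitted intercept and slope should recover $a=(\mu_D-\mu_{\bar{D}})/\sigma_D$ and $b=\sigma_{\bar{D}}/\sigma_D$ up to sampling error.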
Estimation, in this case, requires making use of Equation \[covyiroc\], with $\text{ROC}(p\mid\mathbf{x})$ being replaced by its estimate. Note that, once we obtain the (conditional) FPF at which the maximum of \[covyiroc\] is attained, an estimate of the associated conditional threshold value can be obtained using the estimator of $F_{\bar{D}}(\cdot\mid\mathbf{x})$ needed in the second step of the algorithm described above. Covariate-adjusted ROC curve ---------------------------- The covariate-specific ROC curve and the associated AUC and YI assess the accuracy of the test for specific covariate values. It would, however, be useful to have a global summary measure that also takes covariate information into account. The covariate-adjusted ROC (AROC) curve proposed by [@Janes09] is exactly one such measure. It is defined as $$\text{AROC}(p)=\int \text{ROC}(p\mid\mathbf{x})\text{d}H_{D}(\mathbf{x}),$$ where $H_{D}(\mathbf{x}) = \Pr(\mathbf{X}_{D}\leq \mathbf{x})$ is the distribution function of $\mathbf{X}_{D}$. That is, the AROC curve is a weighted average of covariate-specific ROC curves, weighted according to the distribution of the covariates in the diseased group. As shown by the authors, the AROC curve can also be expressed as $$\begin{aligned} \text{AROC}(p) & = \Pr\{Y_{D}>F_{\bar{D}}^{-1}(1-p\mid \mathbf{X}_{D})\} =\Pr\{1-F_{\bar{D}}(Y_D\mid\mathbf{X}_{D})\leq p\}, \end{aligned}$$ emphasising that the AROC curve at a FPF of $p$ is the overall TPF when the thresholds used for defining a positive test result are covariate-specific and chosen to ensure that the FPF is $p$ in each subpopulation defined by the covariate values. We refer to [@Janes09], [@Rodriguez11], and [@Inacio18] for the different estimation methods available for the AROC curve. A natural question to ask is when to use the covariate-specific ROC curve and when to use the covariate-adjusted ROC curve.
Very briefly, and without going into details, when the accuracy of the test does change with the covariates (i.e., when the separation between the distributions of test outcomes changes for different covariate levels), the covariate-specific ROC curve should be the primary tool to be used. On the other hand, if the distributions of the test outcomes change with covariates but the accuracy of the test does not (i.e., if the overlap between the distributions of test outcomes remains the same for different covariate levels), then the covariate-adjusted ROC curve, which in this case corresponds to the common covariate-specific ROC curve, should be reported instead. For a lengthy discussion of this point, see @Pepe03 [Chapter 6], [@Janes08a], [@Janes08b], and [@Inacio18]. Also, for a recent overview article focusing exclusively on ROC curves and covariates, we refer to [@Pardo2014]. Illustration ------------ We revisit our example dataset; the aim now is to assess the effect of age and gender on the ability of HOMA-IR levels to predict cardiometabolic risk. In Figure \[covariateresults\] (top) we present several ROC curves, obtained using the induced Bayesian nonparametric approach of [@Inacio13], associated with different ages, for both men and women. While there is no substantial variation in the shape of the ROC curves in men, there are considerable differences for women (as was already expected given the conditional densities in Figure \[conditionaldensities\]). To gain deeper insight, in Figure \[covariateresults\] (bottom) we depict the covariate-specific AUC for ages between 27 and 83 years old, which roughly corresponds to the age interval where the two populations, for both men and women, had observations. Results are also shown for the kernel-based approach of [@Rodriguez11], with the analyses in men and women conducted separately. It is also important to mention that for the approach of [@Inacio13] an interaction between age and gender was included.
As foreseen, the age-specific AUC shows essentially no dynamic in men. On the other hand, for women, the results suggest a decrease in the accuracy of HOMA-IR levels as age increases. Additionally, Figure 3 in the Supplementary Materials shows the age/gender-specific Youden index and the associated age/gender-specific HOMA-IR optimal thresholds. ![Estimated age/gender-specific ROC curves using the approach of [@Inacio13] (top). Estimated age/gender-specific AUC (bottom). The continuous lines correspond to point estimates and the shaded regions correspond to the $95\%$ pointwise credible/confidence bands. Here BNP stands for the Bayesian nonparametric method of [@Inacio13] and Kernel for the approach of [@Rodriguez11].[]{data-label="covariateresults"}](././figures/auc_rocs_logHOMA_new.pdf){width="14.5cm"} ROC CURVES AND TIME (AND COVARIATES) {#timeROC} ==================================== Up to now we have been concerned with diagnosis. Yet, depending on the clinical circumstances, the aim and interest might involve prognosis rather than diagnosis. The main difference between the diagnostic and prognostic settings is that the latter involves a time dimension. More specifically, in a prognostic setting the test outcome is measured at a given time (usually at baseline) and disease onset may occur at any time thereafter. As such, in prognosis, the true positive and negative fractions, and by consequence the ROC curve, are time-dependent and may be calculated at different times. Here we only attempt to cover the main concepts, pointing the reader to the appropriate references for the estimation approaches. With regard to notation, as before, let $Y$ be a continuous random variable denoting the test outcome and, additionally, let $T$, also a continuous random variable, denote the time to disease onset. Further, let $D(t)$ be the disease status at time $t$, with $D(t) = 1$ indicating that disease onset occurred prior to time $t$, and $D(t) = 0$ otherwise.
[@Heagerty05] proposed three definitions of the time-dependent true positive and negative fractions (which give rise to different definitions of the time-dependent ROC curve), namely, the *cumulative* TPF and *dynamic* TNF, the *incident* TPF and *dynamic* TNF, and the *incident* TPF and *static* TNF. These definitions differ mainly in how disease and nondisease status are defined. We focus on the *cumulative/dynamic* definition, where a diseased subject is any subject diagnosed between baseline (assumed to be time $t=0$) and time $t$, and a nondiseased subject is any individual free of disease at time $t$. From a practical point of view it has been argued [@Blanche13; @Rodriguez16] that this is the most relevant definition, as clinicians often want to predict disease onset within a window of time rather than at a specific time (as in the *incident* TPF), and the goal is also to distinguish nondiseased subjects at the end of such a time window and not at a later pre-specified time (as implied by the *static* TNF). For a threshold $c$ and a given time $t$, [@Heagerty00] defined the cumulative true positive fraction $\text{TPF}(c,t)$ and the dynamic true negative fraction $\text{TNF}(c,t)$ by $$\begin{aligned} \text{TPF}(c,t)&=\Pr(Y\geq c\mid D(t)=1)=\Pr(Y\geq c\mid T\leq t),\\ \text{TNF}(c,t)&=\Pr(Y< c\mid D(t)=0)=\Pr(Y< c\mid T >t).\end{aligned}$$ In words, the cumulative TPF is the probability that a subject has a test outcome equal to or greater than $c$ among those individuals who developed the disease by time $t$, whereas the dynamic TNF is the probability that an individual has a test result less than $c$ among those who are disease free beyond that same time $t$. Under this definition, the sets of diseased and nondiseased subjects change over time, and each individual might be in the nondiseased group at an earlier time and in the diseased group at later times.
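In the absence of censoring, the cumulative/dynamic definitions reduce to simple empirical proportions over the time-varying diseased and nondiseased sets. The sketch below (hypothetical helper names, fully observed onset times) also computes the concordance form of the time-dependent AUC, $\Pr(Y_l>Y_m\mid T_l\leq t, T_m>t)$.

```python
import numpy as np

def tpf_tnf(y, T, c, t):
    """Empirical cumulative TPF(c,t) and dynamic TNF(c,t), no censoring."""
    d = T <= t                      # disease onset by time t
    return (y[d] >= c).mean(), (y[~d] < c).mean()

def auc_t(y, T, t):
    """Empirical AUC(t) = Pr(Y_l > Y_m | T_l <= t, T_m > t)."""
    d = T <= t
    # proportion of concordant (diseased, nondiseased) pairs at time t
    return (y[d][:, None] > y[~d][None, :]).mean()
```

With censored follow-up these naive proportions are biased, which is precisely what the estimators discussed next are designed to correct.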
The corresponding time-dependent ROC curve is defined as the plot of $\text{FPF}(c,t)$ versus $\text{TPF}(c,t)$ for all values of $c$, that is, $\{(\text{FPF}(c,t), \text{TPF}(c,t)): c\in\mathbb{R}\}$. In analogy to Equation \[rocdef\], the time-dependent ROC curve can also be written as $$\text{ROC}(p, t)=\text{TPF}\{\text{FPF}^{-1}(p,t),t\},\qquad 0\leq p \leq 1,$$ where $\text{FPF}^{-1}(p,t)=\inf\{c\in\mathbb{R}: \text{FPF}(c,t)\leq p\}$. The AUC has been the preferred summary measure in the time-dependent context $$\text{AUC}(t) = \int_{0}^{1}\text{ROC}(p, t)\text{d}p,$$ and it is worth noting that it also admits a probabilistic interpretation $$\text{AUC}(t) = \Pr(Y_l>Y_m\mid D_l(t)=1, D_m(t)=0)=\Pr(Y_l>Y_m\mid T_l\leq t, T_m>t),$$ where $l$ and $m$ denote the indices of two randomly chosen subjects. When it comes to estimating time-dependent ROC curves, one of the challenges is the (potential) presence of censoring. In practice, some subjects may be lost during the follow-up period, thus introducing right-censoring and making it impossible to know whether disease onset happened before time $t$ for such subjects. Ignoring censoring might lead to biased estimates of the true positive and negative fractions. To address this issue, several approaches have been proposed to estimate the cumulative TPF, the dynamic TNF, and the corresponding ROC curve. The first proposal is due to [@Heagerty00], who developed estimators based on Bayes’ theorem and the Kaplan–Meier estimator of the survival function. The fact that this approach does not necessarily yield monotone true positive and negative fractions led the authors to propose an alternative approach based on a nearest neighbour estimator of the bivariate distribution of the test result and the time to disease onset.
Later, [@Chambless06] proposed two alternative estimation methods for the TPF and TNF, one that deals with censoring by conditioning on the observed disease onset times, as in the Kaplan–Meier estimator, and another that makes use of a Cox model. In turn, [@Uno07] and [@Hung10] both developed inverse probability of censoring weighting methods, while [@Martinez18] proposed an approach based on a bivariate kernel density estimator. From a Bayesian perspective, [@Zhao16] proposed a semiparametric approach that uses a single-weights dependent Dirichlet process mixture for modelling the conditional distribution of the time to disease onset given the test outcome. For recent overview articles on this topic see [@Blanche13] and [@Kamarudin17], where the latter also surveys estimators proposed under the dynamic and incident definitions. To conclude, we highlight that covariates, whenever available, should also be incorporated into the time-dependent true positive and negative fractions. The covariate-specific time-dependent TPF and TNF, for a covariate value $\mathbf{x}$, are given by $$\text{TPF}(c,t\mid\mathbf{x})=\Pr(Y\geq c\mid T\leq t, \mathbf{x}),\qquad \text{TNF}(c,t\mid\mathbf{x})=\Pr(Y< c\mid T> t, \mathbf{x}),$$ with the covariate-specific time-dependent ROC curve and AUC following in a similar fashion. The literature on the estimation of the covariate-specific time-dependent TPF and TNF is, by comparison, relatively scarce. Important references are [@Song08] and [@Rodriguez16]. SOFTWARE ======== We start by making the disclaimer that we are by no means providing an exhaustive review and that our focus is the `R` software. The package `pROC` (<https://CRAN.R-project.org/package=pROC>) provides a set of tools to visualise, smooth, and compare ROC curves, but covariate information cannot be explicitly taken into account.
Packages `ROCRegression` and `npROCRegression` offer functions to estimate, semiparametrically and nonparametrically, under a frequentist framework and using both induced and direct methodologies, the covariate-specific ROC curve. In particular, `ROCRegression` (<https://bitbucket.org/mxrodriguez/rocregression>) implements the approaches of [@Faraggi03], [@Pepe98], [@Alonzo02], and [@Cai04glm], while `npROCRegression` (<https://CRAN.R-project.org/package=npROCRegression>) implements the approaches of [@Rodriguez11] and [@Rodriguez11new]. To the best of our knowledge, `ROCnReg` (<https://CRAN.R-project.org/package=ROCnReg>) is the only `R` package that allows conducting Bayesian inference for the estimation of the ROC curve and related indices (including optimal thresholds). In particular, `ROCnReg` implements all four nonparametric approaches for ROC curve estimation described in Section \[rocest\] and all induced approaches reviewed in Section \[covroc\] for the estimation of the covariate-specific ROC curve. `ROCnReg` also offers routines for conducting inference about the covariate-adjusted ROC curve. All data analyses conducted in this article were performed using `ROCnReg` [for more details about the package see @MX20]. Also, the package `OptimalCutpoints` (<https://CRAN.R-project.org/package=OptimalCutpoints>) provides a collection of routines for point and interval estimation of optimal thresholds. Regarding estimation of the time-dependent ROC curve, the packages `survivalROC` (<https://CRAN.R-project.org/package=survivalROC>), `timeROC` (<https://CRAN.R-project.org/package=timeROC>), and `CondTimeROC` (<https://bitbucket.org/mxrodriguez/condtimeroc>) implement some of the approaches mentioned in Section \[timeROC\]. DISCUSSION AND FURTHER TOPICS {#discussion} ============================= In this article we have reviewed, from a high-level perspective, some of the main aspects related to the statistical evaluation of medical tests.
We have deliberately chosen to place special focus on the estimation of ROC curves, with and without covariates, with the case of time-dependent ROC curves also being covered. In such a broad area, many interesting topics necessarily had to be left untouched, and we briefly mention some of them below. The available methodology for the study of the predictive values of continuous tests is far less extensive than the corresponding methodology for ROC curves. We mention the predictive receiver operating characteristic (pROC) curve proposed by [@Shiu08] for the joint assessment of the positive and negative predictive values. In an analogous way to the definition of the ROC curve, the authors defined the pROC curve as $\{(1-\text{NPV}(c), \text{PPV}(c)): c\in\mathbb{R}\}$. One possibility for its estimation is to make use of Equations \[ppv\] and \[npv\] (with the due adaptation that in the continuous case all quantities are functions of the threshold) and then plug these estimates into the definition of the pROC curve; that is, in order to estimate the pROC curve we only need to estimate the corresponding TPF and FPF and the prevalence of the disease. A covariate-specific pROC curve can be defined and estimated in a similar fashion. Although we have assumed that disease status is binary (disease versus nondisease), in clinical practice physicians often face situations that require decisions among three (or even more) diagnostic alternatives. This is especially true for neurological disorders, where cognitive function usually declines from normal function to mild impairment, to severe impairment or dementia. ROC surfaces (and the volume under the surface and the generalised Youden index) have been proposed in the literature as an extension of ROC curve methodology to the three-class case [@Nakas04; @Nakas10]. Parametric, semiparametric, and nonparametric estimators do exist and we refer to [@Nakas14] for a recent overview.
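Since the pROC curve mentioned above is built solely from the TPF, the FPF, and the disease prevalence via Bayes' theorem, a point on it is obtained with elementary arithmetic. The helper name below is hypothetical and the numeric values in the usage note are illustrative.

```python
def proc_point(tpf, fpf, prev):
    """Map (TPF, FPF, prevalence) at a threshold c to the pROC point
    (1 - NPV(c), PPV(c)) via Bayes' theorem."""
    ppv = prev * tpf / (prev * tpf + (1 - prev) * fpf)
    npv = (1 - prev) * (1 - fpf) / ((1 - prev) * (1 - fpf) + prev * (1 - tpf))
    return 1 - npv, ppv
```

For example, with prevalence $0.5$, TPF $0.8$, and FPF $0.2$, both PPV and NPV equal $0.8$, so the pROC point is $(0.2, 0.8)$.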
ROC surface regression, in contrast to its two-class counterpart, has received little attention, with [@Li12] being, to the best of our knowledge, the only contribution. The existence of a gold standard test was assumed throughout this article, but this might not be the case for some diseases; for instance, a definitive diagnosis of Alzheimer’s disease can only be made through autopsy after death. Approaches for estimating the ROC curve and the covariate-specific ROC curve in the absence of a gold standard test have been proposed, among others, by [@Branscum08] and [@Branscum15]. Lastly, in our motivating example, the HOMA-IR level, our diagnostic test/marker, was known and given. However, sometimes researchers do have access to multiple tests or biomarkers on individuals, and interest in such cases might lie in how to best combine and transform this information into a univariate score, to further use it to diagnose individuals. The topic of optimal combination of biomarkers using ROC analysis has received considerable attention in the literature (see, among many others, @Su93 [@Pepe06; @Liu11]). Recently, methods that deal with optimal biomarker combination but with covariate adjustment have also been proposed [e.g. @Liu13; @Kim17]. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== The work of V Inácio was partially supported by FCT (Fundação para a Ciência e a Tecnologia, Portugal), through the projects PTDC/MAT-STA/28649/2017 and UIDB/00006/2020. MX Rodríguez-Álvarez was funded by project MTM2017-82379-R (AEI/FEDER, UE), by the Basque Government through the BERC 2018-2021 program, and by the Spanish Ministry of Science, Innovation, and Universities (BCAM Severo Ochoa accreditation SEV-2017-0718). SUPPLEMENTARY MATERIALS {#supplementary-materials .unnumbered} ======================= Here we provide supplementary figures and tables to the main document.
![Histograms of the $\log$ HOMA-IR levels along with the estimated densities produced by a Dirichlet process mixture of normals model (solid pink line, with the dashed pink lines representing the pointwise 95% credible band) and by a kernel method (normal kernel and bandwidth selected by Silverman’s rule of thumb) (solid blue line).](././figures/densities_pROC_dpm_logHOMA.pdf){width="13.5cm"}

![Estimated ROC curve. Here BB stands for the Bayesian bootstrap method (Gu et al. 2008) and DPM for the Dirichlet process mixture of normals model (Erkanli et al. 2006).](././figures/pROC_all_woci_logHOMA.pdf){width="16cm"}

\begin{tabular}{ccccc}
\hline
 & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\
 & Youden index & $\log$ HOMA-IR optimal threshold & Youden index & $\log$ HOMA-IR optimal threshold \\
\hline
Empirical & $0.325$ & $0.742$ & $0.292$ & $0.718$ \\
Kernel & $0.283$ & $0.804$ & $0.265$ & $0.728$ \\
DPM & $0.277$ $(0.197,0.356)$ & $0.779$ $(0.660,0.894)$ & $0.282$ $(0.210,0.353)$ & $0.666$ $(0.557,0.779)$ \\
BB & $0.338$ $(0.249,0.427)$ & $0.781$ $(0.695,0.913)$ & $0.315$ $(0.237,0.390)$ & $0.757$ $(0.400,0.962)$ \\
\hline
\end{tabular}

![Estimated age/gender-specific Youden index and associated $\log$ HOMA-IR optimal thresholds. The continuous lines correspond to point estimates and the shaded region corresponds to the $95\%$ pointwise credible band. Here BNP stands for the Bayesian nonparametric method of Inácio de Carvalho et al. (2013) and Kernel for the approach of Rodríguez-Álvarez et al. (2011). ](././figures/YI_comparisons_kernel_bnp_inter_logHOMA.pdf "fig:"){width="13.5cm"} ![Estimated age/gender-specific Youden index and associated $\log$ HOMA-IR optimal thresholds. The continuous lines correspond to point estimates and the shaded region corresponds to the $95\%$ pointwise credible band. Here BNP stands for the Bayesian nonparametric method of Inácio de Carvalho et al. (2013) and Kernel for the approach of Rodríguez-Álvarez et al. (2011). ](././figures/TH_comparisons_kernel_bnp_inter_logHOMA.pdf "fig:"){width="13.5cm"}\
--- abstract: 'Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial ‘exploration’ even in cases where potentially similar prior tasks have been solved. We propose to transfer information across tasks using kernel embeddings of distributions of training datasets used in those tasks. The resulting method has faster convergence compared to existing baselines, in some cases requiring only a few evaluations of the target objective.' author: - | Ho Chung Leon Law\ University of Oxford\ `ho.law@stats.ox.ac.uk`\ Peilin Zhao\ Tencent AI Lab\ `masonzhao@tencent.com`\ Junzhou Huang\ Tencent AI Lab\ `joehhuang@tencent.com`\ Dino Sejdinovic\ University of Oxford\ `dino.sejdinovic@stats.ox.ac.uk`\ bibliography: - 'references.bib' title: Hyperparameter Learning via Distributional Transfer --- Introduction ============ Hyperparameter selection is an essential part of training a machine learning model, and a judicious choice of values for hyperparameters such as the learning rate, regularisation, or kernel parameters is often what makes the difference between an effective and a useless model. To tackle this challenge in a more principled way, the machine learning community has been increasingly focusing on Bayesian optimisation (BO) [@snoek2012practical], a sequential strategy to select hyperparameters $\theta$ based on past evaluations of model performance. Using a Gaussian process (GP) [@rasmussen2004gaussian] prior to build a representation of the underlying accuracy $f$ as a function of the hyperparameters, and using an acquisition function $\alpha(\theta;f)$ to trade off exploration and exploitation given the posterior of $f$, has been shown to give superior performance compared to traditional methods [@snoek2012practical] such as grid search or random search. However, even in this setup, BO still suffers from the so-called ‘cold start’ problem [@poloczek2016warm; @swersky2013multi].
Namely, in order to begin fitting a GP model, one needs initial evaluations of $f$ at different hyperparameters. Hence, prior research considered transferring knowledge from previously ‘solved’ tasks, as in [@swersky2013multi; @feurer2018scalable; @springenberg2016bayesian; @poloczek2016warm]. However, to assess the similarity across tasks, initial random evaluations of the model at hand are often required. This might be prohibitive: evaluations of $f$ can be computationally costly and our goal may be to select hyperparameters and deploy our model as soon as possible. We note that treating $f$ as a black-box function, as is often the case in BO, ignores the highly structured nature of hyperparameter learning – it corresponds to training specific models on specific datasets. We take steps towards utilising such structure in order to borrow strength across different tasks and datasets.\ \ **Contribution** We assume a scenario where a number of tasks have already been ‘solved’ and propose a new BO algorithm, making use of the mean embeddings of the joint distribution of the training data [@muandet2017kernel; @blanchard2017domain]. In particular, we propose a GP model that can model all tasks jointly, by considering an extended domain of inputs to model the accuracy $f$: the joint distribution of the training data $\mathcal{P}_{XY}$, the sample size of the training data $s$, and the hyperparameters $\theta$. By utilising all seen evaluations from all tasks and meta-information, we can optimise the marginal likelihood to learn a meaningful similarity between tasks. Experimentally, our methodology performs favourably already at initialisation and has faster convergence compared to existing baselines – in some cases, the optimal accuracy is achieved in just a few evaluations.
Related work ============ The idea of transferring information from different tasks in the context of BO is not new, and it has mainly been studied in the setting of multi-task BO [@swersky2013multi; @feurer2018scalable; @springenberg2016bayesian; @poloczek2016warm]. In multi-task BO, there is a set of tasks that we wish to solve jointly, or we wish to solve a target task given some ‘solved’ source tasks. We follow the same setup, but take a different approach. Currently, correlation across tasks is captured only through evaluations of $f$. In terms of hyperparameter search for a machine learning model, however, this ignores additional information that is available: the datasets used in training. Further, since task similarity is captured through evaluations of $f$ only, sufficient evaluations from the target task must be observed first in order to learn these task correlations. This is unnecessary in our proposed methodology, which can yield good initial hyperparameter candidates without having seen any evaluations from the target task, since we draw information from the meta-features corresponding to the training data distribution. The use of such meta-information has in fact been explored before, but the current literature either uses hand-crafted features [@feurer2015initializing] or defines them in an unsupervised way [@kim2017learning]. These strategies are not optimal: tasks with very different meta-information can in fact have highly correlated $f$s. In this case, using such meta-information to define similarity can have an adverse effect on exploration – this highlights the importance of using evaluations of $f$ to learn the similarity between two tasks, as in multi-task BO.
Furthermore, having obtained a similarity across tasks, the current literature suggests initialising with the best $\theta$ from the solved tasks, but again this is not optimal, as we are neglecting non-optimal $\theta$ that can provide information for the target task. Our methodology can be seen as a combination of these two frameworks, as we use *learnt* embeddings of the joint distribution of the training data, while implicitly capturing correlation across tasks. The most similar in spirit to ours is the work of [@klein2016fast], who consider an additional input to be the sample size $s$, but do not consider different tasks corresponding to different training data distributions. Background {#sec:background} ========== Let $f^{target}$ be the target task objective we would like to optimise, i.e. we want to find $\theta^\ast_{target} = \text{argmax}_{\theta \in \Theta} f^{target}(\theta)$. Assume that there are $n$ (potentially) related source tasks $f^i$, $i=1,\dots, n$. For each source task, we assume that we have $\{\theta^i_k, t^i_k\}_{k=1}^{N_i}$ from past runs, where $t^i_k$ denotes a noisy evaluation of $f^i(\theta^i_k)$ and $N_i$ denotes the number of evaluations of $f^i$ from task $i$. Here, we assume that each $f^i$ is the (underlying) accuracy of a trained machine learning model with training data $D_i = \{\mathbf{x}^i_l, y^i_l\}_{l=1}^{s_i}$, where $\mathbf{x}^i_l \in \mathbb{R}^p$ are the covariates, $y^i_l$ are the labels and $s_i$ is the size of the data. In a general framework, $D$ is any input to $f$ apart from $\theta$ – but following a typical supervised learning treatment, we assume it to be an iid sample from the joint distribution $\mathcal{P}_{XY}$. The method could also cover unsupervised settings, as long as $f$ is an appropriate measure of performance. Under this setting, we have $(f^i, D_i = \{\mathbf{x}^i_l, y^i_l\}_{l=1}^{s_i}, \{\theta^i_k, t^i_k\}_{k=1}^{N_i})$ for $i=1,\dots, n$, where $D_i$ denotes the dataset of source task $i$. 
Our strategy now is to measure the similarity between datasets (as a representation of the task itself), in order to find $\theta^\ast_{target}$.\ \ **Assumptions** To compare different datasets, we make the assumption that $\mathbf{x}^i_l \in \mathcal{X}$ and $y^i_l \in \mathcal{Y}$ for all $i, l$, and that the supervised learning model class $M$ is the same throughout. For example, one could imagine a data stream setting, where models have to be constantly retrained. This assumption implies that the source of the differences of $f^i$ across $i$ and $f^{target}$ is in the data $D_i$ and $D_{target}$ *only*, i.e. it is the dataset that affects the location of $\theta^\ast_i$. In particular, we assume $D_i =\{\mathbf{x}^i_l, y^i_l\}_{l=1}^{s_i} \sim \mathcal{P}^i_{XY}$, where $\mathcal{P}^i_{XY}$ is the underlying joint distribution of the data for source task $i$. Further, as sample size is closely related to model complexity choice, which is in turn closely related to hyperparameter choice [@klein2016fast], we will also encode this information. With these assumptions, we consider $f(\theta, \mathcal{P}_{XY}, s)$, where $f$ is a function of the hyperparameters $\theta$, the joint distribution of the underlying data $\mathcal{P}_{XY}$ and the sample size $s$. Here, $f$ could be the negative of the empirical risk, i.e. $f(\theta, \mathcal{P}_{XY}, s) = -\frac{1}{s}\sum_{l=1}^s L(h_{\theta}(\mathbf{x}_l), y_l)$, where $L$ is the loss function and $h_{\theta}$ is the model. In this form, we can also recover $f^i$ and $f^{target}$ from $f^{i}(\theta) = f(\theta, \mathcal{P}^{i}_{XY}, s_{i})$ and $f^{target}(\theta) = f(\theta, \mathcal{P}^{target}_{XY}, s_{target})$. Now, by constructing an appropriate covariance function for these inputs, we can use a GP to model $f$. 
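As a concrete toy illustration of the negative empirical risk form of $f$ above, the following sketch evaluates $f(\theta, \mathcal{P}_{XY}, s) = -\frac{1}{s}\sum_l L(h_\theta(\mathbf{x}_l), y_l)$ for a hypothetical one-parameter linear model and squared loss; the model, data and loss here are our own illustrative choices, not the experimental setup of the paper:

```python
import numpy as np

def f_empirical(theta, X, y, loss):
    """Negative empirical risk -(1/s) * sum_l L(h_theta(x_l), y_l),
    with h_theta(x) = theta * x as a toy one-parameter model."""
    preds = theta * X  # h_theta applied to every covariate
    return -np.mean(loss(preds, y))

# squared loss and a tiny dataset where y = 2x exactly
sq_loss = lambda p, y: (p - y) ** 2
X = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# theta = 2 reproduces the labels, so the negative risk attains its maximum, 0
print(f_empirical(2.0, X, y, sq_loss))
```

Maximising this quantity over $\theta$ (for fixed data) is exactly the hyperparameter search that BO performs.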
Intuitively, similarly to assuming that $f$ varies smoothly as a function of $\theta$ in standard BO, this model also assumes smoothness of $f$ across $\mathcal{P}_{XY}$[^1] as well as across $s$, following [@klein2016fast]. Intuitively, if two distributions and sample sizes are similar (in some distance that we will learn), their corresponding $f$ will also be similar. In this source and target task setup, this means we can utilise information from *all* previous source dataset evaluations $ \{\theta^i_k, t^i_k\}_{k=1}^{N_i}$. Methodology {#sec:method} =========== In order to model the quantities $\theta, \mathcal{P}_{XY}$ and $s$ in a GP, we need a corresponding covariance function $C$ on these quantities. Assuming a separable kernel for the inputs, we have: $$C(\{\theta_1, \mathcal{P}^1_{XY}, s_1\}, \{\theta_2, \mathcal{P}^2_{XY}, s_2\}) = \nu k_\theta(\theta_1, \theta_2) k_p(\psi(D_1),\psi(D_2))k_s(s_1,s_2)$$ where $\nu$ is a constant, $k_\theta$ and $k_p$ are standard Matérn-$3/2$ kernels, as commonly used in BO, and $k_s$ is the sample size kernel found in [@klein2016fast]. These are standard choices in the literature that facilitate fair comparisons, hence we do not investigate them further and instead focus on modelling the dataset representation $\psi(D)$. To specify $\psi(D)$, a feature map on joint distributions estimated through samples $D$, we will follow an approach similar to [@blanchard2017domain], who consider transfer learning, and make use of kernel mean embeddings in order to compute feature maps of distributions (cf. [@muandet2017kernel] for an overview). In particular, we will consider feature maps of covariates and labels separately, denoting them by $\phi_1(\mathbf{x}) \in \mathbb{R}^p$ and $\phi_2(y) \in \mathbb{R}^q$ [^2]. 
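A minimal sketch of the separable covariance $C = \nu\, k_\theta\, k_p\, k_s$ follows. The Matérn-$3/2$ form is the standard textbook definition; the sample-size kernel below is a placeholder RBF on log sample size, an assumption for illustration rather than the specific $k_s$ of [@klein2016fast]:

```python
import numpy as np

def matern32(a, b, lengthscale=1.0):
    """Standard Matérn-3/2 kernel on flat vectors (or scalars)."""
    r = np.linalg.norm(np.atleast_1d(a) - np.atleast_1d(b)) / lengthscale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def cov(theta1, psi1, s1, theta2, psi2, s2, nu=1.0):
    """Separable covariance nu * k_theta * k_p * k_s on
    (hyperparameters, dataset embedding, sample size) triples.
    k_s here is an illustrative RBF on log sample size."""
    k_s = np.exp(-0.5 * (np.log(s1) - np.log(s2)) ** 2)
    return nu * matern32(theta1, theta2) * matern32(psi1, psi2) * k_s
```

With $\nu = 1$ the covariance of a point with itself is $1$, and each factor can only shrink the covariance as the hyperparameters, embeddings or sample sizes move apart.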
Given these two feature maps, to meaningfully embed a joint distribution $\mathcal{P}^i_{XY}$, we use the cross covariance operator $\mathcal{C}^i_{XY}$ (see [@gretton2015notes] for a review), estimated from $D_i$ by: $$\hat{\mathcal{C}}^i_{XY} = \frac{1}{s_i}\sum_{\ell=1}^{s_i} \phi_1(\mathbf{x}^i_\ell) \otimes \phi_2(y^i_\ell),$$ i.e. in the case of finite-dimensional features, the cross covariance operator is just a tensor product of the feature maps $\Phi_1^i(\mathbf{x}) = [\phi_1(\mathbf{x}^i_1), \dots, \phi_1(\mathbf{x}^i_{s_i})] \in \mathbb R^{p \times s_i}$, $\Phi_2^i(y) = [\phi_2(y^i_1), \dots, \phi_2(y^i_{s_i})] \in \mathbb R^{q \times s_i}$, $$\hat{\mathcal{C}}^i_{XY} = \frac{1}{s_i}\Phi_1^i(\mathbf{x}) \Phi_2^i(y)^\top \in \mathbb{R}^{p\times q}.$$ Given $\hat{\mathcal{C}}^i_{XY}$, we can flatten it to obtain $\psi(D_i)\in \mathbb{R}^{pq}$, an estimator of the representation of the joint distribution $\mathcal{P}^i_{XY}$. As $\psi(D_i) \in \mathbb{R}^{pq}$, we can use a kernel $k_p: \mathbb{R}^{pq} \times \mathbb{R}^{pq} \rightarrow \mathbb{R}$ to measure the similarity between, say, $\mathcal{P}^i_{XY}$ and $\mathcal{P}^j_{XY}$. Note that while we have discussed the embedding of the joint distribution $\mathcal{P}_{XY}$, it is straightforward to also embed the marginal or conditional distributions.\ \ An important choice is the form of $\phi_1(\mathbf{x})$ and $\phi_2(y)$, as these define the features of the distribution $\mathcal{P}_{XY}$ we would like to capture. For example, $\phi_1(\mathbf{x}) = \mathbf{x}$ would capture the mean of the marginal distribution $\mathcal{P}_X$. In practice (keeping the sample size $s$ the same), if we know that $\mathcal{P}_{XY}^i \approx \mathcal{P}_{XY}^j$, we would expect that $\theta^\ast_i \approx \theta^\ast_j$; however, the converse does not hold in general and the exact relationship is unknown. 
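The empirical embedding $\psi(D_i) = \mathrm{vec}\big(\frac{1}{s_i}\Phi_1^i \Phi_2^{i\,\top}\big)$ can be computed directly from samples. In this sketch the identity feature maps are an illustrative choice (so $\psi(D)$ reduces to the empirical mean of $x\,y$):

```python
import numpy as np

def embed_joint(X, Y, phi1, phi2):
    """Flattened empirical cross covariance operator:
    psi(D) = vec( (1/s) * Phi1 @ Phi2.T ), Phi1: (p, s), Phi2: (q, s)."""
    Phi1 = np.stack([phi1(x) for x in X], axis=1)  # p x s matrix of covariate features
    Phi2 = np.stack([phi2(y) for y in Y], axis=1)  # q x s matrix of label features
    return (Phi1 @ Phi2.T / X.shape[0]).ravel()    # length p*q vector

# identity feature maps: psi(D) estimates E[x * y] componentwise
X = np.array([[1.0], [3.0]])
Y = np.array([2.0, 4.0])
psi = embed_joint(X, Y, lambda x: x, lambda y: np.array([y]))  # (1*2 + 3*4)/2 = 7
```

Richer feature maps (e.g. the small networks discussed next) simply replace the two lambdas.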
Hence, we opt for a flexible representation of $\phi_1(\mathbf{x})$ and $\phi_2(y)$ using small neural networks, which we optimise as part of the marginal likelihood maximisation. This can be thought of as a learning to learn setup [@andrychowicz2016learning], where a smaller robust model is used to optimise over bigger models. As we now have $f$ on inputs $(\theta, \mathcal{P}_{XY}, s)$, we can fit a GP (with the standard normal noise model) on all observations $\{\{(\theta^i_k, \mathcal{P}^i_{XY}, s_i), t^i_k\}_{k=1}^{N_i}\}_{i=1}^n$ (along with any observations on the target), optimising any unknown parameters through the marginal likelihood. In order to propose the next $\theta^{target}$ to evaluate, we let $f^{target}(\theta)=f(\theta, \mathcal{P}^{target}_{XY}, s_{target})$ and maximise the acquisition function $\alpha(\theta;f^{target})$. Here, we use expected improvement [@movckus1975bayesian]; however, other options are readily applicable. Experiments {#sec:experiments} =========== We term the proposed method distBO and use a two layer neural network (with 20 hidden units, 10 output units and ReLU activations) for $\phi_1(\mathbf{x})$, while using an RBF network with $4$ landmark points for $\phi_2(y)$. In general, results are fairly robust to sensible settings of the neural network (which we *cannot* tune). For baselines, we will consider random search (RS), Bayesian optimisation (noneBO), multi-task Bayesian optimisation (multiBO) [@swersky2013multi] (which learns a similarity between tasks through evaluations of $f$ *only*) and Bayesian optimisation with all source samples (allBO). The last baseline essentially ignores the distribution of the dataset, and assumes all seen samples come from the target task. For noneBO and multiBO, we will initialise with 5 iterations of random search. We repeat each experiment $30$ times and ignore $k_s$, as sample sizes do not differ across tasks here. 
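For reference, the closed-form expected improvement used as the acquisition function can be sketched as follows (maximisation convention; this is the generic textbook formula, not code from the paper):

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    """EI(theta) = (mu - best) * Phi(z) + sigma * phi(z), z = (mu - best)/sigma,
    given the GP posterior mean mu and std sigma at a candidate theta and the
    incumbent (best observed) value."""
    if sigma < 1e-12:  # deterministic prediction: improvement only if mu > best
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    return (mu - best) * norm_cdf(z) + sigma * norm_pdf(z)
```

The next $\theta^{target}$ is the maximiser of this quantity over the hyperparameter domain, with $\mu$ and $\sigma$ read off the joint GP at $(\theta, \mathcal{P}^{target}_{XY}, s_{target})$.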
Throughout, for both source and target tasks, we will run $30$ iterations of each algorithm. To obtain $\{\theta^i_k, t^i_k\}_{k=1}^{30}$ for source task $i$, we use noneBO to simulate a realistic scenario.\ \ **Toy dataset** ![Target toy task with 15 source datasets **Left**: Max accuracy $f$ seen so far **Right**: Similarity with the target task $D_{target}$ for a particular run, measured using $k_p$. The legend represents the various sampled $\mu^i$, here the target $\mu^i =-0.25$.[]{data-label="fig:toy"}](figures/toy.png) To demonstrate our algorithm, we create a family of 1-dimensional datasets from the following generative process: $$\begin{aligned} \mu^i \sim \mathcal{N}(\gamma^i,1) \qquad & X^i_l|\mu^i \sim \mathcal{N}(\mu^i,1) \qquad & Y^i_l | X^i_l \sim \mathcal{N}(X^i_l, 1)\end{aligned}$$ Given $\gamma^i$, we can simulate a $\mu^i$ as a characteristic varying across tasks, before sampling $D_i=\{x^i_l, y^i_l\}_{l=1}^{s_i}$ from $X^i_l, Y^i_l |\mu^i$. We consider a simple form of $f$ given by $$f(\theta; D_i) = \exp\left(-\frac{(\theta - \frac{1}{s}\sum_l x^i_l)^2}{2}\right),$$ where $\theta\in [-8,8]$ plays the role of a hyperparameter that we would like to learn and the ‘labels’ $y^i_l$ are just nuisance variables. The optimal hyperparameter is the sample mean of the $\{x^i_l\}$ and hence it varies together with the underlying mean of the data $\mu^i$. We now perform an experiment with $n=15$ and $s_i=400$, and generate $3$ source tasks with $\gamma^i=0$ and $12$ source tasks with $\gamma^i=4$. We then generate a target dataset with $\gamma^i=0$. The idea is that only $3$ of the source tasks should be helpful for solving our target task, and distBO, which can learn this, should be able to reach the target optimum in only a few shots, as shown in Figure \[fig:toy\]. Here, distBO is able to take advantage of the correct source tasks, allowing it to outperform the other baselines. 
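The toy generative process and objective described above can be reproduced in a few lines; the seed and helper names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative fixed seed

def make_task(gamma, s=400):
    """One toy task: mu^i ~ N(gamma, 1), x_l ~ N(mu^i, 1), y_l ~ N(x_l, 1)."""
    mu = rng.normal(gamma, 1.0)
    x = rng.normal(mu, 1.0, size=s)
    y = rng.normal(x, 1.0)  # labels are nuisance variables here
    return x, y

def f_toy(theta, x):
    """f(theta; D) = exp(-(theta - mean(x))^2 / 2), maximised at the sample mean."""
    return np.exp(-0.5 * (theta - x.mean()) ** 2)

x, y = make_task(gamma=0.0)
print(f_toy(x.mean(), x))  # the optimum value of f is 1 by construction
```

Sampling three tasks with $\gamma^i=0$ and twelve with $\gamma^i=4$ (plus one target with $\gamma^i=0$) reproduces the source/target split used in the experiment.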
In particular, this is evident on the right graph in Figure \[fig:toy\], which shows the similarity measure $k_p(\psi(D_i), \psi(D_{target})) \in [0,1]$ for $i=1,\dots, 15$. The feature representation has correctly learned to put high similarity on the three source datasets from the same process and low similarity on the source datasets that are not from the same process. We also observe that while multi-task BO also achieves the target optimum quickly, it is unable to few-shot the optimum, as it does not make use of meta-information, hence needing initialisations from the target task to even begin learning the similarity across tasks. We also see that allBO, which sets the similarity to $1$ for all tasks, performs poorly.\ \ **Swiss Roll dataset** We now consider the swiss roll dataset[^3], as shown in Figure \[fig:swiss\] in Appendix \[app:swiss\_roll\], with covariates being the spatial location of each point, and labels given by the colour of the point. In order to make this problem more challenging, we place this data manifold in $\mathbb{R}^{10}$ by concatenating $7$ dimensions of $0$ onto the covariates, before applying a randomly selected, then fixed rotation matrix. We now fit RBF kernel ridge regression (with hyperparameters $\alpha$ and $\gamma$), and aim to find the hyperparameters that give the highest accuracy as measured by the coefficient of determination ($R^2$). Throughout, we will take $s_i=400$ for training and $s_i=400$ for testing. In the first experiment, we have $n = 15$ source datasets with increasing noise on the marginal distribution of $\mathbf{x}$, with the target dataset having a noise of $\sigma=1.0$. In Figure \[fig:marginal\], we observe that distBO can achieve the optimum in just several evaluations; this is unsurprising, as distBO has learnt the correct similarity ordering across tasks (with respect to the noise level). 
![Target swiss rolls with $\sigma=1.0$ with 15 source datasets **Left**: Max accuracy $f$ seen so far on test set (coefficient of determination, $R^2 \in [0,1]$) **Right**: Similarity with the target task $D_{target}$ for a particular run, measured using $k_p$. The legend represents source datasets, with various noise levels $\sigma$. Here the target dataset has $\sigma = 1.0$.[]{data-label="fig:marginal"}](figures/marginal.png) **Real dataset** The Parkinson’s disease telemonitoring dataset[^4] consists of voice measurements using a telemonitoring device for $42$ patients with Parkinson’s disease ($200$ recordings each). The label is the clinician’s Parkinson’s disease symptom score for each recording. Following [@blanchard2017domain], we can treat each patient as a separate task. For the model, we employ RBF kernel ridge regression (with hyperparameters $\alpha, \gamma$), with $f$ as the coefficient of determination ($R^2$). While this problem does not gain from BO, as $f$ is not expensive, it allows for a comprehensive benchmark comparison. Here, we take a particular patient to be our target task, and randomly choose $n=30$ other patients as source tasks (different across each repetition). We show the results in Figure \[fig:pat\] for three patients (as targets) to highlight the behaviour. In fact, across all patients being the target, distBO outperforms (or is equivalent to) all baselines in terms of faster convergence to the optimum. Here, it is clear that by encoding the distributional meta-information, we can learn useful similarities[^5] between tasks. ![Max accuracy $f$ seen so far on test set (coefficient of determination, $R^2 \in [0,1]$) on three different target patient tasks with 30 source datasets.[]{data-label="fig:pat"}](figures/pat_acc.png) Conclusion ========== We demonstrated that it is possible to borrow strength between multiple hyperparameter learning tasks by making use of the similarity between the training datasets used in those tasks. 
This helped us to develop a method which finds a favourable setting of hyperparameters in only a few evaluations of the target objective. We argue that the model performance should not be treated as a black box function, as it corresponds to specific known models and specific known datasets, and that its careful consideration as a function of all its inputs, and not just of its hyperparameters, can lead to useful algorithms. Swiss Roll dataset {#app:swiss\_roll} ================== ![The swiss roll dataset, with no noise, i.e. $\sigma=0$. The label of each point is given by its corresponding colour.[]{data-label="fig:swiss"}](figures/swiss_roll.png) Verifying Smoothness Assumption {#app:assumption} =============================== We need to verify that, as the joint distribution of the data changes, the function $f$ also changes smoothly. To provide some empirical evidence for this, we perform a grid search (using the same swiss roll setup as in Section \[sec:experiments\]) to observe the behaviour of $f$ for increasing noise (changes in $P_{X}$) and increasing conditional distributional change (changes in $P_{Y|X}$, obtained by raising the label to a power P, before normalising to $[0,1]$ to obtain the new label $y^{new}_i$), i.e. $$\label{eqn:stand} y^{new}_i = \frac{(y_i)^\text{P} - \text{min}_i \ (y_i)^\text{P}}{ \text{max}_i \ (y_i)^\text{P} - \text{min}_i \ (y_i)^\text{P}}$$ The results are shown in Figures \[fig:noise\] and \[fig:conditional\], and indeed we observe that $f$ changes smoothly as the data distribution changes. ![Swiss roll $R^2$ $\in [0,1]$ on the test set for a grid of $\alpha, \gamma$ parameters for the kernel ridge regression. Here N denotes the additional noise $\sigma$ on the covariates.[]{data-label="fig:noise"}](noise_transfer.png){width="0.95\linewidth"} ![Swiss roll $R^2$ $\in [0,1]$ on the test set for a grid of $\alpha, \gamma$ parameters for the kernel ridge regression. 
Here P denotes the additional power on the label (which we then normalise back to be in \[0,1\])[]{data-label="fig:conditional"}](conditional_transfer.png){width="0.95\linewidth"} Parkinson’s disease dataset {#app:sim} =========================== ![Similarities with the three different target patient for a particular repetition. Different colour represents a different patient source task.[]{data-label="fig:sim"}](figures/pat_sim.png) [^1]: We empirically verify validity of this assumption on simulated datasets in Appendix \[app:assumption\]. [^2]: $p$ and $q$ can be potentially infinite, but we consider finite explicit feature maps for simplicity. [^3]: The swiss roll manifold function (for sampling) can be found on the Python scikit-learn package, and there exists a variable $\sigma$ for adding different noise levels. [^4]: http://archive.ics.uci.edu/ml/datasets/Parkinsons+Telemonitoring [^5]: Shown in Figure \[fig:sim\] in Appendix \[app:sim\].
--- abstract: 'We construct a simple class of exact solutions of the electroweak theory including the naked $Z$–string and fermion fields. It consists of the $Z$–string configuration ($\phi,Z_\theta$), the [*time*]{} and $z$ components of the neutral gauge bosons ($Z_{0,3},A_{0,3}$) and a fermion condensate (lepton or quark) zero mode. The $Z$–string is not altered (there is no feedback from the rest of the fields on the $Z$–string) while the fermion condensates are zero modes of the Dirac equation in the presence of the $Z$–string background (there is no feedback from the [*time*]{} and $z$ components of the neutral gauge bosons on the fermion fields). For the case of the $n$–vortex $Z$–string the number of zero modes found for charged leptons and quarks is (according to previous results by Jackiw and Rossi) equal to $|n|$, while for (massless) neutrinos it is $|n|-1$. The presence of fermion fields in its core makes the obtained configuration a superconducting string, but their presence (as well as that of $Z_{0,3},A_{0,3}$) does not enhance the stability of the $Z$–string.' author: - | \ [**J.M. Moreno**]{}, [**D. H. Oaknin**]{}\ \ Instituto de Estructura de la Materia, CSIC\ Serrano 123, 28006-Madrid, Spain\ and\ \ [**M. Quirós**]{}[^1]\ \ Theory Division, CERN\ CH-1211 Geneva 23, Switzerland title: '[**Fermions on the electroweak string**]{}[^2]' --- [**1.**]{} It is well known that cosmic strings originate in spontaneously broken gauge theories when the vacuum manifold is not simply connected. Strings originating at mass scales $\Lambda$ close to the Planck scale $M_{Pl}$ can yield (and be detected by) gravitational effects: gravitational lensing, seeds for galaxy formation, millisecond pulsar timing perturbations, etc. [@graveffects]. 
On the other hand, strings originating at $\Lambda \ll M_{Pl}$ have negligible gravitational effects and, correspondingly, cannot be detected through their gravitational interactions. In particular, the Nielsen-Olesen [@NO] vortex solution of the abelian Higgs model can be embedded into the $SU(2)_L \times U(1)_Y$ electroweak theory (and then $\Lambda \sim G_F^{-1/2}$). This vortex of $Z$-particles is known as the (naked) $Z$–string [@Zstring]. As stated above, $Z$–strings have negligible gravitational interactions and their experimental detection seems problematic, though they have been proposed as candidates to trigger baryogenesis at the electroweak phase transition (no matter what its order is) [@baryo]. However, their dynamical stability (not guaranteed by topological arguments) only holds for unrealistic values of $\sin ^2 \theta_W$ [@stab], and a mechanism to stabilize the $Z$–string is still missing [^3]. Another class of cosmic strings with non-gravitational effects was proposed by Witten [@Witten1]. They are called superconducting strings because they carry superconducting charge carriers (either charged bosons or fermions) with expectation values in the core of the string. Superconducting strings can yield observable effects even for $\Lambda \ll M_{Pl}$ ([*e.g.*]{} for $ \Lambda \sim G_F^{-1/2}$) [@Witten2]. In particular, superconducting strings with Fermi charge carriers can arise if there are normalizable fermion zero-modes bound to the string. In this paper we prove that the $Z$–string is superconducting, with leptons and quarks being the charge carriers. In particular, we will embed the naked $Z$-string into a field configuration with fermion fields, the [*time*]{} and $z$ components of the electromagnetic, $A$, and $Z$ fields and the (unperturbed) $Z$–string. The fermion condensates are zero modes of the Dirac equation in the presence of the $Z$-string background. 
We have constructed solutions where the fermions are either charged leptons, quarks or neutral leptons (neutrinos). For the former (charged fermions) we have followed the analysis of Ref. [@JR]. For the latter (neutrinos) we have performed a similar analysis and found that neutrinos can be bound at the string core by the weak interactions. We have found that the $Z$-string configuration is unaltered by the presence of both the fermion condensate and the [*time*]{} and $z$ components of the electromagnetic and $Z$ fields. A complete numerical analysis is also presented, including the profiles for the fermion densities and field configurations and the mean radii of the different bound states. A similar analysis has been recently performed in [@EP], where a solution of the electroweak theory with a single lepton family is constructed. The authors of Ref. [@EP] claim their solution is approximate since they put $A_0=A_3=Z_0=Z_3=0$. However, we have explicitly shown that the fields $A_0$, $A_3$, $Z_0$ and $Z_3$ are non–zero in the presence of fermionic densities and that [*indeed*]{} there is no feedback from them on the fermion zero modes. It is also explicitly proven in [@EP] that in the absence of the fields $A_0$, $A_3$, $Z_0$ and $Z_3$ the presence of the fermion condensate does not alter the stability properties of the $Z$-string. We have proven by symmetry arguments that this is also the case when dealing with the total solution containing non–zero values for $A_0$, $A_3$, $Z_0$ and $Z_3$. We have also shown that the above features remain unchanged when the zero–modes are either charged (leptons or quarks) or neutral (neutrinos) fermions. [**2.**]{} We will consider the case of just one fermion ([*i.e.*]{} lepton or quark) species. Let $\Psi = \left[ \begin{array}{c} \psi^+_L \\ \psi^-_L \end{array}\right]$ be the left–handed fermionic doublet and $\psi^+_R$, $\psi^-_R$ their right–handed partners [^4]. 
The relevant Lagrangian density is therefore: $$\begin{aligned}
\label{lagrangian}
{\cal L} & = & - \frac{1}{4} W_{\mu \nu}^{\ a} W^{\mu \nu \ a} - \frac{1}{4} B_{\mu \nu} B^{\mu \nu} + | D_{\mu} \Phi |^2 - \lambda\, (\Phi^{\dagger} \Phi - \eta^2)^2 \nonumber \\
& & + i\, \bar{\Psi} \not\!\! D\, \Psi + i\, \bar{\psi}^+_R \not\!\! D\, \psi^+_R + i\, \bar{\psi}^-_R \not\!\! D\, \psi^-_R \nonumber \\
& & - h_+ \left( \bar{\Psi} \tilde{\Phi}\, \psi^+_R + h.c. \right) - h_- \left( \bar{\Psi} \Phi\, \psi^-_R + h.c. \right)\end{aligned}$$ where $W_{\mu \nu}^a = \partial_{\mu} W_{\nu}^a - \partial_{\nu} W_{\mu}^a + g \varepsilon^{abc} W_{\mu}^b W_{\nu}^c$, $B_{\mu \nu} = \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu}$, $ D_{\mu} = \partial_{\mu} + i g T^a W_{\mu}^a + i g' Y B_{\mu}$, $T^a$ being the corresponding $SU(2)_L$ generator and $Y$ the $U(1)_Y$ hypercharge. $\Phi = \left[ \begin{array}{c} \phi^+ \\ \phi \end{array}\right]$ is the Higgs doublet and ${\tilde \Phi} \equiv i \sigma^2 \Phi^*$. The Euler-Lagrange equations are: $$\begin{aligned}
\label{eqnold}
D_{\mu} D^{\mu} \Phi & = & - 2 \lambda\, (\Phi^{\dagger} \Phi - \eta^2)\, \Phi - h_+ \left( \bar{\Psi}\, \psi^+_R\, i \sigma^2 \right)^t - h_-\, \bar{\psi}^-_R\, \Psi \nonumber \\
(D_{\mu} W^{\mu \nu})^a & = & - \frac{i g}{2} \left( \Phi^{\dagger} \sigma^a (D^{\nu} \Phi) - (D^{\nu} \Phi)^{\dagger} \sigma^a \Phi \right) - \frac{g}{2}\, \bar{\Psi} \gamma^{\nu} \sigma^a \Psi \nonumber \\
\partial_{\mu} B^{\mu \nu} & = & - i g' y_H \left( \Phi^{\dagger} (D^{\nu} \Phi) - (D^{\nu} \Phi)^{\dagger} \Phi \right) \nonumber \\
& & - g' y_L\, \bar{\Psi} \gamma^{\nu} \Psi - g' y_R^+\, \bar{\psi}_R^+ \gamma^{\nu} \psi_R^+ - g' y_R^-\, \bar{\psi}_R^- \gamma^{\nu} \psi_R^- \nonumber \\
i \not\!\! D\, \Psi & = & h_+ \tilde{\Phi}\, \psi^+_R + h_- \Phi\, \psi^-_R \nonumber \\
i \not\!\! D\, \psi^+_R & = & h_+ \tilde{\Phi}^{\dagger} \Psi \nonumber \\
i \not\!\! D\, \psi^-_R & = & h_- \Phi^{\dagger} \Psi\end{aligned}$$ To solve eqs. (\[eqnold\]) it is necessary to make an ansatz on the symmetry of the solution. Our aim is to find some configuration that could be interpreted as the $Z$-string plus a fermion condensate in its core. Then, our starting point will be to generalize the $Z$-string solution keeping, if possible, their symmetries. In particular, concerning global symmetries, the $Z$-string is invariant under the $Z_2$ [*parity*]{} given by the global $U(2)$ transformation on the bosonic fields $\left( \begin{array}{rc} -1 & 0 \\ 0 & 1 \end{array} \right)$. The even fields under this [*parity*]{} are: $\left[ \begin{array}{c} 0 \\ \phi \end{array} \right] $, $W_\mu^3$, $B_\mu$. There are several possible ways to extend this symmetry to the fermionic fields. 
For example, we can choose one fermion to be odd and the second one, which will be denoted by $\psi_L$, $\psi_R$, to be even [^5]. It is clear from (\[eqnold\]) that we can consistently fix to zero all the odd fields. Thus, the solutions under this ansatz are equivalent to those in the reduced $U(1) \times U(1)$ model [^6], spontaneously broken to the $U(1)_{em}$ with $A_{\mu}$ and $Z_{\mu}$ the corresponding gauge bosons. The relevant field equations are obtained directly from (\[eqnold\]) by replacing $\Phi \rightarrow \phi$, $\Psi \rightarrow \psi_L$; $W^{1,2}, \phi^+ \rightarrow 0$. In terms of the mass eigenstate gauge bosons, $A_\mu = \sin \theta_W \, W^3_\mu + \cos \theta_W \, B_\mu$; $Z_\mu = \cos \theta_W \, W^3_\mu - \sin \theta_W \, B_\mu$, the covariant derivatives are: $$\begin{aligned}
\label{covariant}
D_\mu \phi & = & \left( \partial_\mu + i q_H Z_\mu \right) \phi \nonumber \\
D_\mu \psi_L & = & \left( \partial_\mu + i q_L Z_\mu + i q A_\mu \right) \psi_L \nonumber \\
D_\mu \psi_R & = & \left( \partial_\mu + i q_R Z_\mu + i q A_\mu \right) \psi_R\end{aligned}$$ $q$ being the electric charge of the corresponding field and $q_{H,\,L,\,R}$ the eigenvalues for the Higgs boson and left and right fermions, respectively, of the $Z$-charge, defined in our notation as $Q^Z = \frac{e}{\sin\theta_W \cos\theta_W}(T_3 - \sin^2 \theta_W \,q/e ) $, where $T_3$ is the third component of the weak isospin, equal to 0 for singlets and $\pm 1/2$ for the doublet components. In particular, for the gauge bosons we have $$\begin{aligned}
\label{eqAZ}
\Box Z^{\nu} - \partial^{\nu} \partial^{\mu} Z_{\mu} & = & i q_H \left( \phi^{\dagger} (D^{\nu} \phi) - (D^{\nu} \phi)^{\dagger} \phi \right) + j_Z^{\nu} \nonumber \\
\Box A^{\nu} - \partial^{\nu} \partial^{\mu} A_{\mu} & = & j_A^{\nu}\end{aligned}$$ where the fermionic currents on the right hand side of (\[eqAZ\]) are defined as $$\begin{aligned}
\label{currents}
j_Z^{\nu} & = & q_L\, \bar{\psi}_L \gamma^{\nu} \psi_L + q_R\, \bar{\psi}_R \gamma^{\nu} \psi_R \nonumber \\
j_A^{\nu} & = & q \left( \bar{\psi}_L \gamma^{\nu} \psi_L + \bar{\psi}_R \gamma^{\nu} \psi_R \right)\end{aligned}$$ as deduced from eq. (\[covariant\]). 
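As a quick numerical check of the $Z$-charge definition $Q^Z = \frac{e}{\sin\theta_W \cos\theta_W}(T_3 - \sin^2\theta_W\, q/e)$, the following sketch evaluates it for the charged-lepton doublet; the value $\sin^2\theta_W = 0.23$ and the units $e=1$ are illustrative assumptions, not fixed by the text:

```python
import math

def z_charge(t3, q_over_e, sin2_tw=0.23, e=1.0):
    """Q^Z = e/(sin tw * cos tw) * (T3 - sin^2 tw * q/e), as defined in the
    text; sin^2 tw = 0.23 and e = 1 are illustrative choices."""
    s = math.sqrt(sin2_tw)
    c = math.sqrt(1.0 - sin2_tw)
    return e / (s * c) * (t3 - sin2_tw * q_over_e)

# charged lepton: left-handed component has T3 = -1/2, q/e = -1;
# the right-handed singlet has T3 = 0, q/e = -1
q_L = z_charge(-0.5, -1.0)
q_R = z_charge(0.0, -1.0)
print(q_L, q_R)  # left and right Z-charges differ, as the text requires
```

The sign difference between $q_L$ and $q_R$ is what makes the $Z$-string couple chirally to the fermions in the equations above.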
A general, static, $z$-independent ansatz still invariant under the combined action of rotation around the z-axis and the suitable gauge transformation is, in cylindrical coordinates [^7] $$\begin{aligned}
\label{bosons}
\phi & = & f(r)\, e^{- i n \theta} \nonumber \\
Z^{\mu} & = & Z^{\mu}(r) \nonumber \\
A^{\mu} & = & A^{\mu}(r)\end{aligned}$$ for bosonic fields and $$\begin{aligned}
\label{fermions}
\psi_L (r,\theta) & = & \left[ \begin{array}{c} \psi_{L1}(r)\, e^{i m \theta} \\ \psi_{L2}(r)\, e^{i (m-1) \theta} \end{array} \right] \nonumber \\
\psi_R (r,\theta) & = & \left[ \begin{array}{c} \psi_{R1}(r)\, e^{i (m \pm n) \theta} \\ \psi_{R2}(r)\, e^{i (m \pm n - 1) \theta} \end{array} \right]\end{aligned}$$ for fermions [^8], where we are using the Dirac representation: $$\label{dirac}
\gamma^0 = \left( \begin{array}{cr} \sigma^0 & 0 \\ 0 & -\sigma^0 \end{array} \right) \qquad
\gamma^i = \left( \begin{array}{rc} 0 & \sigma^i \\ -\sigma^i & 0 \end{array} \right) \qquad
\gamma^5 = \left( \begin{array}{cc} 0 & {\bf 1} \\ {\bf 1} & 0 \end{array} \right)$$ with $\sigma^i$ the Pauli matrices and $\sigma^0 = {\bf 1}$. Notice that an enlarged gauge configuration, compared to the electroweak string, is expected in general due to the presence of the fermionic currents that will act as sources. Using the ansatz (\[bosons\],\[fermions\]), the $\theta$–dependence cancels in the Euler–Lagrange equations. Defining ${\bf \psi_L}^t = (\psi_{L1}, \psi_{L2})$, ${\bf \psi_R}^t = (\psi_{R1}, \psi_{R2})$ we get $$\label{ansatz}
f'' = - \frac{1}{r}\, f' + \left( \frac{n}{r} - q_H Z_\theta \right)^2 f + 2 \lambda\, (|f|^2 - \eta^2)\, f + i h_{\pm}\, {\bf \psi_{L,R}}^{\dagger} \sigma^0\, {\bf \psi_{R,L}}$$ $$\begin{aligned}
\label{dos}
Z_0'' & = & - \frac{1}{r} Z_0' + 2 q_H^2 |f|^2 Z_0 - q_L\, {\bf \psi_L}^{\dagger} \sigma^0 {\bf \psi_L} - q_R\, {\bf \psi_R}^{\dagger} \sigma^0 {\bf \psi_R} \nonumber \\
Z_3'' & = & - \frac{1}{r} Z_3' + 2 q_H^2 |f|^2 Z_3 - q_L\, {\bf \psi_L}^{\dagger} \sigma^3 {\bf \psi_L} + q_R\, {\bf \psi_R}^{\dagger} \sigma^3 {\bf \psi_R} \nonumber \\
Z_\theta'' & = & - \frac{1}{r} Z_\theta' + 2 q_H^2 |f|^2 Z_\theta - \frac{2 n q_H}{r} |f|^2 + \frac{Z_\theta}{r^2} - q_L\, {\bf \psi_L}^{\dagger} \sigma^2 {\bf \psi_L} + q_R\, {\bf \psi_R}^{\dagger} \sigma^2 {\bf \psi_R} \nonumber \\
0 & = & - 2 q_H\, {\rm Im}\,(f^* f') + q_L\, {\bf \psi_L}^{\dagger} \sigma^1 {\bf \psi_L} - q_R\, {\bf \psi_R}^{\dagger} \sigma^1 {\bf \psi_R}\end{aligned}$$ $$\begin{aligned}
\label{tres}
A_0'' & = & - \frac{1}{r} A_0' - q\, {\bf \psi_L}^{\dagger} \sigma^0 {\bf \psi_L} - q\, {\bf \psi_R}^{\dagger} \sigma^0 {\bf \psi_R} \nonumber \\
A_3'' & = & - \frac{1}{r} A_3' - q\, {\bf \psi_L}^{\dagger} \sigma^3 {\bf \psi_L} + q\, {\bf \psi_R}^{\dagger} \sigma^3 {\bf \psi_R} \nonumber \\
A_\theta'' & = & - \frac{1}{r} A_\theta' + \frac{A_\theta}{r^2} - q\, {\bf \psi_L}^{\dagger} \sigma^2 {\bf \psi_L} + q\, {\bf \psi_R}^{\dagger} \sigma^2 {\bf \psi_R} \nonumber \\
0 & = & q\, {\bf \psi_L}^{\dagger} \sigma^1 {\bf \psi_L} - q\, {\bf \psi_R}^{\dagger} \sigma^1 {\bf \psi_R}\end{aligned}$$ where the last equations of (\[dos\]) and (\[tres\]) are [*constraints*]{} corresponding to the gauge conditions $Z^r=A^r=0$, and $$\begin{aligned}
\sigma^1 {\bf \psi_L}' & = & \left\{ M_L - i q_L \sigma^2 Z_\theta + i q_L \sigma^0 Z_0 - i q_L \sigma^3 Z_3 - i q \sigma^2 A_\theta + i q \sigma^0 A_0 - i q \sigma^3 A_3 \right\} {\bf \psi_L} - h_{\pm} f^{\pm} \sigma^0 {\bf \psi_R} \nonumber \\
\sigma^1 {\bf \psi_R}' & = & \left\{ M_R - i q_R \sigma^2 Z_\theta - i q_R \sigma^0 Z_0 - i q_R \sigma^3 Z_3 - i q \sigma^2 A_\theta - i q \sigma^0 A_0 - i q \sigma^3 A_3 \right\} {\bf \psi_R} - h_{\pm} f^{\pm\,*} \sigma^0 {\bf \psi_L}\end{aligned}$$ where the prime denotes the derivative with respect to the cylindrical radius $r$, and $$\label{angular}
M_L = \frac{1}{r} \left( \begin{array}{cc} 0 & m - 1 \\ -m & 0 \end{array} \right) \qquad
M_R = \frac{1}{r} \left( \begin{array}{cc} 0 & m \pm n - 1 \\ -(m \pm n) & 0 \end{array} \right)$$ the $\pm$ sign corresponding to the case where the even fermionic fields are $\psi_{L}^{\pm},\psi_{R}^{\pm}$ and $f^+ = f^*$, $f^- = f$. Before going ahead with our ansatz, we will review some features of the vortex-fermion system. Jackiw and Rossi showed in Ref. [@JR] that this system has normalizable zero-modes if the fermions get their mass through their coupling to the scalar field. In fact, the number of these zero modes depends on the winding number of the vortex by an index theorem [@index]. These zero modes are [*transverse*]{} [@Witten1], [*i.e.*]{} they are eigenstates of $\gamma^0 \gamma^3$, $$\label{eigen}
\gamma^0 \gamma^3\, \psi = \kappa\, \psi , \qquad \kappa^2 = 1 ,$$ which is the operator on the fermions associated to the parity operation $(t,z) \rightarrow (-t,-z)$. Let us now see what these zero modes look like in our case. Notice that, as we remarked before, the gauge field configuration of the naked $Z$-string must be enlarged in the presence of this fermionic density. In particular, the [*time*]{} and $z$ components will be different from zero. Suppose that (\[eigen\]) also holds in our case and let us see whether it is consistent with eqs. (\[ansatz\]). Then $$j_A^{0} = \kappa\, j_A^{3}$$ and $$\label{a0a3}
A^0 = \kappa\, A^3$$ if the boundary conditions also obey this relation. In the same way, $$j_Z^{0} = \kappa\, j_Z^{3}$$ then $$(Z^0 - \kappa Z^3)'' = - \frac{1}{r}\, (Z^0 - \kappa Z^3)' + 2 q_H^2 |f|^2\, (Z^0 - \kappa Z^3)$$ and again $$\label{nofeed}
Z^0 = \kappa\, Z^3$$ for appropriate boundary conditions. 
Then, $$\begin{aligned}
\label{nofeedback}
\left( \gamma^0 Z_0 + \gamma^3 Z_3 \right) \psi_{L,R}(r,\theta) & = & 0 \nonumber \\
\left( \gamma^0 A_0 + \gamma^3 A_3 \right) \psi_{L,R}(r,\theta) & = & 0\end{aligned}$$ where $\psi_{L,R}(r,\theta)$ are the spinors defined in the ansatz (\[fermions\]). Eqs. (\[nofeedback\]) and (\[nofeed\]) show that the fermionic zero modes (\[eigen\]) produce no feedback on the naked $Z$-string. In other words, the ansatz (\[bosons\]) is consistent with the existence of fermionic zero modes since odd quantities under the operator $\gamma^0\gamma^3$ vanish. Notice that the coefficients in the equations are real, and therefore the phases of $f(r)$, ${\bf \psi_L}(r)$, ${\bf \psi_R}(r)$ are constant. We can use the global $U(1) \times U(1)$ to bring them to zero. It is easy to check that condition (\[eigen\]) leads to the ansatz $$\label{kappam1} \psi_L=\left( \begin{array}{c} \psi_{L1} \\ 0 \end{array} \right), \ \ \psi_R=\left( \begin{array}{c} 0 \\ \psi_{R2} \end{array} \right)$$ for $\kappa=-1$, and $$\label{kappa1} \psi_L=\left( \begin{array}{c} 0 \\ \psi_{L2} \end{array} \right), \ \ \psi_R=\left( \begin{array}{c} \psi_{R1} \\ 0 \end{array} \right)$$ for $\kappa=1$. Using now the ansatz (\[kappam1\]) or (\[kappa1\]), and $f(r)$ real, the constraints in (\[dos\]) and (\[tres\]), corresponding to the gauge conditions $Z^r=A^r=0$, are trivially satisfied. The conclusions of this discussion can be summarized as follows: - It is possible to choose consistently fermions (both left and right components) in one $\kappa$-sector. These configurations correspond to zero-modes. - In this case, the back reaction on the fields in the undressed electroweak string ($\phi$, $Z_\theta$) is [*exactly*]{} zero. - The equations for the fermions are just given by the Dirac equation in the electroweak string background. - The [*time*]{} and $z$ components of the gauge fields are different from zero. 
As for the problem of the dynamical stability of this string configuration, it holds for the same values of physical parameters as in the case of the $Z$-string. To see that, let us just consider, on top of the ansatz (\[bosons\]) and (\[fermions\]), the same kind of [*dangerous*]{} perturbations $\delta \, W_{i}^{\pm}(r, \theta)$ ($i=1,2$), $\delta \, \phi^{\pm}(r,\theta)$ that destabilize the $Z$-string. These perturbations are [*odd*]{} in our language. Let us focus, first, on fermionic terms. As fermions always appear in cubic terms with one boson, it is clear that this kind of perturbations decouple. This was explicitly proven in Ref. [@EP]. It is also straightforward to see that these perturbations decouple in the new bosonic terms induced in the energy by $Z_{0,3}$, $A_{0,3}$. In fact the only relevant terms consistent with our ansatz and Lorentz covariance are $$\begin{aligned} \label{WWZZ} & &\delta \, W_{i}^{\pm}(r, \theta)\delta \, W_{j}^{\mp}(r, \theta) \delta^{ij}\times \\ & &\left\{ \alpha_W\left[A_0(r)^2-A_3(r)^2\right]+\beta_W \left[Z_0(r)^2-Z_3(r)^2\right] +\gamma_W\left[A_0(r)Z_0(r)-A_3(r)Z_3(r)\right]\right\} \nonumber\end{aligned}$$ and $$\begin{aligned} \label{ppZZ} && \delta\, \phi^{\pm}(r,\theta)\delta \, \phi^{\mp}(r,\theta)\times \\ &&\left\{\alpha_\phi\left[A_0(r)^2-A_3(r)^2\right]+ \beta_\phi\left[Z_0(r)^2-Z_3(r)^2\right] +\gamma_\phi\left[A_0(r)Z_0(r)-A_3(r)Z_3(r)\right]\right\} \nonumber\end{aligned}$$ where the coefficients $\alpha_{W,\phi}$, $\beta_{W,\phi}$ and $\gamma_{W,\phi}$ are some dimensionless combinations of the gauge coupling constants. Using now the conditions (\[a0a3\]) and (\[nofeed\]) one can easily check that the terms (\[WWZZ\]) and (\[ppZZ\]) do vanish, as anticipated. As we see, the requirement of simplicity that made this configuration tractable is too strong to leave any room for stability improvements. 
[**3.**]{} We have found a consistent ansatz with zero energy, but we still have to see if the equations admit a non–trivial, normalizable solution. In the bosonic sector we take $A_{\theta}(r)\equiv 0$, and, for the rest of the fields, the boundary conditions $$\begin{array}{c} f(0)=0,\ f(\infty)=\eta \\ \\ {\displaystyle q_H Z_\theta\equiv \frac{v(r)}{r},\ v(0)=0,\ v(\infty)=-1 } \\ \\ A'_{0,3}(0)=Z'_{0,3}(0)=A'_{0,3}(\infty)=Z'_{0,3}(\infty)=0 \end{array}$$ For the case of fermions getting their masses through the coupling to the Higgs field we follow the work of Jackiw and Rossi [@JR]. The relevant equations are: $$\begin{aligned} \label{kplus} \left[\begin{array}{l} \psi_{L1} \\ \psi_{R2} \end{array}\right]' & = & \left\{ \frac{1}{r} \left(\begin{array}{cc } -m & 0 \\ 0 & m \pm n-1 \end{array}\right) + \; Z_\theta \left( \begin{array}{cc} q_L & 0 \\ 0 & -q_R \end{array}\right) \right\} \left[\begin{array}{l} \psi_{L1} \\ \psi_{R2} \end{array}\right] \nonumber \\ & & \nonumber \\ & & - h_{\pm} f \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \left[\begin{array}{l} \psi_{L1} \\ \psi_{R2} \end{array}\right]\end{aligned}$$ in the $\kappa=-1$ sector and $$\begin{aligned} \label{kminus} \left[\begin{array}{l} \psi_{L2} \\ \psi_{R1} \end{array}\right]' & = & \left\{ \frac{1}{r} \left(\begin{array}{ccc} m - 1 & & 0 \\ 0 & & -(m \pm n) \end{array}\right) + \; Z_\theta \left( \begin{array}{cc} -q_L & 0 \\ 0 & q_R \end{array}\right) \right\} \left[\begin{array}{l} \psi_{L2} \\ \psi_{R1} \end{array}\right] \nonumber \\ & & \nonumber \\ & & - h_{\pm} f \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \left[\begin{array}{l} \psi_{L2} \\ \psi_{R1} \end{array}\right]\end{aligned}$$ for $\kappa=1$. The behaviour of $\psi_{L},\psi_{R}$ at [*large*]{} $r$ values is controlled by the coupling to the Higgs field, because the gauge field vanishes there. 
In general, the asymptotic solution will be, modulo some polynomial prefactors, a combination of two exponential functions, one increasing and another decreasing with $r$. The requirement of normalizability translates into the condition $\psi_L|_\infty = \psi_R|_\infty$, i.e. the absence of the growing exponential. For [*small*]{} $r$, we have again two equations coupled by the Higgs term (the gauge term is negligible compared to the angular term for wound fermions). By imposing consistency of the equations and regularity of the spinors at the origin, we get that just some values of $m$ (the parameter that controls the fermion winding) are selected once $n$ is fixed. For these values, the behaviour of the two fermion fields near $r=0$ is controlled by the [*centrifugal*]{} term coming from the angular momentum, and the Higgs term is negligible. The selected regions in the $n-m$ plane for the two $\kappa$ sectors and the two $\psi^\pm$ fermions are shown in Fig. 1. Notice that the specific values of the fermionic gauge couplings are irrelevant in the above considerations on the number of zero modes. In fact, for the $n$-string there are $|n|$ zero-modes. A quick glance at Fig. 1 shows that if we try to put both $\psi^\pm$ fermions in the same $\kappa$ sector [^9], the solution cannot be normalized. Therefore, if we are looking for configurations involving both $\kappa$ sectors, such as those with fermion energy different from zero, the bosonic field ansatz must be enlarged. Of course, it would be very interesting to find an exact normalizable solution also in this case, but the loss of symmetry makes that task almost impossible. So far, we have studied the normalizability of the solution just for massive fermions. The [*rôle*]{} of the Higgs boson was crucial there, and we can expect a very different situation for massless fermions (neutrinos).
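The statement about the large-$r$ asymptotics can be made explicit. As a sketch, take Eq. (\[kplus\]), use the boundary value $f(\infty)=\eta$, and neglect the $O(1/r)$ angular and gauge terms:

```latex
% Large-r limit of the kappa = -1 system: only the Higgs coupling survives,
\left[\begin{array}{l} \psi_{L1} \\ \psi_{R2} \end{array}\right]'
\;\simeq\; - h_{\pm}\,\eta
\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)
\left[\begin{array}{l} \psi_{L1} \\ \psi_{R2} \end{array}\right].
```

The eigenvectors $(1,1)$ and $(1,-1)$ of the coupling matrix evolve as $e^{-h_{\pm}\eta r}$ and $e^{+h_{\pm}\eta r}$ respectively, so normalizability is precisely the statement that the coefficient of the growing mode vanishes, i.e. that the solution is asymptotically proportional to $(1,1)$, with the two spinor components approaching each other at infinity.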
The interaction of the neutrinos with the $Z$-string is purely gauge and is governed by $$\label{neutrino1} \psi'_{L1} = \left( -\frac{m}{r} + q_L Z_\theta \right) \psi_{L1}$$ for $\kappa = -1$ and $$\psi'_{L2} = \left( \frac{m-1}{r} - q_L Z_\theta \right) \psi_{L2}$$ when $\kappa = 1$. The behaviour of the neutrino field near $r = 0$ is controlled by the angular term, and if we impose $\psi$ to be regular at the origin we get $$-m \geq 0 \ \text{ for } \kappa = -1 , \qquad m-1 \geq 0 \ \text{ for } \kappa = 1 .$$ Due to the absence of the Higgs term, $\psi$ is not an exponential function of $r$ at large values, but goes like $r^\alpha$, where $\alpha$ is some integer. In particular, for $\kappa = -1$, $\psi_{L1} \sim r^{-(m+n)}$. Then, for $(m+n) > 1$, the neutrino distribution is normalizable. If we ask for a localized configuration, [*i.e.*]{} finite $\langle r \rangle$, $\langle r^2 \rangle$, and assume $m$ integer, [*i.e.*]{} periodic fermions when winding around the string, we get $$m+n > 2 \ \text{ for } \kappa = -1 , \qquad m+n < -1 \ \text{ for } \kappa = 1 .$$ We have found that there are normalizable fermionic configurations with zero energy even in the case of massless particles. It means that the gauge interaction can be efficient enough to keep the neutrino near the string core. Let us illustrate this situation by using an analogous quantum mechanical system. From Eq. (\[neutrino1\]) it follows: $$\left( -\frac{1}{2}\frac{d^2}{dr^2} + V_{\rm eff} \right) \psi_{L1} = 0$$ where (for $\kappa$=$-1$) $$\label{veff} V_{\rm eff}(r) = \frac{1}{2} \left\{ \frac{m(m+1)}{r^2} + q_L Z'_\theta(r) + q_L^2 Z_\theta^2(r) - \frac{2 m q_L Z_\theta(r)}{r} \right\}$$ This is the Schrödinger equation for a particle of unit mass under the action of the potential described by $V_{\rm eff}$. In Fig. 2 we have drawn the shape of this effective potential as a function of the distance [^10] to the string axis.
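The shape of this potential is easy to explore numerically. The sketch below is our own illustration: squaring the first-order equation $\psi'_{L1} = (-m/r + q_L Z_\theta)\psi_{L1}$ gives $V_{\rm eff} = \tfrac{1}{2}\left[m(m+1)/r^2 + q_L Z'_\theta + q_L^2 Z_\theta^2 - 2 m q_L Z_\theta/r\right]$; since the true string profile $v(r)$ must itself be obtained numerically, we use the toy profile $v(r) = -r^2/(1+r^2)$, which satisfies the boundary conditions $v(0)=0$, $v(\infty)=-1$, and we set $q_L = q_H = 1$, $m = 0$.

```python
# Toy Z-string profile (an assumption, not the solution of the field
# equations): v(r) = -r^2/(1+r^2), so Z_theta = v/(q_H r) = -r/(1+r^2).
def Z_theta(r):
    return -r / (1.0 + r**2)

def dZ_theta(r):
    # Analytic derivative of Z_theta above.
    return (r**2 - 1.0) / (1.0 + r**2)**2

def V_eff(r, m=0, qL=1.0):
    """Effective Schroedinger potential for the neutrino zero mode."""
    # Centrifugal and mixed angular terms vanish identically for m = 0.
    angular = 0.0 if m == 0 else m * (m + 1) / r**2 - 2.0 * m * qL * Z_theta(r) / r
    return 0.5 * (angular + qL * dZ_theta(r) + (qL * Z_theta(r))**2)
```

With these choices one finds $V_{\rm eff}(0) = -1/2$ (an attractive well at the core), a positive barrier peaking at $r=\sqrt{2}$ with $V_{\rm eff} = 1/6$, and a tail vanishing like $r^{-2}$, reproducing the qualitative structure described in the text.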
We have worked out two cases: one for $n=2$, $m=0$ ([*i.e.*]{}, without centrifugal term) and the second one for $n=4$, $m=-2$ (with a centrifugal term). Notice the existence of a centrifugal barrier when $m \neq -1,0$. For large $r$ values, $V_{\rm eff} \sim r^{-2}$. For the relevant values of $n$ and $m$, the structure of the potential can be described by the existence of an absolute minimum with $V_{min} < 0$ and a barrier with $V_{max} > 0$ that decreases as $1/r^2$ for large $r$. As $V_{\rm eff} (\infty) = 0$, we could consider the possibility of tunnelling of the neutrino zero mode to the large $r$ region, thus dissociating the bound state. To estimate this probability, we can use the WKB approximation [@Coleman] assuming that the potential vanishes for $r>R$, with $R$ large enough, and take the limit $R\rightarrow\infty$. It is easy to see that this probability is $\propto R^{-2\sqrt{(n+m)(n+m+1)}}$ and then goes to zero [^11]. We have also drawn in the same picture the effective potential for a case in which the distribution is normalizable but $\langle r \rangle$ is infinite. [**4.**]{} We have solved numerically the field equations for the neutrino and for massive fermions from the third generation. In Fig. 3a we have plotted the fermionic density per unit length of string for several cases. They include the neutrino configuration for $n=2$, $m=0$ and a fermion with $M=M_{top}$ in the limit $q_L, q_R \rightarrow 0$. For the neutrino, the interaction is purely gauge, whereas for the other illustrative fermion the interaction is only through the Higgs coupling. We have also drawn the fermionic density for the top quark with the same $n, m$ values. Notice that, as one could expect, the density shape for the top quark almost coincides with the corresponding one in the limit of zero charges. In all cases, the typical mean radius is of the order of the string radius. We have also calculated the $Z_0$ field component corresponding to these configurations.
The results are gathered in Fig. 3b. We have repeated this analysis for the bottom quark and the tau lepton. The corresponding profiles are shown in Fig. 4a,b. Notice that the density function goes to zero for large $r$ much more slowly than in the case of the top quark. This is because the behaviour of the wave function for massive fermions, as we said before, is described by $e^{-M_f r}$, with $M_f$ the fermion mass. In Table 1 we list the mean and root-mean-squared radii corresponding to the configurations shown in Figs. 3a,4a. Finally, in Fig. 5a,b we show the radial electric field produced by the charged fermionic distributions. The asymptotic form of this field is given by the Gauss law, $E_r \sim q r^{-1}$ with $q$ the charge of the fermion. [**5.**]{} In summary, we have constructed in this paper a simple class of exact solutions of the electroweak theory, which consists of the $Z$-string configuration, the gauge fields $A_0$, $A_3$, $Z_0$ and $Z_3$, and fermion zero modes bound at the string core. We have explicitly worked out the cases where fermions are charged leptons or quarks, and neutral leptons (neutrinos). In the cases where the zero modes are charged fermions, moving under the action of the electromagnetic field, the $Z$-string becomes superconducting. In our solution there is no feedback from the gauge and fermion fields on the $Z$-string configuration. As for the stability problem, our solution does not add anything new with respect to the case of the $Z$-string in the absence of fermion zero modes: either the $Z$-string configuration is extended to a stable configuration including other bosonic fields, or its stability is topologically guaranteed by the presence of an extra spontaneously broken (gauge) symmetry, remnant of the theory at some high scale [@frustrated]. In both cases the presence of fermion zero modes would make the string configuration superconducting and therefore detectable by non-gravitational interactions. [99]{} See, [*e.g.*]{}, A.
Vilenkin, [*Phys. Rep.*]{} [**121**]{} (1985) 263. H.B. Nielsen and P. Olesen, [*Nucl. Phys.*]{} [**B61**]{} (1973) 45. T. Vachaspati, [*Phys. Rev. Lett.*]{} [**68**]{} (1992) 1263. R. Brandenberger and A.-C. Davis, [*Phys. Lett.*]{} [**B308**]{} (1993) 79; R. Brandenberger, A.-C. Davis and M. Trodden, [*Phys. Lett.*]{} [**B335**]{} (1994) 123. M. James, T. Vachaspati and L. Perivolaropoulos, [*Nucl. Phys.*]{} [**B395**]{} (1993) 534. L. Perivolaropoulos, [*Phys. Rev.*]{} [**D50**]{} (1994) 962. E. Witten, [*Nucl. Phys.*]{} [**B249**]{} (1985) 557. J.P. Ostriker, C. Thompson and E. Witten, [*Phys. Lett.*]{} [**B180**]{} (1986) 231. R. Jackiw and P. Rossi, [*Nucl. Phys.*]{} [**B190**]{} \[FS3\] (1981) 681. M.A. Earnshaw and W.B. Perkins, [*Phys. Lett.*]{} [**B328**]{} (1994) 337. E. Weinberg, [*Phys. Rev.*]{} [**D24**]{} (1981) 2669. S. Coleman, [*Phys. Rev.*]{} [**D15**]{} (1977) 2929. C.T. Hill, A.L. Kogan and L.M. Widrow, [*Phys. Rev.*]{} [**D38**]{} (1988) 1100; G. Dvali and G. Senjanović, [*Phys. Rev. Lett.*]{} [**71**]{} (1993) 2376.

  Particle    $n$    $m$   $\langle r \rangle$   $\sqrt{\langle r^2 \rangle}$
  ---------- ------ ----- --------------------- ------------------------------
  neutrino    2      0     4.45                  —
  top         1      0     1.18                  1.37
  bottom      $-1$   0     16.39                 23.12
  tau         $-1$   0     50.04                 69.57

  : Mean and root-mean-square radii for fermionic configurations shown in Fig. 4a,b

**Figure captions** {#figure-captions .unnumbered}
===================

[**Fig. 1**]{} : Allowed regions in the $(n,m)$ plane for $\kappa=1$ ($\kappa=-1$), circles (squares), and $\psi^+$ ($\psi^-$), open (black), fermions. [**Fig. 2**]{} : Plots of the effective potential for the $\psi_{L1}$ neutrino component, as defined in Eq. (\[veff\]), for the indicated values of $(n,m)$. [**Fig. 3**]{} : [**a)**]{} Fermionic densities for the top zero mode $n=1, m=0$ and for the neutrino configuration with $n=2, m=0$.
For illustration, the density for a massive top like fermion in the limit $q_{L,R} = 0$ is also shown (dashed line). In all cases, $\kappa = -1$. [**b)**]{} $Z_0$ component of the gauge field generated by the configurations shown in (a). [**Fig. 4**]{} : [**a)**]{} Fermionic densities for tau and bottom fermionic zero modes. In both cases, $n=-1, m=0$ and $\kappa = -1$. [**b)**]{} $Z_0$ component of the gauge field generated by the configurations shown in (a). [**Fig. 5**]{} : [**a)**]{} Electric field $E_r$ generated by the top quark configuration shown in Fig. 3a. [**b)**]{} Electric field $E_r$ generated by the fermionic configurations shown in Fig. 4a. [^1]: On leave from Instituto de Estructura de la Materia, CSIC, Madrid, Spain [^2]: Work partly supported by CICYT under contract AEN94-0928, and by the European Union under contract No. CHRX-CT92-0004 [^3]: See, however,[@Leandros] for a recent proposal. [^4]: The +/- superscript refers to the up/down component, [*i.e.*]{} $\nu_{\ell}/\ell$ for leptons and $u/d$ for quarks. The absence of right–handed neutrino in the Standard Model implies, in our notation, $\psi^+_R$ = 0, $h_+ = 0$ for leptons. [^5]: From here on, and for notational simplicity, we will drop the $\pm$ superscript from even fields. [^6]: Of course, the question of the stability should be addressed in the whole $SU(2) \times U(1)$ model. [^7]: Here $\alpha=0,3, \, r, \theta $ and we will write the equations in the gauge $Z^r = A^r =0$ [^8]: The $i$ factor in the parametrization of $\psi_R$ is a matter of convention, and $\psi_{L1}(r)$, $\psi_{L2}(r)$, $\psi_{R1}(r)$, $\psi_{R2}(r)$ are general complex functions. Notice also that, since we will look for zero modes, we have already fixed the fermion energy $\omega$ to zero and no prefactor $e^{-i\omega t}$ appears in (\[fermions\]). [^9]: In this case also $W^{\pm \, 0,3}$ is generated, playing the same [*rôle*]{} as $(A,Z)^{0,3}$, and the ansatz (\[bosons\]) would be trivially enlarged. 
[^10]: All the dimensional values are expressed in units of the corresponding power of $\langle \phi \rangle$. [^11]: In a realistic situation, a natural value for $R$ could be provided by the radius of the string loop, or by the typical distance between strings.
[**[ Heavy Charged Gauge Bosons with General CP Violating Couplings ]{}**]{}\ Mojtaba Mohammadi Najafabadi [^1]\ [*School of Particles and Accelerators,\ Institute for Research in Fundamental Sciences (IPM)\ P.O. Box 19395-5531, Tehran, Iran*]{}\ **Abstract**\ Heavy gauge bosons such as $W^{\prime}$ are expected to exist in many extensions of the Standard Model. In this paper, it is shown that the most general Lagrangian for the interaction of $W^{\prime}$ with top and bottom quarks, which consists of V$-$A and V$+$A structures with, in general, complex couplings, produces an Electric Dipole Moment (EDM) for the top quark at the one-loop level. We predict the allowed ranges for the mass and couplings of $W^{\prime}$ by using the upper limit on the top quark EDM.

Introduction
============

The Standard Model (SM) of particle physics has been found to be in good agreement with the present experimental data in many of its aspects. Nonetheless, it is believed to leave many questions unanswered, and this belief has resulted in numerous theoretical and experimental attempts to discover a more fundamental underlying theory. Various types of experiments may expose the existence of physics beyond the SM, including the search for direct production of exotic particles at high energy colliders. A complementary approach in hunting for new physics is to examine its indirect effects in higher order processes. There are many different models which predict the existence of new charged gauge bosons, $W^{\prime}$. These scenarios include the Little Higgs model [@lh1] [@lh2], Grand Unified Theories [@gut], Universal Extra Dimensions [@ued], the Left-Right Symmetric Model [@lrsm1] and some other models. One must note that the properties and interactions of $W^{\prime}$ depend on the model. One of the simplest extensions of the SM is the Left-Right Symmetric Model. It is based on the $SU(2)_{R} \times SU(2)_{L}\times U(1)$ gauge group.
The new $SU(2)_{R}$ symmetry leads to additional $W^{\prime},Z^{\prime}$ gauge bosons. For a detailed discussion of the Left-Right Symmetric Model see for example [@lrsm1],[@lrsm2],[@lrsm3]. The Left-Right Symmetric Model is constructed by placing the right-handed fermion singlets into doublets of $SU(2)_{R}$, which requires the introduction of right-handed neutrinos. One of the interesting aspects of this model is that parity is broken spontaneously, which gives rise to different masses for the $SU(2)_{R}$ and $SU(2)_{L}$ gauge bosons. Although such massive charged bosons have not yet been found experimentally, it is widely believed that the experiments at the LHC will be able to probe them in the coming years [@ptdr], [@rizzo], [@wp2]. At the LHC, for an integrated luminosity of 10 fb$^{-1}$, $W^{\prime}$ bosons can be discovered or excluded up to a mass of 5 TeV/c$^{2}$, from an analysis of the muonic decay mode. This result applies to the model that assumes the new gauge boson $W^{\prime}$ has the same couplings as the Standard Model $W$ boson. The capability of the LHC to explore the helicity of $W^{\prime}$ is discussed in [@rizzo]. There are already direct and indirect searches for the new gauge bosons. There is a severe limit obtained from $K_{0}-\bar{K}_{0}$ mixing: $M_{W^{\prime}} \geq$ 2.5 TeV/c$^{2}$ [@zhang]. Direct searches for $W^{\prime}$ can be found for example in [@pdg],[@ds1],[@ds2]. In the framework of the SM, the top quark is the only quark with a mass of the same order as the electroweak symmetry breaking scale, $v\sim 246$ GeV, whereas all other observed fermions have masses which are a tiny fraction of this scale. This huge mass might be a hint that the top quark plays an essential role in the search for new physics originating at higher scales [@beneke]. Hence, the study of the interaction of the top quark with $W^{\prime}$ might give useful information about $W^{\prime}$.
For example, the interference between $W^{\prime}$ and $W$ in the production of single top quarks is important and could be useful in the search for $W^{\prime}$; this has been discussed in [@wp2]. The aim of this article is to constrain the mass of $W^{\prime}$ by considering its contribution to the electric dipole moment (EDM) of the top quark. In [@toscano2], the authors have estimated an upper limit of $10^{-20}$ e.cm on the top quark EDM from the experimental bound on the neutron EDM. Combining this limit with the contribution of the $W^{\prime}$ to the top EDM leads to valuable information on $M_{W^{\prime}}$ and its couplings.

The Contribution of the $W^{\prime}$ to the Top Quark EDM
=========================================================

Similarly to the $Wtb$ interaction, the most general lowest-order effective Lagrangian for the interaction of $W^{\prime}$ with top and bottom quarks can be written in the following form [@pdg],[@ds1]: $$\begin{aligned} \label{lag} {\cal L} = \frac{g}{\sqrt{2}}\bar{t}\gamma^{\mu} \left( a_{L}P_{L} + a_{R}P_{R}\right)b W^{\prime}_{\mu}\end{aligned}$$ where $P_{L}$ ($P_{R}$) is the left-handed (right-handed) projection operator. The coefficients $a_{L},a_{R}$ are in general complex; this allows for CP-violating effects. In this notation, $a_{L} = 1$ and $a_{R} = 0$ for a so-called SM-like $W^{\prime}$. It is worth mentioning that if, in Eq. \[lag\], we replace the $W^{\prime}$ gauge boson by the Standard Model $W$ gauge boson, then from $B$-decay processes the limits on $a_{L},a_{R}$ are: $Re(a_{R})\leq 4\times 10^{-3}$, $Im(a_{R})\leq 10^{-3}$ and $Im(a_{L})\leq 3 \times 10^{-2}$ [@b1],[@b2],[@b3]. The Lagrangian introduced in Eq. \[lag\] induces an electric dipole moment for the top quark at the one-loop level via the Feynman diagrams shown in Fig. \[vertex\]. One should note that all the particles are taken on-shell.
After calculating the one-loop corrections to the $\bar{t}t\gamma$ vertex shown in Fig. \[vertex\], we find terms with several different structures. The coefficient of the $\sigma_{\mu\nu}\gamma_{5}q^{\nu}$ structure gives the top quark electric dipole moment, where $q^{\nu}$ is the four-momentum of the photon [@edm0],[@edm1]. It should be noted that this structure arises via radiative corrections and does not exist at tree level. ![Feynman diagrams contributing to the on-shell $t\bar{t}\gamma$ vertex.[]{data-label="vertex"}](vertex "fig:"){width="10cm" height="6cm"}\ After the full calculation, the top EDM is found to be: $$\label{topEDM} d_{t}=-\frac{e}{m_{W^{\prime}}}\frac{3\,\alpha}{32 \pi}\,\frac{m_{b}}{m_{W^{\prime}}}\,\left(V_{1}(x_{b},x_{W^{\prime}})+ \frac{1}{3}\,V_{2}(x_{b},x_{W^{\prime}})\right)\,{\rm Im}\left(a_{L} a^{*}_{R}\right),$$ where $x_{a}=m_a^{2}/m_t^{2}$. The functions $V_{1,2}$ stand for the contributions of the Feynman diagrams in which the photon emerges from the $W^{\prime}$ boson line and from the $b$-quark line, respectively.
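For orientation, the standard definition behind this identification (textbook material, not specific to this paper's conventions) is that the EDM $d_t$ is the coefficient of the dimension-five effective operator:

```latex
% Standard EDM effective operator for the top quark.
\mathcal{L}_{\rm EDM} \;=\; -\,\frac{i}{2}\, d_t\,
   \bar{t}\,\sigma^{\mu\nu}\gamma_{5}\, t\, F_{\mu\nu},
% which contributes the form factor  d_t \sigma_{\mu\nu}\gamma_5 q^{\nu}
% to the t-tbar-photon vertex.
```

Matching the one-loop amplitude onto this operator is what isolates the $\sigma_{\mu\nu}\gamma_{5}q^{\nu}$ coefficient.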
They have the following forms: $$\begin{aligned} V_{1}=-\left(4x_{W^{\prime}}-x_{b}+1\right)f(x_{b},x_{W^{\prime}})- \left(x_{b}^2+4x_{W^{\prime}}^2-5x_{b}x_{W^{\prime}}-3x_{W^{\prime}}-2x_{b}+1\right)g(x_{b},x_{W^{\prime}})\nonumber \\ V_{2}=-\left(4x_{W^{\prime}}-x_{b}+1\right)f(x_{W^{\prime}},x_{b})+ \left(x^{2}_{b}+4x_{W^{\prime}}^2-5x_{b}x_{W^{\prime}}-3x_{W^{\prime}}-2x_{b}+1\right)g(x_{W^{\prime}},x_{b})\end{aligned}$$ where the functions $f$ and $g$ are as follows: $$\begin{aligned} f(a,b)&=&\left(\frac{1+a-b}{2}\right)\log\left(\frac{b}{a}\right)+\sqrt{(1-a-b)^2-4ab}\,\times{\rm ArcSech}\left(\frac{2 \sqrt{ab}}{a+b-1}\right)+2 \nonumber \\ g(a,b)&=&-\frac{1}{2}\log\left(\frac{b}{a}\right)-\frac{1+a-b}{\sqrt{(1-a-b)^2-4ab}}\,\times{\rm ArcSech}\left(\frac{2 \sqrt{ab}}{a+b-1} \right) \nonumber\end{aligned}$$

Results
=======

In [@toscano2], the authors have predicted an upper bound for the top quark EDM using the experimental limit on the neutron EDM. Their estimate for the top quark EDM is $10^{-20}$ e.cm. In Eq. \[topEDM\], if we assume $Im\left(a_{L} a^{*}_{R}\right) \sim 10^{-1}$ and use the bound on the top EDM, an upper limit of $190$ GeV/c$^{2}$ is obtained for the mass of $W^{\prime}$; if $Im\left(a_{L} a^{*}_{R}\right) \sim 10^{-3}$ we have $M_{W^{\prime}} \leq 1470$ GeV/c$^{2}$. The shaded region in Fig. \[exclusion\] is the excluded region in the plane of $M_{W^{\prime}}$ and $Im\left(a_{L} a^{*}_{R}\right)$. Fig. \[exclusion\] clearly shows the strong dependence of the upper bound on the mass of $W^{\prime}$ on $Im\left(a_{L} a^{*}_{R}\right)$. The lower limit for the $W^{\prime}$ mass predicted by other studies ($K_{0}-\bar{K}_{0}$ mixing), mentioned in the introduction, can be used to estimate the allowed range for $Im\left(a_{L} a^{*}_{R}\right)$.
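The expressions above are straightforward to transcribe into code. The sketch below is our own (function names and default mass values are ours, and ${\rm ArcSech}$ is implemented for real arguments in $(0,1]$, which holds for $W^{\prime}$ masses well above the top mass); it returns $d_t$ in units of $e/$GeV, so a factor $1.9733\times 10^{-14}$ converts to $e\cdot$cm. We have not independently checked the numbers against the paper's figure.

```python
import math

def asech(x):
    """Inverse hyperbolic secant for real 0 < x <= 1."""
    return math.log(1.0 / x + math.sqrt(1.0 / x**2 - 1.0))

def _common(a, b):
    # Shared square root and ArcSech factor; real for m_W' >> m_t.
    s = math.sqrt((1.0 - a - b)**2 - 4.0 * a * b)
    h = asech(2.0 * math.sqrt(a * b) / (a + b - 1.0))
    return s, h

def f_fn(a, b):
    s, h = _common(a, b)
    return 0.5 * (1.0 + a - b) * math.log(b / a) + s * h + 2.0

def g_fn(a, b):
    s, h = _common(a, b)
    return -0.5 * math.log(b / a) - (1.0 + a - b) / s * h

def top_edm(m_wp, im_alar, m_t=172.5, m_b=4.2, alpha_em=1.0/137.0):
    """d_t in units of e/GeV (multiply by 1.9733e-14 for e*cm)."""
    xb, xw = (m_b / m_t)**2, (m_wp / m_t)**2
    pref = 4.0 * xw - xb + 1.0
    poly = xb**2 + 4.0 * xw**2 - 5.0 * xb * xw - 3.0 * xw - 2.0 * xb + 1.0
    V1 = -pref * f_fn(xb, xw) - poly * g_fn(xb, xw)
    V2 = -pref * f_fn(xw, xb) + poly * g_fn(xw, xb)
    return -(3.0 * alpha_em / (32.0 * math.pi)) * (m_b / m_wp**2) \
           * (V1 + V2 / 3.0) * im_alar
```

By construction the result is linear in $Im\left(a_{L} a^{*}_{R}\right)$, which is the key feature the exclusion plot exploits.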
If, in Eq. \[topEDM\], we put $d_{t} < 10^{-20}$ e.cm and $M_{W^{\prime}} \geq$ 2.5 TeV/c$^{2}$, the upper bound of $3.18 \times 10^{-4}$ is derived for $Im\left(a_{L} a^{*}_{R}\right)$. In [@ds1] a search has been performed for $W^{\prime}$ bosons which decay to $t+b$, using 0.9 fb$^{-1}$ of data recorded by the D0 detector in proton–antiproton collisions. A 95$\%$ C.L. upper limit on $\sigma(p\bar{p}\rightarrow W^{\prime})\times BR(W^{\prime}\rightarrow tb)$ has been set. This excludes gauge couplings ($a_{L},a_{R}$) above $\sim 0.7$ for $W^{\prime}$ bosons with a mass of 600 GeV/c$^{2}$. From the current analysis, for $W^{\prime}$ bosons with a mass of 600 GeV/c$^{2}$, $Im\left(a_{L} a^{*}_{R}\right)$ above $\sim 0.007$ is excluded. ![The shaded region is the excluded region for the $W^{\prime}$ mass and $Im(a_{L} a^{*}_{R})$ by this analysis.[]{data-label="exclusion"}](Exclusion "fig:"){width="11cm" height="8cm"}\

Conclusion
==========

In this paper we focus our attention on the contribution of the $W^{\prime}$ gauge boson to the electric dipole moment (EDM) of the top quark. The most general Lagrangian for the interaction of $W^{\prime}$ with top and bottom quarks, which consists of V$-$A and V$+$A structures with, in general, complex couplings $(a_{L},a_{R})$, produces an EDM for the top quark at the one-loop level. The top EDM is proportional to $Im\left(a_{L} a^{*}_{R}\right)$. Using the upper limit on the top EDM, we exclude the region shown in Fig. \[exclusion\] in the plane of $M_{W^{\prime}}$ and $Im\left(a_{L} a^{*}_{R}\right)$. For example, for $Im\left(a_{L} a^{*}_{R}\right) \sim$ 0.001, a $W^{\prime}$ boson mass above 1470 GeV/c$^{2}$ is excluded. The upper bound of $3.18 \times 10^{-4}$ is derived for $Im\left(a_{L} a^{*}_{R}\right)$ by considering the lower limit on $M_{W^{\prime}}$ from $K_{0}-\bar{K}_{0}$ mixing studies. [**Acknowledgments**]{}\ The author would like to thank B. Safarzadeh for reading the manuscript.\ [100]{} N. Arkani-Hamed, A. G.
Cohen and H. Georgi, Phys. Lett.  B [**513**]{}, 232 (2001) \[arXiv:hep-ph/0105239\]. D. E. Kaplan and M. Schmaltz, JHEP [**0310**]{}, 039 (2003) \[arXiv:hep-ph/0302049\]. R. W. Robinett and J. L. Rosner, Phys. Rev.  D [**26**]{}, 2396 (1982). G. D. Kribs, arXiv:hep-ph/0605325. R. N. Mohapatra and J. C. Pati, Phys. Rev.  D [**11**]{}, 566 (1975). R. N. Mohapatra and J. C. Pati, Phys. Rev.  D [**11**]{}, 2558 (1975). G. Senjanovic and R. N. Mohapatra, Phys. Rev.  D [**12**]{}, 1502 (1975). G. L. Bayatian [*et al.*]{} \[CMS Collaboration\], J. Phys. G [**34**]{}, 995 (2007). T. G. Rizzo,JHEP [**0705**]{}, 037 (2007) \[arXiv:0704.0235 \[hep-ph\]\]. E. Boos, V. Bunichev, L. Dudko and M. Perfilov, Phys. Lett.  B [**655**]{}, 245 (2007) \[arXiv:hep-ph/0610080\]. Y. Zhang, H. An, X. Ji and R. N. Mohapatra, Phys. Rev.  D [**76**]{} 091301 (2007) \[arXiv:0704.1662 \[hep-ph\]\]. C. Amsler [*et al.*]{}, Phys. Lett. B [**667**]{}, 1 (2008). V. M. Abazov [*et al.*]{}, Phys. Rev. Lett.  [**100**]{}, 211803 (2008) \[arXiv:0803.3256 \[hep-ex\]\]. V. M. Abazov [*et al.*]{}, Phys. Rev. Lett.  [**100**]{}, 031804 (2008) \[arXiv:0710.2966 \[hep-ex\]\]. M. Beneke [*et al.*]{}, arXiv:hep-ph/0003033. H. Novales-Sanchez and J. J. Toscano, Phys. Rev.  D [**77**]{}, 015011 (2008) \[arXiv:0712.2008 \[hep-ph\]\]. A. Abd El-Hady and G. Valencia, Phys. Lett.  B [**414**]{}, 173 (1997) \[arXiv:hep-ph/9704300\]. F. Larios, M. A. Perez and C. P. Yuan, Phys. Lett.  B [**457**]{}, 334 (1999) \[arXiv:hep-ph/9903394\]. B. Grzadkowski and M. Misiak, Phys. Rev.  D [**78**]{}, 077501 (2008) \[arXiv:0802.1413 \[hep-ph\]\]. D. Atwood, S. Bar-Shalom, G. Eilam and A. Soni, Phys. Rept.  [**347**]{}, 1 (2001) \[arXiv:hep-ph/0006032\]. M. Pospelov and A. Ritz, Annals Phys.  [**318**]{}, 119 (2005) \[arXiv:hep-ph/0504231\]. [^1]: Email: mojtaba@mail.ipm.ir
--- abstract: 'We present an investigation of the Residual Free Bubble finite element method for a class of multiscale nonlinear elliptic partial differential equations. After proposing a nonlinear version of the method, we address fundamental questions such as existence and uniqueness of solutions. We also obtain a best approximation result, and investigate possible linearizations that generate different versions of the method. As far as we are aware, this is the first time that an analysis of the nonlinear Residual Free Bubble method is considered.' address: - 'Departamento de Matemática, Universidade Federal do Paraná, Curitiba - PR, Brazil' - ' Laboratório Nacional de Computação Científica, Petrópolis - RJ, Brazil' - ' Fundação Getúlio Vargas, Rio de Janeiro - RJ, Brazil' author: - Manuel Barreda - 'Alexandre L. Madureira' date: 'March 24, 2017' title: 'A Residual-Free Bubble Formulation for nonlinear elliptic problems with oscillatory coefficients' --- [^1] \[section\] \[theorem\][Lemma]{} \[theorem\][Corollary]{} \[theorem\][Proposition]{} \[theorem\][Remark]{}

Introduction {#s:introdução}
============

Important physics and engineering problems are nonlinear and of multiscale nature. Examples include certain models for flow in porous media and mechanics of heterogeneous materials. We consider in this work nonlinear elliptic problems of the form $$\label{e:pnl} -\div[a_\epsilon(x,{u_{\epsilon}},\grad{u_{\epsilon}})]=f\quad\text{in }\Omega,\qquad{u_{\epsilon}}=0\quad\text{on }\partial\Omega,$$ where $\Omega\subset\mathbb{R}^2$ is a polygonal domain, $$a_\epsilon(x,{u_{\epsilon}},\grad{u_{\epsilon}})=\alpha_\epsilon(x)b({u_{\epsilon}})\grad{u_{\epsilon}},$$ and $\alpha_\epsilon$ might have an oscillatory nature. We describe further restrictions on the coefficients later on. Problems like  are often dealt with using homogenization techniques, even in the linear case.
However, this is not always convenient due to restrictive hypotheses on the coefficients, like periodicity or certain probabilistic distributions. Thus, even in the linear situation, several authors developed methods that can compute approximations without relying on homogenization. It is well-known that standard Galerkin methods perform poorly for such equations, linear or nonlinear, under the presence of oscillatory coefficients [@brezzi; @MR2477579], and there is a strong interest in developing numerical schemes that are efficient for problems of multiscale nature. Important methods include the *Generalized Finite Element Method* (GFEM) [@MR701094], the *Discontinuous Enrichment Method* (DEM) [@MR1870426], the *Heterogeneous Multiscale Method* (HMM) [@EW-BE], and the *Multiscale Hybrid Mixed Method* (MHM) [@MR3143841; @MR3066201; @madureira]. We concentrate our literature review on the *Residual-Free Bubble Method* (RFB) [@MR1222297; @B-R; @brezzi; @MR1159592; @MR2203943; @MR2142535] and the *Multiscale Finite Element Method* (MsFEM) [@TH-HW; @MR2477579; @E-H-G; @MR1740386; @MR1642758; @efenpank; @MR2448695], since they are closer to our own method. In all of the above methodologies, the goal is to derive numerical approximations for the multiscale solution using a mesh that is coarser than the characteristic length $\epsilon$ of the oscillations (in opposition to [@SV1; @SV2]). The idea behind the MsFEM is to incorporate local information of the underlying problem into the basis functions of the finite element spaces, capturing microscale aspects. Its analysis was first carried out for linear problems, assuming that the coefficients of the equations are periodic [@MR1642758; @MR1740386]. Later, the non-periodic case was also considered [@MR2982460].
An extension for nonlinear problems appears in [@E-H-G], for pseudo-monotone operators, and the authors show that, under periodicity hypotheses, the numerical solution converges towards the homogenized solution. They also determine the convergence rate if the flux depends only on the gradient of the solution. Further variations of the method were considered in [@chen; @CH-Y]. The MHM method shares some of the characteristics of the MsFEM, but so far it has been considered only for linear problems. The HMM approach for linear and nonlinear problems differs considerably but, as with the MsFEM, the method is efficient in terms of capturing the macroscale behavior of multiscale problems. See [@EW-BE; @M-Y] for a description of the method, and [@MR2114818] for an analysis of the method in linear and nonlinear cases. The Residual Free Bubble (RFB) formulation [@MR1222297; @MR1159592; @B-R] was first considered with advection-reaction-diffusion problems in mind. The use of RFB for problems with oscillatory coefficients was already suggested in [@brezzi], and investigated in [@SG] for the linear case. See [@MR2901822] for a clear description of how the MsFEM and RFB relate. In the present work, we extend the RFB formulation to a class of nonlinear problems with oscillatory coefficients, as in . Such a model is a natural extension of the linear problem with oscillatory coefficients, and of the nonlinear problems as considered in [@D-D], without oscillatory coefficients. We remark that the RFB was previously considered only in the linear setting, with one exception: [@ramalho] considers numerical experiments with RFB for a shallow-water problem in an ad hoc manner.
Assume that $\alpha_\epsilon(.): \Omega \rightarrow\mathbb{R}$ is measurable, and that there exist positive constants $\alpha_0$ and $\alpha_1$ such that $$\label{limitação.hipótesesH1} 0<\alpha_0\leq\alpha_{\epsilon}(x)\leq\alpha_1\quad\text{almost everywhere in }\Omega.$$ Assume also that $b:\mathbb{R}\rightarrow\mathbb{R}$ is continuous and belongs to $W^{2,\infty}(\R)$, and that there exists a constant $b_0$ such that $$\label{limitação.hipótesesH2} 0<b_0\leq b(t) \quad\text{for all }t\in \mathbb{R}.$$ Note that uniform coercivity follows from the above hypotheses, i.e., for almost every $x\in\Omega$, and all $t\in\mathbb{R}$ and $\bxi\in\mathbb{R}^2$, $$\alpha_\epsilon (x)b(t)\bxi\cdot\bxi\geq \alpha_0 b_0 \|\bxi\|^2.$$ Rewriting  in its variational formulation, we have that ${u_{\epsilon}}\in H_0^1(\Omega)$ solves $$\label{s:varnonlin.ms} a({u_{\epsilon}},v)=(f,v)\quad\text{for all }v\in H^1_0(\Omega),$$ where $$\label{s:formanonlinear.ms} a(\psi,\phi)=\int_\Omega\alpha_\epsilon(x)b(\psi)\grad\psi\cdot\grad\phi\,dx.$$ Throughout this paper, we denote by $L^2(\Omega)$ the space of square integrable functions, by $W^{q,p}(\Omega)$, $H_0^1(\Omega)$, $H^1(\Omega)$ the usual Sobolev spaces, and by $H^{-1}(\Omega)$ the dual space of $H_0^1(\Omega)$ [@brezis; @evans]. By $C$ we denote a generic constant that might have different values at different locations, but that does not depend on $h$ or $\epsilon$. The outline of the article is as follows. After the introductory Section \[s:introdução\], we describe the RFB method in Section \[s:rfb\], and discuss existence and uniqueness of solutions in Section \[s:eus\]. A best approximation result is obtained in Section \[s:melaprox\], and possible linearizations are discussed in Section \[s:lineariz\]. 
The Residual Free Bubble Method {#s:rfb} =============================== Let $\T_h=\{K\}$ be a partition of $\Omega$ into finite elements $K$, and, associated to $\T_h$, the subspace $V_h\subset H_0^1(\Omega)$ of piecewise polynomials. The classical finite element Galerkin method seeks a solution of  within $V_h$. The RFB method seeks a solution within the enlarged, or enriched, space $V_r=V_h\oplus V_b$, where the bubble space is given by $$V_b=\{v \in H_0^1(\Omega):\, v|_K \in H_0^1(K) \text{ for all }K \in\T_h\}.$$ That means that we seek $u_r\in V_r$ such that $$\label{e:rfb} \int_\Omega\alpha_{\epsilon}(x)b(u_r)\grad(u_r)\cdot\grad v_r\,dx=\int_\Omega fv_r\,dx\quad\text{for all }v_r\in V_r.$$ This is equivalent to searching for $u_r=u_h+u_b$, where $u_h\in V_h$ and $u_b\in V_b$ solve $$\label{e:rfb2} \begin{gathered} \int_\Omega\alpha_{\epsilon}(x)b(u_h+u_b)\grad(u_h+u_b)\cdot\grad v_h\,dx =\int_\Omega fv_h\,dx\quad\text{for all }v_h\in V_h, \\ -\div[\alpha_{\epsilon}(x)b(u_h+u_b)\grad(u_h+u_b)]=f \quad\text{in }K,\text{ for all }K\in\T_h. \end{gathered}$$ The second equation in the above system is obtained, for each fixed element $K$, by considering $v_r|_K\in H_0^1(K)$ arbitrary and vanishing outside $K$; an integration by parts then yields the strong equation of . The coupled system  defines the *Residual Free Bubble Method*. The use of bubbles allows the *localization* of the problems of the second equation of , while the first equation has a global character. Such formulation induces a two-level discretization, where the global problem given by the first equation in  should be discretized by a coarse mesh, and the local problems given by the second equation of  should be solved in a fine mesh. Thus, in terms of computational cost, the first equation is global but posed in a coarse mesh, and the second equation requires refined meshes, but the local problems are independent and can be solved in parallel. 
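The two-level structure just described can be sketched in a few lines for the linear 1D case ($b\equiv1$). Everything below is illustrative and not from the paper: the coefficient, the meshes, and the load $f=1$ are ad hoc choices. Fine-grid unknowns interior to each coarse element play the role of the bubble space $V_b$; eliminating them through a Schur complement leaves a small global system on the coarse nodes:

```python
import numpy as np

def p1_system(a, x):
    """P1 stiffness and load for -(a u')' = 1, midpoint coefficient rule."""
    n = len(x) - 1
    A, F = np.zeros((n + 1, n + 1)), np.zeros(n + 1)
    for k in range(n):
        h = x[k + 1] - x[k]
        am = a(0.5 * (x[k] + x[k + 1]))
        A[k:k + 2, k:k + 2] += (am / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        F[k:k + 2] += 0.5 * h                      # load for f = 1
    return A, F

eps = 0.01                                         # small-scale length
a = lambda x: 2.0 + np.sin(2.0 * np.pi * x / eps)  # oscillatory coefficient

Nc, m = 10, 50                                     # coarse elements, fine cells each
xf = np.linspace(0.0, 1.0, Nc * m + 1)
A, F = p1_system(a, xf)
idx = np.arange(1, Nc * m)                         # drop Dirichlet boundary dofs
A, F = A[np.ix_(idx, idx)], F[idx]

# "bubble" dofs: fine nodes interior to a coarse element; "coarse" dofs: coarse nodes
coarse = np.array([i for i, j in enumerate(idx) if j % m == 0])
bub = np.array([i for i, j in enumerate(idx) if j % m != 0])

# static condensation: eliminate the local (bubble) block via a Schur complement
Acc, Acb = A[np.ix_(coarse, coarse)], A[np.ix_(coarse, bub)]
Abb, Fb = A[np.ix_(bub, bub)], F[bub]
S = Acc - Acb @ np.linalg.solve(Abb, Acb.T)
rhs = F[coarse] - Acb @ np.linalg.solve(Abb, Fb)
u_coarse = np.linalg.solve(S, rhs)

# reference: one global fine solve; the condensed values match at coarse nodes
u_fine = np.linalg.solve(A, F)
print(np.max(np.abs(u_coarse - u_fine[coarse])))
```

In this sketch $A_{bb}$ is block diagonal, one block per coarse element, so the bubble eliminations are independent local solves; in 1D the condensed solution coincides with the fine Galerkin solution at the coarse nodes, up to round-off.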
Note that for linear problems, it is possible to perform static condensation, “eliminating” the bubble part in the final formulation, which is then modified and posed only on the polynomial space [@MR1222297; @brezzi; @MR1159592; @B-R; @F-R; @SG]. See the remark below. \[obslinear\] If $\L$ denotes a linear differential operator, and $a(\cdot,\cdot)$ the associated bilinear form, then it results from the RFB that $u_b\in H_0^1(K)$ solves $$\L u_b=-\L u_h+f \quad \text{in }K.$$ Denoting by $\L_K^{-1}: H^{-1}(K)\rightarrow H_0^1(K)$ the local solution operator, we gather that $u_b|_K=\L_K^{-1}(f-\L u_h)$. Thus $u_h \in V_h$ solves $$a(u_h,v_h)+a\Bigl(-\sum_{K\in\T_h}\L_K^{-1}\L u_h,v_h\Bigr) =(f,v_h)-a\Bigl(\sum_{K\in\T_h}\L_K^{-1}f,v_h\Bigr)\quad\text{for all } v_h\in V_h.$$ The formulation above is a perturbed Galerkin formulation. The perturbation aims to capture the microscale effects neglected by coarse meshes. Existence and Uniqueness of Solutions {#s:eus} ===================================== In this section we prove existence and uniqueness results for the continuous problem and for the RFB formulation. We adapt here ideas present in [@boccardo-murat; @artola-duvaut]. We shall make use of the following version of the Schauder Fixed-Point Theorem [@diaz-naulin]. \[t:sfp\] Let $E$ be a normed space, $A\subset E$ a non-empty convex set, and $C\subset A$ compact. Then, every continuous mapping $T:A\rightarrow C$ has at least one fixed point. The following result guarantees existence and uniqueness of solutions for the variational problem . \[t:eu\] Let $\alpha_\epsilon(.)$ and $b(.)$ be such that  and  hold. Then, given $f\in L^2(\Omega)$, the variational problem  has one and only one solution in $H^1_0(\Omega)$. Our proof of Theorem \[t:eu\] is based on the lemmata that follow. 
We first observe that  suggests the definition $$\label{operador_ponto_fixo_cont} T^\epsilon:\,L^2(\Omega)\rightarrow H^1_0(\Omega),$$ such that, for every $w\in L^2(\Omega)$, the function $w^{\epsilon}=T^{\epsilon}(w)\in H^1_0(\Omega)$ solves $$\int_\Omega\alpha_\epsilon(x)b(w)\grad w^\epsilon\cdot\grad v\,dx =\int_\Omega fv\,dx\quad\text{for all } v \in H^1_0(\Omega).$$ The operator $T^\epsilon$ is clearly well-defined since, from the hypotheses imposed on $\alpha_{\epsilon}$ and $b$, the bilinear form above satisfies the hypotheses of the Lax-Milgram Lemma. \[lem3.1\] Under the hypotheses of Theorem \[t:eu\], the operator $T^\epsilon$ given by  is continuous. Let $\{w_m\}$ be a sequence in $L^2(\Omega)$ such that $w_m\rightarrow w$ strongly in $L^2(\Omega)$. Consider $T^\epsilon (w_m)=w^\epsilon_m $ and $T^\epsilon(w)=w^{\epsilon}$. Then, $$\begin{gathered} \int_{\Omega}\alpha_{\epsilon}(x)b(w_m)\grad w^\epsilon_m\cdot\grad v\,dx =\int_{\Omega} fv\,dx\quad\text{for all } v \in H^1_0(\Omega), \\ \int_\Omega\alpha_{\epsilon}(x)b(w)\grad w^{\epsilon}\cdot\grad v\,dx =\int_{\Omega} fv\,dx\quad\text{for all } v \in H^1_0(\Omega).\end{gathered}$$ Subtracting both equations, it follows that $$\begin{gathered} \int_{\Omega}\alpha_{\epsilon}(x)b(w_m)\grad w_m^{\epsilon}\cdot\grad v\,dx - \int_{\Omega}\alpha_{\epsilon}(x)b(w)\grad w^\epsilon\cdot\grad v\,dx=0\quad\text{for all }v \in H^1_0(\Omega). 
\end{gathered}$$ Adding and subtracting $w^\epsilon$, we gather that $$\int_\Omega\alpha_{\epsilon}(x)b(w_m)\grad(w^\epsilon_m-w^\epsilon+w^\epsilon)\cdot\grad v\,dx =\int_\Omega\alpha_{\epsilon}(x)b(w)\grad w^\epsilon\cdot\grad v\,dx \quad\text{for all } v\in H^1_0(\Omega).$$ In an equivalent form, for each $v\in H^1_0(\Omega)$, $$\label{e:estwmw} \int_{\Omega}\alpha_{\epsilon}(x)b(w_m)\grad (w^\epsilon_m-w^\epsilon)\cdot\grad v\,dx =\int_{\Omega}\alpha_{\epsilon}(x)(b(w)-b(w_m))\grad w^{\epsilon}\cdot\grad v\,dx.$$ In particular, for $v=w^\epsilon_m-w^\epsilon$ it follows that $$\begin{gathered} \alpha_0 b_0\|\grad(w^{\epsilon}_m-w^{\epsilon})\|_{0,\Omega}^2 \leq\int_{\Omega}\alpha_{\epsilon}(x)b(w_m)\grad(w^\epsilon_m-w^\epsilon)\cdot\grad(w^\epsilon_m-w^\epsilon)\,dx \\ =\int_{\Omega}\alpha_{\epsilon}(x)(b(w)-b(w_m))\grad w^{\epsilon}\cdot\grad(w^\epsilon_m-w^\epsilon)\,dx \\ \leq\alpha_1\bigl\|[b(w)-b(w_m)]\grad w^{\epsilon}\bigr\|_{0,\Omega}\|\grad(w^{\epsilon}_m-w^{\epsilon})\|_{0,\Omega}.\end{gathered}$$ Thus, $\|\grad(w^{\epsilon}_m-w^{\epsilon})\|_{0,\Omega}\leq C\bigl\|[b(w)-b(w_m)]\grad w^{\epsilon}\bigr\|_{0,\Omega}$. Now, since $b(w)-b(w_m)\to0$ in measure and $|\grad w^{\epsilon}|^2\in L^1(\Omega)$, we conclude [@artola-duvaut] that $\bigl\|[b(w)-b(w_m)]\grad w^{\epsilon}\bigr\|_{0,\Omega}\to0$. Thus $w^{\epsilon}_m\to w^{\epsilon}$ strongly in $H^1(\Omega)$. \[lem3.2\] Let $F\in C^1(\mathbb{R})$ be such that $F(0)=0$ and $|F'(t)|\leq L$ for all $t\in\mathbb{R}$. Let $\Omega\subset\mathbb{R}^d$ be open, and let $1\leq p<\infty$. Then - if $v\in W^{1,p}(\Omega)$, then $F \circ v \in W^{1,p}(\Omega)$ and $\partial(F \circ v)/\partial x_i= F'(v)\partial v/\partial x_i$, for $1\leq i\leq d$ - if $v\in W^{1,p}_{0}(\Omega)$, then $F \circ v\in W^{1,p}_{0}(\Omega)$. [@brezis]\*[Proposition 9.5]{}. \[l:contpf\] Under the hypotheses of Theorem \[t:eu\], uniqueness of solutions for  holds. 
Let, for $t\in\mathbb{R}$, $$\tilde b(t)=\int_0^t\,b(s)ds.$$ Since $b\in C^0(\mathbb{R})$, then $\tilde b\in C^1(\mathbb{R})$. Moreover, $\tilde b'=b\ge b_0>0$, and then $\tilde b$ is a bijection in $\mathbb{R}$. Consider the Kirchhoff transform $U_{\epsilon}=\tilde b({u_{\epsilon}})$. From Lemma \[lem3.2\] we gather that $$\grad U_{\epsilon}= b({u_{\epsilon}})\grad{u_{\epsilon}}$$ and $U_\epsilon\in H^1_0(\Omega)$. Thus,  is equivalent to the linear problem $$\label{c:probmodelequivalin.ms} \begin{gathered} -\div[\alpha_\epsilon(x)\grad U_\epsilon]=f\quad\text{in }\Omega, \\ U_\epsilon=0\quad\text{on }\partial\Omega, \end{gathered}$$ that is, ${u_{\epsilon}}$ solves  if and only if $U_\epsilon$ solves . From the Lax-Milgram Lemma, there is at most one solution for , and therefore, there is also at most one solution for . Indeed, if there were two solutions for , we would be able to construct also two solutions for . We now prove Theorem \[t:eu\]. Consider in Theorem \[t:sfp\] that $A=E=L^2(\Omega)$, $C= H^1_0(\Omega)$, and the operator $T^{\epsilon}$ defined by . Then, from Lemma \[lem3.1\] we conclude that $T^{\epsilon}$ has a fixed point. Uniqueness follows from Lemma \[l:contpf\]. To show existence of the RFB solution, it is enough to pursue the same ideas just presented, but now considering the operator $$T_h^\epsilon:\,L^2(\Omega)\rightarrow V_r,$$ where, for a given $w\in L^2(\Omega)$, we define $w_r^{\epsilon}=T_h^\epsilon(w)$ such that $$\int_\Omega\alpha_\epsilon(x)b(w)\grad w_r^\epsilon\cdot\grad v\,dx =\int_\Omega fv\,dx\quad\text{for all } v \in V_r.$$ As in Lemma \[lem3.1\], the operator $T_h^\epsilon$ is continuous. The proof is basically the same, replacing $H_0^1(\Omega)$ by $V_r$. In [@efenpank] the existence and uniqueness result for solutions for the MsFEM requires monotonicity. Such results were obtained in [@xu] without monotonicity assumptions, but under the condition that the discrete and exact solutions are close. We follow the same approach. 
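The Kirchhoff transform argument above can be checked numerically. The following sketch uses entirely hypothetical data, not from the paper: $\alpha_\epsilon\equiv1$, $b(t)=2+\sin t$ (so $b\ge b_0=1$), and $f=1$ on $(0,1)$, for which the linear problem has the closed-form solution $U(x)=x(1-x)/2$. It inverts $\tilde b$ by bisection and verifies, in a finite-difference sense, that $u=\tilde b^{-1}(U)$ satisfies both the flux identity $\grad U=b(u)\grad u$ and the nonlinear equation:

```python
import numpy as np

b = lambda t: 2.0 + np.sin(t)                  # b >= b0 = 1 > 0
btilde = lambda t: 2.0 * t + 1.0 - np.cos(t)   # antiderivative of b, btilde(0) = 0

def btilde_inv(y, lo=-10.0, hi=10.0):
    # bisection: btilde is strictly increasing because btilde' = b > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if btilde(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# linear problem for U: -U'' = 1 on (0,1), U(0) = U(1) = 0  =>  U(x) = x(1-x)/2
x = np.linspace(0.0, 1.0, 2001)
U = 0.5 * x * (1.0 - x)
u = np.array([btilde_inv(Ui) for Ui in U])     # Kirchhoff inverse, u = btilde^{-1}(U)

# flux identity: b(u) u' should equal U' = 1/2 - x; residual of -(b(u) u')' - 1
flux = b(u) * np.gradient(u, x)
residual = -np.gradient(flux, x) - 1.0
print(np.max(np.abs((flux - (0.5 - x))[1:-1])), np.max(np.abs(residual[5:-5])))
```

The checks are restricted to interior grid points, where `np.gradient` uses second-order central differences.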
To establish a uniqueness result, let $\L u=-\div[\alpha_{\epsilon}(x)b(u)\grad u]$, and consider its Fréchet derivative at $u$, defined by $$\L'(u)v=-\div\{\alpha_{\epsilon}(x)\grad[b(u)v]\}=-\div\{\alpha_{\epsilon}(x)[b(u)\grad v+b'(u)v\grad u]\}.$$ Consider also  and $$a'(u;v,\chi)=\int_\O\alpha_{\epsilon}\grad[b(u)v]\cdot\grad\chi=\int_\O\alpha_{\epsilon}[b(u)\grad v\cdot\grad\chi+b'(u)v\grad u\cdot\grad\chi],$$ induced by $\L$ and $\L'$ respectively. From [@PR]\*[Theorem 6 and Remark 6]{}, it follows that $\L'(u)$ defines an isomorphism from $H_0^1(\O)$ onto $H^{-1}(\O)$. Note that if $\chi=b(u)v$, then $$\sup_{\chi\in H_0^1(\O)}\frac{a'(u;v,\chi)}{\|\chi\|_1}\ge\frac{\int_\O\alpha_{\epsilon}|\grad[b(u)v]|^2}{\|b(u)v\|_1} \ge\alpha_0\|b(u)v\|_1\ge c(u)\|v\|_1.$$ Note also that $$|b(u)v|_1=\|b(u)\grad v + b'(u)v\grad u\|_0 \geq\|b(u)\grad v\|_0 - \| b'(u)v\grad u\|_0,$$ and, on the other hand, from Poincaré’s inequality, $$\| b'(u)v\grad u\|_0 \leq \|\grad b(u)\|_{L^\infty(\O)}\|v\|_0 \leq C_\O\|\grad b(u)\|_{L^\infty(\O)}\|\grad v\|_0.$$ It is then enough to consider $$c(u)\ge\alpha_0(b_0-C_\O\|\grad b(u)\|_{L^\infty(\O)}).$$ Thus, for $\|u\|_{1,\infty}$ sufficiently small, $c(u)$ is positive. In what follows, we consider the Galerkin projection $P_h:H_0^1(\O)\to V_r$ with respect to the bilinear form $\int_\O{\alpha_{\epsilon}(x)}b({u_{\epsilon}})\grad v\cdot\grad\chi\,dx$. Assume also that $$\|\chi-P_h\chi\|_{L^2(\O)}\le \hat c(h)\|\chi\|_{H^1(\O)},$$ where $\hat c(h)\to0$ independently of ${\epsilon}$. This holds, for instance, if $\alpha(\cdot)$ is ${\epsilon}$-periodic [@CH-Y]. Consider the following result. Let $u$ and $\tilde u\in H^1(\O)$. Then $$\label{e:coerc} \bar c(u)\|v_h\|_1\le\sup_{\chi_h\in V_h}\frac{a'(\tilde u;v_h,\chi_h)}{\|\chi_h\|_1},$$ where $\bar c(u)=c(u)-\hat c(h)-\|u-\tilde u\|_{1,\infty}\|b\|_{2,\infty}\|\alpha\|_{0,\infty}\|u\|_{1,\infty}$. 
To show , note that $$\begin{gathered} a'(\tilde u;v_h,\chi_h) =\int_\O\alpha_{\epsilon}[b(\tilde u)\grad v_h\cdot\grad\chi_h+b'(\tilde u)v_h\grad\tilde u\cdot\grad\chi_h] \\ =\int_\O\alpha_{\epsilon}\{b(u)\grad v_h\cdot\grad\chi_h +b'(u)v_h\grad u\cdot\grad\chi_h\} \\ +\int_\O\alpha_{\epsilon}\{[b(\tilde u)-b(u)]\grad v_h\cdot\grad\chi_h +[b'(\tilde u)\grad\tilde u-b'(u)\grad u]v_h\cdot\grad\chi_h\} \\ \ge a'(u;v_h,\chi_h)-\delta\|v_h\|_1\|\chi_h\|_1\end{gathered}$$ where $$\delta=\|\alpha\|_{0,\infty}\|b\|_{2,\infty}\|\tilde u\|_{1,\infty}\|\tilde u-u\|_{1,\infty}.$$ Observe that, from [@xu]\*[Lemma 2.2]{}, $$a'(u;v_h,P_h\chi)\ge a'(u;v_h,\chi)-\hat c(h)\|v_h\|_1\|\chi\|_1$$ for all $\chi\in H_0^1(\O)$. Then $$\begin{gathered} \sup_{\chi_h\in V_r}\frac{a'(\tilde u;v_h,\chi_h)}{\|\chi_h\|_1} =\sup_{\chi\in H_0^1(\O)}\frac{a'(\tilde u;v_h,P_h\chi)}{\|P_h\chi\|_1} \\ \ge c\sup_{\chi\in H_0^1(\O)}\frac{a'(u;v_h,P_h\chi)}{\|\chi\|_1}-\delta\|v_h\|_1 \ge\sup_{\chi\in H_0^1(\O)}\frac{a'(u;v_h,\chi)}{\|\chi\|_1}-(\hat c(h)+\delta)\|v_h\|_1 \\ \ge[c(u)-\hat c(h)-\delta]\|v_h\|_1\ge\bar c(u)\|v_h\|_1\end{gathered}$$ for $\delta$ and $h$ sufficiently small. Above, we use the inequality $\|P_h\chi\|_1\le c\|\chi\|_1$. Let $u_h$ and $\tilde u_h$ be two solutions for the discrete problem such that $$\|u-u_h\|_{1,\infty}+\|u-\tilde u_h\|_{1,\infty}\le\eta,$$ where $\eta$ is small enough. Then $u_h=\tilde u_h$. Note that $$\|u-u_h-t(\tilde u_h-u_h)\|\le(1-t)\|u-u_h\|+t\|u-\tilde u_h\|\le\eta,$$ for all $t\in[0,1]$. 
Let $\eta$ be small enough such that $$\bar c(u)=c(u)-\hat c(h)-\eta\|b\|_{2,\infty}\|\alpha\|_{0,\infty}\|u\|_{1,\infty}>0.$$ Then $$\begin{gathered} \bar c(u)\|u_h-\tilde u_h\|_1=\bar c(u)\int_0^1\|u_h-\tilde u_h\|_1\,dt \\ \le \int_0^1\sup_{\chi_h\in V_h}\frac{a'(u_h+t(\tilde u_h-u_h);u_h-\tilde u_h,\chi_h)}{\|\chi_h\|_1}\,dt \\ \le\sup_{\chi_h\in V_h}\frac{\int_0^1a'(u_h+t(\tilde u_h-u_h);u_h-\tilde u_h,\chi_h)\,dt}{\|\chi_h\|_1} \\ =\sup_{\chi_h\in V_h}\frac{\int_0^1\frac d{dt}a(u_h+t(\tilde u_h-u_h),\chi_h)\,dt}{\|\chi_h\|_1}=0.\end{gathered}$$ Since $\bar c(u)>0$, it follows that $u_h=\tilde u_h$. Best approximation result {#s:melaprox} ========================= We establish here a Céa’s Lemma type result for the Residual Free Bubble Method. The strategy to obtain such a result is to consider a linearization $A(u_r;\cdot,\cdot)$ of  centered at the “enriched solution” $u_r$. We then consider the following linear problem: find $w\in H_0^1(\O)$ such that $$A(u_r ;w,v)=(f,v)\quad\text{for all } v \in H^1_0(\Omega),$$ where $$A(u_r;w,v)=\int_\O{\alpha_{\epsilon}(x)}\,b(u_r )\grad w\cdot\grad v\,dx.$$ Note that $A(u_r;\cdot,\cdot)$ is coercive in $H_0^1(\O)$, since $$\label{e:coercA} A(u_r;w,w)=\int_\O{\alpha_{\epsilon}(x)}\,b(u_r )|\grad w|^2\,dx \ge C_\O\alpha_0b_0\|w\|_{1,\O}^2,$$ where $C_\O$ is the Poincaré constant. We establish first the following identity. 
Given $v_r \in V_r$, the following identity holds $$\label{s:identidade.cea} A(u_r; {u_{\epsilon}}-u_r,v_r)=A(u_{\epsilon}; {u_{\epsilon}}-u_r,v_r)= \int_{\Omega}\alpha_{\epsilon}(x)[b(u_r)- b({u_{\epsilon}})] \grad {u_{\epsilon}}\cdot\grad v_r\, dx.$$ Indeed, $$\begin{gathered} A(u_r; {u_{\epsilon}}-u_r,v_r)=\int_{\Omega}\alpha_{\epsilon}(x)b(u_r)\grad {u_{\epsilon}}\cdot\grad v_r\, dx -\int_{\Omega}\alpha_{\epsilon}(x)b(u_r) \grad u_r\cdot\grad v_r\,dx \\ = \int_{\Omega}\alpha_{\epsilon}(x)b(u_r)\grad {u_{\epsilon}}\cdot\grad v_r \, dx - \int_{\Omega} f v_r\,dx \\ = \int_{\Omega}\alpha_{\epsilon}(x)b(u_r)\grad {u_{\epsilon}}\cdot\grad v_r \, dx -\int_{\Omega}\alpha_{\epsilon}(x)b({u_{\epsilon}})\grad {u_{\epsilon}}\cdot\grad v_r \, dx \\ = \int_\Omega\alpha_{\epsilon}(x)[b(u_r)-b({u_{\epsilon}})]\grad {u_{\epsilon}}\cdot\grad v_r \, dx.\end{gathered}$$ The proof of the second identity is similar. We end the present section establishing a best approximation result in the enriched space $V_r$. This is a Céa’s Lemma type result for the multiscale nonlinear problem [@BS]. An advantage of the estimate is that it requires less regularity of $b(\cdot)$ than in [@D-D], cf. also Remark \[r:dd\]. We often use Hölder’s inequality $$\int_\O fgh\,dx\le\|f\|_{L^3}\|g\|_{L^6}\|h\|_{L^2} \le\|f\|_{0,\O}^{1/2}\|f\|_{1,\O}^{1/2}\|g\|_{L^6}\|h\|_{L^2},$$ where we use also the continuous embedding $H^1(\O)\hookrightarrow L^6(\O)$ (for dimensions smaller than or equal to three). \[l:cea\] Let $\alpha_\epsilon(.)$ and $b(.)$ satisfy  and , respectively. Then, for ${u_{\epsilon}}$ sufficiently small in $W^{1,6}(\Omega)$, it follows that $$\label{e:cea} \|\grad({u_{\epsilon}}- u_r)\|_{0,\Omega} \leq C \|\grad({u_{\epsilon}}-w_r)\|_{0, \Omega}\quad\text{for all } w_r \in V_r.$$ Let $w_r\in V_r$. 
To establish , compute $$\begin{gathered} \label{eq.cea} A(u_r;{u_{\epsilon}}-u_r,{u_{\epsilon}}-u_r) =A(u_r;{u_{\epsilon}}- u_r,{u_{\epsilon}}- w_r) + A(u_r;{u_{\epsilon}}- u_r,w_r - u_r) \\ =\int_\O\alpha_{\epsilon}\,b(u_r)\grad(u^{\epsilon}-u_r)\cdot\grad(u^{\epsilon}-w_r)\,dx + \int_\O\alpha_{\epsilon}\,(b(u_r)-b({u_{\epsilon}}))\grad u^{\epsilon}\cdot\grad(w_r-u_r)\,dx\end{gathered}$$ using . Denote by $I_1$, $I_2$ the first and second terms of . We now estimate each of these terms $$I_1=\int_\Omega\alpha_{\epsilon}\,b(u_r)\grad(u^{\epsilon}-u_r)\cdot\grad({u_{\epsilon}}-w_r)\,dx \leq c_1\|\grad({u_{\epsilon}}-u_r)\|_{0,\Omega} \|\grad({u_{\epsilon}}-w_r)\|_{0,\Omega},$$ where $c_1:=\alpha_1\|b\|_\infty$. We estimate now $I_2$: $$\begin{gathered} I_2=\int_\Omega\alpha_\epsilon\,(b(u_r)-b({u_{\epsilon}}))\grad {u_{\epsilon}}.\grad(w_r-u_r)\,dx \\ \leq\alpha_1\|b'\|_{\infty}\int_\Omega|u_r-{u_{\epsilon}}|\,|\grad {u_{\epsilon}}| \,|\grad(w_r-u_r)|\,dx \\ \leq\alpha_1\|b'\|_{\infty}\|{u_r}-{u_{\epsilon}}\|_{L^2(\O)}^{1/2}\|{u_r}-{u_{\epsilon}}\|_{H^1(\O)}^{1/2} \|\grad{u_{\epsilon}}\|_{L^6(\O)}\|w_r-u_r\|_{1,\O} \\ \leq\alpha_1\|b'\|_{\infty}\|\grad {u_{\epsilon}}\|_{L^6(\O)}\|{u_{\epsilon}}-{u_r}\|_{H^1(\O)} \bigl[\|{u_{\epsilon}}-{u_r}\|_{1,\O}+\|{u_{\epsilon}}-w_r\|_{1,\O}\bigr].\end{gathered}$$ From , there exists $\beta>0$, independent of $\epsilon$, such that $$\begin{gathered} A(u_r;{u_{\epsilon}}-u_r,{u_{\epsilon}}-u_r)\geq\beta\|\grad(u^{\epsilon}-u_r)\|^2_{0,\O}.\end{gathered}$$ Moreover, from the estimates for $I_1$, $I_2$ in , we gather that $$\begin{gathered} \beta\,\|\grad({u_{\epsilon}}-u_r)\|^2_{0,\Omega}\leq c_1\|\grad({u_{\epsilon}}-u_r)\|_{0,\Omega}\|\grad({u_{\epsilon}}-w_r)\|_{0,\Omega} \\ +\alpha_1\|b'\|_{\infty}\|\grad {u_{\epsilon}}\|_{L^6(\O)}\|{u_{\epsilon}}-{u_r}\|_{H^1(\O)} \bigl[\|{u_{\epsilon}}-{u_r}\|_{1,\O}+\|{u_{\epsilon}}-w_r\|_{1,\O}\bigr].\end{gathered}$$ Thus $$\begin{gathered} 
\beta\,\|\grad({u_{\epsilon}}-u_r)\|_{0,\Omega}\leq \bigl(c_1+\alpha_1\|b'\|_{\infty}\|\grad{u_{\epsilon}}\|_{L^6(\O)}\bigr)\|{u_{\epsilon}}-w_r\|_{H^1(\Omega)} \\ + \alpha_1\|b'\|_{\infty}\|\grad{u_{\epsilon}}\|_{L^6(\O)}\|{u_{\epsilon}}-{u_r}\|_{1,\O},\end{gathered}$$ and then $$\|{u_{\epsilon}}-u_r\|_{1,\Omega} \bigl(\beta-\alpha_1\|b'\|_{\infty}\|\grad{u_{\epsilon}}\|_{L^6(\O)}\bigr) \leq \bigl(c_1+\alpha_1\|b'\|_{\infty}\|\grad{u_{\epsilon}}\|_{L^6(\O)}\bigr)\|{u_{\epsilon}}-w_r\|_{1,\O}.$$ Proposition \[l:cea\] is important because the best approximation estimate is independent of ${\epsilon}$, and shows in particular that the RFB method converges at least as well as the MsFEM, since the RFB approximation space contains the spaces employed in the MsFEM. The choice of the approximation spaces is crucial here, since polynomial spaces with no bubbles added, a.k.a. classical Galerkin, yield a method that converges in $h$ albeit non-uniformly with respect to ${\epsilon}$. \[r:dd\] Dropping the “small solution” hypothesis (also present in [@AV]), an analogous result holds. In particular, the estimate $$\|{u_{\epsilon}}-{u_r}\|_{H^1(\O)} \leq C\bigl(\|{u_{\epsilon}}-w_r\|_{H^1(\O)}+\|{u_{\epsilon}}-{u_r}\|_{L^2(\O)}\bigr) \quad\text{for all }w_r \in V_r$$ results from the above proof. An estimate for $\|{u_{\epsilon}}-{u_r}\|_{H^1(\O)}$ was obtained in [@D-D]\*[Theorem 1]{}, under extra regularity for $b(\cdot)$. Following their proof, it is possible to show that $$\begin{gathered} \|{u_{\epsilon}}-{u_r}\|_{H^1(\O)} \leq C\|{u_{\epsilon}}-w_r\|_{H^1(\O)} \biggl(1+\inf_{\tilde\chi\in V_r}\|\phi-\tilde\chi\|_{H^1(\O)}+\|{u_{\epsilon}}-w_r\|_{H^1(\O)}^2\biggr) \\ +C\|{u_{\epsilon}}-{u_r}\|_{L^2(\O)} \biggl(\inf_{\tilde\chi\in V_r}\|\phi-\tilde\chi\|_{H^1(\O)}+\|{u_{\epsilon}}-{u_r}\|_{L^2(\O)}^2\biggr),\end{gathered}$$ for all $w_r\in V_r$, where $\phi$ is the solution of a linear dual problem. 
It follows then that $\|{u_{\epsilon}}-{u_r}\|_{L^2(\O)}$ is small enough as long as the mesh size $h$ is small enough, and a best approximation result follows. However, the compactness argument of [@D-D] does not allow, in principle, the mesh size to be independent of the small scales. Finally, strict monotonicity is also a sufficient condition for the best approximation result of Lemma \[l:cea\] [@evans], i.e., $$\int_\O{\alpha_{\epsilon}(x)}[b(v_r)\grad v_r-b(w_r)\grad w_r]\cdot\grad(v_r-w_r)\,dx\ge c\|v_r-w_r\|_{H^1(\O)}^2$$ for all $v_r$, $w_r\in V_r$. In this case, $$\begin{gathered} \|{u_r}-w_r\|_{H^1(\O)}^2 \le c\int_\O{\alpha_{\epsilon}(x)}[b(u_r)\grad u_r-b(w_r)\grad w_r]\cdot\grad(u_r-w_r)\,dx \\ =c\int_\O{\alpha_{\epsilon}(x)}[b({u_{\epsilon}})\grad {u_{\epsilon}}-b(w_r)\grad w_r]\cdot\grad(u_r-w_r)\,dx \\ \le c\|b({u_{\epsilon}})\grad {u_{\epsilon}}-b(w_r)\grad w_r\|_{L^2(\O)} \|{u_r}-w_r\|_{H^1(\O)} \\ \le c\bigl(\|b({u_{\epsilon}})\grad {u_{\epsilon}}-b(w_r)\grad {u_{\epsilon}}\|_{L^2(\O)} +\|b(w_r)\grad {u_{\epsilon}}-b(w_r)\grad w_r\|_{L^2(\O)}\bigr) \|{u_r}-w_r\|_{H^1(\O)},\end{gathered}$$ and we conclude that $\|{u_r}-w_r\|_{H^1(\O)}\le c\|{u_{\epsilon}}-w_r\|_{H^1(\O)}$ for all $w_r\in V_r$. An estimate as  follows from the triangle inequality. Possible Linearizations {#s:lineariz} ======================= As in the original problem , the RFB approximation , or equivalently , is still given by a nonlinear problem. We investigate here some ideas to linearize the problem. In the next subsection, we investigate fixed-point schemes, and in the following subsection, we discuss a proposal named *reduced RFB*. Fixed point formulation {#ss:rfbpf} ----------------------- A first idea to linearize the original problem  is the following. 
Let $u_{\epsilon}^0\in H_0^1(\Omega)$, and for $n\in\N$, given $u_{\epsilon}^{n-1}\in H_0^1(\Omega)$, compute $u_{\epsilon}^n\in H_0^1(\Omega)$ as the solution of $$\label{e:interat} \int_\Omega\alpha_{\epsilon}(x)b(u_{\epsilon}^{n-1})\grad(u_{\epsilon}^n)\cdot\grad v\,dx =\int_\Omega fv\,dx\quad\text{for all }v\in H_0^1(\Omega).$$ In the context of the RFB method, we use  to propose the following iterative scheme. Let $u_r^0\in V_r$, and for $n\in\N$, given $u_r^{n-1}\in V_r$, compute $u_r^n\in V_r$ solution of $$\label{e:rfbinterat} \int_\Omega\alpha_{\epsilon}(x)b(u_r^{n-1})\grad(u_r^n)\cdot\grad v_r\,dx =\int_\Omega fv_r\,dx\quad\text{for all }v_r\in V_r.$$ Observe that the above scheme discretizes . Hence, *discretization* and *linearization* commute. Since the problem now is linear, we return to the situation described in Remark \[obslinear\]. We can also rewrite  in terms of global/local problems. Given $u_h^{n-1}\in V_h$ and $u_b^{n-1}\in V_b$, find $u_h^n\in V_h$ and $u_b^n\in V_b$ such that $$\label{e:rfbinteratexp} \begin{gathered} \int_\Omega\alpha_{\epsilon}(x)b(u_h^{n-1}+u_b^{n-1})\grad(u_h^n+u_b^n)\cdot\grad v_h\,dx =\int_\Omega fv_h\,dx, \\ -\div[\alpha_{\epsilon}(x)b(u_h^{n-1}+u_b^{n-1})\grad(u_h^n+u_b^n)]=f\quad\text{in }K, \end{gathered}$$ for all $v_h\in V_h$ and all $K\in\T_h$. Given $u_{\epsilon}^0\in H_0^1(\O)$ and $u_r^0\in V_r$, let $u_{\epsilon}^n\in H_0^1(\O)$ and $u_r^n\in V_r$ be defined from  and  for $n\in\N$. Then $\lim_{n\to\infty}u_{\epsilon}^n=u_{\epsilon}$ and $\lim_{n\to\infty}u_r^n=u_r$ in $H_0^1(\O)$. We first consider the continuous problem, for a fixed ${\epsilon}$. Note that $\|\grad u_{\epsilon}^n\|_0\le c\|f\|_{-1}$, and thus $\|\grad u_{\epsilon}^n\|_{0,\O}$ is bounded uniformly in $n$. 
Therefore, there exist $\bar u\in H_0^1(\O)$ and a subsequence of $u_{\epsilon}^n$, indexed by $n\in\N$, but still denoted by $u_{\epsilon}^n$, such that $u_{\epsilon}^n$ converges weakly to $\bar u$ in $H_0^1(\O)$, with strong convergence in $L^2(\O)$. Thus, from the Lebesgue Dominated Convergence Theorem, $b(u_{\epsilon}^n)\grad v\to b(\bar u)\grad v$ strongly in $L^2(\O)$, for all $v\in H_0^1(\O)$. Note also that $\int_\O\grad(u_{\epsilon}^n-\bar u)\cdot\btau\,dx\to0$ for all $\btau\in L^2(\O)$. Indeed, from the Helmholtz decomposition, there exist $p\in H_0^1(\O)$, $q\in H^1(\O)$ such that $\btau=\grad p+\curl q$. Therefore, $$\int_\O\grad(u_{\epsilon}^n-\bar u)\cdot\btau\,dx =\int_\O\grad(u_{\epsilon}^n-\bar u)\cdot\grad p\,dx\to0$$ as $n\to\infty$. It follows from these results that, for all $v\in H_0^1(\O)$, $$\begin{gathered} \int_\O[b(u_{\epsilon}^{n-1})\grad u_{\epsilon}^n-b(\bar u)\grad\bar u]\cdot\grad v \\ =\int_\O[b(u_{\epsilon}^{n-1})-b(\bar u)]\grad u_{\epsilon}^n\cdot\grad v +\int_\O b(\bar u)[\grad u_{\epsilon}^n-\grad\bar u]\cdot\grad v \\ \le\|[b(u_{\epsilon}^{n-1})-b(\bar u)]\grad v\|_0\|\grad u_{\epsilon}^n\|_0+\int_\O b(\bar u)[\grad u_{\epsilon}^n-\grad\bar u]\cdot\grad v.\end{gathered}$$ Taking $n\to\infty$ we gather that $$\label{e:convprod} \int_\O[b(u_{\epsilon}^{n-1})\grad u_{\epsilon}^n-b(\bar u)\grad\bar u]\cdot\grad v\to0.$$ Thus $$0=\lim_{n\to\infty}\int_\O b(u_{\epsilon}^{n-1})\grad u_{\epsilon}^n\cdot\grad v-fv\,dx =\int_\O b(\bar u)\grad\bar u\cdot\grad v-fv\,dx.$$ Then $\bar u$ solves . From uniqueness of solutions, $\bar u=u_{\epsilon}$, and the whole sequence $u_{\epsilon}^n$, not only a subsequence, converges to $\bar u$. 
To show that the convergence is actually strong, note [@MR1477663] that $$\begin{gathered} c\|{u_{\epsilon}}^n-\bar u\|_{H^1(\O)}^2 \le\int_\O\alpha_{\epsilon}b({u_{\epsilon}}^{n-1})\grad({u_{\epsilon}}^n-\bar u)\cdot\grad({u_{\epsilon}}^n-\bar u)\,dx \\ =\int_\O\alpha_{\epsilon}b({u_{\epsilon}}^{n-1})\grad(\bar u)\cdot\grad(\bar u-2{u_{\epsilon}}^n)\,dx+\int_\O\alpha_{\epsilon}b({u_{\epsilon}}^{n-1})\grad({u_{\epsilon}}^n)\cdot\grad({u_{\epsilon}}^n)\,dx \\ =\int_\O\alpha_{\epsilon}b({u_{\epsilon}}^{n-1})\grad(\bar u)\cdot\grad(\bar u-2{u_{\epsilon}}^n)\,dx+\int_\O f{u_{\epsilon}}^n\,dx \\ \to-\int_\O\alpha_{\epsilon}b(\bar u)\grad(\bar u)\cdot\grad(\bar u)\,dx+\int_\O f\bar u\,dx=0\end{gathered}$$ since  holds. Thus the convergence ${u_{\epsilon}}^n\to\bar u$ is strong in $H^1(\O)$. The second part of the lemma, regarding the RFB approximation, follows from basically the same arguments. Since $\|\grad u_r^n\|_0\le c\|f\|_{-1}$, there exist $\bar u_r\in H_0^1(\O)$ and a subsequence still denoted by $u_r^n$ such that $u_r^n$ converges weakly to $\bar u_r$ in $H_0^1(\O)$, whereas strong convergence holds in $L^2(\O)$. Again, $b(u_r^n)\grad v\to b(\bar u_r)\grad v$ strongly in $L^2(\O)$, for all $v\in H_0^1(\O)$. Note also that $\int_\O\grad(u_r^n-\bar u_r)\cdot\btau\,dx\to0$ for all $\btau\in L^2(\O)$. Indeed, from the Helmholtz decomposition, there exist $p\in H_0^1(\O)$, $q\in H^1(\O)$ such that $\btau=\grad p+\curl q$. Thus $$\int_\O\grad(u_r^n-\bar u_r)\cdot\btau\,dx =\int_\O\grad(u_r^n-\bar u_r)\cdot\grad p\,dx\to0$$ as $n\to\infty$. 
From these results, we gather that for all $v\in H_0^1(\O)$, $$\begin{gathered} \int_\O[b(u_r^{n-1})\grad u_r^n-b(\bar u_r)\grad\bar u_r]\cdot\grad v \\ =\int_\O[b(u_r^{n-1})-b(\bar u_r)]\grad u_r^n\cdot\grad v +\int_\O b(\bar u_r)[\grad u_r^n-\grad\bar u_r]\cdot\grad v \\ \le\|[b(u_r^{n-1})-b(\bar u_r)]\grad v\|_0\|\grad u_r^n\|_0+\int_\O b(\bar u_r)[\grad u_r^n-\grad\bar u_r]\cdot\grad v.\end{gathered}$$ Taking $n\to\infty$, it follows that $\int_\O[b(u_r^n)\grad u_r^n-b(\bar u_r)\grad\bar u_r]\cdot\grad v\to0$ for all $v\in H_0^1(\O)$. Considering now $v\in V_r$, we have that $$0=\lim_{n\to\infty}\int_\O b(u_r^{n-1})\grad u_r^n\cdot\grad v-fv\,dx =\int_\O b(\bar u_r)\grad\bar u_r\cdot\grad v-fv\,dx.$$ Since $V_r$ is closed, $\bar u_r\in V_r$. Therefore $\bar u_r=u_r$ solves . If uniqueness also holds, the whole sequence $u_r^n$ converges to $\bar u_r$. Given $u_{\epsilon}^0\in H_0^1(\O)$ and $u_r^0\in V_r$, let $u_{\epsilon}^n\in H_0^1(\O)$ and $u_r^n\in V_r$ be defined by  and , $n\in\N$. Then, if ${u_{\epsilon}}$ is sufficiently small in $W^{1,6}(\Omega)$, we have that $$\|{u_{\epsilon}}^n-{u_{\epsilon}}\|_{H^1(\O)}+\|{u_r}^n-{u_r}\|_{H^1(\O)}\le \bar\alpha\|{u_{\epsilon}}^{n-1}-{u_{\epsilon}}\|_{H^1(\O)},$$ for some $\bar\alpha<1$. Note that $$\begin{gathered} c\|{u_{\epsilon}}^n-{u_{\epsilon}}\|_{H^1(\O)}^2 \le\int_\O{\alpha_{\epsilon}(x)}b({u_{\epsilon}}^{n-1})\grad({u_{\epsilon}}^n-{u_{\epsilon}})\cdot\grad({u_{\epsilon}}^n-{u_{\epsilon}})\,dx \\ =\int_\O{\alpha_{\epsilon}(x)}[-b({u_{\epsilon}}^{n-1})+b({u_{\epsilon}})]\grad{u_{\epsilon}}\cdot\grad({u_{\epsilon}}^n-{u_{\epsilon}})\,dx \\ \le c\|{u_{\epsilon}}^{n-1}-{u_{\epsilon}}\|_{L^2(\O)}^{1/2}\|{u_{\epsilon}}^{n-1}-{u_{\epsilon}}\|_{H^1(\O)}^{1/2} \|\grad{u_{\epsilon}}\|_{L^6(\O)}\|{u_{\epsilon}}^n-{u_{\epsilon}}\|_{H^1(\O)}.\end{gathered}$$ The result for ${u_r}^n$ is analogous. We end this subsection with an alternative linearization proposal, based on . 
Given $u_h^{n-1}\in V_h$ and $u_b^{n-1}\in V_b$, find $u_h^n\in V_h$ and $u_b^n\in V_b$ such that $$\begin{gathered} \int_\Omega\alpha_{\epsilon}(x)b(u_h^{n-1}+u_b^{n-1})\grad(u_h^n+u_b^{n-1}) \cdot\grad v_h\,dx=\int_\Omega fv_h\,dx, \label{e:rfbinteratexpi} \\ -\div[\alpha_{\epsilon}(x)b(u_h^n+u_b^{n-1})\grad(u_h^n+u_b^n)]=f \label{e:rfbinteratexpii} \quad\text{in }K,\end{gathered}$$ for all $v_h\in V_h$ and all $K\in\T_h$. Observe that the above system is not coupled as in . It is possible to solve first  and only then solve . Reduced Residual Free Bubble Formulation {#ss:rfbred} ---------------------------------------- The idea here is to use the approximation $b(u_h+u_b)\approx b(u_h)$ at the local problem of the second equation in . This induces a linearization that makes static condensation possible. In this case, we search for the approximation $\tilde{u_r}=\tilde u_h+\tilde u_b\in V_r$ such that $$\begin{gathered} \int_\Omega\alpha_{\epsilon}(x)b(\tilde u_h+\tilde u_b)\grad(\tilde u_h+\tilde u_b)\cdot\grad v_h\,dx =\int_\Omega fv_h\,dx,\nonumber \\ -\div[\alpha_{\epsilon}(x)b(\tilde u_h)\grad\tilde u_b] =f+\div[\alpha_{\epsilon}(x)b(\tilde u_h)\grad\tilde u_h] \quad\text{in }K, \label{e:rfblr}\end{gathered}$$ for all $v_h\in V_h$ and all $K\in\T_h$. Thus, the local problem  is linear with respect to $\tilde u_b$. Since  is linear, we can split $\tilde u_b=\tilde u_b^l+\tilde u_b^f$ into two parts, solving  with $\div[\alpha_{\epsilon}(x)b(\tilde u_h)\grad\tilde u_h]$ and with $f$ on the right-hand side, respectively. However, the local and global problems are still coupled. The local problems for the MsFEM involve $\tilde u_b^l$ only, and to make the method cheaper, it is possible to replace $b(u_h)$ by $b(\int_Ku_h(x)\,dx)$, as in [@E-H-G], or by $b(u_h(x_K))$, as in [@CH-Y], where $x_K$ is an interior point of the element. 
In this way,  reduces to a much simpler equation, given by $$-\div[{\alpha_{\epsilon}(x)}\grad\tilde{u_b}^l]= \div[{\alpha_{\epsilon}(x)}\grad\tilde{u_h}] \quad \text{in } K.$$ By linearity, the local bubble $\tilde u_b^l$ is computed by solving the corresponding problems associated to the basis functions. However, such a simplification is not possible for the RFB method, due to the presence of the $\tilde u_b^f$ term. This extra term is important since it can significantly improve the quality of the approximation [@MR2203943; @MR2142535; @SG] in some situations. [^1]: Research of the second author was supported by CNPq, Brazil
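To make the fixed-point iteration of Section \[ss:rfbpf\] concrete, here is a minimal 1D Galerkin sketch. It is not the two-level RFB solver of the paper: it uses plain P1 elements on a single mesh, a hypothetical oscillatory coefficient and nonlinearity, and an ad hoc stopping tolerance. Each pass freezes $b$ at the previous iterate and solves one linear system:

```python
import numpy as np

def stiffness(coef_mid, x):
    """P1 stiffness for -(c u')' with c given at element midpoints."""
    n = len(x) - 1
    A = np.zeros((n + 1, n + 1))
    for k in range(n):
        h = x[k + 1] - x[k]
        A[k:k + 2, k:k + 2] += (coef_mid[k] / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A

alpha = lambda x: 2.0 + np.sin(2.0 * np.pi * x / 0.05)  # oscillatory coefficient
b = lambda t: 1.0 + 1.0 / (1.0 + t * t)                 # 1 <= b <= 2, so b0 = 1

x = np.linspace(0.0, 1.0, 401)
xm = 0.5 * (x[:-1] + x[1:])
h = np.diff(x)
F = np.zeros(len(x))
F[:-1] += 0.5 * h
F[1:] += 0.5 * h                                        # load for f = 1

u = np.zeros(len(x))                                    # u^0 = 0
for n_it in range(1, 151):
    um = 0.5 * (u[:-1] + u[1:])                         # previous iterate at midpoints
    A = stiffness(alpha(xm) * b(um), x)                 # frozen-coefficient linear solve
    u_new = np.zeros(len(x))
    u_new[1:-1] = np.linalg.solve(A[1:-1, 1:-1], F[1:-1])
    if np.max(np.abs(u_new - u)) < 1e-12:               # Picard update stagnated
        break
    u = u_new
print(n_it, float(np.max(u_new)))
```

Since $b$ varies mildly along the small iterates, the frozen-coefficient map is a contraction here and the loop stops well before the iteration cap.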
--- abstract: 'We study the simplest mode-coupling equation which describes the time correlation function of the spherical $p$-spin glass model. We formulate a systematic perturbation theory near the mode-coupling transition point by introducing multiple time scales. In this formulation, the invariance with respect to the dilatation of time in a late stage yields an arbitrary constant in a leading order expression of the solution. The value of this constant is determined by a solvability condition associated with a linear singular equation for perturbative corrections in the late stage. The solution thus constructed provides exactly the $\alpha$-relaxation time.' address: 'Department of Pure and Applied Sciences, University of Tokyo, Komaba, Tokyo 153-8902, Japan' author: - 'Mami Iwata and Shin-ichi Sasa' title: 'Singular perturbation near mode-coupling transition' --- Introduction ============ About a quarter of a century ago, a peculiar type of slow relaxation was discovered theoretically in research on glassy systems [@Goetze; @Leut]. This relaxation behavior is characterized by two different time scales, both of which diverge at a particular temperature. An illustrative example exhibiting such a behavior is the spherical $p$-spin glass model [@p-spin]. The normalized time-correlation function $\phi(t)$ of the total magnetization in this model turns out to satisfy exactly the so-called mode-coupling equation, which is written as $$\partial_t \phi(t)=-\phi(t)-g \int_0^t ds \phi^2(t-s)\partial_s \phi(s) \label{MCT}$$ for the case $p=3$. Here, the initial condition is given by $\phi(0)=1$, and the parameter $g$ is proportional to the square of the inverse temperature. Since this equation is derived under the assumption that the system is stationary, (\[MCT\]) is valid only in the regime $0 \le g < {g_{\rm c}}$, where ${g_{\rm c}}$ will be given later. A remarkable feature of (\[MCT\]) is the existence of nonlinear memory.
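The qualitative behavior of solutions of (\[MCT\]) is easy to explore numerically. The following is a minimal sketch using a naive explicit scheme with a rectangle-rule discretization of the memory integral; the function name `solve_mct` and the step sizes are our own choices, and this is not the specialized algorithm of Ref. [@Fuchs] used for the figures in this paper.

```python
# Naive explicit integration of the mode-coupling equation
#   d(phi)/dt = -phi(t) - g * int_0^t ds phi^2(t-s) phi'(s),  phi(0) = 1.
# The memory integral is approximated on the stored history by summing
# phi^2(t-s) * (increments of phi); meant for qualitative exploration only.

def solve_mct(g, h=0.01, nsteps=1000):
    phi = [1.0]
    for n in range(1, nsteps + 1):
        # int_0^{t_{n-1}} phi^2(t_{n-1}-s) phi'(s) ds,
        # with phi'(s) ds ~ phi[k] - phi[k-1] on each history step
        mem = sum(phi[n - 1 - k] ** 2 * (phi[k] - phi[k - 1])
                  for k in range(1, n))
        phi.append(phi[n - 1] + h * (-phi[n - 1] - g * mem))
    return phi
```

For $g$ well below ${g_{\rm c}}=4$ the correlator decays quickly to zero, while for $g$ close to ${g_{\rm c}}$ it develops the long plateau near ${f_{\rm c}}$ discussed below.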
One can regard (\[MCT\]) as one of the simplest equations that characterize a universality class consisting of models with nonlinear memory. Indeed, some qualitatively new features of glassy systems have been uncovered by studying (\[MCT\]) and its extended forms. (See Ref. [@MCT] for a review.) In particular, two divergent time scales were found just below the transition point $g = {g_{\rm c}}$, and the precise values of the exponents characterizing the divergences were determined. Furthermore, extensive studies have been carried out so as to construct the solution in a systematic manner. On the basis of these past achievements, in the present paper we propose a perturbation method for analyzing (\[MCT\]), which might shed new light on the nature of the dynamics near the mode-coupling transition. Let us now state the question we study in this paper. Let $f_\infty$ be the value of $\phi(t\to \infty)$. We substitute $\phi(t)=G(t)+f_\infty$ into (\[MCT\]) and take the limit $t \to \infty$. We then obtain $$-f_\infty+g f_\infty^2(1-f_\infty)=0, \label{f:eq}$$ where we have used the relation $G(0)=1-f_\infty$. From the graph of $g f_\infty^2(1-f_\infty)$ as a function of $f_{\infty}$, we find that a non-trivial solution ($f_\infty\not =0$) appears when $g \ge {g_{\rm c}}=4$. Indeed, dividing (\[f:eq\]) by $f_\infty \not = 0$ gives $1=g f_\infty(1-f_\infty)$, and since $f_\infty(1-f_\infty)\le 1/4$, with equality at $f_\infty=1/2$, a real root exists only for $g \ge 4$. This transition is called the mode-coupling transition. Note that $f_\infty=1/2$ when $g={g_{\rm c}}$. Below we denote this value of $f_\infty$ by ${f_{\rm c}}$. We then introduce a small positive parameter $\epsilon$ by setting $g={g_{\rm c}}-\epsilon$, and we denote the solution of (\[MCT\]) by $\phi_{\epsilon}(t)$. In this paper, we formulate a perturbation theory for (\[MCT\]). As a result, we obtain an asymptotic form of $\phi_{\epsilon}(t)$ in the small $\epsilon$ limit. More concretely, for a given small positive $\epsilon$, we want to express $\phi_{\epsilon}(t)$ in terms of ${\epsilon}$ and ${\epsilon}$-independent functions.
For readers’ reference, in figure \[fig4\] (left), we display the numerical solution $\phi_{\epsilon}(t)$ with ${\epsilon}=10^{-3}$. Here, when solving (\[MCT\]), we used the algorithm proposed in Ref. [@Fuchs]. In figure \[fig4\] (right), we also display the ${\epsilon}$-dependence of the $\alpha$-relaxation time $\tau_\alpha$, which is defined by $\phi_{\epsilon}(\tau_\alpha)=1/4$. We want to calculate $\tau_\alpha$ based on our theory.

![$\phi_{\epsilon}(t)$ with ${\epsilon}=10^{-3}$ (left). $\alpha$-relaxation time as a function of $\epsilon$ (right). The circle symbols represent the result of numerical simulation of (\[MCT\]). The dotted line corresponds to the theoretical calculation $\tau_\alpha=20{\epsilon}^{-1.77}$ given by (\[taualpha\]).[]{data-label="fig4"}](p3.eps "fig:"){width="6cm"} ![](atime.eps "fig:"){width="6cm"}

This paper is organized as follows. In section \[prelim\], we set up our theory. In particular, we give a useful expression of a perturbative solution. This section also includes a review of known facts in order to keep the description self-contained. In section \[formu\], we formulate a systematic perturbation theory on the basis of our expression, and we determine a leading order form of the solution. We check the validity of our theory by comparing our theoretical result for $\tau_\alpha$ with that measured by direct numerical simulations of (\[MCT\]). Section \[remark\] is devoted to remarks on possible future studies. Technical details are summarized in the Appendices. Preliminaries {#prelim} ============= Solution with ${\epsilon}=0$ ---------------------------- We first investigate the solution $\phi_0(t)$. It is expressed by $\phi_0(t)=G_0(t)+{f_{\rm c}}$ with a function $G_0(t)$ which decays to $0$ as $ t \to \infty$.
The equation for $G_0$ is written as $$\begin{aligned} \partial_t G_0(t) +{f_{\rm c}}+G_0(t) +g_c\int_0^{t}ds {\left({f_{\rm c}}+G_0(t-s) \right)}^2 \partial_s G_0(s)=0. \label{G0t}\end{aligned}$$ An asymptotic form of $G_0(t)$ in the large $t$ limit can be derived by employing the formula $$x \int_0^t ds[(t-s)^{-x}-t^{-x}]s^{-x-1} =\left(1-\frac{\Gamma^2(1-x)}{\Gamma(1-2x)} \right)t^{-2x}$$ for any $x < 1$ and $t >0$, where $\Gamma(x)$ is the Gamma function. The result is $$\begin{aligned} G_0(t) \simeq c_0 t^{-a}, \label{a:def}\end{aligned}$$ where $a$ is a constant that satisfies the relation $$\frac{\Gamma^2(1-a)}{\Gamma(1-2a)}=\frac{1}{2}.$$ The value of $a$ is estimated as $a=0.395$. In figure \[fig1\], we display the graphs of $\phi_0(t)$ and $G_0(t)$, which are calculated numerically. As shown in \[app:c0\], an approximate expression for $c_0$ is calculated as $c_0^{\rm app}=(a/(1-a))^a(1-a)/2^{a+1}$ by a matching procedure. Its value, $0.194$, is not far from $c_0=0.25$ obtained from a numerical fitting of the graph of $G_0(t)$. It might be possible to improve the approximation in a systematic manner. However, in this paper, we do not pursue such improvements. The important thing here is that the ${\epsilon}$-independent function $G_0(t)$ is defined together with an understanding of its asymptotic form.

![$\phi_0(t)$ (left) and $G_0(t)$ in the log-log plot (right). The dotted line in the right figure represents $0.25t^{-0.395}$.[]{data-label="fig1"}](p0.eps "fig:"){width="6cm"} ![](g0.eps "fig:"){width="6cm"}
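The value $a=0.395$ quoted above is easy to reproduce: the left-hand side of the $\Gamma$-function relation decreases continuously from $1$ at $a=0$ to $0$ as $a\to 1/2$, so the root can be bracketed and found by bisection. A minimal sketch (variable names are ours):

```python
# Solve Gamma(1-a)^2 / Gamma(1-2a) = 1/2 for the exponent a in (0, 1/2)
# by bisection; the ratio passes from above 1/2 to below 1/2 on this interval.
from math import gamma

def f(a):
    return gamma(1.0 - a) ** 2 / gamma(1.0 - 2.0 * a) - 0.5

lo, hi = 1e-6, 0.5 - 1e-6   # f(lo) > 0 > f(hi), so the root is bracketed
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
a = 0.5 * (lo + hi)   # a is approximately 0.395
```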
Expression of the solution with ${\epsilon}> 0$ {#exp:sol} ----------------------------------------------- One may expect that the solution $\phi_{\epsilon}(t)$ is close to $\phi_0(t)$. However, recall that $\phi_0(t \to \infty)$ changes discontinuously from $0$ to $1/2$ when $g$ passes ${g_{\rm c}}$ from below. This fact means that a small perturbation from $g={g_{\rm c}}$ (${\epsilon}=0$) yields a singular behavior. Examples of such [*singular perturbation*]{} can be seen in Refs. [@Holmes; @Bender]. Before formulating the effects of the perturbation, we conjecture a functional form of the solution $\phi_{\epsilon}(t)$. First, $\phi_{{\epsilon}}(t)$ should be close to $\phi_0(t)$ in an early stage where $\phi_{\epsilon}> {f_{\rm c}}$. Since $\phi_0(t) \to {f_{\rm c}}$ as $ t \to \infty$, the trajectory $\phi_{\epsilon}(t)$ stays in a region near ${f_{\rm c}}$ for a long time. However, since there is no non-trivial solution $f_\infty \not = 0$ for positive $\epsilon$, $\phi_{\epsilon}$ goes away from the region $\phi_{\epsilon}\simeq {f_{\rm c}}$ and finally approaches the origin $\phi_{\epsilon}= 0$. Such a behavior is substantially different from that of $\phi_0(t)$. We thus introduce a quantity $A({\epsilon}^{\gamma_2} t)$ that describes the relaxation behavior from $\phi_{\epsilon}\simeq {f_{\rm c}}$ to $\phi_{\epsilon}=0$, where $A(0)={f_{\rm c}}$.
The functional form of $A$ is independent of ${\epsilon}$, while its argument is the scaled time $t_2={\epsilon}^{\gamma_2} t$ with a positive constant $\gamma_2$. We also expect that a switching from $\phi_{\epsilon}(t)\simeq \phi_0(t)$ in the early stage to $\phi_{\epsilon}(t)\simeq A({\epsilon}^{\gamma_2}t)$ in the late stage occurs around another characteristic time of $O( {\epsilon}^{-\gamma_1})$ with a positive constant $\gamma_1$. Keeping this behavior in mind, we express the solution as $$\begin{aligned} \phi_{{\epsilon}}(t) = G_0(t) \Theta({\epsilon}^{\gamma_1}t) + A({\epsilon}^{\gamma_2}t)+ \varphi_{\epsilon}(t), \label{rep}\end{aligned}$$ where the switching function $\Theta$ satisfies $\Theta(0)=1$ and $\Theta(\infty)=0$. The functional form of $\Theta$ is independent of ${\epsilon}$, while its argument depends on ${\epsilon}$. $\varphi_{\epsilon}(t)$ represents a small correction that satisfies $\varphi_{\epsilon}\to 0$ in the limit ${\epsilon}\to 0$ for any $t$. $\Theta$, $A$, $\gamma_1$, $\gamma_2$ and $\varphi_{\epsilon}(t)$ will be determined later. In order to have a simple description, we define a set of scaled coordinates $(t_0, t_1, t_2)$ on the time axis as $t_i={\epsilon}^{\gamma_i} t$, where $\gamma_0=0$. Throughout the paper, a time coordinate with an integer subscript represents the scaled coordinate determined by the subscript. Note that $t_0$, $t_1$ and $t_2$ appear as the arguments of $G_0$, $\Theta$ and $A$, respectively. Physically, the first relaxation occurs in the early stage $t_0 \sim O(1)$; the behavior around $\phi_{\epsilon}={f_{\rm c}}$ is observed in the intermediate stage $t_1 \sim O(1)$; and the relaxation behavior from $\phi_{\epsilon}\simeq {f_{\rm c}}$ to $\phi_{\epsilon}=0$ is described in the late stage $t_2 \sim O(1)$. In studies of glassy systems, the intermediate and the late stages are termed the $\beta$-relaxation regime and the $\alpha$-relaxation regime, respectively.
Equation for A -------------- We substitute (\[rep\]) into (\[MCT\]) and take the limit ${\epsilon}\to 0$ with the scaled time $t_2={\epsilon}^{\gamma_2} t$ fixed. We then obtain $$\begin{aligned} {A}(t_2) -g_c(1-{f_{\rm c}}) {A}^2(t_2) + g_c\int_{0}^{t_2}ds_2 {A}^2(t_2-s_2) {A}'(s_2)=0. \label{def_A}\end{aligned}$$ In this paper, the prime symbol represents the differentiation with respect to the argument of the function. The equation (\[def\_A\]) provides the explicit definition of $A$ with the condition $A(0)={f_{\rm c}}$. However, this equation [*cannot*]{} determine $A$ uniquely. Indeed, for a given solution $ A(t_2)$ of (\[def\_A\]), $ A(\lambda t_2)$ with any positive $\lambda$ is another solution of (\[def\_A\]). This [*dilatational symmetry*]{} is a remarkable property of (\[def\_A\]). Here, by analyzing the short time behavior in (\[def\_A\]), one can confirm that $A(t_2)-{f_{\rm c}}$ is proportional to $t_2$ in the small $t_2$ limit. Thus, we can choose a special solution of $A$ such that $A'(0)=-1$. In the argument below, $A$ represents this special solution; and the other solutions are described by $A(\lambda t_2)$. For later convenience, we define $A_\lambda$ by $ A_\lambda(t_2)=A(\lambda t_2)$, and $A$ in (\[rep\]) is replaced with $A_\lambda$. In particular, we have $$A_\lambda(t_2)={f_{\rm c}}-\lambda t_2 +o(t_2) \label{A:scale}$$ in the limit $t_2 \to 0$. Note that $\lambda$ is an arbitrary constant until a special requirement is imposed. The functional form of $A(t_2)$ can be obtained by solving (\[def\_A\]) numerically with a simple discretization of time. We display the graph of $A(t_2)$ in figure \[fig2\]. It should be noted here that the mathematical determination of the functional form is not the heart of the problem. The important thing is that the ${\epsilon}$-independent function $A$ is defined without any ambiguities. 
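The dilatational symmetry can be verified directly. Substituting $A(\lambda t_2)$ into the left-hand side of (\[def\_A\]), noting that $\partial_{s_2}A(\lambda s_2)=\lambda A'(\lambda s_2)$, and changing the integration variable to $u=\lambda s_2$, we find $$A(\lambda t_2)-{g_{\rm c}}(1-{f_{\rm c}})A^2(\lambda t_2) +{g_{\rm c}}\int_0^{\lambda t_2}du\, A^2(\lambda t_2-u)A'(u)=0,$$ which is nothing but (\[def\_A\]) evaluated at the time $\lambda t_2$. Hence $A(\lambda t_2)$ is a solution whenever $A(t_2)$ is.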
![$A(t)$ (left) and its semi-log plot (right).[]{data-label="fig2"}](a.eps "fig:"){width="6cm"} ![](aL.eps "fig:"){width="6cm"}

We express the dilatational symmetry in terms of a mathematical equality. Let us define $$\Phi_2(t_2;A_\lambda)\equiv A_\lambda(t_2) -{g_{\rm c}}(1-{f_{\rm c}})A_\lambda^2(t_2) + {g_{\rm c}}\int_{0}^{t_2}ds_2 A_\lambda^2(t_2-s_2) A_\lambda'(s_2). \label{def_A_2}$$ Since (\[def\_A\]) is identical to $\Phi_2(t_2;A)=0$, the dilatational symmetry is expressed by $\Phi_2(t_2; A_\lambda )=0$ for any $\lambda$. Then, taking the derivative with respect to $\lambda$, we obtain $$\begin{aligned} \int_0^\infty ds_2 L_A(t_2,s_2) \partial_{\lambda} A_\lambda(s_2)=0, \label{L_zero}\end{aligned}$$ where $L_A$ is the linearized operator around $A_\lambda$, which is defined by $$L_A(t_2,s_2)=\frac{\delta \Phi_2(t_2;A_\lambda)}{\delta A_\lambda(s_2)}.$$ Its explicit form is given by $$\begin{aligned} L_A(t_2,s_2) &=& \delta(t_2-s_2) {\left(1-2{g_{\rm c}}(1-{f_{\rm c}}) A_\lambda(s_2)+{g_{\rm c}}{f_{\rm c}}^2\right)}{\nonumber\\}&&+{g_{\rm c}}\theta(t_2-s_2)2 A_\lambda'(t_2-s_2) {\left( A_\lambda(s_2)+ A_\lambda(t_2-s_2)\right)}. \label{cn_def}\end{aligned}$$ It will be found below that (\[L\_zero\]) plays a key role in our formulation. Now, from the definition of $\gamma_1$, we have ${f_{\rm c}}+G_0({\epsilon}^{-\gamma_1})\simeq A_\lambda({\epsilon}^{\gamma_2-\gamma_1})$. By substituting (\[a:def\]) and (\[A:scale\]) into this relation, we obtain $$\gamma_1 =\frac{\gamma_2}{1+a}.
\label{gamma1:det1}$$ Functional form of $\Theta$ --------------------------- We can formulate a systematic perturbation theory by employing an arbitrary switching function $\Theta(t_1)$, provided it decays faster than the power-law function $t_1^{-1+a}$, as we will see in the next section. For example, one can choose a physically reasonable form $$\Theta(t_1)=\exp(-t_1/{t_{{\rm c}}}), \label{theta:ex}$$ where $t={t_{{\rm c}}}{\epsilon}^{-\gamma_1}$ corresponds to the time when the graph of $\phi_0(t)$ is closest to that of $A_\lambda ({\epsilon}^{\gamma_2} t)$. That is, ${t_{{\rm c}}}$ satisfies $a c_0 {t_{{\rm c}}}^{-a-1}=\lambda $. Note, however, that there is no reason that we must choose this form. Indeed, other forms such as $ \Theta(t_1)=\exp(-2t_1/{t_{{\rm c}}})$ and $ \Theta(t_1)=\exp(-(t_1/{t_{{\rm c}}})^2)$ might also be physically reasonable to the same extent as (\[theta:ex\]). Of course, the final result should be independent of the choice of the functional form. Summary ------- In our formulation, we set the unperturbed solution ${\phi_{\rm u}}$ as $$\begin{aligned} {\phi_{\rm u}}(t) = G_0(t) \Theta({\epsilon}^{\gamma_1}t) + A_\lambda ({\epsilon}^{\gamma_2}t), \label{unpsol}\end{aligned}$$ and we express the perturbative solution $\phi_{\epsilon}$ by $$\phi_{\epsilon}(t)={\phi_{\rm u}}(t)+\varphi_{\epsilon}(t). \label{sol:rep:f}$$ $G_0$ and $A$ are already determined. $\gamma_1$ is connected with $\gamma_2$ by (\[gamma1:det1\]). $\Theta$ is assumed to take an arbitrary form. Thus, the problem we solve is the determination of $\gamma_2$ and $\lambda$ as well as the perturbative calculation of the correction $\varphi_{\epsilon}(t)$. Note that $\gamma_2$ and $\lambda$ appear in the leading order expression of the solution. In particular, since the value of $\lambda$ has not been known previously, the calculation of $\lambda$ is a cornerstone of our theory.
Systematic perturbation {#formu} ======================= Preliminary ----------- For any trajectory $\psi(t)$ with $\psi(0)=1$, we define $$F_{\epsilon}(t;\psi) \equiv \partial_t \psi +\psi +g\int_0^{t}ds \psi^2(t-s) \partial_s \psi(s). \label{Fdef}$$ Let $L_{\epsilon}(t,s;\psi)$ be the linearized operator of $ F_{\epsilon}(t;\psi)$, which is defined by $$\begin{aligned} L_{\epsilon}(t,s;\psi)=\frac{\delta F_{\epsilon}(t;\psi)}{\delta \psi(s)}. \label{phi_model1}\end{aligned}$$ Its explicit form is written as $$\begin{aligned} L_{\epsilon}(t,s;\psi)&=&\delta(t-s)(1+g)+\delta'(t-s) {\nonumber\\}& & -g\theta(t-s){\left(\partial_s\psi^2(t-s)-2 \psi(s)\psi'(t-s)\right)}. \label{Ldef}\end{aligned}$$ The mode-coupling equation (\[MCT\]) is expressed by $$\begin{aligned} F_{\epsilon}(t;\phi_{\epsilon})&=&0. \label{phi_model0}\end{aligned}$$ Calculation {#calc} ----------- The substitution of (\[sol:rep:f\]) into (\[phi\_model0\]) yields non-trivial ${\epsilon}$ dependences through the evaluation of the integration term in (\[Fdef\]) when using the scaled coordinates. In order to avoid a complicated description, we focus our presentation on an important part of the calculation. We first evaluate $F_{\epsilon}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}})$ in the small $\epsilon$ limit with $t_2$ fixed. As explained in \[app\], we derive $$F_{\epsilon}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}) \simeq {\epsilon}\frac{1}{{g_{\rm c}}}A_\lambda(t_2) +{\epsilon}^{\gamma_2-\gamma_1(1-a)} 2{g_{\rm c}}({f_{\rm c}}+A_\lambda(t_2))A_\lambda^\prime(t_2) c_0 \theta, \label{F_ep}$$ where higher order terms of $O({\epsilon}^{\gamma_2-\gamma_1(1-2a)})$ are neglected, and $\theta$ is a constant determined by $$\theta=\int_0^\infty ds s^{-a}\Theta(s). \label{c1_def}$$ We here make three important remarks on (\[F\_ep\]).
First, if we did not introduce the switching function $\Theta$ in the expression of the solution (\[rep\]), we would have a form rather different from (\[F\_ep\]), for which the analysis seems to be hard. Second, the function $\Theta$ should provide a finite value of $\theta$. This means that $\Theta(t_1)$ should decay faster than a power-law function $t_1^{-1+a}$. Third, the two terms in (\[F\_ep\]) should balance each other. Otherwise, a contradiction occurs. (See an argument below (\[final\]).) The last remark leads to the relation $\gamma_2-\gamma_1(1-a)=1$. By combining it with (\[gamma1:det1\]), we obtain well-known results $$\gamma_1=\frac{1}{2a}, \label{g1}$$ and $$\gamma_2=\frac{1}{2a}+\frac{1}{2}, \label{g2}$$ which correspond to the exponents characterizing divergences of the $\beta$-relaxation time and the $\alpha$-relaxation time in glassy systems, respectively. Then, (\[F\_ep\]) with (\[g1\]) and (\[g2\]) becomes $$F_{\epsilon}({\epsilon}^{-\gamma_2}t_2;{\phi_{\rm u}})= {\epsilon}{{\cal F}}_2^{(1)}(t_2)+O({\epsilon}^{3/2}), \label{Fexp}$$ where ${{\cal F}}_2^{(1)}$ is the ${\epsilon}$-independent function given by $${{\cal F}}_2^{(1)}(t_2) = \frac{1}{{g_{\rm c}}}A_\lambda(t_2) +2{g_{\rm c}}({f_{\rm c}}+A_\lambda (t_2))A_\lambda^\prime( t_2) c_0 \theta. \label{F_eporder}$$ Furthermore, we can prove $${\epsilon}^{-\gamma_2} L_{\epsilon}({\epsilon}^{-\gamma_2}t_2,{\epsilon}^{-\gamma_2}s_2;{\phi_{\rm u}}) =L_A(t_2,s_2)+O({\epsilon}), \label{Lexp}$$ where $L_A$ is given by (\[cn\_def\]). Now, let us compare (\[Fexp\]) and (\[phi\_model0\]) with (\[sol:rep:f\]). We then find that the perturbative correction is expressed as $$\varphi_{\epsilon}( {\epsilon}^{-\gamma_2}t_2) ={\epsilon}\bar \varphi_2^{(1)}(t_2)+ O({\epsilon}^{3/2}) \label{sol:exp}$$ in the regime $t_2 \sim O(1)$ with the limit ${\epsilon}\to 0$. 
We also write $$\varphi_{\epsilon}( {\epsilon}^{-\gamma_1}t_1) ={\epsilon}^{\alpha}\bar \varphi_1^{(\alpha)}(t_1) +o({\epsilon}^{\alpha}) \label{alpha-def}$$ in the regime $t_1 \sim O(1)$ with the limit ${\epsilon}\to 0$, where $\alpha$ is a positive constant. Then, (\[phi\_model0\]) with (\[sol:rep:f\]) becomes $$F_{{\epsilon}}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}+{\epsilon}^{\alpha}\bar \varphi_1^{(\alpha)}) +{\epsilon}\int_0^\infty ds_2 L_A(t_2,s_2) \bar \varphi_2^{(1)}(s_2) =o({\epsilon}),$$ where the contribution of $\varphi_{\epsilon}(t_0)$ is included in the right-hand side. Here, by an argument similar to \[app\], we can estimate $$F_{{\epsilon}}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}+{\epsilon}^{\alpha}\bar \varphi_1^{(\alpha)}) ={\epsilon}{{\cal F}}_2^{(1)}(t_2)+O({\epsilon}^{\alpha+1/2}). \label{estF}$$ In order to describe a theoretical framework in a simple manner, for the moment, we focus on the case $\alpha >1/2$. The other case $\alpha \le 1/2$ will be discussed in section \[theta:sec\]. The equation for $\bar \varphi_2^{(1)}(t_2)$ is then simply written as $$\int_0^\infty ds_2 L_A(t_2,s_2) \bar \varphi_2^{(1)}(s_2) = - {{\cal F}}_{2}^{(1)}(t_2). \label{eq1}$$ Solvability condition {#solv:sec} --------------------- We notice that (\[eq1\]) is a linear equation for $\bar \varphi_2^{(1)}$, which is singular because there exists the zero eigenvector $\Phi_0=\partial_\lambda A_\lambda$ associated with the dilatational symmetry (\[L\_zero\]). Let $\Phi_0^\dagger$ be the adjoint zero eigenvector that satisfies $$\int_0^\infty ds_2 L_{A}(s_2,t_2) \Phi_0^\dagger(s_2)=0. \label{adj}$$ Then, there exists a solution of (\[eq1\]) only when the condition $$\int_0^\infty dt_2 \Phi_0^\dagger(t_2) {{\cal F}}_{2}^{(1)}(t_2)=0 \label{solv}$$ is satisfied. Otherwise, (\[eq1\]) leads to $0 \not = 0$ and hence there is no solution $\bar \varphi_2^{(1)}$. The equality (\[solv\]) is called the [*solvability condition*]{} for the singular equation (\[eq1\]). 
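To see why (\[solv\]) is necessary, apply $\int_0^\infty dt_2\,\Phi_0^\dagger(t_2)$ to both sides of (\[eq1\]). Exchanging the order of integration on the left-hand side gives $$\int_0^\infty ds_2 {\left[\int_0^\infty dt_2\, L_A(t_2,s_2)\Phi_0^\dagger(t_2)\right]}\bar \varphi_2^{(1)}(s_2)=0$$ by (\[adj\]), so the right-hand side $-\int_0^\infty dt_2\,\Phi_0^\dagger(t_2){{\cal F}}_{2}^{(1)}(t_2)$ must vanish as well, which is precisely (\[solv\]).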
Note, however, that the solvability condition is not satisfied as an identity. Here, let us recall that $\lambda$ is still an arbitrary constant. Thus, we are allowed to determine the value of $\lambda$ so that the solvability condition (\[solv\]) can be satisfied. Only for this special value of $\lambda$ can the perturbation theory be formulated consistently. Concretely, since we find that $\Phi_0^\dagger(s_2)=\delta(s_2)$ from (\[adj\]) with (\[cn\_def\]), the solvability condition (\[solv\]) becomes $${{\cal F}}_{2}^{(1)}(0)=0. \label{sol-conc}$$ The explicit form ${{\cal F}}_{2}^{(1)}(0)$ obtained from (\[F\_eporder\]) leads to $$\lambda=\frac{1}{64c_0 \theta}. \label{final}$$ Here, let us go back to (\[F\_ep\]). If $\gamma_2-\gamma_1(1-a)$ were not equal to 1, the condition (\[sol-conc\]) could not be satisfied for any positive $\lambda$. In this sense, one can regard the solvability condition as determining the exponent $\gamma_2$ as well as the constant $\lambda$. Determination of $\lambda$ {#theta:sec} -------------------------- At first sight, (\[final\]) appears to show that $\lambda$ depends on the choice of $\Theta$. However, the situation is a little more complicated. We shall explain in detail how to determine the value of $\lambda$. We study the case ${\epsilon}\to 0$ with $t_1$ fixed. In this limit, (\[rep\]) can be expressed as $$\phi_{{\epsilon}}({\epsilon}^{-\gamma_1}t_1) ={f_{\rm c}}+{\epsilon}^{1/2} [c_0 t_1^{-a} \Theta(t_1)-\lambda t_1] +\varphi_{\epsilon}({\epsilon}^{-\gamma_1} t_1), \label{rep-I}$$ where we have used (\[a:def\]) and (\[A:scale\]). Since $\Theta$ is arbitrary, $\varphi_{\epsilon}({\epsilon}^{-\gamma_1} t_1)$ includes a term of $O({\epsilon}^{1/2})$ unless a special $\Theta$ is employed. This means that $\alpha$ in (\[alpha-def\]) is equal to 1/2. As is seen from (\[estF\]), when $\alpha=1/2$, $\bar \varphi_2^{(1)}(t_2)$ must be calculated taking into account $\bar \varphi_1^{(1/2)}$.
Concretely, the right hand side of (\[eq1\]) should be replaced with $- \tilde {{\cal F}}_{2}^{(1)}(t_2)$, where $$F_{{\epsilon}}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}+{\epsilon}^{1/2}\bar \varphi_1^{(1/2)}) ={\epsilon}\tilde {{\cal F}}_2^{(1)}(t_2)+o({\epsilon}).$$ Then, the solvability condition (\[sol-conc\]) is also replaced with $\tilde {{\cal F}}_{2}^{(1)}(0)=0$. Without the replacement, (\[final\]) provides nothing more than an approximation of $\lambda$. For example, (\[final\]) with (\[theta:ex\]) leads to $\lambda=[(64\Gamma(1-a))^{(1+a)/(2a)} c_0^{1/a} a^{(1-a)/(2a)}]^{-1} \simeq 0.022$ as one approximate value. Now, let us calculate the precise value of $\lambda$. One natural method is to choose a functional form of $\Theta$ so that the condition $\alpha >1/2$ is satisfied. We denote this special $\Theta$ by $\Theta_*$. Then, for a given $\Theta$, the correction $\bar \varphi_1^{(1/2)}$ is determined by $$c_0 t_1^{-a} \Theta(t_1)+\bar\varphi_1^{(1/2)}(t_1) =c_0 t_1^{-a}\Theta_*(t_1).$$ Therefore, (\[sol-conc\]) using $\Theta_*$ is equivalent to $\tilde {{\cal F}}_{2}^{(1)}(0)=0$ using $\Theta$. In other words, the precise calculation of $\lambda$ starting from $\Theta$ can be done through $\Theta_*$. This also indicates explicitly that the final and precise result is independent of the choice of $\Theta$. In any case, the problem reduces to the calculation of $\Theta_*$. As explained in \[app:eqQ\], we can derive the equation for $Q(t_1)\equiv c_0 t_1^{-a} \Theta_*(t_1)$ in the form $$\begin{aligned} \frac{1}{8}-8\lambda \int_0^{t_1} ds_1 [Q(s_1)-Q(t_1)/2] \nonumber \\ +2 Q^2+4 \int_{0}^{t_1}ds_1 [Q(t_1-s_1)-Q(t_1)] Q'(s_1)=0. \label{Q:det}\end{aligned}$$ We study this equation by regarding $\lambda$ as a parameter whose value is not specified beforehand. We denote this solution by $Q(t_1;\lambda)$.
For almost all $\lambda$, $Q(t_1;\lambda)$ is not bounded as $t_1 \to \infty$, while there exists the special value $\lambda_*$ such that $Q(t_1;\lambda_*) \to 0$ as $t_1 \to \infty$. A necessary condition for this property is easily derived by considering the limit $t_1 \to \infty$ in (\[Q:det\]): $$\lambda_*=\frac{1}{64} \left[\int_0^{\infty} ds_1 Q(s_1;\lambda_*) \right]^{-1}. \label{scon}$$ This is equivalent to the expression (\[final\]) that determines the value of $\lambda$ by the solvability condition (\[sol-conc\]) under the assumption $\alpha >1/2$. Therefore, once we find $\lambda_*$ such that $Q(t_1;\lambda_*) \to 0$ as $t_1 \to \infty$, this $\lambda_*$ is the precise value of $\lambda$ that we want to have. Simultaneously, we obtain $\Theta_*(t_1)$ from $Q(t_1;\lambda_*)$. The problem of finding $\lambda_*$ is investigated by a shooting method. We first solve (\[Q:det\]) numerically for a given $\lambda$. Basically, we employ a simple discretization method. In order to treat properly the singular behavior near $t=0$, we utilize the result of the short time expansion of $Q(t_1;\lambda)$ near $t=0$. (See \[app:exp\] for the short time expansion.) Suppose that we already investigated the system with $\lambda_k$, $k=0,1, \cdots, n$. We here note that $Q(t_1;\lambda) \to -\infty$ as $t_1 \to \infty$ when $\lambda=0$ and that $Q(t_1;\lambda) \to \infty$ as $t_1 \to \infty$ when $\lambda$ is sufficiently large. Based on this observation, we define $\underline{\mu}_n\equiv \max \lambda_k$ such that $Q(t_1;\lambda_k) \to -\infty$ as $t_1 \to \infty$, and $\overline{\mu}_n\equiv\min \lambda_k$ such that $Q(t_1;\lambda_k) \to \infty$ as $t_1 \to \infty$. We then choose $\lambda_{n+1}$ as $\lambda_{n+1}= (\underline{\mu}_n+\overline{\mu}_n)/2$. Starting from $\lambda_0=0$ and $\lambda_1=1$, we can determine the sequence $\{\lambda_n\}$ for which $\lambda_\infty=\lim_{n \to \infty}\lambda_n$ exists. 
From the construction method, $Q(t_1;\lambda_\infty) \to 0$ as $t_1 \to \infty$. Therefore, $\lambda_*$ is given by $\lambda_\infty$. By performing this procedure numerically, we estimate $\lambda_*=0.017$. In this manner, we have determined the precise value of $\lambda$ and the function $\Theta_*$. We display the functional form of $\Theta_*$ in figure \[fig3\].

![$\Theta_*(t)$ (left) and its semi-log plot (right).[]{data-label="fig3"}](t.eps "fig:"){width="6cm"} ![](tL.eps "fig:"){width="6cm"}

Remarks {#result} ------- At the end of this section, we make two remarks. First, as a demonstration of our result, we study the $\alpha$-relaxation time $\tau_\alpha$ defined by $\phi_{\epsilon}(\tau_\alpha)=1/4$. Let $\tau_A$ be defined by $A(\tau_A)=1/4$. Then, from the expression of the solution (\[sol:rep:f\]), $\tau_\alpha$ is estimated as $\tau_\alpha=(\tau_A/\lambda) {\epsilon}^{-\gamma_2}$. By using the value $\tau_A=0.346$ obtained from numerical integration of (\[def\_A\]), we arrive at the theoretical prediction $$\tau_\alpha=20{\epsilon}^{-1.77}. \label{taualpha}$$ In figure \[fig4\] (right), we display the result of numerical simulations of (\[MCT\]) with ${\epsilon}=0.1\times 2^{-j}$, $j=0,\cdots, 10$. The numerical data fall perfectly on the theoretical curve (\[taualpha\]). This is evidence that the expression (\[final\]) is correct. The second remark is on the systematic formulation.
In principle, higher order terms such as $\bar \varphi_j^{(3/2)}(t_j)$ and $\bar \varphi_j^{(\gamma_2)}(t_j)$ can also be calculated in a manner similar to that described in sections \[calc\] and \[solv:sec\]. Such a perturbation theory using a solvability condition has been employed in many problems [@Bogoliubov; @Kuramoto; @Cross].

Concluding remarks {#remark}
==================

We have formulated a systematic perturbation theory for (\[MCT\]). Due to the dilatational symmetry (\[L\_zero\]), an arbitrary constant $\lambda$ appears in the unperturbed solution ${\phi_{\rm u}}(t)$. The value of $\lambda$ is then determined by the solvability condition (\[solv\]) associated with the linear equation (\[eq1\]) for the perturbative correction $\bar \varphi_2^{(1)}$. The advantage of our systematic perturbation theory lies in the possibility of developing new and important directions. Concretely, the following three problems will be studied in future work. The first problem is to derive the fluctuation intensity of $\hat C(t,t')=\sum_{jk} \sigma_j(t)\sigma_k(t')/N$ just below the mode-coupling transition point for the spherical $p$-spin glass, where $\sigma_j$ is a real spin variable that satisfies the spherical constraint $\sum_{j=1}^N \sigma_j^2=1$. Note that $\phi(t-t')= C(t-t')/C(0)$ with $C(t-t')={\left\langle}\hat C(t,t') {\right\rangle}$ in the stationary regime. In a straightforward approach, one may study an effective potential for $\hat C(t,t')$ [@CJT]. Indeed, by employing a diagrammatic expansion neglecting vertex corrections, the singular behavior of the effective potential was evaluated [@BB]. Then, since the minimum of the potential corresponds to the solution of (\[MCT\]), the existence of the dilatational symmetry yields a Goldstone mode which carries a divergent part of the fluctuations in the late stage. More explicitly, $\lambda$ in our expression is treated as a fluctuating quantity, and it is identified with the Goldstone mode.
(A related description of fluctuations near another type of bifurcation point can be found in Refs. [@Iwata1; @Iwata2].) The analysis along this line will shed new light on the understanding of fluctuations near mode-coupling transition points. The second problem concerns response properties. As an alternative approach to the description of fluctuations near the mode-coupling transition, responses against an auxiliary external field conjugate to $\hat C$ [@Franz] and against a one-body potential field [@Miyazaki] were investigated. These studies successfully derived the scaling form of a singular part of the fluctuation intensity of $\hat C$, based on the idea that such response functions are related to the fluctuation intensity. The extension of our work so as to describe these responses may provide a more quantitative result than the scaling form. Such an extension is related to the study of dynamical behavior in the aging regime, because that behavior is described by a coupled equation for the time correlation function $C(t,t')$ and the response function $R(t,t')$, which are functions of two times [@Culiandolo]. In addition to the complicated structure of the equation, the dilatational symmetry is replaced by the time reparameterization symmetry. Since this symmetry is much wider than the dilatational symmetry, several new features may appear in the analysis. See Ref. [@Culiandolo2] for a review of the argument based on the time reparameterization symmetry. The third problem is to analyze a rather wide class of systems with nonlinear memory. The qualitative change of the solution $f_\infty$ of (\[f:eq\]) is of the same type as that observed in an elementary saddle-node bifurcation [@Guckenheimer]. Despite this similarity, the dynamical behavior near the saddle-node bifurcation is much simpler than that of (\[MCT\]) owing to the lack of nonlinear memory.
Note that an edge deletion process of $k$-core percolation in a random graph is precisely described by a saddle-node bifurcation [@kcore], and it has been pointed out that $k$-core percolation problems are related to jamming transitions [@Silbert]. Since nonlinear memory effects might appear in jamming transitions, it is important to study a mixed type of dynamical system which connects the elementary saddle-node bifurcation with the mode-coupling transition. The calculation presented in this paper may be useful in the analysis of such models. By studying these problems, we will gain a deeper understanding of slow relaxation with nonlinear memory. We also hope that this paper stimulates mathematical studies of the simplest mode-coupling equation (\[MCT\]).

This work was supported by a grant from the Ministry of Education, Science, Sports and Culture of Japan, No. 19540394. Mami Iwata acknowledges the support of the Hayashi memorial foundation for female natural scientists.

Approximate expression of $c_0$ {#app:c0}
===============================

We perform a short time expansion $$G_0(t)=\sum_{n=0}^\infty g_n t^n, \label{short}$$ which is valid around $t=0$. All the coefficients $g_n$ can be determined from a recursive formula. Concretely, $g_0=1/2$, $g_1=-1$ and $g_2=5/2$. Respecting the lowest order result $G_0(t)=g_0+g_1t$, we assume $$G_0(t)=\frac{1}{2(1+2t)}$$ for $t \le t_*$, where $t_*$ will be determined later. On the other hand, from the asymptotic form $$G_0(t)=c_0 t^{-a} \label{large}$$ in the limit $ t \to \infty$, we assume $G_0(t)=c_0 t^{-a}$ for $t \ge t_*$. Since $G_0(t)$ is smooth at $t=t_*$, we have $$\begin{aligned} \frac{1}{1+2t_*} &=& 2c_0 t_*^{-a}, \\ \frac{2}{(1+2t_*)^2} &=& 2a c_0 t_*^{-a-1}.\end{aligned}$$ These equations lead to $t_*=a/[2(1-a)]$ and $c_0=(a/(1-a))^a(1-a)/2^{a+1}$.
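The matching construction above can be checked numerically. The sketch below assumes an illustrative exponent value $a=0.395$ (any $0<a<1/2$ would serve equally well) and verifies continuity of the value and the slope of $G_0$ at $t=t_*$:

```python
import math

def c0_and_tstar(a):
    # Matching point and amplitude from smoothly joining
    # G0 = 1/(2(1+2t)) (t <= t*) with G0 = c0 * t^{-a} (t >= t*).
    t_star = a / (2.0 * (1.0 - a))
    c0 = (a / (1.0 - a)) ** a * (1.0 - a) / 2.0 ** (a + 1.0)
    return c0, t_star

a = 0.395  # illustrative exponent value, not fixed by this appendix
c0, t_star = c0_and_tstar(a)

# continuity of the value at t = t*
assert abs(1.0 / (2.0 * (1.0 + 2.0 * t_star)) - c0 * t_star ** (-a)) < 1e-12
# continuity of the slope at t = t*
assert abs(2.0 / (1.0 + 2.0 * t_star) ** 2
           - 2.0 * a * c0 * t_star ** (-a - 1.0)) < 1e-12
print(round(c0, 4))
```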
Derivation of ${{\cal F}}_2^{(1)}$ {#app}
==================================

We shall extract the leading order contribution of $F_2({\epsilon}^{-\gamma_2}t_2;{\phi_{\rm u}})$ in the limit ${\epsilon}\to 0$. In order to simplify the calculation, we utilize the identity $$\begin{aligned} \int_0^t ds f(t-s)g'(s) = &&\int_{t/2}^t ds [f(t-s)g'(s)+g(t-s)f'(s)] \nonumber \\ && -f(t)g(0)+f(t/2)g(t/2),\end{aligned}$$ which plays a key role in an efficient numerical integration algorithm for solving mode-coupling equations [@Fuchs]. By substituting (\[unpsol\]) into (\[Fdef\]) and using this identity, we obtain $$\begin{aligned} && F_{\epsilon}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}) =A_\lambda'(t_2)+A_\lambda( t_2) -g[A_\lambda^2( t_2)-A_\lambda^3( t_2/2)] \nonumber \\ && +g \int_{t_2/2}^{t_2} ds_2 A_\lambda^\prime( s_2)\left[{\phi_{\rm u}}^2({\epsilon}^{-\gamma_2}(t_2-s_2)) +2A_\lambda( s_2){\phi_{\rm u}}({\epsilon}^{-\gamma_2}(t_2-s_2)) \right]. \label{Fstart}\end{aligned}$$ Here, we take $\Delta t$ satisfying ${\epsilon}^{\gamma_2-\gamma_1} \ll \Delta t \ll 1 $ such that ${\phi_{\rm u}}({\epsilon}^{-\gamma_2} t_2) \simeq A_\lambda (t_2)$ in the regime $ \Delta t \le t_2 \le \infty$. More explicitly, we assume $\Delta t={\epsilon}^{\gamma'}$ with $\gamma_2-\gamma_1 > \gamma' >0$. We divide the integration regime in the second line of (\[Fstart\]) into two parts, $[t_2/2, t_2-\Delta t]$ and $[t_2-\Delta t,t_2]$. Let $I_1$ and $I_2$ be the integration values over the former and the latter regions, respectively. By a straightforward calculation, we can estimate $I_2$ as $$\begin{aligned} I_2& \simeq & 2g A_\lambda'(t_2)({f_{\rm c}}+A_\lambda(t_2)) {\epsilon}^{\gamma_2} \int_{0}^{{\epsilon}^{-\gamma_2}\Delta t} ds G_0(s)\Theta({\epsilon}^{\gamma_1}s) \nonumber \\ & & +g\int_{t_2-\Delta t}^{t_2} ds_2 A_\lambda'(s_2)(A_\lambda^2(t_2-s_2)+2A_\lambda(t_2-s_2) A_\lambda(s_2)) \label{I2}\end{aligned}$$ in the lowest order evaluation.
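The integration identity quoted at the beginning of this appendix is easy to verify numerically. The sketch below uses arbitrary test functions $f(t)=e^{-t}$, $g(t)=\sin t$, and a plain trapezoidal rule:

```python
import math

def trap(fun, a, b, n=20000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (fun(a) + fun(b))
    for k in range(1, n):
        s += fun(a + k * h)
    return s * h

f = lambda t: math.exp(-t)
g = lambda t: math.sin(t)
fp = lambda t: -math.exp(-t)   # f'
gp = lambda t: math.cos(t)     # g'

t = 1.3  # arbitrary test point
lhs = trap(lambda s: f(t - s) * gp(s), 0.0, t)
rhs = (trap(lambda s: f(t - s) * gp(s) + g(t - s) * fp(s), t / 2, t)
       - f(t) * g(0.0) + f(t / 2) * g(t / 2))
assert abs(lhs - rhs) < 1e-6
```

The identity halves the range over which the history must be stored, which is the source of its usefulness in mode-coupling solvers.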
We next combine the second line of (\[I2\]) with $I_1$ and return it to the original form. As a result, we obtain $$\begin{aligned} F_{\epsilon}({\epsilon}^{-\gamma_2} t_2;{\phi_{\rm u}}) \simeq & & A_\lambda (t_2)-g(1-{f_{\rm c}})A_\lambda^2( t_2) +g \int_{0}^{t_2}ds_2 {A_\lambda}^2(t_2-s_2) {A_\lambda}'(s_2) \nonumber \\ & &+2g A_\lambda'(t_2)({f_{\rm c}}+A_\lambda(t_2)) {\epsilon}^{\gamma_2} \int_{0}^{{\epsilon}^{-\gamma_2}\Delta t} ds G_0(s)\Theta({\epsilon}^{\gamma_1}s), \label{Fsecond}\end{aligned}$$ where higher order terms are ignored. With the aid of (\[def\_A\]), we rewrite the first line of (\[Fsecond\]) as ${\epsilon}A(\lambda t_2)/{g_{\rm c}}$. Furthermore, from the estimation $$\int_{0}^{{\epsilon}^{-\gamma_2}\Delta t } ds G_0(s)\Theta({\epsilon}^{\gamma_1}s) \simeq c_0 {\epsilon}^{-\gamma_1(1-a)}\int_0^\infty ds s^{-a}\Theta(s), \label{est}$$ which is valid in the limit ${\epsilon}\to 0$, the second line of (\[Fsecond\]) turns out to be of $O(\epsilon^{\gamma_2-\gamma_1(1-a)})$. These results lead to (\[F\_eporder\]). We also find that the higher order terms we have neglected in (\[Fsecond\]) are of $O({\epsilon}^{\gamma_2-\gamma_1(1-2a)})$ by an estimation similar to (\[est\]).

Derivation of (\[Q:det\]) {#app:eqQ}
=========================

We take ${\Delta t}= {\epsilon}^{-\alpha'}$, where $\alpha'$ satisfies $\alpha' <\gamma_1-1/2$. We also define $$\begin{aligned} {w}(t_1)\equiv c_0t_1^{-a}\Theta_*(t_1)-\lambda t_1.\end{aligned}$$ Then, for sufficiently small $\epsilon$, $h_{\epsilon}(t)\equiv \phi_{{\epsilon}}(t) - {f_{\rm c}}$ is expressed by $$h_{\epsilon}(t)=G_0(t)+O({\epsilon}) \label{hex1}$$ for $0 \le t \le {\Delta t}$, and $$h_{\epsilon}(t)={\epsilon}^{1/2}w({\epsilon}^{\gamma_1} t)+{\epsilon}^{\alpha} \bar \varphi_1^{(\alpha)}({\epsilon}^{\gamma_1}t) \label{hex2}$$ for ${\Delta t}\le t \ll {\epsilon}^{-\gamma_2}$.
By substituting $\phi_{{\epsilon}}(t)={f_{\rm c}}+h_{\epsilon}(t)$ into (\[MCT\]), we can write the equation for $h_{\epsilon}(t)$. The further substitution of (\[hex1\]) and (\[hex2\]) into the obtained equation for $h_{\epsilon}$ yields $$\begin{aligned} && {\epsilon}{\left(2 {w}^2(t_1) +1 /8 + 4 \int_{{\Delta t}{\epsilon}^{\gamma_1}}^{t_1-{\Delta t}{\epsilon}^{\gamma_1}} ds_1 {\left( {w}(t_1-s_1)- {w}(t_1)\right)} {w}'(s_1)\right)} {\nonumber\\}&& = O\left( {\epsilon}^{1/2+\gamma_1-\alpha'}, {\epsilon}^{3/2},{\epsilon}^{\gamma_1},{\epsilon}^{\alpha+1/2} \right). \label{eq_tilQ}\end{aligned}$$ Extracting the terms that survive after dividing by ${\epsilon}$ and taking the limit ${\epsilon}\to 0$, we obtain $$\begin{aligned} 2 {w}^2(t_1) +1/8 + 4 \int_0^{t_1} ds_1 {\left( {w}(t_1-s_1)- {w}(t_1)\right)} {w}'(s_1)=0.\end{aligned}$$ We substitute $w(t_1)=Q(t_1)-\lambda t_1$ into this equation. The result becomes (\[Q:det\]).

Short time expansion of $Q$ {#app:exp}
===========================

We assume the form $$Q(t_1)=\sum_{k=0}^\infty q_k t_1^{a(2k-1)}+\lambda t_1. \label{Qexp}$$ By substituting (\[Qexp\]) into (\[Q:det\]), we can determine $q_k$ $(k \ge 1)$ recursively from $q_0=c_0$. Concretely, the recursion equation becomes $$q_1=-\frac{1}{64 c_0 (V_{0,1}-1/2)}$$ and $$q_{k+1}=-\frac{1}{2q_0(V_{0,k+1}-1/2)} \left[\sum_{j=1}^k q_j q_{k+1-j} (V_{j,k+1-j}-1/2 ) \right]$$ for $k \ge 1$, where $$V_{m,n}=\frac{{p(2m-1)}{p(2n-1)}}{p(2m+2n-2)}$$ with $p(n)=\Gamma(1+an)$.

References {#references .unnumbered}
==========

[99]{} Götze W 1984 Z. Phys. B [**56**]{} 139 Leutheusser E 1984 Phys. Rev. A [**29**]{} 2765 Crisanti A, Horner H, and Sommers H J 1993 Z. Phys. B [**92**]{} 257 Götze W 1991 Liquids, Freezing and Glass Transition (Les Houches 1989 Session LI) ed J P Hansen Fuchs M, Götze W, Hofacker I, and Latz A 1991 J. Phys. Condens. Matter.
[**3**]{} 5047 Holmes M H 1995 [*Introduction to Perturbation Methods*]{} (Springer-Verlag, New York) Bender C M and Orszag S A 1999 [*Advanced Mathematical Methods for Scientists and Engineers*]{} (Springer-Verlag, New York) Bogoliubov N N and Mitropolsky Y A 1961 [*Asymptotic Methods in the Theory of Nonlinear Oscillations*]{} (Gordon and Breach) Kuramoto Y 1989 Prog. Theor. Phys. Suppl. [**99**]{} 244 Cross M C and Hohenberg P C 1993 Rev. Mod. Phys. [**65**]{} 851 Cornwall J M, Jackiw R, and Tomboulis E 1974 Phys. Rev. D [**10**]{} 2428 Biroli G and Bouchaud J P 2004 Europhys. Lett. [**67**]{} 21 Iwata M and Sasa S 2007 Europhys. Lett. [**77**]{} 50008 Iwata M and Sasa S 2008 Phys. Rev. E [**78**]{} 055202(R) Franz S and Parisi G 2000 J. Phys. Condens. Matter. [**12**]{} 6335 Biroli G, Bouchaud J P, Miyazaki K and Reichman D R 2006 Phys. Rev. Lett. [**97**]{} 195701 Cugliandolo L F and Kurchan J 1993 Phys. Rev. Lett. [**71**]{} 173 Chamon C and Cugliandolo L F 2007 J. Stat. Mech. P07022 Guckenheimer J and Holmes P 1983 [*Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields*]{} (Springer-Verlag, New York) Iwata M and Sasa S 2009 J. Phys. A: Math. Theor. [**42**]{} 075005 Silbert L E, Liu A J, and Nagel S R 2005 Phys. Rev. Lett. [**95**]{} 098301
--- abstract: 'We study few-body problems in mixed dimensions with $N \ge 2$ heavy atoms trapped individually in parallel one-dimensional tubes or two-dimensional disks, and a single light atom traveling freely in three dimensions. By using the Born-Oppenheimer approximation, we find three- and four-body bound states for a broad region of heavy-light atom scattering length combinations. Specifically, the existence of trimer and tetramer states persists into the negative scattering length regime, where no two-body bound state is present. These few-body bound states are analogous to the Efimov states in three dimensions, but are stable against three-body recombination due to geometric separation. In addition, we find that the binding energies of the ground trimer and tetramer states reach their maximum values when the scattering lengths are comparable to the separation between the low-dimensional traps. This resonant behavior is a unique feature of the few-body bound states in mixed dimensions.' author: - Tao Yin - Peng Zhang - Wei Zhang title: 'Stable Heteronuclear Few-Atom Bound States in Mixed Dimensions' ---

Introduction
============

One striking feature of few-body physics is the presence of universality under a resonant short-range interaction, where the low-energy behavior of the system does not depend on the details of its structure or interactions at short distances. Of particular interest is the existence of bound trimer states for three identical bosons in three dimensions with a resonant two-body interaction, as discussed in 1970 by Vitaly Efimov [@efimov-70]. At infinite scattering length, these three-body bound states form an infinite geometric spectrum with a constant ratio between two successive binding energies, indicating a discrete scaling symmetry [@tolle-11]. Moreover, the bound trimer states persist, rather counterintuitively, into the negative scattering length regime, where no two-body bound state exists.
After its original proposal, Efimov physics has attracted great attention in multi-disciplinary systems, including atomic nuclei [@jensen-04; @mazumdar-06], $^4$He trimers [@lim-77; @bruhl-05], and other molecules [@baccarelli-00]. However, direct evidence of such peculiar behavior was not achieved for more than three decades, until its first observation in an ultracold gas of neutral atoms [@kraemer-06]. Thanks to the extraordinary controllability of the mutual atomic interaction by tuning through a magnetic Feshbach resonance, signatures of trimer bound states have been observed in trapped atomic gases in both the negative and positive scattering length regimes [@kraemer-06; @knoop-09; @zaccanti-09; @gross-09; @pollack-09]. In addition to the original problem of identical bosons, the study of three-body physics has been extended to a variety of other three-particle systems [@braaten-06], including three distinguishable particles with different scattering lengths and/or different masses [@amado-72; @efimov-72; @Fonseca], two identical fermions with a third atom [@efimov-72; @petrov-03; @petrov-04], and three-atom systems with non-zero angular momentum [@endo-11]. Of particular interest is the case of three distinguishable fermions in an ultracold gas of three-component $^6$Li atoms. In such a system, there exists a broad magnetic Feshbach resonance such that all three scattering lengths can be tuned around resonance simultaneously [@ottenstein-08], making it a promising candidate to observe the few-body universal behavior [@huckans-09; @williams-09; @braaten-09; @wenz-09; @nakajima-10; @naidon-11; @nakajima-11; @lompe-10a; @lompe-10b]. Besides, the few-body problem has also been analyzed in ultracold gases of different atomic species by tuning the interaction across an interspecies Feshbach resonance [@barontini-09].
Due to the multi-channel nature of the inter-atomic interaction, the Efimov states in three-dimensional (3D) ultracold atomic gases are only metastable. Through the three-body recombination process, two of the three atoms in an Efimov trimer can form a deeply bound dimer, while the third one escapes from the trap. In order to prepare stable trimer states, one has to find a mechanism to significantly reduce or even prevent three-body recombination. Since the three-body recombination process only occurs when all three atoms come within a close range, one possible route towards this goal is to use geometric confinement to separate atoms such that they cannot travel to the same spot. For instance, if two of the three atoms are individually trapped in two spatially separated one-dimensional (1D) tubes or two-dimensional (2D) disks, and interact with each other via the third atom, which is free in all three dimensions, three-body recombination is inherently forbidden and the trimer states, if they exist in this mixed dimensional configuration, are stable. The few-body problem in mixed dimensions has been discussed recently by Nishida and Tan [@nishida-08], who consider two species of atoms confined in different dimensions and find trimer bound states for a certain range of mass ratios. However, since the atoms in lower dimensions are not geometrically separated, this configuration suffers from the same three-body recombination problem and the trimer states are unstable. Therefore, Nishida considered the problem of two atoms trapped in two separated 1D tubes or 2D layers, interacting with a third atom which is free in 3D [@nishida-10; @nishida-11]. This 1D-1D-3D or 2D-2D-3D mixture can thus support stable Efimov trimer states.
In this manuscript, we adopt another approach based on the Born-Oppenheimer approximation (BOA) to study few-body problems in mixed dimensions, and investigate the existence and properties of stable few-body bound states in a variety of configurations. For the three-body problems, we consider systems with two heavy atoms trapped in two parallel 1D tubes (1D-1D-3D) or 2D disks (2D-2D-3D), plus one light atom moving freely in 3D (see Fig. 1 for illustration). We conclude that the light atom can induce an effective interaction between the two heavy atoms, which are spatially separated by the low dimensional traps. Due to this effective interaction, the two heavy atoms can be bound to each other, leading to the formation of a three-body bound state in a very broad parameter region, including the regimes with negative $s$-wave scattering lengths between the light and the two heavy atoms, where two-body bound states are not present. In addition to their existence in mixed dimensions, the universal three-body bound states also acquire some unique features due to the geometric confinement. In particular, we find that the two heavy atoms experience the strongest effective interaction when the scattering length between heavy and light atoms equals the distance between the two low-dimensional traps. As a consequence of this resonance phenomenon, the binding energy of the ground trimer state takes a peak value around the resonance point, where the scattering length is of a finite value. We emphasize that the BOA provides a very clear physical picture with which the new resonance phenomenon in the mixed-dimensional systems can be easily explored and clearly described. We also compare our results with the exact expression [@nishida-10; @nishida-11] given by an effective field theory, and conclude that the BOA works well even in systems with a mass ratio of only about $6$.
This finding suggests that the BOA is a powerful tool for the study of stable heteronuclear few-body bound states in mixed dimensions. To demonstrate the usage of the BOA for general few-body problems, we consider as an example the 1D-1D-1D-3D geometry with three heavy atoms confined individually in parallel 1D tubes and a light atom in 3D free space. We find four-body bound states existing over a wide range of scattering lengths. A similar resonance phenomenon is also observed when the scattering length becomes close to the mutual distances between the 1D tubes, in which case the binding energy of the ground tetramer state reaches its maximum when the three 1D tubes form an equilateral triangle. We also show the scheme to generalize the BOA to problems with $N \ge 3$ heavy atoms and a single light one in an arbitrary mixed dimensional geometry. The mixed dimensional systems discussed in this manuscript can be realized in a mixture of two-species ultracold gases with a species-selective dipole potential, as illustrated in recent experiments [@catani-09]. The remainder of this manuscript is organized as follows. In Sec. II we first consider the 1D-1D-3D geometry and outline the BOA approach for the three-body problem. We calculate the effective interaction potential between the two heavy atoms, and observe the new resonance phenomenon. In Sec. III we solve for the three-body bound states, from which we conclude that a stable trimer state can exist in a broad parameter region, and that the binding energy of the ground trimer state takes its largest value under the new resonance condition. Similar results for the 2D-2D-3D system are shown in Sec. IV. In Sec. V we extend the usage of the BOA to the four-body problem in the 1D-1D-1D-3D geometry, and discuss the existence and properties of bound tetramer states. In Sec. VI, we show the general scheme for applying the BOA to problems with more than three atoms in arbitrary mixed dimensional geometries. Our main findings are summarized in Sec.
VII, and the Bethe-Peierls boundary condition used in our BOA approach is derived in Appendix A.

BOA for three-atom bound states in 1D-1D-3D systems
===================================================

In this section we present the Born-Oppenheimer approach for a three-body system with two heavy atoms individually trapped in two parallel 1D tubes and a light atom moving freely in the 3D space. The straightforward generalization to 2D-2D-3D systems will be given in Sec. IV, while the discussion of four-body problems in 1D-1D-1D-3D systems is given in Sec. V.

System and Hamiltonian
----------------------

As shown in Fig. 1(a), the 1D-1D-3D system includes two heavy atoms $A_{1}$ and $A_{2}$, plus a light atom $B$. The atoms $A_{1}$ and $A_{2}$ are trapped in two parallel 1D tubes centered along the lines $\left( x= \pm L/2,y=0\right)$, while the light atom $B$ moves freely in the 3D space. The quantum state of this system can be described by the wave function $\Psi (\vec{r}_{B};z_{1},z_{2})$, where $z_{1,2}$ are the $z$-coordinates of atoms $A_{1,2}$ in the 1D tubes, and $\vec{r}_{B}=(x_{B},y_{B},z_{B})$ is the coordinate of atom $B$ in 3D.

![(color online) (a) The 1D-1D-3D system with two heavy atoms $A_1$ and $A_2$ confined in two 1D tubes and the light atom $B$ moving freely in 3D. (b) The 2D-2D-3D system with two heavy atoms $A_1$ and $A_2$ confined in two 2D planes and the light atom $B$ moving freely in the 3D space.](fig1){width="8cm"}

In this manuscript, we use the natural units $\hbar =m_{B}=L=1$, where $m_{B}$ is the mass of atom $B$. The Hamiltonian for the motion of the three atoms is $$\begin{aligned} H=-\frac{1}{2}\nabla _{B}^{2}-\frac{1}{2m_{1}}\frac{\partial ^{2}}{\partial z_{1}^{2}}-\frac{1}{2m_{2}}\frac{\partial ^{2}}{\partial z_{2}^{2}}+V_{1B}+V_{2B}, \label{hf}\end{aligned}$$ where $m_{1,2}$ are the masses of atoms $A_{1,2}$ in the natural units, and $V_{1B,2B}$ are the interaction potentials between $A_{1,2}$ and $B$, respectively.
In this work we only consider the cases where the distance $L$ between the two tubes is much larger than the characteristic length of the interaction potential between $A_{1}$ and $A_{2}$. Hence the $A_{1}$-$A_{2}$ interaction can be safely ignored.

BOA for three-body bound states
-------------------------------

The three-body bound state is given by the solution of the eigen-equation $$\begin{aligned} H\Psi =E\Psi. \label{ee}\end{aligned}$$ When the masses of the heavy atoms $A_{1,2}$ are much larger than that of $B$, or $m_{1,2} \gg 1$ in the natural units, the eigen-equation (\[ee\]) can be solved with the BOA. This approximation is applicable when the motion of the heavy atoms $A_{1,2}$ is slow enough that the quantum transitions between different instantaneous eigen-states of the light atom $B$, with fixed positions $z_{1,2}$ of $A_{1,2}$, are negligible. Therefore, the total wave function $\Psi $ of the three-body bound state can be approximated by the factorized form $$\begin{aligned} \Psi (\vec{r}_{B};z_{1},z_{2})=\phi (z_{1},z_{2}) \psi (\vec{r}_{B},z_{1},z_{2}), \label{pf2}\end{aligned}$$ where $\psi (\vec{r}_{B},z_{1},z_{2})$ is an instantaneous bound-state solution of the eigen-equation of the Hamiltonian of atom $B$ with fixed values of $z_1$ and $z_2$. As shown in Appendix A, we can further replace the interaction potentials $V_{1B}$ and $V_{2B}$ with the Bethe-Peierls boundary conditions $$\begin{aligned} \psi (r_{1B} \to 0)\propto \left( 1-\frac{a_{1}}{r_{1B}}\right) +% \mathcal{O}(r_{1B}); \label{c1} \\ \psi (r_{2B} \to 0)\propto \left( 1-\frac{a_{2}}{r_{2B}}\right) +% \mathcal{O}(r_{2B}). \label{c2}\end{aligned}$$ Here $r_{1B,2B}$ are the relative distances between the heavy atoms $A_{1,2}$ and the light atom $B$, and $a_{1,2}$ are the mixed-dimensional scattering lengths between $A_{1,2}$ and $B$.
Notice that the Bethe-Peierls boundary conditions (\[c1\]) and (\[c2\]) are derived from a first-principles calculation in which the 3D motion of all three atoms $A_{1,2}$ and $B$ is taken into account. The mixed-dimensional scattering lengths $a_{1,2}$ are then determined by both the 3D $s$-wave scattering lengths between $A_{1,2}$ and $B$, as well as the intensity of the transverse confinements of the 1D traps. Thus, $a_{1,2}$ can be tuned either through a 3D magnetic Feshbach resonance [@Feshbach] or via a mixed-dimensional confinement-induced resonance [@nishida-08]. With the Bethe-Peierls boundary conditions, the wave function $\psi (\vec{r}_{B},z_{1},z_{2})$ is determined by $$\begin{aligned} -\frac{1}{2}\nabla _{B}^{2}\psi (\vec{r}_{B},z_{1},z_{2}) = V_{\rm{eff}}(z_{1},z_{2})\psi (\vec{r}_{B},z_{1},z_{2}), \label{ee3}\end{aligned}$$ through which the shape of the wave function $\psi (\vec{r}_{B},z_{1},z_{2})$ and the relevant eigen-energy $V_{\rm{eff}}(z_{1},z_{2})$ can be determined for given values of $z_{1,2}$. In the BOA, the instantaneous energy $V_{\rm{eff}}(z_{1},z_{2})$ of the light atom $B$ serves as an effective potential between the two slowly moving heavy atoms. The wave function $\phi(z_{1},z_{2}) $ in Eq. (\[pf2\]) then satisfies the Schr[ö]{}dinger equation $$\begin{aligned} \left[ -\frac{1}{2m_{1}}\frac{\partial ^{2}}{\partial z_{1}^{2}} -\frac{1}{2m_{2}}\frac{\partial ^{2}}{\partial z_{2}^{2}} +V_{\rm{eff}}(z_{1},z_{2})\right] \phi (z_{1},z_{2})\nonumber\\ =E\phi (z_{1},z_{2}), \label{eef}\end{aligned}$$ where $E$ is the total energy of the trimer state defined in Eq. (\[ee\]). In this manuscript, we focus only on the ground state of the three-body eigen-equation (\[ee\]), which consists of the ground-state solutions $\psi $ and $\phi$ of (\[ee3\]) and (\[eef\]), respectively.
In summary, to derive the three-body bound state with the BOA, we should first find the ground-state solution $\psi $ of the instantaneous eigen-equation (\[ee3\]) of the light atom $B$, and then solve the effective eigen-equation (\[eef\]) of the heavy atoms $A_{1,2}$, in which the instantaneous eigen-energy $V_{\rm{eff}}(z_{1},z_{2})$ of $\psi$ plays the role of the interaction potential between $A_{1}$ and $A_2$. Therefore, the BOA provides a simple and clear physical picture for the three-body problem, i.e., the light atom $B$ induces an effective interaction between the two heavy atoms, which determines the properties of the three-body bound state. With this picture, one can perform not only quantitative calculations but also qualitative discussions of the appearance and features of the trimer states once the potential function $V_{\rm{eff}}(z_{1},z_{2})$ is known from (\[ee3\]). This is a major advantage of the BOA approach. At the end of this subsection we emphasize that, since in the BOA the transitions between different solutions of the instantaneous eigen-equation (\[ee3\]) are neglected, this approximation can only be used when the gap between $V_{\rm{eff}}(z_{1},z_{2})$ and other eigen-energies of (\[ee3\]) \[with boundary conditions (\[c1\]) and (\[c2\])\] is large enough. In the cases where $V_{\rm{eff}}(z_{1},z_{2})$ is close to the lower bound of the continuous spectrum, the application of the BOA may be questionable.

Effective interaction between the two heavy atoms
-------------------------------------------------

In the discussion above we outlined the procedure for deriving the three-body bound states with the BOA. In this subsection we solve Eqs. (\[c1\]-\[ee3\]) to calculate the instantaneous eigen-state $\psi (\vec{r}_{B},z_{1},z_{2})$ of the light atom $B$, and the light-atom-induced effective potential $V_{\rm{eff}}(z_{1},z_{2})$ between the two heavy atoms.
A straightforward calculation shows that the ground-state solution $\psi$ (up to a normalization factor) of Eq. (\[ee3\]) and the corresponding energy $V_{\rm{eff}}(z_{1},z_{2})$ are given by $$\begin{aligned} \psi (\vec{r}_{B},z_{1},z_{2}) &=&\frac{e^{-\kappa (r_{12})r_{1B}}}{r_{1B}} +\xi (r_{12})\frac{e^{-\kappa (r_{12})r_{2B}}}{r_{2B}} \label{psi1}; \\ V_{\rm{eff}}(z_{1},z_{2}) &=&-\frac{\kappa ^{2}(r_{12})}{2},\label{v1}\end{aligned}$$ where $r_{12}=\sqrt{1+(z_{1}-z_{2})^{2}}$ is the distance between $A_{1}$ and $A_{2}$. Substituting the expression of $\psi (\vec{r}_{B},z_{1},z_{2})$ into the Bethe-Peierls boundary conditions (\[c1\]) and (\[c2\]), one can derive the values of $\kappa $ and $\xi $ in terms of the distance $(z_{1}-z_{2})$, and then obtain expressions for $\psi (\vec{r}_{B},z_{1},z_{2})$ and $V_{\rm{eff}}(z_{1},z_{2})$. Notice that, as a bound state, the wave function $\psi (\vec{r}_{B},z_{1},z_{2})$ must approach zero in the limit $r_{1B}\to \infty $ or $r_{2B}\to \infty $. Therefore, the condition $\kappa >0$ must be satisfied when we solve the equations for $\kappa $ and $\xi $. According to Eq. (\[v1\]), the effective potential $V_{\rm{eff}}(z_{1},z_{2})$ is a function of the distance $z_{12} \equiv z_{1}-z_{2}$ between the two heavy atoms along the axial direction of the 1D tubes. Then the wave function $\phi (z_{1},z_{2})$ in the total wave function (\[pf2\]) is also a function of $z_{12}$, indicating the translational symmetry along the $z$-axis. From now on, we rewrite $V_{\rm{eff}}(z_{1},z_{2})$ as $V_{\rm{eff}}(z_{12})$ and $\phi (z_{1},z_{2})$ as $\phi (z_{12})$, and write Eq. (\[eef\]) as $$\begin{aligned} \left[ -\frac{1}{2m_{\ast }}\frac{\partial ^{2}}{\partial z_{12}^{2}} +V_{\rm{eff}}(z_{12})\right] \phi \left( z_{12} \right) =E\phi \left(z_{12}\right), \label{ef2}\end{aligned}$$ where $m_{\ast }=m_{1}m_{2}/(m_{1}+m_{2})$ is the reduced mass of the two heavy atoms. From Eq.
(\[ef2\]), we can see clearly that $V_{\rm{eff}}$ serves as an effective interaction between the two heavy atoms $A_{1,2}$, and determines the existence and behavior of the three-body bound states. Next, we discuss the features of $V_{\rm{eff}}$ in different parameter regions.

### $a_{1}=a_{2}=a>0$

In this case the two heavy atoms $A_{1,2}$ have the same positive scattering length with the light atom $B$. Since $\psi $ is the ground-state solution of Eq. (\[ee3\]), a straightforward calculation shows that in this symmetric case we have $\xi =1$ and $\kappa $ given by the equation $$\begin{aligned} -\kappa +\frac{e^{-\kappa r_{12}}}{r_{12}}=-\frac{1}{a}. \label{ke}\end{aligned}$$ This equation can be solved analytically, leading to $$\begin{aligned} \kappa =\frac{1}{a}+\frac{W\left( e^{-r_{12}/a}\right) }{r_{12}}, \label{k1}\end{aligned}$$ where $W(z)$ is the Lambert $W$ function, i.e., the principal root of the equation $z=We^{W}$. Substituting the result (\[k1\]) into Eq. (\[v1\]), we finally obtain an analytic expression for the effective interaction between the two heavy atoms: $$\begin{aligned} V_{\rm{eff}}(z_{12})=U\left( a;z_{12}\right) -\frac{1}{2a^{2}}, \label{vv1}\end{aligned}$$ where the regularized part $U\left( a;z_{12}\right) $ is given by $$\begin{aligned} U\left( a;z_{12}\right) &=&-\frac{1}{2} \frac{W\left( e^{-\sqrt{1+z_{12}^{2}}/a}\right)^{2}} {1+z_{12}^{2}} \nonumber \\ &&\hspace{-0.5cm} -\frac{1}{a}\frac{W\left( e^{-\sqrt{1+z_{12}^{2}}/a}\right) } {\sqrt{1+z_{12}^{2}}}, \label{u1}\end{aligned}$$ which approaches zero in the limit $\left\vert z_{12}\right\vert \rightarrow \infty $. Therefore, the character of the bound states is essentially determined by the behavior of $U\left( a;z_{12}\right)$. With the knowledge of the $W$ function, we can easily find that when $a>0$, $U\left( a;z_{12}\right)$ is a purely attractive symmetric potential well with $$\begin{aligned} U\left( a,z_{12}\right) =U\left( a,-z_{12}\right) <0.\end{aligned}$$ In Fig.
\[uz1z2\], we plot $U\left( a;z_{12}\right) $ for a set of typical values of scattering lengths. It is clearly shown that $U\left( a;z_{12}\right) $ provides a simple $1$D potential well for the two heavy atoms. This behavior guarantees that there exists at least one bound-state solution $\phi$ of Eq. (\[ef2\]), and then the total system has at least one three-body bound state. ![(color online) The regularized effective potential $U(a,z_{12})$ between the two heavy atoms $A_{1,2}$ with scattering lengths $a_1=a_2=a=0.2$ (red solid line with open circles), $1$ (blue solid line) and $\infty$ (black dashed line). We also plot the effective potential $V_{\rm eff}(z_{12})$ for $a_1=a_2=-5$ (green dashed-dotted line). The natural unit of $\hbar = m_B = L =1$ is used throughout this paper. []{data-label="uz1z2"}](V_z){width="8cm"} Intuitively, one might expect the atom-atom interaction effect to be most significant when the scattering length takes an infinite value. However, we find from Fig. \[uz1z2\] that the depth of the effective interaction $U\left( a;z_{12}\right) $ takes a maximum value when $a=1$ in our natural unit, rather than when $a \to +\infty $. This observation suggests that the light-atom-induced interaction between the two heavy atoms $A_{1,2}$ is most significant when the scattering length between a single heavy atom and the light one equals the distance separating the two 1$D$ tubes. This novel property can be considered as a kind of resonance effect given by the special configuration of mixed dimensional systems. This resonance effect can also be demonstrated analytically using the properties of the $W$ function. For any given value of $a$, the potential $U\left(a,z_{12}\right) $ has only one minimum point, which is located at the origin $z_{12}=0$.
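As a quick numerical sanity check, one can verify that the closed form (\[k1\]) indeed solves Eq. (\[ke\]), and that the well depth $-U(a,0)$ is maximized near $a=1$. The following is a minimal sketch (assuming NumPy and SciPy are available; the sampled values of $a$ and $z_{12}$ are arbitrary):

```python
import numpy as np
from scipy.special import lambertw

def kappa_closed_form(a, r12):
    """Eq. (k1): kappa = 1/a + W(exp(-r12/a)) / r12 (principal branch)."""
    return 1.0 / a + np.real(lambertw(np.exp(-r12 / a))) / r12

# verify that (k1) solves Eq. (ke): -kappa + exp(-kappa r12)/r12 = -1/a
for a, z12 in [(0.2, 0.0), (1.0, 0.7), (5.0, 2.0)]:
    r12 = np.sqrt(1.0 + z12**2)
    k = kappa_closed_form(a, r12)
    assert abs(-k + np.exp(-k * r12) / r12 + 1.0 / a) < 1e-12

def depth(a):
    """Well depth D(a) = -U(a, 0), evaluated from Eq. (u1) at z12 = 0."""
    w = np.real(lambertw(np.exp(-1.0 / a)))
    return 0.5 * w**2 + w / a

# the depth is maximal at a = 1: the mixed-dimensional resonance
a_grid = np.linspace(0.1, 20.0, 2000)
assert abs(a_grid[np.argmax(depth(a_grid))] - 1.0) < 0.02
```

The cancellation in the first assertion is exact up to round-off, since $e^{-\kappa r_{12}} = W(e^{-r_{12}/a})$ by the defining relation of the $W$ function.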
Thus, the depth of the potential well takes the form $$\begin{aligned} D\left( a\right) \equiv -U\left( a,0\right) =\frac{1}{2}W\left( e^{-1/a}\right) ^{2}+\frac{1}{a}W\left( e^{-1/a}\right).\end{aligned}$$It is easy to show that $D\left( a\right) $ takes its maximum value at $a=1$. In Fig. \[depth\] we plot the potential depth as a function of $1/a$, exhibiting the resonance signature at $a = 1$. ![(color online) The depth $D(a)$ of the regularized part $U(a,z_{12})$ of the effective interaction between the two heavy atoms in the case of $a_1=a_2=a>0$. Notice that $D(a)$ takes a maximum value at $a=1$, indicating a new resonance behavior for mixed dimensional systems.[]{data-label="depth"}](da){width="8cm"} ### $a_{1}=a_{2}=a<0$ In this case, by substituting Eq. (\[psi1\]) into the Bethe-Peierls boundary conditions (\[c1\]) and (\[c2\]), we also get $\xi =1$ and $\kappa $ given by Eqs. (\[ke\]) and (\[k1\]) for $r_{12}<\left\vert a\right\vert $. However, for $r_{12}>\left\vert a \right\vert $, there is no positive solution of Eq. (\[ke\]) for $\kappa $. This suggests that the Schrödinger equation (\[ee3\]) with Bethe-Peierls boundary conditions (\[c1\]) and (\[c2\]) does not support any instantaneous bound state $\psi $ of the light atom $B$, and thus one cannot derive any effective interaction for the two heavy atoms $A_{1,2}$ within the BOA. As a consequence, when the scattering length $|a| < 1$, there would be no three-body bound state, since the condition $r_{12} > |a|$ is satisfied for arbitrary 1D distance $z_{12}$ between the two heavy atoms. On the other hand, when $\left\vert a \right\vert >1$, the BOA can give the effective interaction potential $$\begin{aligned} V_{\rm{eff}}\left(z_{12}\right) =-\frac{1}{2}\left[ \frac{1}{a} +\frac{W\left(e^{-\sqrt{1+z_{12}^{2}}/a}\right) }{\sqrt{1+z_{12}^{2}}}\right] ^{2},\end{aligned}$$ provided that $|z_{12}| < \sqrt{a^{2}-1}$ or $r_{12}<\left\vert a\right\vert $.
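The piecewise structure of the negative-scattering-length potential can be evaluated directly. Below is a hedged numerical sketch (SciPy assumed; the sample value $a=-5$ matches the curve shown in Fig. \[uz1z2\]):

```python
import numpy as np
from scipy.special import lambertw

def v_eff_negative(a, z12):
    """Effective potential for a1 = a2 = a < 0 with |a| > 1:
    attractive for r12 < |a| (where kappa > 0), zero otherwise."""
    r12 = np.sqrt(1.0 + z12**2)
    if r12 >= abs(a):
        return 0.0  # no instantaneous bound state of atom B here
    kappa = 1.0 / a + np.real(lambertw(np.exp(-r12 / a))) / r12
    return -0.5 * kappa**2

a = -5.0                      # sample value, as in Fig. [uz1z2]
z_edge = np.sqrt(a**2 - 1.0)  # support ends at |z12| = sqrt(a^2 - 1)
assert v_eff_negative(a, 0.0) < 0.0           # attractive well inside
assert v_eff_negative(a, z_edge + 0.1) == 0.0  # zero outside the range
```

Note that the same Lambert-$W$ expression as in the $a>0$ case applies, since the condition $\kappa>0$ is equivalent to $r_{12}<|a|$.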
In the outer region $|z_{12}| > \sqrt{a^{2}-1}$, the potential vanishes, $V_{\rm{eff}}\left( z_{12}\right) =0$. In Fig. \[uz1z2\], we also show $V_{\rm{eff}}\left( z_{12}\right) $ with negative scattering length. We would like to emphasize that the BOA can only be used when $V_{\rm{eff}}$ is well separated from the continuous spectrum of the Schrödinger equation (\[ee3\]). This criterion is actually violated in the region $r_{12} \sim \left\vert a\right\vert $ or $z_{12} \sim \sqrt{a^{2}-1}$, where we have $V_{\rm{eff}}\left( r_{12}\right) \sim 0$. The effective potential is therefore not applicable in these regions. Fortunately, if the potential is deep enough, the ground-state wave function $\phi $ of the heavy atoms $A_{1,2}$ would be mainly localized in the region $z_{12} \sim 0$ or $r_{12} \ll \left\vert a\right\vert $, where the BOA is applicable. Thus, the ground-state wave function and its corresponding binding energy obtained from the BOA are still reliable. Notice that in this negative-scattering-length regime, $A_{1,2}$ and $B$ cannot form any two-body bound state, hence the appearance of a three-body bound state is a non-trivial universal phenomenon. ### $0<a_{1}<a_{2}$ or $a_{2}<0<a_{1}$ Now we consider the general cases where the scattering lengths $a_1$ and $a_2$ are different. In these cases one can also derive the values of $\xi $ and $\kappa $ by substituting the expression (\[psi1\]) into the Bethe-Peierls boundary conditions (\[c1\]) and (\[c2\]). When $0<a_{1}<a_{2}$ or $a_{2}<0<a_{1}$, we know that in the limit $r_{12}\rightarrow \infty $, i.e., when the two heavy atoms are far away from each other, the instantaneous ground state of the light atom $B$ is the two-body bound state of $B$ and $A_{1}$.
Considering the expression (\[psi1\]) of the instantaneous bound state, we have$$\begin{aligned} \xi \left( r_{12}\rightarrow \infty \right) =0.\end{aligned}$$With the help of this condition, we obtain the result$$\begin{aligned} \xi =\frac{-\Delta +\sqrt{\Delta ^{2}+4e^{-2\kappa r_{12}}/r_{12}^{2}}}{2} r_{12}e^{\kappa r_{12}}, \label{xie}\end{aligned}$$where$$\begin{aligned} \Delta \equiv \frac{1}{a_{1}}-\frac{1}{a_{2}}>0.\label{delta}\end{aligned}$$Then the value of $\kappa $ is given by $$\begin{aligned} -\kappa +\frac{-\Delta +\sqrt{\Delta ^{2}+4e^{-2\kappa r_{12}}/r_{12}^{2}}}{2}=-\frac{1}{a_{1}}. \label{kkkq}\end{aligned}$$By solving Eqs. (\[xie\]) and (\[kkkq\]) numerically, we can obtain the values of $\xi $ and $\kappa $, and then the effective potential $V_{\mathrm{eff}}$. It is easy to show that $V_{\rm eff}<0$ for all values of $z_{12}$. Therefore, there is also at least one three-body bound state. When $a_{2}<0<a_{1}$, although the atoms $A_{1}$ and $B$ can form a two-body bound state, there is no two-body bound state for $A_{2}$ and $B$. In this sense the existence of a three-body bound state is also a non-trivial phenomenon. ### $a_{1}<a_{2}<0$ In this case a straightforward calculation shows that the values of $\xi $ and $\kappa $ are also determined by Eqs. (\[xie\]) and (\[kkkq\]). Nevertheless, similar to the case of $a_{1}=a_{2}=a<0$, there are also some regions where the instantaneous bound state $\psi $ does not exist. Specifically, we can define a critical distance $r_{12}^{\ast }$ as $$\begin{aligned} r_{12}^{\ast }=2\left[ \left( \Delta -\frac{2}{a_{1}}\right) ^{2}-\Delta^{2}\right] ^{-1/2}\end{aligned}$$with $\Delta$ defined in (\[delta\]). It is apparent that when $r_{12}>r_{12}^{\ast }$, we cannot find any real $\kappa $ which satisfies Eq. (\[kkkq\]). In this sense, $r_{12}^{\ast}$ can be understood as the range of the effective interaction between $A_1$ and $A_2$.
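The numerical procedure just described, i.e. solving Eq. (\[kkkq\]) for $\kappa$ and then diagonalizing Eq. (\[ef2\]) for the heavy-atom motion, can be sketched as follows. This is a minimal finite-difference illustration, not the production calculation of the paper; SciPy is assumed, and the grid parameters and sample scattering lengths are illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.linalg import eigh_tridiagonal

def kappa_general(a1, a2, z12):
    """Root of Eq. (kkkq); valid where an instantaneous bound state
    with kappa > 0 exists (e.g. for 0 < a1 <= a2)."""
    r12 = np.sqrt(1.0 + z12**2)
    delta = 1.0 / a1 - 1.0 / a2
    def f(k):
        s = np.sqrt(delta**2 + 4.0 * np.exp(-2.0 * k * r12) / r12**2)
        return -k + 0.5 * (-delta + s) + 1.0 / a1
    # f > 0 as k -> 0+ and f < 0 for k > 1/a1 + 1/r12: root is bracketed
    return brentq(f, 1e-9, 1.0 / a1 + 2.0)

def trimer_binding_energy(a1, a2, m_star, z_max=15.0, n=2000):
    """Finite-difference diagonalization of Eq. (ef2).  Returns
    E_3b = V_eff(inf) - E_0, with V_eff(inf) = -1/(2 a1^2) the energy
    of B bound to A_1 alone (assuming 0 < a1 <= a2)."""
    z = np.linspace(-z_max, z_max, n)
    h = z[1] - z[0]
    v = np.array([-0.5 * kappa_general(a1, a2, x)**2 for x in z])
    diag = 1.0 / (m_star * h**2) + v          # -(1/2m*) d^2/dz^2 + V(z)
    off = np.full(n - 1, -0.5 / (m_star * h**2))
    e0 = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))[0][0]
    return -0.5 / a1**2 - e0

# sample check: binding energy positive, bounded by the well depth
e3b = trimer_binding_energy(1.0, 1.0, 3.33)
assert 0.0 < e3b < 0.32   # D(1) ~ 0.317 is a variational upper bound
```

A positive result confirms that the effective well supports at least one bound state; by the Gershgorin bound, the discrete ground energy can never drop below the minimum of $V_{\rm eff}$, so $E_{\rm 3b}$ cannot exceed the well depth.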
When this range is smaller than the distance between the two 1D tubes, i.e. $r_{12}^{\ast} < 1$, the two heavy atoms are always separated far enough that the BOA does not give any effective mutual interaction. On the other hand, when $r_{12}^{\ast }>1$ the effective potential of the two heavy atoms can be defined as$$\begin{aligned} V_{\rm eff}=\left\{ \begin{array}{c} -\kappa^{2}/2;\ \ \ 1\leq r_{12}\leq r_{12}^{\ast } \\ 0;\ \ \ r_{12}>r_{12}^{\ast }. \end{array} \right.\end{aligned}$$This potential is also not reliable in the region $r_{12}\sim r_{12}^{\ast }$, where the condition for the BOA is broken. However, as shown below, this approach can lead to a bound-state wave function $\phi $ which takes negligible value in this questionable region, such that the discussion within the BOA remains valid. Since the negative scattering lengths do not support any two-body bound states, the existence of such a three-body bound state in this region is of great interest. Three-body universal bound states in $1$D-$1$D-$3$D systems =========================================================== ![(color online) The binding energy of the ground three-body bound state in the $1$D-$1$D-$3$D system with reduced masses of the heavy atoms $m_*=3.33$ and $9.5$. These values correspond to the cases of ($A_{1}= A_2= ^{40}$K, $B= ^{6}$Li) and ($A_{1}=A_2=^{133}$Cs, $B=^{7}$Li), respectively.[]{data-label="be113"}](3E0){width="9cm"} In the previous section, we have obtained the instantaneous bound-state wave function $\psi$ of the light atom $B$ and the effective interaction potential $V_{\rm eff}(z_{12})$ between the two heavy atoms. We have shown that $V_{\rm eff}$ is most significant when the two-body scattering length is resonant with the distance between the two $1$D tubes. In this section we derive the wave functions and binding energies of the relevant three-body bound states, and further confirm the observation of this new resonance effect. In Fig.
\[be113\], the binding energy $E_{\rm 3b}$ of the ground trimer state is plotted as a function of $1/a_1$ and $1/a_2$ with heavy-atom reduced masses $m_*=3.33$ and $9.5$ in the natural unit. These values correspond to the cases of ($A_{1}= A_2= ^{40}$K, $B= ^{6}$Li) and ($A_{1}=A_2=^{133}$Cs, $B=^{7}$Li), respectively. Here, the binding energy $E_{\rm 3b}$ is defined as the energy gap between the three-body ground state $E$ and the threshold of the effective interaction, i.e. $$\begin{aligned} E_{\rm 3b}=V_{\rm eff}(\infty) - E.\end{aligned}$$ From Fig. \[be113\], we notice that a three-body bound state exists for a wide range of positive and negative scattering length combinations, as discussed in the previous section. Nevertheless, the binding energy reaches a peak value when the two scattering lengths $a_1$ and $a_2$ are close to each other, especially in the region around $a_1 \sim a_2 \sim 1$. This observation is consistent with the discussion in the previous section, which shows that when $a_1 = a_2 = a$, the effective potential well for the $A_1$-$A_2$ interaction is deepest when the scattering length is resonant with the distance between the two 1D tubes, i.e., $a=1$. ![(color online) The binding energy $E_{\rm 3b}$ of the ground trimer state as a function of $1/a$ with $a_1=a_2=a$. The reduced masses used in this plot are $m_*=3.33$ (black solid line), $9.5$ (blue dashed line) and $\infty$ (red solid line with open circles), respectively.[]{data-label="be113b"}](3E1){width="7.5cm"} To further investigate the relationship between the binding energy and the two-body scattering lengths, we focus on the case of $a_1 = a_2 = a$, and illustrate in Fig. \[be113b\] the binding energy in terms of $1/a$ for different reduced masses $m_*$ of the two heavy atoms. One significant feature of this result is that the resonant behavior is present for all different reduced masses, i.e.
the binding energy of the ground trimer state reaches its maximum in the region around $a = 1$. Besides, we also notice that for a given two-body scattering length, the binding energy increases with the reduced mass $m_*$, and approaches an asymptotic value in the limit $m_* \to \infty$. This tendency is also confirmed by Fig. \[be113c\], where the binding energies for $a_1=a_2=a=1$ and $a_1=a_2=a=\infty$ are plotted as functions of the reduced mass $m_*$. ![(color online) The binding energy $E_{\rm 3b}$ of the ground trimer state as a function of the reduced mass $m_*$ with $a_1=a_2=a=1$ (blue solid line with open triangles) and $a_1=a_2=a=\infty$.[]{data-label="be113c"}](3EMu){width="7.5cm"} The three-body bound states in the $1$D-$1$D-$3$D systems with $a_1=a_2$ are also discussed in Ref. [@nishida-11], within an effective field theory and via the exact solution of the three-body Schrödinger equation. In Fig.  \[be113d\], we compare our BOA results for the ground trimer state energy with the exact expression given by Ref. [@nishida-11] for $m_*=3.33$ and $a_1 =a_2 = a$. Notice that the BOA results are very close to the exact solution around the resonance point $a=1$, even for such a rather small mass ratio. This consistency suggests that the BOA approach is reliable provided that the three-body bound state energy is away from the threshold. ![(color online) The binding energy of the ground three-body bound state in the 1D-1D-3D system with reduced mass $m_*=3.33$ and scattering lengths $a_1=a_2=a$. Here, we plot the results given by the BOA (red solid line with open circles) and by the exact solution of the Schrödinger equation [@nishida-11] (blue solid line).
Notice that the BOA can give reliable results provided that the binding energy of the trimer state is away from the threshold.[]{data-label="be113d"}](Eb_sqrt_1D.eps){width="7.5cm"} Three-body universal bound states in $2$D-$2$D-$3$D systems =========================================================== The discussion on $1$D-$1$D-$3$D systems outlined in the previous section can be directly generalized to other mixed-dimensional configurations. In this section we consider a $2$D-$2$D-$3$D system \[Fig. 1(b)\] where the two heavy atoms $A_{1,2}$ are trapped individually in two $2$D confinements, located in the planes $x= \pm L/2$. The light atom $B$ is again assumed to move freely in the $3$D space. We also adopt the natural units with $\hbar = m_B = L =1$. When the masses of $A_{1,2}$ are much larger than that of $B$, the system can also be treated via the BOA. The wave function $\Psi$ of the possible three-body bound state also takes the factorized form as in Eq. (\[pf2\]), i.e., $\Psi=\phi\psi$ with $\psi$ the instantaneous bound state of the light atom $B$. In this case, the instantaneous energy of $\psi$ serves as an effective $2$D interaction between the two heavy atoms, and can be obtained by replacing the argument $z_1-z_2$ in $V_{\rm eff}(z_1-z_2)$ with $\rho=\sqrt{(y_1-y_2)^2+(z_1-z_2)^2}$. Following the same procedure as outlined in Sec. II, we can show that in the case of $a_1=a_2=a$, the depth of the $2$D effective potential also takes its maximal value when $a=1$ in the natural unit. This observation indicates that the resonance phenomenon also exists in the $2$D-$2$D-$3$D configuration. Notice that the 2D-2D-3D geometry is invariant under rotations about the $x$-axis. This $\rm{SO(2)}$ symmetry thus leads to the conservation of the $x$-component of the angular momentum of the $A_1$-$A_2$ relative motion.
Therefore, the wave function $\phi$ in the three-body bound state $\Psi$ can be expressed as $$\begin{aligned} \phi= \sum_{\ell} \phi_\ell(\rho)e^{i \ell \theta},\end{aligned}$$ where $\tan\theta=(z_1-z_2)/(y_1-y_2)$ is the polar angle of the $A_{1,2}$ relative motion in the $y$-$z$ plane, and the radial wave function $\phi_\ell(\rho)$ satisfies the $2$D Schrödinger equation $$\begin{aligned} &&\left[-\frac{1}{2 m_*}\left(\frac{d^2}{d\rho^2}+\frac{1}{\rho}\frac{d}{d\rho} -\frac{\ell^2}{\rho^2}\right) +V_{\rm eff}(\rho)\right]\phi_\ell(\rho) \nonumber \\ && \hspace{4cm} =E_\ell\phi_\ell(\rho). \notag \\ \label{4.phi}\end{aligned}$$ Here, the quantum number $\ell=0,\pm 1,\pm 2,...$ indicates the relative angular momentum of $A_{1,2}$ along the $x$-direction. The ground state of the system lies in the $\ell=0$ channel. The radial equation (\[4.phi\]) can be solved numerically as in the 1D-1D-3D case. For the ground zero-angular-momentum channel $\ell =0$, we also find three-body bound states with reduced masses $m_* = 3.33$ and 9.5. The binding energy of the ground trimer state is illustrated in Fig. \[be223\] in terms of $1/a_1$ and $1/a_2$. Notice that the binding energy is significantly amplified in the parameter region $a_1 \sim a_2$, and reaches its maximum when the scattering lengths are resonant with the spacing between the 2D planes, $a_1 \sim a_2 \sim 1$. Besides, the binding energy also increases with the reduced mass $m_*$ of the two heavy atoms. In Fig. \[fig.4.E1\] we also compare the BOA results with the exact expression [@nishida-10] for the case of $m_*=3.33$ and $a_1=a_2=a$, and find good agreement when the trimer binding energy is away from the threshold. All these features are analogous to the case of the $1$D-$1$D-$3$D geometry.
![(color online) The binding energy of the ground three-body bound state in the $2$D-$2$D-$3$D geometry with reduced masses of the heavy atoms $m_*=3.33$ and $9.5$.[]{data-label="be223"}](4E0){width="9cm"} ![(color online) The binding energy of the ground three-body bound state in 2D-2D-3D geometry with reduced mass $m_*=3.33$ and scattering lengths $a_1=a_2=a$. Here, we plot the results given by the BOA (red solid line with open circles) and by an effective field theory [@nishida-10] (blue solid line), and find good agreement provided that the binding energy is away from the threshold.[]{data-label="fig.4.E1"}](Eb_sqrt_2D.eps){width="7cm"} Four-body universal bound states in 1D-1D-1D-3D systems ======================================================= From the discussion in the previous sections, we notice that the BOA works well throughout a wide range of scattering lengths for a fairly small mass ratio of about $6$, provided that the binding energy of the bound trimer state is away from the threshold. This observation suggests that this approach can be directly applied to mixed dimensional systems with more than three atoms, and gives reliable results for the few-body binding energy when it is sizable. In this section, we consider as an example the 1D-1D-1D-3D system with three heavy atoms $A_1$, $A_2$, and $A_3$ trapped individually in parallel 1D tubes and a single light atom $B$ moving freely in 3D. We consider the configuration of three 1D tubes arranged along the $z$ direction, intersecting the $x$-$y$ plane at $(x=\pm L/2, y=0)$ and $(x=x_0, y=y_0)$, as shown schematically in Fig. \[fig1113\]. The three intersection points form a triangle in the $x$-$y$ plane. Since the system properties are invariant under a rescaling of lengths, we assume that $L$ is the shortest side of the triangle, and use it as the length unit $L=1$ in the following discussion.
![(color online) The 1D-1D-1D-3D system with three heavy atoms $A_1, A_2$ and $A_3$ confined in three 1D tubes and the light atom $B$ moving freely in 3D.](fig1113.eps){width="7cm"} \[fig1113\] The quantum states of such a system can be described by the wave function $\Psi (\vec{r}_{B};z_{1},z_{2}, z_{3})$, where $z_i$ is the $z$-coordinate of the heavy atom $A_i$, and $\vec{r}_{B}$ is the coordinate of the light atom $B$. Within the BOA, the wave function $\Psi$ can be separated as $$\begin{aligned} \label{eqn:4body-BOA} \Psi(\vec{r}_{B};z_{1},z_{2}, z_{3}) = \phi(z_{1},z_{2}, z_{3}) \psi(\vec{r}_{B};z_{1},z_{2}, z_{3}).\end{aligned}$$ Here, $\psi$ is the wave function of the instantaneous bound state of the light atom, which is given by the Schrödinger equation $$\begin{aligned} \label{eqn:4body-SE} - \frac{1}{2}\nabla_B^2 \psi(\vec{r}_{B};z_{1},z_{2}, z_{3}) = V_{\rm eff} (z_{1},z_{2}, z_{3}) \psi(\vec{r}_{B};z_{1},z_{2}, z_{3}) \nonumber\\\end{aligned}$$ with the Bethe-Peierls boundary conditions $$\begin{aligned} \label{eqn:4body-BP} \psi(r_{iB} \to 0) \propto \left( 1- \frac{a_i}{r_{iB}}\right) +\mathcal{O}(r_{iB}).\end{aligned}$$ Here, $r_{iB}$ and $a_i$ are the distance and the mixed-dimensional scattering length between the atoms $A_i$ and $B$, respectively.
The ground state of $\psi$ can be obtained by solving the eigen-equation (\[eqn:4body-SE\]) for a given set of $(z_{1},z_{2}, z_{3})$, which takes the form $$\begin{aligned} \label{eqn:4body-psi} \psi(\vec{r}_{B};z_{1},z_{2}, z_{3}) &=& \frac{e^{-\kappa r_{1B}}}{r_{1B}} + c_2 \frac{e^{-\kappa r_{2B}}}{r_{2B}} + c_3 \frac{e^{-\kappa r_{3B}}}{r_{3B}}, \nonumber \\ V_{\rm eff} (z_1, z_2, z_3) &=& -\frac{\kappa^2}{2}.\end{aligned}$$ The parameters $\kappa$, $c_2$ and $c_3$ are determined by the boundary conditions (\[eqn:4body-BP\]), leading to $$\begin{aligned} \label{eqn:4body-kappa} \kappa - c_2 \frac{e^{-\kappa r_{12}}}{r_{12}} - c_3 \frac{e^{-\kappa r_{13}}}{r_{13}} &=& \frac{1}{a_1}; \nonumber \\ - \frac{e^{-\kappa r_{12}}}{r_{12}} + c_2 \kappa - c_3 \frac{e^{-\kappa r_{23}}}{r_{23}} &=& \frac{c_2}{a_2}; \nonumber \\ - \frac{e^{-\kappa r_{13}}}{r_{13}} - c_2 \frac{e^{-\kappa r_{23}}}{r_{23}} + c_3 \kappa &=& \frac{c_3}{a_3},\end{aligned}$$ where $r_{ij}$ is the distance between the heavy atoms $A_i$ and $A_j$. A numerical solution of these equations hence gives the effective potential $V_{\rm eff}$ among the three heavy atoms. Inserting this potential into the Schrödinger equation $$\begin{aligned} \label{eqn:4body-SE2} &&\left[ - \sum_{i = 1,2,3} \frac{1}{2m_i} \frac{\partial^2}{\partial z_i^2} + V_{\rm eff} (z_{1},z_{2}, z_{3})\right] \phi(z_{1},z_{2}, z_{3}) \nonumber \\ && \hspace{4cm} = E \phi(z_{1},z_{2}, z_{3}),\end{aligned}$$ we can obtain the energy $E$ of the four-body bound states. From now on, we focus on the special case of $a_1 = a_2 = a_3 = a$ and $m_1 = m_2 =m_3 =m$, i.e., the scattering lengths and the masses of the three heavy atoms are all the same. This is also the most relevant case for experiments, where the atoms trapped in the low-dimensional traps are of the same species.
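For equal scattering lengths $a>0$, Eqs. (\[eqn:4body-kappa\]) can be recast as an eigenvalue condition $(\kappa I - K(\kappa))\,\vec c = \vec c/a$ with $K_{ij}=e^{-\kappa r_{ij}}/r_{ij}$, where the nodeless ground channel corresponds to the largest eigenvalue of the positive matrix $K$. A minimal root-finding sketch (SciPy assumed; the equilateral layout with side $L=1$ is used as an example):

```python
import numpy as np
from scipy.optimize import brentq

def kappa_3tubes(a, z, tube_xy):
    """Ground-channel solution of Eqs. (eqn:4body-kappa) for equal
    scattering lengths a1 = a2 = a3 = a > 0.  z = (z1, z2, z3) are the
    axial coordinates, tube_xy the (x, y) positions of the tubes."""
    r = np.full((3, 3), np.inf)   # inf on the diagonal: K_ii = 0 below
    for i in range(3):
        for j in range(3):
            if i != j:
                dx = tube_xy[i][0] - tube_xy[j][0]
                dy = tube_xy[i][1] - tube_xy[j][1]
                r[i, j] = np.sqrt(dx**2 + dy**2 + (z[i] - z[j])**2)
    def g(k):
        K = np.exp(-k * r) / r    # symmetric, zero on the diagonal
        return k - np.linalg.eigvalsh(K).max() - 1.0 / a
    # g < 0 as k -> 0+ and g > 0 for large k: unique root is bracketed
    return brentq(g, 1e-9, 1.0 / a + 3.0)

# equilateral configuration with side L = 1
tubes = [(-0.5, 0.0), (0.5, 0.0), (0.0, np.sqrt(3.0) / 2.0)]
k3 = kappa_3tubes(1.0, (0.0, 0.0, 0.0), tubes)
# the third tube deepens the binding relative to the two-tube case,
# where kappa = 1 + W(exp(-1)) ~ 1.28
assert k3 > 1.28
```

The monotonicity of $g(\kappa)$ (the largest eigenvalue of $K$ decreases with $\kappa$) guarantees a unique root whenever one exists.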
Since the system is translationally invariant along the $z$-direction, we define a new set of coordinates $$\begin{aligned} X &=& z_1 - z_2, \nonumber \\ Y &=& z_3 - \frac{z_1 + z_2}{2},\end{aligned}$$ and calculate the effective potential $V_{\rm eff} (X,Y)$ in these new variables $$\begin{aligned} \label{eqn:4body-Ueff} V_{\rm eff}(X,Y) = U(a; X, Y) - \frac{1}{2a^2},\label{bigu}\end{aligned}$$ where $U(a; X,Y)$ is the regularized part. We first consider the special geometry where the three 1D tubes are arranged equidistantly to form an equilateral triangle in the $x$-$y$ plane (i.e., $x_0=0$ and $y_0=\sqrt{3}/2$). In Fig. \[fig:4body-potential\], we show the regularized effective potential $U(a;X,Y)$ for scattering length $a=1$. Notice that the effective potential attains its global minimum at $(X=0, Y=0)$ or $z_1=z_2=z_3$, i.e., when the three atoms lie in a plane perpendicular to the 1D tubes, forming an equilateral triangle. Besides, we also observe three potential valleys, which correspond to the cases where the distance between two of the three atoms equals $1$. ![(color online) The regularized effective potential $U(a;X,Y)$ for two-body scattering lengths $a_1=a_2=a_3=a=1$ in the 1D-1D-1D-3D system with the equilateral triangle configuration.[]{data-label="fig:4body-potential"}](U1113){width="7cm"} The same phenomenon can also be observed for other values of the scattering length $a \neq 1$. In fact, the effective potential $U(a; X,Y)$ always reaches its minimum at $(X=0, Y=0)$. However, the potential is deepest only when the scattering length $a=1$. In Fig. \[fig:4body-Da\], we show the depth of the effective potential well as a function of $a$, which reaches its maximum at $a = 1$. This result suggests that the resonance we observed in the three-body problems discussed above also occurs in the four-body system. ![(color online) The depth $D(a)$ of the regularized part $U(a; X,Y)$ of the effective interaction.
In this plot, we consider the case of $a_1=a_2=a_3=a$ in the 1D-1D-1D-3D system with the equilateral triangle configuration. Notice that $D(a)$ takes its maximum value at the resonance point $a=1$.[]{data-label="fig:4body-Da"}](da1113){width="7cm"} With the knowledge of the effective potential, we can numerically solve the Schrödinger equation (\[eqn:4body-SE2\]) to obtain the eigenenergies of the four-body bound states. In the new set of variables $X$ and $Y$, this equation can be rewritten as $$\begin{aligned} \label{eqn:4body-SE3} &&\left[ - \frac{1}{m} \frac{\partial^2}{\partial X^2} - \frac{3}{4m} \frac{\partial^2}{\partial Y^2} + V_{\rm eff} (X,Y)\right] \phi(X,Y) \nonumber \\ && \hspace{4cm} = E \phi(X,Y),\end{aligned}$$ where $\phi(X,Y)$ is the wave function of the heavy atoms. As in the three-body calculation, the binding energy of the tetramer states is defined as the difference between the eigenenergy $E$ and the effective potential energy for $X \to \infty$ and $Y \to \infty$, $$\begin{aligned} \label{eqn:4body-bindingE} E_{\rm 4b} = V_{\rm eff}(\infty,\infty) - E.\end{aligned}$$ ![(color online) The binding energy $E_{\rm 4b}$ of the ground four-body bound state in the 1D-1D-1D-3D system with the equilateral triangle configuration, for different values of $a$ with reduced masses $m_*=9.5$ (blue solid line with circles) and $3.33$ (green solid line with triangles).[]{data-label="fig:4body-bindingE1"}](Ea1113){width="8cm"} The binding energy of the ground four-body bound state for different values of $a$ is plotted in Fig. \[fig:4body-bindingE1\], where we consider the same two mass ratios as in the previous discussion. Notice that the binding energy reaches its maximum near $a = 1$, as expected from the effective potential. This result confirms the appearance of the resonance phenomenon in the four-body system.
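The resonance of the tetramer potential depth is easy to check numerically, because at $X=Y=0$ in the equilateral configuration all pair distances equal $1$: by symmetry $c_2=c_3=1$, and Eqs. (\[eqn:4body-kappa\]) collapse to the single transcendental equation $\kappa - 2e^{-\kappa} = 1/a$. A minimal sketch (SciPy assumed; the reduction to one equation is our own simplification of the ground channel):

```python
import numpy as np
from scipy.optimize import brentq

def depth_3tubes(a):
    """Depth D(a) = -U(a; 0, 0) for the equilateral configuration with
    side L = 1.  By symmetry c2 = c3 = 1 at X = Y = 0, so the ground
    channel of Eqs. (eqn:4body-kappa) reduces to
    kappa - 2*exp(-kappa) = 1/a."""
    kappa = brentq(lambda k: k - 2.0 * np.exp(-k) - 1.0 / a,
                   1e-9, 1.0 / a + 3.0)
    return 0.5 * kappa**2 - 0.5 / a**2   # -U = kappa^2/2 - 1/(2 a^2)

# scan the depth over a grid of scattering lengths
a_grid = np.linspace(0.2, 10.0, 1000)
d = np.array([depth_3tubes(a) for a in a_grid])
a_star = a_grid[np.argmax(d)]
assert abs(a_star - 1.0) < 0.05   # resonance of the tetramer well at a = 1
```

The maximum at $a=1$ mirrors Fig. \[fig:4body-Da\]: the effective well among the three heavy atoms is deepest when the scattering length matches the inter-tube distance.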
Up to now, we have considered only a special configuration of the 1D-1D-1D-3D geometry where the three 1D tubes form an equilateral triangle, and observed a resonance phenomenon for the tetramer binding energy as the scattering length gets close to the mutual distance between the 1D tubes. An intuitive expectation is that this most symmetric configuration should be the case of maximal resonance, since the scattering length can then be resonant with the distance between any two of the three tubes. In order to demonstrate this idea, we consider general configurations of the three 1D tubes, such that they form a triangle of arbitrary shape with three sides $L=1$, $L_1$ and $L_2$ (see Fig. 10). Since the system properties are invariant under a rescaling of lengths, we assume $L=1$ to be the shortest side of the triangle. We further take the scattering lengths $a_1=a_2=a_3=1$. In Fig. \[fig:4body-geometry\], we show the depth of the effective potential $U(a;X,Y)$ for arbitrary arrangements of the three 1D tubes. It is clearly shown that the depth of the effective potential takes its maximum value when $L_1=L_2=1$, i.e., when the $1$D tubes form an equilateral triangle. This is consistent with our expectation that the maximal resonance appears in this most symmetric configuration. ![(color online) The depth of the effective potential $U(a;X,Y)$ in the 1D-1D-1D-3D system with $a_1=a_2=a_3=1$ and the inter-tube distances $L=1$ and $L_{1,2}$ as defined in Fig. 10.[]{data-label="fig:4body-geometry"}](D_L1L2.eps){width="7cm"} BOA for many-body problems in mixed-dimensional systems ======================================================= In the previous sections, we studied the three-body and four-body bound states in mixed-dimensional systems within the BOA. Now we generalize this approach to mixed-dimensional problems with an arbitrary number $N$ of heavy atoms trapped individually in 1D or 2D confinements, while a single light atom moves freely in the 3D space.
In such a configuration, the wave function of the possible few-body bound states takes the form $$\begin{aligned} \Psi(\vec r_B;\vec s)=\phi(\vec s)\psi(\vec r_B;\vec s),\end{aligned}$$ where $\vec s=(\vec{r}_1,\vec{r}_2,..,\vec{r}_N)$ are the 1D or 2D coordinates of the heavy atoms $A_1,A_2,...,A_N$, and ${\vec r}_B$ is the coordinate of the light atom $B$. As in the previous sections, $\psi(\vec r_B;\vec s)$ is the wave function of the instantaneous bound state of the light atom, which is determined by the Schrödinger equation $$\begin{aligned} -\frac{1}{2}\nabla^2_B\psi(\vec r_B;\vec s) =V_{\rm eff}(\vec s)\psi(\vec r_B;\vec s) \label{5.psi}\end{aligned}$$ with Bethe-Peierls boundary conditions $$\begin{aligned} \psi(r_{iB}\rightarrow0)\propto\left(1-\frac{a_i}{r_{iB}}\right) +\mathcal{O}(r_{iB}). \label{5.bethe}\end{aligned}$$ Here, $r_{iB}$ is the distance between the atoms $A_i$ and $B$. By solving Eq. (\[5.psi\]), we obtain the general form of the instantaneous bound state $$\begin{aligned} \psi(\vec r_B;\vec s)=\frac{e^{-\kappa r_{1B}}}{r_{1B}}+\sum_{i=2}^N c_i\frac{e^{-\kappa r_{iB}}}{r_{iB}}, \label{5.psi2}\end{aligned}$$ where the value of $\kappa$ and the coefficients $c_i$ are given by the equations $$\begin{aligned} \frac{1}{a_1} &=& \kappa -\sum_{i=2}^Nc_i\frac{e^{-\kappa r_{1i}}}{r_{1i}}; \\ \frac{c_l}{a_l} &=& \kappa c_l-\frac{e^{-\kappa r_{l1}}}{r_{l1}} -\sum_{i=2,i\neq l}^Nc_i\frac{e^{-\kappa r_{li}}}{r_{li}},\end{aligned}$$ with $r_{ij}$ the distance between the heavy atoms $A_i$ and $A_j$. From the equations above, we can solve for the value of $\kappa$ in terms of the coordinates $\vec{s}$ of the heavy atoms, and then obtain the instantaneous wave function $\psi(\vec r_B;\vec s)$ and the effective interaction among the heavy atoms $$\begin{aligned} V_{\rm eff}(\vec s)=-\frac{\kappa^2}{2}.\end{aligned}$$ Finally, the heavy-atom wave function $\phi(\vec s)$ of the few-body bound state is given by $$\begin{aligned} \left[-\sum_{i=1}^N\frac{1}{2m_i}\nabla^2_i+V_{\rm eff}(\vec s)\right] \phi(\vec s)=E\phi(\vec
s)\end{aligned}$$ with $m_i$ the mass of the heavy atom $A_i$. Conclusion ========== In this manuscript we have presented BOA-based results on the stable three-body and four-body bound states in mixed dimensional systems with $N \ge 2$ heavy atoms individually trapped in different 1D or 2D confinements, while a single light atom moves freely in the 3D space. The BOA approach provides a clear physical picture with a well-defined effective interaction among the heavy atoms. We show that in mixed dimensions, the three-body and four-body bound states can occur within a broad range of two-body scattering lengths, much like the Efimov states in 3D. Nevertheless, the binding energy of the ground bound state reaches its maximum value when the two-body scattering length gets close to the distance between the low-dimensional traps. This is due to a new resonance phenomenon in mixed dimensions, where the effective interaction among the heavy atoms acquires its deepest potential well under the resonant condition. The feasibility of this BOA approach is confirmed by a direct comparison with exact results in the 1D-1D-3D and 2D-2D-3D configurations, which suggests a possible extension to problems with more than three atoms in mixed dimensions. This work is supported by National Natural Science Foundation of China (11074305, 10904172), the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (10XNL016). WZ would also like to thank the China Postdoctoral Science Foundation and the NCET Program for support. The Bethe-Peierls Boundary Condition for BOA in Mixed-Dimensional Systems ========================================================================= In this appendix, we derive the Bethe-Peierls boundary condition \[e.g., Eqs. (\[c1\]), (\[c2\]), (\[eqn:4body-BP\]) and (\[5.bethe\])\] used in the Born-Oppenheimer approach for the mixed dimensional systems.
For simplicity, here we consider the system with one heavy atom $A$ confined in a $1$D trap arranged along the $z$-axis, plus a light atom $B$ moving freely in $3$D. The generalization to other cases is straightforward. The expression of the Bethe-Peierls boundary condition should be derived from the asymptotic behavior of the two-body wave functions. As a first-principles treatment, we first take into account the $3$D motions of both atoms $A$ and $B$, and then reduce our result to the mixed-dimensional model where only the motion along the $z$ direction is considered for atom $A$. The total Hamiltonian of the two atoms is given by $$\begin{aligned} H_{AB}=T_{Az}+T_{A\perp }+V_{A\perp }+T_{B}+V_{AB}\left( r_{AB}\right) . \label{bobph2b}\end{aligned}$$Here, the kinetic energy of atom $A$ along the $z$ direction is given by $$\begin{aligned} T_{Az}=-\frac{1}{2m_{A}}\frac{\partial ^{2}}{\partial z_{A}^{2}}\end{aligned}$$with $m_{A}$ the mass of atom $A$, and $\vec{r}_{i=A,B}=\left( x_{i},y_{i},z_{i}\right) $ the coordinates of the corresponding atoms. The transverse kinetic energy $T_{A\perp }$ of atom $A$ and the total kinetic energy $T_{B}$ of atom $B$ are defined as $$\begin{aligned} T_{A\perp } &=&-\frac{1}{2m_{A}}\left( \frac{\partial ^{2}}{\partial x_{A}^{2}}+\frac{\partial ^{2}}{\partial y_{A}^{2}}\right); \\ T_{B} &=&-\frac{1}{2}\nabla_{B}^2.\end{aligned}$$ Here we use the natural unit $\hbar=m_B=1$. In the Hamiltonian (\[bobph2b\]) we also have the transverse harmonic potential $$\begin{aligned} V_{A\perp }=\frac{m_{A}\omega _{\perp }^{2}}{2}\left( x_{A}^{2}+y_{A}^{2}\right)\end{aligned}$$with frequency $\omega _{\perp }$, and the atom-atom interaction potential $V_{AB}\left( r_{AB}\right) $, which is a function of the distance between the two particles $r_{AB}=\left\vert \vec{r}_{A}-\vec{r}_{B}\right\vert$.
We further denote the effective range of the interaction potential as $r_{\ast }$, such that we have $V_{AB}\left( r_{AB}\right) \approx 0$ in the region of $r_{AB} \gg r_{\ast }$. When the confinement of the heavy atom $A$ is strong, the transverse motion of atom $A$ in the $x$-$y$ plane is much more rapid than its motion along the $z$ direction. Therefore, we need to consider both the position $\vec{r}_{B}$ of the light atom $B$ and the transverse coordinates $\left(x_{A},y_{A}\right) $ of the heavy atom $A$ as fast degrees of freedom. Only the longitudinal coordinate $z_{A}$ of atom $A$ is treated as the slow variable. Within the BOA, the total wave function of the system takes the form$$\begin{aligned} \Psi \left( \vec{r}_{A},\vec{r}_{B}\right) =\phi \left( z_{A}\right) \psi (\vec{r}_{B},x_{A},y_{A};z_{A}), \label{bobpbigpsi}\end{aligned}$$where $\psi (\vec{r}_{B},x_{A},y_{A};z_{A})$ is given by the eigen-equation$$\begin{aligned} H_{F}\left( z_{A}\right) \psi (\vec{r}_{B},x_{A},y_{A};z_{A})=E\left( z_{A}\right) \psi (\vec{r}_{B},x_{A},y_{A};z_{A})\nonumber\\ \label{bobpee}\end{aligned}$$of the Hamiltonian $$\begin{aligned} H_{F}\left( z_{A}\right) =T_{A\perp }+V_{A\perp }+T_{B}+V_{AB}\left( r_{AB}\right) \label{bobphf}\end{aligned}$$with fixed values of $z_{A}$. To solve Eq. (\[bobpee\]), we expand the solution $\psi $ in the eigen-states of the transverse Hamiltonian $T_{A\perp }+V_{A\perp }$ of atom $A$ $$\begin{aligned} \psi (\vec{r}_{B},x_{A},y_{A};z_{A})=\sum_{n=0}^{\infty} \phi _{n} \left(x_{A},y_{A}\right) \psi _{n}\left( \vec{r}_{B};z_{A}\right) . \label{psi}\end{aligned}$$ Here, $\phi _{n}\left( x_{A},y_{A}\right) $ is the $n^{\rm th}$ eigen-state of $T_{A\perp }+V_{A\perp }$.
Considering the translational symmetry along the $z$-axis, we take $z_{A}=0$, and the relevant wave function $\psi _{n}\left( \vec{r}_{B};0\right) $ of the light atom $B$ is given by $$\begin{aligned} &&\left[ T_{B}+\left( n+1\right) \omega _{\perp }\right] \psi_{n} + \left[ \sum_{m}V_{nm}\left( \vec{r}_{B}\right) \psi_{m} \right] \nonumber \\ &&\hspace{4cm} = E\left( 0\right) \psi _{n}. \label{bobpe22}\end{aligned}$$ Here, the matrix element of the interaction potential takes the form $$\begin{aligned} V_{nm}\left( \vec{r}_{B}\right) &=& \int dx_{A}dy_{A}\phi _{n}^{\ast }\left(x_{A},y_{A}\right) \nonumber \\ && \hspace{0.5cm} \times V_{AB}\left( r_{AB}\right) \phi _{m}\left( x_{A},y_{A}\right) . \label{bobpvmn}\end{aligned}$$ Therefore, the eigen-equation (\[bobpee\]) or (\[bobpe22\]) can be solved via a multi-channel scattering theory of atom $B$, with the transverse states $\phi _{n}\left( x_{A},y_{A}\right) $ of atom $A$ serving as the scattering channels. In the low-energy case with $\omega _{\perp }<E<2 \omega _{\perp }$, the ground channel with the transverse state $\phi _{0}\left(x_{A},y_{A}\right) $ is the only open channel. Now we consider the asymptotic behavior of the wave function in the long-distance limit with $\left\vert \vec{r}_{B}\right\vert \gg (r_{\ast},l_{\perp })$, where $l_{\perp }=\sqrt{1/\left( m_{A}\omega _{\perp }\right) }$ denotes the characteristic length of the transverse confinement. In this region, the mutual distance $r_{AB}$ between the two atoms is much larger than the effective range $r_{\ast }$ of the interaction, such that we can neglect the term $V_{AB}$ in Eq. (\[bobphf\]). According to the scattering theory, in such a region the wave function $\psi _{n}\left( \vec{r}_{B};0\right) $ in the closed channels with $n>0$ decays exponentially with $\left\vert \vec{r}_{B}\right\vert $, and can be safely neglected.
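The channel structure above amounts to simple threshold counting: channel $n$ carries the transverse energy $(n+1)\omega_{\perp}$, so it supports a propagating (open) wave at large $\left\vert \vec{r}_{B}\right\vert$ only when the total energy exceeds that threshold. A minimal sketch, with purely illustrative numbers:

```python
# Channel counting behind Eq. (bobpe22): channel n has transverse threshold
# (n + 1) * w_perp, so it is open (propagating at large |r_B|) only when the
# total energy E exceeds that threshold. All numbers are illustrative.
w_perp = 1.0  # transverse trap frequency (natural units hbar = m_B = 1)

def open_channels(E, n_max=10):
    """Indices n of the open scattering channels at total energy E."""
    return [n for n in range(n_max + 1) if E > (n + 1) * w_perp]

# in the low-energy window w_perp < E < 2 w_perp only the ground channel is open
assert open_channels(1.5 * w_perp) == [0]
assert open_channels(3.5 * w_perp) == [0, 1, 2]
assert open_channels(0.5 * w_perp) == []
```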
The wave function $\psi _{0}\left( \vec{r}_{B};0\right) $ in the open channel takes the form $$\begin{aligned} \psi _{0}\left( \vec{r}_{B};0\right) &\sim &\sum_{l=0}^{\infty }\sum_{m=-l}^{l}C_{l,m}\frac{Y_{lm}\left( \theta _{B},\phi _{B}\right) }{k\left\vert \vec{r}_{B}\right\vert } \nonumber \label{bobppsi0a} \\ &&\hspace{-1.5cm} \times \left( \hat{\jmath}_{l}\left( k\left\vert \vec{r}_{B}\right\vert \right) +kf_{l,m}\left( k\right) \hat{h}_{l}^{\left( +\right) }\left( k\left\vert \vec{r}_{B}\right\vert \right) \right),\end{aligned}$$where $k=\sqrt{2\left( E- \omega _{\perp }\right) }$, $Y_{lm}\left( \theta ,\phi \right) $ are the spherical harmonics of the angles $\left( \theta _{B},\phi _{B}\right) $ of $\vec{r}_{B}$, $\hat{\jmath}_{l}\left( z\right) $ is the Riccati-Bessel function, and $\hat{h}_{l}^{\left( +\right) }\left( z\right) $ is the Riccati-Hankel function. The coefficients $C_{l,m}$ are given by the boundary condition, while the scattering amplitudes $f_{l,m}\left( k\right) $ are determined by the effective potential $V_{nm}\left( \vec{r}_{B}\right) $ defined in (\[bobpvmn\]). In the low-energy case with small $k$, we can neglect all the high-partial-wave scattering amplitudes $f_{l,m}\left( k\right) $ with $l>0$, and approximate the $s$-wave scattering amplitude $f_{0,0}\left( k\right) $ with $f_{0,0}\left( k=0\right) $. Then the long-distance behavior of the wave function $\psi $ becomes$$\begin{aligned} &&\psi (\vec{r}_{B},x_{A},y_{A};0) \simeq \phi _{0}\left( x_{A},y_{A}\right) \psi _{0}\left( \vec{r}_{B};0\right) \nonumber \\ &&\sim \phi _{0}\left( x_{A},y_{A}\right) \left[ \frac{1}{k\left\vert \vec{r}_{B}\right\vert }\left( \sin \left( k\left\vert \vec{r}_{B}\right\vert \right) -ka_{AB}e^{ik\left\vert \vec{r}_{B}\right\vert }\right) \right. \nonumber \\ &&\left.
+\sum_{l=1}^{\infty }\sum_{m=-l}^{l}C_{l,m}\frac{Y_{lm}\left( \theta _{B},\phi _{B}\right) }{k\left\vert \vec{r}_{B}\right\vert }\hat{\jmath}_{l}\left( k\left\vert \vec{r}_{B}\right\vert \right) \right] \label{bobppsi0b}\end{aligned}$$with the scattering length $a_{AB}$ defined as $$\begin{aligned} a_{AB}=-f_{0,0}\left( k=0\right) . \label{bobpaab}\end{aligned}$$ The expression (\[bobppsi0b\]) implies that in the “intermediate” region of $$\begin{aligned} \left[ r_{\ast },l_{\perp } \right] \ll \left\vert \vec{r}_{B}\right\vert \ll \frac1k,\end{aligned}$$ the behavior of $\psi $ takes the form of $$\begin{aligned} \psi (\vec{r}_{B},x_{A},y_{A};0)\sim \phi _{0}\left( x_{A},y_{A}\right) \left( 1-\frac{a_{AB}}{\left\vert \vec{r}_{B}\right\vert }\right) .\end{aligned}$$Therefore, we can replace the real interaction potential $V_{AB}\left( r_{AB}\right) $ in (\[bobph2b\]) with a Bethe-Peierls-type boundary condition$$\begin{aligned} \lim_{\left\vert \vec{r}_{B}\right\vert \rightarrow 0}\psi (\vec{r}_{B},x_{A},y_{A};0)\propto \phi _{0}\left( x_{A},y_{A}\right) \left( 1-\frac{a_{AB}}{\left\vert \vec{r}_{B}\right\vert }\right) . \label{bobpbobp}\end{aligned}$$Under this boundary condition, the solution of the eigen-equation$$\begin{aligned} &&\left[ T_{A\perp }+V_{A\perp }+T_{B}\right] \psi (\vec{r}_{B},x_{A},y_{A};0) \nonumber\\ && \hspace{3cm} =E \psi (\vec{r}_{B},x_{A},y_{A};0)\end{aligned}$$takes the form of Eq. (\[bobppsi0b\]) for all $\left\vert \vec{r}_{B}\right\vert \neq 0$, and becomes a reasonable approximation for the solution of (\[bobpee\]). In this reduced mixed-dimensional model, the transverse coordinates $\left( x_{A},y_{A}\right) $ of the heavy atom $A$ are fixed at $\left( 0,0\right) $.
Together with the assumption $z_{A}=0$, we have$$\begin{aligned} \left\vert \vec{r}_{B}\right\vert =r_{AB},\end{aligned}$$and then the boundary condition (\[bobpbobp\]) can be expressed as$$\begin{aligned} \lim_{r_{AB}\rightarrow 0}\psi \left( \vec{r}_{B};0\right) \propto \left( 1-\frac{a_{AB}}{r_{AB}}\right) . \label{bobpnb}\end{aligned}$$Here, $\psi \left( \vec{r}_{B};0\right) $ is the wave function of the light atom $B$ with the position of atom $A$ fixed at $z_{A}=0$. For non-zero $z_{A}$, the condition (\[bobpnb\]) can be generalized to $$\begin{aligned} \lim_{r_{AB}\rightarrow 0}\psi \left( \vec{r}_{B};z_{A}\right) \propto \left( 1-\frac{a_{AB}}{r_{AB}}\right) . \label{bobpnb3}\end{aligned}$$That is the Bethe-Peierls boundary condition used in the BOA discussed in the main text of this manuscript. We notice that there is another type of Bethe-Peierls boundary condition as discussed in Refs. [@nishida-08; @nishida-11], where the total wave function $\Psi$ of the reduced mixed-dimensional two-body problem is assumed to satisfy the condition $$\begin{aligned} \lim_{D_{AB}\rightarrow 0}\Psi \propto \left( 1-\frac{a_{\rm eff}}{D_{AB}}\right) \label{bobp2}\end{aligned}$$with $$\begin{aligned} D_{AB}=\sqrt{x_{B}^{2}+y_{B}^{2}+\frac{m_{A}+1}{m_{A}}\left( z_{A}-z_{B}\right) ^{2}}.\end{aligned}$$This condition is slightly different from our result of Eq. (\[bobpnb3\]). The difference can be understood by noticing that when solving for the wave function of atom $B$ under BOA, we fix the position of the heavy atom $A$, such that the relevant Bethe-Peierls boundary condition becomes isotropic. It is pointed out that, in the limit of $m_{A} \gg 1$, the condition (\[bobp2\]) approaches (\[bobpnb3\]) and we have $a_{AB}=a_{\rm eff}$. Therefore, we approximate $a_{AB}$ as $a_{\rm eff}$ when comparing the BOA results with the effective field theory [@nishida-11].
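Two quick numerical consistency checks of the boundary-condition discussion above are straightforward (a sketch with hypothetical parameter values, not taken from any material): first, that the open-channel wave function of Eq. (\[bobppsi0b\]) indeed reduces to the $1-a_{AB}/\left\vert \vec{r}_{B}\right\vert$ form in the intermediate region $\left\vert \vec{r}_{B}\right\vert \ll 1/k$; second, that the anisotropic distance $D_{AB}$ of condition (\[bobp2\]) approaches the isotropic $r_{AB}$ of condition (\[bobpnb3\]) when $m_{A}\gg 1$:

```python
import numpy as np

# Check 1: in the intermediate region r << 1/k, the open-channel wave function
# (sin(k r) - k a e^{i k r}) / (k r) reduces to the Bethe-Peierls form 1 - a/r.
a = 0.7                          # scattering length a_AB (illustrative value)
k = 1e-4                         # small wave number, so that r << 1/k below
r = np.linspace(1.0, 50.0, 200)
psi_full = (np.sin(k * r) - k * a * np.exp(1j * k * r)) / (k * r)
psi_bp = 1.0 - a / r
assert np.max(np.abs(psi_full - psi_bp)) < 1e-3  # deviation is O(k a) + O((k r)^2)

# Check 2: the anisotropic distance D_AB of condition (bobp2) approaches the
# isotropic r_AB of condition (bobpnb3) in the limit m_A >> 1 (m_B = 1).
rB = np.array([0.3, -0.2, 0.5])  # position of the light atom B (illustrative)
zA = 0.1                         # heavy atom fixed on the z-axis at (0, 0, zA)
r_AB = np.sqrt(rB[0]**2 + rB[1]**2 + (zA - rB[2])**2)

def D_AB(mA):
    return np.sqrt(rB[0]**2 + rB[1]**2 + (mA + 1.0) / mA * (zA - rB[2])**2)

assert abs(D_AB(1e6) - r_AB) / r_AB < 1e-6       # difference shrinks as 1/m_A
assert abs(D_AB(10.0) - r_AB) > abs(D_AB(100.0) - r_AB)
```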
It is straightforward to generalize the discussion above to more general cases with $N$ heavy atoms $A_{1},...,A_{N}$ individually confined in $N$ low-dimensional traps, and one light atom $B$ moving freely in $3$D. In that case, we can fix the positions of the heavy atoms under BOA, and use the Bethe-Peierls boundary condition $$\begin{aligned} \lim_{r_{iB}\rightarrow 0}\psi \left( \vec{r}_{A},\vec{r}_{B}\right) \propto \left( 1-\frac{a_{iB}}{r_{iB}}\right) \label{bobpnb2}\end{aligned}$$to solve the Schrödinger equation of the light atom. Here, $r_{iB}$ is the distance between the heavy atom $A_{i}$ and the light atom $B$. That is the approach we used in our main text. [99]{} V. Efimov, Phys. Lett. [**33B**]{}, 563 (1970); Nucl. Phys. A [**210**]{}, 157 (1973). S. Tölle, H.-W. Hammer, B.C. Metsch, Comptes Rendus Physique [**12**]{}, 59 (2011). A.S. Jensen, K. Riisager, D.V. Fedorov, and E. Garrido, Rev. Mod. Phys. [**76**]{}, 215 (2004). I. Mazumdar, A.R.P. Rau, and V. Bhasin, Phys. Rev. Lett. [**97**]{}, 062503 (2006). T.K. Lim, S.K. Duffy, and W.C. Damer, Phys. Rev. Lett. [**38**]{}, 341 (1977). R. Brühl, A. Kalinin, O. Kornilov, J.P. Toennies, G.C. Hegerfeldt, and M. Stoll, Phys. Rev. Lett. [**95**]{}, 063002 (2005). I. Baccarelli, G. Delgado-Barrio, F. Gianturco, T. Gonzalez-Lezana, S. Miret-Artes, and P. Villarreal, Europhys. Lett. [**50**]{}, 567 (2000). T. Kraemer, M. Mark, P. Waldburger, J.G. Danzl, C. Chin, B. Engeser, A.D. Lange, K. Pilch, A. Jaakkola, H.-C. Nägerl, and R. Grimm, Nature [**440**]{}, 315 (2006). S. Knoop, F. Ferlaino, M. Mark, M. Berninger, H. Schöbel, H.-C. Nägerl, and R. Grimm, Nature Phys. [**5**]{}, 227 (2009). M. Zaccanti, B. Deissler, C. D'Errico, M. Fattori, M. Jona-Lasinio, S. Müller, G. Roati, M. Inguscio, and G. Modugno, Nature Phys. [**5**]{}, 586 (2009). N. Gross, Z. Shotan, S. Kokkelmans, and L. Khaykovich, Phys. Rev. Lett. [**103**]{}, 163202 (2009). S.E. Pollack, D. Dries, and R.G.
Hulet, Science [**326**]{}, 1683 (2009). E. Braaten, H.-W. Hammer, Phys. Rep. [**428**]{}, 259 (2006). R.D. Amado and J.V. Noble, Phys. Rev. D [**5**]{}, 1992 (1972). V. Efimov, Sov. Phys. JETP Lett. [**16**]{}, 34 (1972); Nucl. Phys. A [**210**]{}, 157 (1973). A.C. Fonseca, E.F. Redish, and P.E. Shanley, Nucl. Phys. A [**320**]{}, 273 (1979). D.S. Petrov, Phys. Rev. A [**67**]{}, 010703(R) (2003). D.S. Petrov, C. Salomon, and G.V. Shlyapnikov, Phys. Rev. Lett. [**93**]{}, 090404 (2004); Phys. Rev. A [**71**]{}, 012708 (2005). S. Endo, P. Naidon, and M. Ueda, Few-Body Systems, DOI: 10.1007/s00601-011-0229-6. T.B. Ottenstein, T. Lompe, M. Kohnen, A.N. Wenz, and S. Jochim, Phys. Rev. Lett. [**101**]{}, 203202 (2008). J.H. Huckans, J.R. Williams, E.L. Hazlett, R.W. Stites, and K.M. O'Hara, Phys. Rev. Lett. [**102**]{}, 165302 (2009). J.R. Williams, E.L. Hazlett, J.H. Huckans, R.W. Stites, Y. Zhang, and K.M. O'Hara, Phys. Rev. Lett. [**103**]{}, 130404 (2009). E. Braaten, H.-W. Hammer, D. Kang, and L. Platter, Phys. Rev. Lett. [**103**]{}, 073202 (2009). A.N. Wenz, T. Lompe, T.B. Ottenstein, F. Serwane, G. Zürn, and S. Jochim, Phys. Rev. A [**80**]{}, 040702(R) (2009). S. Nakajima, M. Horikoshi, T. Mukaiyama, P. Naidon, and M. Ueda, Phys. Rev. Lett. [**105**]{}, 023201 (2010). P. Naidon and M. Ueda, Comptes Rendus Physique [**12**]{}, 13 (2011). S. Nakajima, M. Horikoshi, T. Mukaiyama, P. Naidon, and M. Ueda, Phys. Rev. Lett. [**106**]{}, 143201 (2011). T. Lompe, T.B. Ottenstein, F. Serwane, K. Viering, A.N. Wenz, G. Zürn, and S. Jochim, Phys. Rev. Lett. [**105**]{}, 103201 (2010). T. Lompe, T.B. Ottenstein, F. Serwane, A.N. Wenz, G. Zürn, and S. Jochim, Science [**330**]{}, 940 (2010). G. Barontini, C. Weber, F. Rabatti, J. Catani, G. Thalhammer, M. Inguscio, and F. Minardi, Phys. Rev. Lett. [**103**]{}, 043201 (2009). Y. Nishida and S. Tan, Phys. Rev. Lett. [**101**]{}, 170401 (2008); Phys. Rev. A [**79**]{}, 060701 (2009); Phys. Rev.
A [**82**]{}, 062713 (2010). Y. Nishida, Phys. Rev. A [**82**]{}, 011605(R) (2010). J. Catani, G. Barontini, G. Lamporesi, F. Rabatti, G. Thalhammer, F. Minardi, S. Stringari, and M. Inguscio, Phys. Rev. Lett. [**103**]{}, 140401 (2009). G. Lamporesi, J. Catani, G. Barontini, Y. Nishida, M. Inguscio, and F. Minardi, Phys. Rev. Lett. [**104**]{}, 153202 (2010). C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. [**82**]{}, 1225 (2010). Y. Nishida and S. Tan, arXiv:1104.2387 (2011).
--- abstract: 'We propose a general scheme for the derivation of the signals to which resonant inelastic (and elastic) x-ray scattering (RIXS) gives access. In particular, we find that RIXS should allow one to directly detect many hidden orders, such as spin nematic, bond nematic, vector and scalar spin chiralities. To do so, we choose to take the point of view of effective operators, leaving microscopic details unspecified, but still keeping experimentally-controllable parameters explicit, like the incoming and outgoing polarizations of the x-rays. We ask not what microscopic processes can lead to a specific outcome, but, rather, what couplings are in principle possible. This approach allows one to systematically enumerate all possible origins of contributions to a general RIXS signal. Although we mainly focus on magnetic insulators, for which we give a number of examples, our analysis carries over to systems with charge and other degrees of freedom, which we briefly address. We hope this work will help guide theorists and experimentalists alike in the design and interpretation of RIXS experiments.' author: - Lucile Savary - 'T. Senthil' bibliography: - 'rixs.bib' title: 'Probing Hidden Orders with Resonant Inelastic X-Ray Scattering' --- Introduction {#sec:introduction} ============ Many systems have ground states with well-defined order parameters which couple directly to conventional probes such as neutrons or light. The accessible data usually comes in the form of “structure factors,” i.e. correlation functions of two “elementary” observables. Classic examples are magnetically ordered states, e.g. ferromagnets and antiferromagnets, whose magnetic structure and fluctuations can be resolved by methods like neutron scattering, muon spin resonance ($\mu$SR), nuclear magnetic resonance (NMR), etc. However, many of the “exotic” phases proposed by theorists do not fall into this category.
Some states exist, for example, which possess a well-defined local order parameter, but still evade robust characterization using “conventional” probes. The order is then commonly referred to as “hidden.” Typically, the order parameters of such systems have quantum numbers which are multiples of those which elementary particles give access to when coupled linearly to the system. For example, neutrons can excite $S=1$ magnons, but not $S=2$ excitations (owing to the dipolar coupling between the neutron and electron’s spins). Perhaps the simplest and best-known example of a hidden order is that of spin quadrupolar (also called nematic) order [@penc2011]. In that case, the expectation values of the spin projections, $\langle S_i^\mu\rangle$ (note the spins transform as “dipoles”), are zero, but those of “quadrupolar” operators, like $\langle S_i^\mu S_i^\nu\rangle$, are not. Many other types of hidden orders have been proposed in the literature. Among those are spin “bond nematic” order [@chubukov1991; @starykh2014], where the order parameter contains spins on neighboring sites, and spin vector and scalar chiralities, which involve antisymmetric products of spins. Hidden orders also arise in conducting systems, with the famous example of nematic order (in that case, “nematic” refers to the breaking of a discrete or continuous rotation symmetry in real space) in the pnictide superconductors. Here we show that Resonant Elastic and Inelastic X-Ray Scattering (REXS and RIXS) can in principle measure spin nematic order, vector and scalar chirality, and many more correlation functions (static and dynamical for REXS and RIXS, respectively). In general, we propose an enveloping scheme which allows one to systematically enumerate which correlation functions will contribute to the RIXS signal in any given polarization geometry. REXS signals are obtained from RIXS in the $\omega\rightarrow0$ limit. In particular, in the case of static order, the REXS signal should display corresponding “Bragg” peaks.
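The spin-nematic example can be made concrete in a few lines. The sketch below (our illustration, not taken from the paper) evaluates the dipole and quadrupole moments of the spin-1 state $|S^z=0\rangle$, the simplest “director” state: all $\langle S^\mu\rangle$ vanish while the quadrupolar expectation values do not.

```python
import numpy as np

# Spin-1 operators in the S^z basis (+1, 0, -1); hbar = 1.
s2 = 1.0 / np.sqrt(2.0)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
S = [Sx, Sy, Sz]

psi = np.array([0.0, 1.0, 0.0], dtype=complex)   # the |S^z = 0> "director" state

# all dipole moments <S^mu> vanish: no conventional magnetic order
dipole = np.array([np.real(psi.conj() @ Smu @ psi) for Smu in S])
assert np.allclose(dipole, 0.0)

# ...but the traceless quadrupole Q^{mu nu} = <{S^mu, S^nu}>/2 - S(S+1)/3 delta
# is nonzero: hidden (nematic) order
Q = np.array([[np.real(psi.conj() @ (0.5 * (S[m] @ S[n] + S[n] @ S[m])) @ psi)
               - (2.0 / 3.0) * (m == n) for n in range(3)] for m in range(3)])
assert abs(np.trace(Q)) < 1e-12
assert not np.allclose(Q, 0.0)   # e.g. Q^zz = -2/3
```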
“Resonant scattering” refers to techniques where the energy of an incoming probe is tuned to a “resonance” (a.k.a. “edge”). In that case, not only is the absorption (virtual or real) cross-section dramatically enhanced, but the latter may also involve nontrivial operators, allowing one to probe correlation functions of complex order parameters, i.e. typically those of hidden orders, which are otherwise hardly accessible. This is clear upon thinking in terms of perturbation theory in the probe-system coupling amplitude, and we soon specialize to an x-ray probe. The scattering amplitude up to second order is given by [@messiah1962; @ament2011] $$\label{eq:11} \mathcal{T}_{\mathtt{f}\mathtt{i}}=\langle\mathtt{f}|\hat{H}'|\mathtt{i}\rangle+\sum_n\frac{\langle\mathtt{f}|\hat{H}'|n\rangle \langle n|\hat{H}'|\mathtt{i}\rangle}{E_{\mathtt{i}}-E_n},$$ where $|\mathtt{i},\mathtt{f}\rangle$ denote the initial and final states of the {system + electromagnetic (EM) field}, $\hat{H}'$ is the coupling Hamiltonian between matter and the EM field, $\{|n\rangle\}$ forms a complete set of states (the “important” ones will be discussed later) of the system, and $E_\alpha$ is the energy of the state $|\alpha\rangle$. When there exist states $|n\rangle$ with energies $E_n$ close to $E_{\mathtt{i}}$, the system is said to be at resonance with the probe and the second-order amplitude in Eq. (\[eq:11\]) largely dominates the first. Moreover, within perturbation theory, the former contains, among others, the following chain of (virtual) processes: the absorption of a photon, the evolution of the resulting system, followed by the emission of a photon. The RIXS signal is the cross-section relative to the amplitude of such a process, when the incoming x-ray light is tuned to a resonance which involves the excitation of a core electron to a valence level, i.e. when $|n\rangle$ is a state of the pure system (no photons) and contains a “core hole”.
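The resonant enhancement of the second-order amplitude is easy to see numerically. The toy sketch below (all energies and matrix elements are made-up numbers, and a phenomenological broadening $\Gamma$ stands in for the inverse core-hole lifetime) evaluates such an amplitude for a few intermediate states:

```python
import numpy as np

# Toy second-order amplitude: sum_n M_n / (E_i - E_n + i Gamma), where M_n
# stands for the product <f|H'|n><n|H'|i>. All values are illustrative.
E_n = np.array([10.0, 12.0, 15.0])   # intermediate-state energies
M = np.array([0.5, 0.3, 0.2])        # matrix-element products
Gamma = 0.1                          # core-hole broadening (regularizes the pole)

def T2(E_i):
    return np.sum(M / (E_i - E_n + 1j * Gamma))

# tuning E_i to a resonance enhances the amplitude by orders of magnitude
assert abs(T2(10.0)) > 10 * abs(T2(5.0))
```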
Typical orders of magnitude for such x-ray energies range between 0.01 and 10 keV [@dewey1969; @bearden1967; @ament2011], i.e. correspond to photon wavevectors of order 1-10$^{-3}$ Å$^{-1}$. Detailed microscopic analyses of RIXS processes in a number of systems have been described at length in the literature [@ament2011], some even predicting the observation of correlation functions of complex order parameters [@ko2011; @michaud2011]. Here we do not dwell on them, but rather base the analysis entirely on the observation that the initial (before the photon is absorbed) and final states (after the photon is emitted) of the system both belong to its low-energy manifold. Essentially, in that approach, the only important feature of the microscopics is the reduction of (at least spatial) symmetries to those of the core-hole site point group. Such a symmetry-based strategy has a few major advantages. An accurate description of all possible microscopic processes is a very complex many-body problem, which is moreover subject to many uncertainties concerning the atomic structure in a material. As a consequence, such approaches are inherently material-specific. It is moreover very difficult to exhaust all possible processes through microscopic reasoning. The symmetry procedure bypasses these issues. This type of [*fully*]{} effective approach was recently insightfully pioneered in Ref.  in the context of magnetic insulators, where the author gave the form of on-site effective RIXS operators for up to two on-site spin operators. Here we constructively rederive and generalize Ref. ’s main result to all possible symmetry-allowed couplings, including those which involve multiple-site operators and degrees of freedom other than just spins. Moreover, the broader context of the derivation presented here helps make more transparent the correlations possibly probed in RIXS, on which we focus. The remainder of this paper is organized as follows.
We first review the form of the light-matter interaction, the important symmetries to be considered, and derive the form of the effective operators whose correlations RIXS measures in insulating magnetic systems, which are summarized in Table \[tab:couplings\]. We then turn to the study of three important examples of hidden orders as may be realized in real materials: spin nematic order, bond nematic order, vector and scalar chiralities, and calculate the expected RIXS signals in these three concrete cases. At the end of the paper we briefly address systems with charge degrees of freedom. Effective operators {#sec:effective-operators} =================== The leading order Hamiltonian $\hat{H}'$ which couples light to matter and is involved in the [*second*]{}-order amplitude of the interaction cross-section is given by, in the Coulomb gauge ${\boldsymbol{\nabla}}\cdot\mathbf{A}=0$ [@ament2011][^1]: $$\label{eq:8} \hat{H}'=\sum_\mathbf{r}\left[\hat{\psi}_\mathbf{r}^\dagger \,\frac{e\mathbf{p}}{m}\,\hat{\psi}_\mathbf{r}\cdot\mathbf{\hat{A}}_\mathbf{r}+\hat{\psi}_\mathbf{r}^\dagger\,\frac{e\hbar{\boldsymbol{\sigma}}}{2m}\,\hat{\psi}_\mathbf{r}\cdot\left({\boldsymbol{\nabla}}\times\mathbf{\hat{A}}_\mathbf{r}\right)\right],$$ with the vector potential $$\label{eq:9} \mathbf{\hat{A}}_\mathbf{r}=\sum_\mathbf{k}\sqrt{\frac{\hbar}{2V\epsilon_0\omega_\mathbf{k}}}\sum_{{\boldsymbol{\varepsilon}}}\left({\boldsymbol{\varepsilon}}^*\hat{a}_{\mathbf{k},{\boldsymbol{\varepsilon}}}^\dagger e^{-i\mathbf{k}\cdot\mathbf{r}}+{\rm h.c.}\right).$$ $\hat{H}'$ acts in the product space of the electrons $\mathcal{H}_{e-}$ and photons $\mathcal{H}_{phot}$, $\mathcal{H}=\mathcal{H}_{e-}\times\mathcal{H}_{phot}$, $\hat{\psi}^\dagger$ and $\hat{\psi}$ are the electron creation and annihilation second-quantized operator fields, $\hbar$ is Planck’s constant over $2\pi$, $e$ and $m$ are the electron charge and mass, respectively, $\hat{a}^\dagger$ and $\hat{a}$ are the photon creation and 
annihilation operators, ${\boldsymbol{\varepsilon}}$ denotes the photon polarization, $V$ is the volume in which the EM field is enclosed, $\epsilon_0$ is the vacuum permittivity and $\omega_\mathbf{k}=\omega_{-\mathbf{k}}=c|\mathbf{k}|$ where $c$ is the speed of light. Here, for concreteness, we make two approximations, often used in the literature [@ament2011]: we consider [*(i)*]{} that $|\mathbf{k}\cdot\delta\mathbf{r}|\ll1$ at the relevant x-ray wavelengths, where $\mathbf{r}=\mathbf{R}+\delta\mathbf{r}$, with $\mathbf{R}$ the position of a lattice site, and so, at zeroth order, $e^{i\mathbf{k}\cdot\delta\mathbf{r}}\approx1$ [^2], and [*(ii)*]{} that in Eq. (\[eq:8\]) the magnetic term ($\propto{\boldsymbol{\sigma}}$) is subdominant compared to the “electric” one ($\propto\mathbf{p}$). We return to these approximations in Appendix \[sec:higher-multipoles\]. Therefore, the second-order RIXS amplitude for processes with a core hole at site $\mathbf{R}$ reduces to $$\begin{aligned} \label{eq:10} \mathcal{T}_\mathbf{R}^{\mathtt{if}}&=& \sum_{\mathbf{q},\mathbf{q}',{\boldsymbol{\tilde{\varepsilon}}},{\boldsymbol{\tilde{\varepsilon}}}'}\left\langle \mathtt{f}\left|\left[{\boldsymbol{\tilde{\varepsilon}}}'\hat{a}_{\mathbf{q}'}e^{i\mathbf{q}'\cdot\mathbf{R}}+{\boldsymbol{\tilde{\varepsilon}}}'{}^*\hat{a}^\dagger_{\mathbf{q}'}e^{-i\mathbf{q}'\cdot\mathbf{R}}\right]\right.\right.\\ &&\qquad\qquad\qquad\left.\left.\times\hat{\mathcal{O}}_\mathbf{R}\left[{\boldsymbol{\tilde{\varepsilon}}}\hat{a}_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{R}}+{\boldsymbol{\tilde{\varepsilon}}}^*\hat{a}^\dagger_\mathbf{q}e^{-i\mathbf{q}\cdot\mathbf{R}}\right]\right|\mathtt{i}\right\rangle\nonumber\\ &=&\mathcal{A}_{\mathbf{k},\mathbf{k}'}\left\langle f\left|\varepsilon_\mu'{}^*\hat{\mathcal{O}}_\mathbf{R}^{\mu\nu}\varepsilon_\nu\right|i\right\rangle e^{i(\mathbf{k}-\mathbf{k}')\cdot\mathbf{R}}, \label{eq:10c}\end{aligned}$$ where
$\hat{\mathcal{O}}\sim\frac{1}{\sqrt{\omega_\mathbf{q}\omega_{\mathbf{q}'}}}\mathbf{p} \,\hat{G}\,\mathbf{p}\, $, with $\hat{G}=\sum_n\frac{|n_\mathbf{R}\rangle\langle n_\mathbf{R}|}{E_i+\hbar\omega_\mathbf{q}-E_n}$, where $|n\rangle$ are restricted to “intermediate” states with a core hole at site $\mathbf{R}$ (i.e. close to resonance). The second expression, Eq. (\[eq:10c\]), is obtained by requiring $|\mathtt{i}\rangle=|i\rangle\otimes|\mathbf{k}{\boldsymbol{\varepsilon}}\rangle$ and $|\mathtt{f}\rangle=|f\rangle\otimes|\mathbf{k}'{\boldsymbol{\varepsilon}}'\rangle$. Importantly, $\hat{\mathcal{O}}$ acts purely in electronic space, and moreover [*within the low-energy manifold*]{}, provided the system immediately “returns” to a low-energy state as the outgoing photon is emitted, as is usually assumed. We therefore ask: what effective operator acting purely in this manifold reproduces the matrix elements $\mathcal{T}^{\mathtt{if}}_\mathbf{R}$? If we know the low-energy manifold and a basis which spans it, and if the basis elements are physically meaningful, we shall immediately obtain which correlation functions RIXS produces. We insist once again that, within this approach, all “intermediate processes,” no matter how complicated, are in a sense included, and need not be discussed. As usual, the most general arguments stem from symmetry considerations, which we now address. The core hole is immobile, which imposes a strong symmetry constraint on $\hat{\mathcal{O}}_\mathbf{R}$: it should be invariant under the point-group symmetries of the core-hole site $\mathbf{R}$. Another constraint comes from the “locality” of the effect of the core-hole in the “intermediate propagation time” $\tau=1/\Gamma\sim10^{-15}$ s [@ament2011], which implies that only operators which act in close proximity to the site of the core hole should be involved.
While this statement may appear somewhat loose, a quick order-of-magnitude analysis shows that, [*even in a metal*]{}, electrons will not travel more than a few lattice spacings within the time $\tau$ [^3]. Finally, since transition amplitudes are scalars, by keeping the polarization dependence explicit, we impose constraints on the combinations of operators which multiply the polarization components. This is what we address now and is summarized in Table \[tab:couplings\]. For concreteness and ease of presentation of the derivation we now focus on magnetic insulators, though we note that the same ideas carry over to systems with charge (and other) degrees of freedom, to which we return at the end of the paper, in Sec. \[sec:other-degr-freed\]. Indeed, because of the “locality” of the effective scattering operators, insulating systems are more readily tackled. Local (in the sense of acting only on degrees of freedom living in a small neighborhood in real space) operators in insulating systems yield a very natural description of the system, and the low-energy manifold, being finite (generally a well-defined $J$ multiplet, possibly split by crystal fields) and sharply defined (usually a gap separates multiplets), can be spanned by effective “spin” operators (finite vector spaces of identical dimensions are isomorphic). Therefore only a spin operator basis compatible with the combinations of polarizations remains to be found. In the absence of both spin-orbit coupling [*at low energies*]{} (core levels always experience very strong spin-orbit coupling [@ament2011]) and of a magnetic field, the system should be rotationally symmetric in spin space. Moreover, in principle, in the Hamiltonian, under spatial symmetries, the spins are left invariant. However, here, in the RIXS structure factor, the situation is more subtle. Spin excitations (and hence spin operators) may only arise in the structure factor thanks to spin-orbit coupling at the core.
Therefore, in principle the structure factor itself should display signs of spin-orbit coupling [@haverkort2010b; @haverkort2010], with the effective spin operators transforming under [*lattice*]{} symmetries. Even upon neglecting transition operators which break rotational symmetry if spin-orbit coupling is weak at the valence level, the effective spins still transform under [*real space*]{} symmetry operations.[^4] Then, the polarizations and (effective) spins (the latter make up the operators $\hat{\mathcal{O}}^{\mu\nu}$, as mentioned above) transform as usual vectors and pseudo-vectors, respectively, under spatial transformations, and according to ${\boldsymbol{\varepsilon}}\rightarrow-{\boldsymbol{\varepsilon}}^*$ and $\mathbf{S}\rightarrow-\mathbf{S}$ under time reversal (see Appendix \[sec:transformation-rules\]). In other words, under the full spherical symmetry group, using the notations from Ref. , ${\boldsymbol{\varepsilon}}$ and $\mathbf{S}$ transform under $D_1^-$ and $D_1^+$, respectively (under $SO(3)$ operations, both the polarization and spin vectors transform under the $L=1$ representation). Since $D_1^\pm\times D_1^\pm=D_0^++D_1^++D_2^+$ ($1\times 1=0+1+2$ for $SO(3)$), [*any*]{} combination of spin operators which transforms under the same representations can in principle be involved in the RIXS signal. Depending on the number of neighboring operators one chooses to include (and on the value of $S(S+1)$), possibilities differ. The situation for up to three spin operators (on the same or nearby sites, from “locality”) is summarized in Table \[tab:couplings\] (see in particular the caption), and details of the derivation are given in Appendix \[sec:derivation-table-i\].
| representation | polarizations | one spin | two spins | three spins |
|:--:|:--:|:--:|:--:|:--:|
| 0 | ${\boldsymbol{\varepsilon}}'{}^*\cdot{\boldsymbol{\varepsilon}}$ | | $\mathbf{S}_i\cdot\mathbf{S}_j$ | $\left(\mathbf{S}_i\times\mathbf{S}_j\right)\cdot\mathbf{S}_k$ |
| 1 | ${\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}}$ | $\mathbf{S}_i$ | $\mathbf{S}_i\times\mathbf{S}_j$ | $\left(\mathbf{S}_i\cdot\mathbf{S}_j\right)\mathbf{S}_k$, $\left(\mathbf{S}_i\times\mathbf{S}_j\right)\times\mathbf{S}_k$, $\left\llbracket\mathbf{S}_i,\mathbf{S}_j\right\rrbracket\cdot\mathbf{S}_k$ |
| 2 | $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket$ | | $\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket$ | $\left\llbracket\mathbf{S}_i\times\mathbf{S}_j,\mathbf{S}_k\right\rrbracket$, $\left\llbracket\mathbf{S}_i,\mathbf{S}_j\right\rrbracket\times\mathbf{S}_k$ |

[*On-site terms.—*]{} Upon considering on-site terms only ($i=j=k$), where one need not take into account any further lattice symmetries, and [*up to two*]{} spin operators, we recover the expression from
Ref. :[^5] $$\label{eq:15} T_i=\alpha_0({\boldsymbol{\varepsilon}}'{}^*\cdot{\boldsymbol{\varepsilon}})+\alpha_1({\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}})\cdot\mathbf{S}_i+\alpha_2\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket \llbracket\mathbf{S}_i,\mathbf{S}_i\rrbracket,$$ where $T_i=\varepsilon'^*_\mu\mathcal{O}_i^{\mu\nu}\varepsilon_\nu$, and where $\llbracket \mathbf{S}_i,\mathbf{S}_j\rrbracket$ is the traceless symmetric second rank tensor constructed from $\mathbf{S}_i$ and $\mathbf{S}_j$, i.e. given by: $\llbracket \mathbf{S}_i,\mathbf{S}_j\rrbracket_{\mu\nu}= \frac{1}{2}\left(S_i^\mu S_j^\nu+S_i^\nu S_j^\mu\right)-\frac{1}{3}(\mathbf{S}_i\cdot\mathbf{S}_j)\delta_{\mu\nu}$, and analogously for $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket$. The symmetric product $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket \llbracket\mathbf{S}_i,\mathbf{S}_i\rrbracket=\sum_{\mu,\nu}\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket_{\mu\nu} \llbracket\mathbf{S}_i,\mathbf{S}_i\rrbracket_{\mu\nu}$ has all indices contracted. The $\alpha_n$ are material-specific coefficients [@haverkort2010]. The generalization to discrete symmetries is [*formally*]{} straightforward (though usually gruesome in practice) and discussed in detail in Appendix \[sec:lower-symmetry\]. [*Off-site terms.—*]{} The above considerations take care of the symmetry aspects relative to spin space. To fulfill the constraints associated with the lattice, which enters through $\mathbf{S}_\mathbf{r}\rightarrow [\det R]\,R\cdot\mathbf{S}_{R\cdot\mathbf{r}}$ where $R$ is a spatial operation (see Appendix \[sec:transformation-rules\]), the expressions must be appropriately symmetrized. For example, take a 1d chain of $S=1/2$, and consider a maximum of two spin terms. 
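As a quick numerical sanity check of the definition above, one can build $\llbracket\mathbf{S}_i,\mathbf{S}_i\rrbracket$ from the standard spin-1 matrices and verify that it is traceless yet operator-valued. The `numpy`-based sketch below is our own illustration, not part of the derivation:

```python
import numpy as np

# Spin-1 matrices in the S^z eigenbasis (|1>, |0>, |-1>).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1, 0, -1]).astype(complex)
S = [Sx, Sy, Sz]

# [[S_i, S_i]]_{mu nu} = (S^mu S^nu + S^nu S^mu)/2 - (S.S) delta_{mu nu}/3,
# following the definition in the text; here S.S = S(S+1) = 2 for spin 1.
dot = sum(Sm @ Sm for Sm in S)
Q = [[(S[m] @ S[n] + S[n] @ S[m]) / 2 - (dot / 3 if m == n else 0)
      for n in range(3)] for m in range(3)]

assert np.allclose(dot, 2 * np.eye(3))              # Casimir S(S+1) = 2
assert np.allclose(Q[0][0] + Q[1][1] + Q[2][2], 0)  # traceless by construction
assert not np.allclose(Q[0][0], 0)                  # nonzero quadrupole operators
```

The five independent components of $Q$ are exactly the on-site quadrupole operators probed in the $L=2$ channel.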
Then, if lattice sites are centers of inversion, the transition operator will be (still assuming spherical symmetry in spin space): $$\begin{aligned} \label{eq:16} T_i&=&\alpha_0 ({\boldsymbol{\varepsilon}}'{}^*\cdot{\boldsymbol{\varepsilon}})\mathbf{S}_i\cdot(\mathbf{S}_{i-1}+\mathbf{S}_{i+1})\nonumber\\ &&+({\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}})\cdot\left(\alpha_{1,1}\mathbf{S}_i+\alpha_{1,2}\mathbf{S}_i\times(\mathbf{S}_{i-1}+\mathbf{S}_{i+1})\right)\nonumber\\ &&+\alpha_2 \llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket\llbracket\mathbf{S}_i,\mathbf{S}_{i-1}+\mathbf{S}_{i+1}\rrbracket,\end{aligned}$$ where the $\alpha_n$ and $\alpha_{n,m}$ are material-specific coefficients multiplying terms that belong to the same irreducible representation ($n$) (or copy ($m$) thereof, if an irreducible representation appears multiple times). From Table \[tab:couplings\], one may directly read out the quantities whose correlation functions will contribute to the RIXS signal, as well as which polarization geometry will reveal them while switching off (most of) the other contributions (e.g. ${\boldsymbol{\varepsilon}}'{}^*\parallel{\boldsymbol{\varepsilon}}$ will “switch off” the ${\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}}$ “channel”).
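This polarization-based “channel switching” can be made concrete numerically. The sketch below (our own illustration, with arbitrarily chosen polarization vectors) assembles the three polarization structures of Table \[tab:couplings\] and checks that parallel polarizations switch off the ${\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}}$ channel while identical circular polarizations maximize it:

```python
import numpy as np

def channels(eps_out, eps_in):
    """Polarization prefactors of the three channels of Table I:
    scalar (L=0), cross product (L=1), traceless symmetric (L=2)."""
    e1 = np.conj(eps_out)          # epsilon'^* in the notation of the text
    e2 = eps_in
    scalar = e1 @ e2
    cross = np.cross(e1, e2)
    sym = 0.5 * (np.outer(e1, e2) + np.outer(e2, e1)) - scalar * np.eye(3) / 3
    return scalar, cross, sym

# Parallel linear polarizations: the L=1 ("cross") channel is switched off.
lin = np.array([1, 0, 0], dtype=complex)
_, cross_par, _ = channels(lin, lin)
assert np.allclose(cross_par, 0)

# Identical circular polarizations: the L=1 channel is maximal.
circ = np.array([1, 1j, 0]) / np.sqrt(2)
_, cross_circ, _ = channels(circ, circ)
assert np.allclose(cross_circ, [0, 0, 1j])
```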
Indeed, the differential cross-section is given by [@messiah1962] $$\begin{aligned} \label{eq:14} &&\frac{\delta^2 \sigma}{\delta\Omega\delta E}\nonumber\\ &&\propto\sum_{f}\left|\sum_{\mathbf{R},\mathbf{q}}\langle f|T_{\mathbf{q}}|i\rangle e^{i(\mathbf{q}+\mathbf{k}-\mathbf{k}')\cdot\mathbf{R}}\right|^2\delta(E_f+\omega_{\mathbf{k}'}-E_i-\omega_{\mathbf{k}}) \nonumber\\ &&\propto\sum_\mathbf{q}\langle i|T_{-\mathbf{q}}T_\mathbf{q}|i\rangle\delta(\mathbf{q}+\mathbf{k}-\mathbf{k}')\delta(\Delta E-\omega_\mathbf{q}),\end{aligned}$$ where $\delta\Omega$ and $\delta E$ denote the elementary solid angle (related to the momentum transfer $\widehat{\mathbf{k}-\mathbf{k}'}$) and energy, respectively, and where $\Delta E$ is the measured energy transfer. Before moving on to the discussion of specific examples, we make a couple of important remarks. [*(i)*]{} It is important to note that, for effective spin-$1/2$ systems, only off-site terms can contribute to, for example, the $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket$ channel. Indeed, there exist only four (counting the identity) linearly independent $S=1/2$ operators. Therefore, while off-site contributions are expected to be weaker (they may only arise from so-called “indirect” processes [@ament2011]), in an effective $S=1/2$ system, a “multi-site” signal in the $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket$ channel will not “compete” with the signal from possibly-larger on-site couplings, offering hope to unambiguously detect such correlations. [*(ii)*]{} We caution that, of course, this symmetry-based approach does not give any information on the absolute or relative strengths of the signals in the different channels. Moreover, “selection rules” relative to the chosen “edge” need to be additionally taken into account.
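Remark [*(i)*]{} can be checked directly: for $S=1/2$ one has $\{S^\mu,S^\nu\}=\frac{1}{2}\delta_{\mu\nu}$, so every component of the on-site traceless symmetric tensor vanishes identically. A minimal numerical verification (our own):

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices over two).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
S = [sx, sy, sz]

# On-site traceless symmetric tensor [[S_i, S_i]] for S = 1/2:
# {S^mu, S^nu} = delta_{mu nu}/2, so every component vanishes identically,
# leaving only the identity and the three spin components as independent
# on-site operators.
dot = sum(Sm @ Sm for Sm in S)          # = S(S+1) identity = (3/4) * 1
for m in range(3):
    for n in range(3):
        Q = (S[m] @ S[n] + S[n] @ S[m]) / 2 - (dot / 3 if m == n else 0)
        assert np.allclose(Q, 0)
```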
[*(iii)*]{} An additional word of caution is in order: as far as we understand, the measurement of the [*outgoing*]{} polarization is not currently possible on the instruments in use at this point, although the new state-of-the-art facility currently under construction (which will also provide much higher energy resolution, currently around $100$ meV) will be able to perform it.

Spin nematic in the bilinear-biquadratic $S=1$ model on the triangular lattice {#sec:spin-nemat-bilin}
==============================================================================

The $S=1$ bilinear biquadratic model with Hamiltonian $$\label{eq:9} H=\sum_{\langle i,j\rangle}\left(J_1\mathbf{S}_i\cdot\mathbf{S}_j+J_2(\mathbf{S}_i\cdot\mathbf{S}_j)^2\right),$$ on the triangular lattice has been quite intensively studied, especially so in recent years after it was suggested that it could be relevant to the insulating material NiGa$_2$S$_4$, where Ni$^{2+}$ is magnetic, with $S=1$ [@nakatsuji2005; @tsunetsugu2006; @lauchli2006; @bhattacharjee2006; @stoudenmire2009; @kaul2012]. This material is made of stacked triangular planes of Ni$^{2+}$ ions, and displays no long-range spin ordering but a low-temperature specific heat which grows with temperature as $T^2$ [@nakatsuji2005]. The latter facts motivated the minimal description of NiGa$_2$S$_4$ by the model Eq. , which, for $J_1>0$, features two quadrupolar phases, one “ferroquadrupolar” and one “antiferroquadrupolar.” These phases are characterized by a vanishing expectation value for the spins, $\langle S_i^\mu\rangle=0$, but an on-site “quadrupolar” (a.k.a. “spin nematic”) order parameter: $\langle\{S^\mu_i,S^\nu_i\}-\frac{4}{3}\delta_{\mu\nu}\rangle\neq0$ (the diagonal part, $\frac{2}{3}S(S+1)\delta_{\mu\nu}=\frac{4}{3}\delta_{\mu\nu}$ for $S=1$, is subtracted to obtain a traceless operator).
Since our aim here is not to make accurate predictions for the actual material NiGa$_2$S$_4$, but rather to demonstrate that RIXS will provide unambiguous signatures of quadrupolar order, we now restrict our attention to the minimal bilinear-biquadratic model Eq. , despite the fact that the latter will clearly not account for all the experimental features (not discussed here) of NiGa$_2$S$_4$ [@stoudenmire2009]. The wavefunctions of nematic states are simple single-site product wavefunctions. For spin-1 systems, product wavefunctions can generally be expressed as $|\psi\rangle=\prod_i\mathbf{d}_i\cdot|\mathbf{r}_i\rangle$, where we have defined $|\mathbf{r}_i\rangle=(|x_i\rangle,|y_i\rangle,|z_i\rangle)$, where $\mathbf{d}_i\in\mathbb{C}^3$ and $|\mathbf{d}_i|=1$. The states $|\mu_i\rangle$ are time-reversal invariant states defined such that $S_i^\mu|\mu_i\rangle=0$, i.e., in terms of the usual eigenstates of the $S_i^z$ operator, $|x\rangle=\frac{i}{\sqrt{2}}(|1\rangle-|\overline{1}\rangle)$, $|y\rangle=\frac{1}{\sqrt{2}}(|1\rangle+|\overline{1}\rangle)$ and $|z\rangle=-i|0\rangle$ [@smerald2013]. In the case of a “pure” quadrupolar phase, for this basis choice (with time-reversal invariant states), $\mathbf{d}_i\in\mathbb{R}^3$ [@smerald2013], which one can check indeed leads to $\langle\mathbf{S}_i\rangle=\mathbf{0}$. The vector $\mathbf{d}_i$ at each site is called the “director,” and corresponds to the direction along which the spins do [*not*]{} fluctuate. In nematic states, the direction along which the director points may vary from site to site, as in the “antiferroquadrupolar” phase of the above model, where the [*directors*]{} form a three-sublattice 120$^\circ$ configuration. In the ferroquadrupolar phase, the directors on each site point in the same direction, which can arbitrarily (since the Hamiltonian is isotropic in spin space) be taken to be the $z$ direction. In that case, the unit cell is not enlarged.
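The quoted properties of the time-reversal invariant basis and of real directors are straightforward to verify explicitly. The sketch below (our own, using the standard spin-1 matrices) checks $S^\mu|\mu\rangle=0$ and $\langle\mathbf{S}\rangle=\mathbf{0}$ for a random real director:

```python
import numpy as np

# Spin-1 matrices in the S^z eigenbasis (|1>, |0>, |-1>).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1, 0, -1]).astype(complex)

# Time-reversal invariant states |x>, |y>, |z> as defined in the text.
x = np.array([1j, 0, -1j]) / np.sqrt(2)
y = np.array([1, 0, 1]) / np.sqrt(2)
z = np.array([0, -1j, 0])

# Each satisfies S^mu |mu> = 0.
for Smu, ket in [(Sx, x), (Sy, y), (Sz, z)]:
    assert np.allclose(Smu @ ket, 0)

# For a real director d, the state d . (|x>, |y>, |z>) has <S> = 0.
rng = np.random.default_rng(0)
d = rng.normal(size=3)
d /= np.linalg.norm(d)
psi = d[0] * x + d[1] * y + d[2] * z
for Smu in (Sx, Sy, Sz):
    assert abs(np.conj(psi) @ Smu @ psi) < 1e-12
```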
In ordered (or field-polarized) ferromagnets and antiferromagnets, the low-energy elementary excitations of the system are spin flips/waves, i.e. $S^z=\pm1$ local excitations. In nematic states, where it is the directors which are ordered, spin waves translate to “flavor waves” where there are now two pairs of conjugate “transverse” bosons. Flavor wave spectra and dipolar and quadrupolar correlations for the model Eq.  on the triangular lattice have been calculated in several references [@lauchli2006; @penc2011; @pires2014; @voll2015]. Our derivation is provided in Appendix \[sec:spin-nematic\], and here we give the full RIXS structure factor for the model, assuming on-site spin operators only (expected to provide the largest contributions to the signal), and spherical symmetries (a derivation is provided in Appendix \[sec:lower-symmetry\]), and provide a few plots in Figure \[fig:1\] for various polarization geometries and assumptions on relative absorption coefficients (about which symmetry analysis gives no further information). 
$$\begin{aligned} \label{eq:10} &&\mathcal{I}_{\omega,\mathbf{q}}^{\rm RIXS}\propto \sqrt{\frac{A_{\mathbf{q}}^2}{A_{\mathbf{q}}^2-B_{\mathbf{q}}^2}}\left[\left(\kappa_{xy}^{(2)}{}^2+\kappa_{yz}^{(2)}{}^2\right)\left(1-\frac{B_{\mathbf{q}}}{A_{\mathbf{q}}}\right)\right.\\ &&\left.\qquad\qquad\qquad+\left(\kappa_z^{(1)}{}^2+\kappa_x^{(1)}{}^2\right)\left(1+\frac{B_{\mathbf{q}}}{A_{\mathbf{q}}}\right)\right]\delta\left(\omega-\omega_{\mathbf{q}}\right)\nonumber,\end{aligned}$$ where $A_\mathbf{q}=\frac{1}{2}(J_1\gamma_\mathbf{q}-6J_2)$, $B_\mathbf{q}=\frac{\gamma_\mathbf{q}}{2}(J_2-J_1)$, $\omega_\mathbf{q}=2\sqrt{A_\mathbf{q}^2-B_\mathbf{q}^2}$ with $\gamma_\mathbf{q}=2\left(\cos q_1+\cos(\frac{1}{2}[q_1+\sqrt{3}q_2])+\cos(\frac{1}{2}[q_1-\sqrt{3}q_2])\right)$ and $\kappa^{(1)}_{\mu}=\alpha_1\epsilon_{\mu\lambda\rho}\varepsilon^\lambda\varepsilon'{}^*{}^\rho$ ($\epsilon$ is the third-rank fully antisymmetric tensor), $\kappa^{(2)}_{\mu\nu}=\alpha_2(-2/3\delta_{\mu\nu}({\boldsymbol{\varepsilon}}'{}^*\cdot{\boldsymbol{\varepsilon}})+\varepsilon'{}^{\mu}{}^*\varepsilon^{\nu}+\varepsilon'{}^{\nu}{}^*\varepsilon^{\mu})$ (note that $\alpha_1$ and $\alpha_2$ depend, in particular, on the details of the atomic and crystal structures [@haverkort2010]), see Appendix \[sec:spin-nematic\]. Quadrupolar correlations are therefore [*directly*]{} seen. Clearly, one recovers the proper scaling of the amplitudes for the Goldstone mode (the system spontaneously breaks spin-rotation symmetry in the ferroquadrupolar phase) at $\mathbf{q}=\mathbf{0}$ at low energy, $\omega_\mathbf{q}\sim|\mathbf{q}|$ and $\mathcal{I}^{\rm RIXS,ferro}\sim1/\omega_\mathbf{q}$. Figure \[fig:1\] illustrates the associated smoking gun evidence for quadrupolar order provided by RIXS. ![Color plots of the a) spin-spin correlation function (as probed by e.g.
inelastic neutron scattering or RIXS with $\boldsymbol{\varepsilon}'{}^*\perp\boldsymbol{\varepsilon}$ and $\alpha_2$ small enough), and b) signal probed by RIXS for ${\boldsymbol{\varepsilon}}=(i,1,0)/\sqrt{2}$ and ${\boldsymbol{\varepsilon}}'=(1,i,0)/\sqrt{2}$, both for the model of Eq.  with $J_2/J_1=-\tan(7\pi/16)$ (ferroquadrupolar phase [@lauchli2006]) on the triangular lattice. c) and d) Equal time (integrated over frequency) versions of the signals shown on plots a) and b), respectively. Note that the intensities are [*independently*]{} normalized. On figure b), the intensity is seen to diverge at the Goldstone mode, in sharp contrast with the vanishing of the spin-spin correlation function at the same point in figure a). In plots a) and b), $\tilde{\omega}=\omega/\sqrt{J_1^2+J_2^2}$.[]{data-label="fig:1"}](fig1-triangular){width="3.3in"}

Bond nematic and vector chirality in nearest and next-nearest neighbor $S=1/2$ Heisenberg chains in a field {#sec:bond-nematic-vector}
===========================================================================================================

The $S=1/2$ ferromagnetic nearest-neighbor and antiferromagnetic next-nearest-neighbor Heisenberg model on a chain $$\label{eq:12} H=\sum_i\left(-J_1\mathbf{S}_i\cdot\mathbf{S}_{i+1}+J_2\mathbf{S}_i\cdot\mathbf{S}_{i+2}-hS_i^z\right)$$ with $J_{1,2}>0$ is thought to be a minimal model for LiCuVO$_4$, a distorted “inverse spinel” (with chemical formula ABB’O$_4$) material such that the system can be seen in a first approximation as a set of parallel edge-shared CuO$_2$ chains separated by Li and V atoms [@enderle2005; @hagiwara2011; @mourigal2012]. Cu$^{2+}$ are magnetic ions with spin $1/2$. As will be important later, we note that the point group symmetry at each Cu site contains inversion symmetry.
This material displays a complex phase diagram, which is now believed to show, from low to high field: incommensurate helical order, spin density wave order along the chains, and, possibly, right below the saturation field, a spin nematic state. Why the $J_1-J_2$ Heisenberg model of Eq.  seems like a reasonable starting point to describe this material may be articulated as follows: [*(i)*]{} there is experimental evidence for chain structure physics (see above), [*(ii)*]{} Cu usually displays weak spin orbit coupling, suppressing any strong anisotropy in spin space, and [*(iii)*]{} further-neighbor interactions in such compounds are usually sizable, owing to the configurations of the exchange paths. In fact, $J_1$ and $J_2$ were estimated to be $19$ K and $44$ K, respectively, using neutron diffraction and susceptibility data on single crystals [@enderle2005; @nawa2014]. Moreover, in some parameter regime, a number of the phases numerical simulations obtain for the model are reminiscent of those experimentally observed in LiCuVO$_4$, as we now discuss. For $J_2/J_1>1/4$, in a non-zero but weak enough field, the minimal [*model*]{} has been shown to exhibit a nonzero vector spin chirality $\mathbf{\hat{z}}\cdot{\boldsymbol{\chi}}_{i,{i+1}}=\mathbf{\hat{z}}\cdot(\mathbf{S}_i\times\mathbf{S}_{i+1})$ and $\mathbf{\hat{z}}\cdot{\boldsymbol{\chi}}_{i,{i+2}}=\mathbf{\hat{z}}\cdot(\mathbf{S}_i\times\mathbf{S}_{i+2})$ (a non-zero $z$-component of the chirality does not break any continuous symmetry of the model in a field applied along the $z$-axis and is therefore allowed), reminiscent of the helical order in LiCuVO$_4$. 
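For intuition, the vector spin chirality is easily evaluated on a classical caricature of the helical phase. In the sketch below (our own, with an arbitrarily chosen pitch angle), coplanar spins winding by $\phi_0$ per bond carry a uniform chirality $\hat{\mathbf{z}}\cdot{\boldsymbol{\chi}}_{i,i+1}=\sin\phi_0$:

```python
import numpy as np

# Classical helical configuration in the xy plane with pitch angle phi0
# (a caricature of the incommensurate helical order discussed above).
phi0 = 2 * np.pi / 5
n = 100
spins = np.array([[np.cos(i * phi0), np.sin(i * phi0), 0.0] for i in range(n)])

# Vector spin chirality chi_{i,i+1} = S_i x S_{i+1}: its z component is
# uniform and equal to sin(phi0), i.e. nonzero for any incommensurate pitch.
chi = np.cross(spins[:-1], spins[1:])
assert np.allclose(chi[:, 2], np.sin(phi0))
assert np.allclose(chi[:, :2], 0)
```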
More precisely, DMRG and exact diagonalization have found signs of long-range chirality correlations [@kolezhuzk2005; @hikihara2008; @sudan2009], and the bosonized field theory, which unveils a Luttinger-liquid phase, predicts $\langle{\boldsymbol{\chi}}_{i,{i+1}}\cdot\mathbf{\hat{z}}\rangle\neq0$ and $\langle{\boldsymbol{\chi}}_{i,{i+2}}\cdot\mathbf{\hat{z}}\rangle\neq0$ [@hikihara2008; @mcculloch2008; @sudan2009]. The higher-field phase of the model numerically shows evidence of (bond) quadrupolar correlations [@chubukov1991; @hikihara2008; @sudan2009; @starykh2014]. Again, we do not claim to provide a detailed description of the material, but we propose that RIXS might be able to probe vector chirality as well as bond-nematic order in this system. In order to compute the RIXS signal, we proceed as in Ref. , closely following their derivation, and start from the limit $J_1\ll J_2$ of two decoupled chains (each with lattice spacing $2a_0$). Each one may then be independently bosonized. We use the conventional notations for the boson fields, $\theta_{1,2}$ and $\phi_{1,2}$, where the indices are chain labels, and $[\phi_\nu(x'),\partial_x\theta_\mu(x)]=i\delta_{\mu\nu}\delta(x-x')$, for $\mu,\nu=1,2$. The spin operators are given by [@kolezhuzk2005; @hikihara2008; @mcculloch2008; @starykh2014] $$\begin{aligned} \label{eq:22} S^+_{\mu}(x)&=&e^{i\sqrt{\pi}\theta_\mu(x)}\\ &&\times\left((-1)^jb+b'\sin(2\pi Mj+\sqrt{4\pi}\phi_\mu(x))\right)\nonumber\\ S^z_{\mu}(x)&=&M+\frac{1}{\sqrt{\pi}}\partial_x\phi_\mu(x)\\ &&\qquad-(-1)^ja\sin(2\pi Mj+\sqrt{4\pi}\phi_\mu(x)),\nonumber\end{aligned}$$ where $x$ is the coordinate of a site, while $j\in\mathbb{Z}$ labels a “unit cell” of two sites $\{1,2\}$ (sites can be labelled by $l=2j+\mu$), $M$ is the total magnetization (due to the field), and $a,b,b'$ are non-universal constants. Note that here the subscript $\mu$ in $S_\mu^\alpha$ is unrelated to the subscript $i$ in Eq. .
As mentioned above, when $J_1=0$, the two chains are decoupled and one obtains two free-boson theories, one per chain, with the action $\mathcal{S}_{\rm eff}^\mu=\int dx\int d\tau [\frac{v}{2}\left(K(\partial_x\theta_\mu)^2+\frac{1}{K}(\partial_x\phi_\mu)^2\right)+i\partial_x\theta_\mu\partial_\tau\phi_\mu]$, where $K$ and $v$ are the Luttinger liquid parameter and spin velocity of the antiferromagnetic Heisenberg ($J_2$) spin chain in a field. $J_1\neq0$ introduces couplings between the chains. Then it is useful to define $\gamma_\pm=(\gamma_1\pm\gamma_2)/\sqrt{2}$ for $\gamma=\theta,\phi$. The coupling actions are $\mathcal{S}_1=g_1\int dx\sin\left(\sqrt{8\pi}\phi_-+\pi M\right)$ and $\mathcal{S}_2=g_2\int dx(\partial_x\theta_+)\sin\sqrt{2\pi}\theta_-$, where $0\leq M\leq1/2$ is the magnetization per site, with parameters $g_1=-J_1a^2\sin\pi M$ and $g_2=-J_1\sqrt{\pi}b^2/\sqrt{2}$, which will lead to “bond nematic” and “vector chiral” phases. This model displays scale invariance, and renormalization group (RG) ideas apply. Then, within this approach, if $g_2/g_1$ flows to zero (resp. infinity) under the RG flow, in which high-frequency modes are integrated out, the system goes into the nematic phase, where $\phi_-$ gets pinned to a value which minimizes the integrand of $\mathcal{S}_1$ (resp. the vector chiral phase, where it is the integrand of $\mathcal{S}_2$ which acquires a finite expectation value) [@hikihara2008]. Details are given in Appendix \[sec:vect-chir-bond\]. Because each site on the chain has only two neighbors, we expect that the contributions to the RIXS signal from three-spin interactions should be extremely weak. So, from Table \[tab:couplings\], assuming a weak enough effect of spin-orbit coupling at the low-energy level, the RIXS transition operator is given by Eq. 
in zero field, and by $$\begin{aligned} \label{eq:24} T_i&=&\alpha_{0,\perp} ({\boldsymbol{\varepsilon}}_\perp'{}^*\cdot{\boldsymbol{\varepsilon}}_\perp)\mathbf{S}_i^\perp\cdot(\mathbf{S}_{i-1}^\perp+\mathbf{S}_{i+1}^\perp)\\ &&+\alpha_{0,z} ({\varepsilon}_z'{}^*{\varepsilon}_z)S_i^z(S_{i-1}^z+{S}_{i+1}^z)\nonumber\\ &&+({\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}})^z\left(\alpha_{1,1,z}\mathbf{S}_i+\alpha_{1,2,z}\mathbf{S}_i\times(\mathbf{S}_{i-1}+\mathbf{S}_{i+1})\right)^z\nonumber\\ &&+\alpha_{2,\perp} \llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket^\perp\llbracket\mathbf{S}_i,\mathbf{S}_{i-1}+\mathbf{S}_{i+1}\rrbracket^\perp,\nonumber\end{aligned}$$ for $h\neq0$, i.e. when the full $SU(2)$ symmetry in spin space is broken down to $U(1)$ (see Appendix \[sec:lower-symmetry\]). In Eq. , we used the definitions $\mathbf{u}=\mathbf{u}_\perp+u^z\mathbf{\hat{z}}$ and $\llbracket\mathbf{u},\mathbf{v}\rrbracket^\perp_{\mu\nu}=\frac{1}{2}(u^\mu v^\nu+u^\nu v^\mu)-\frac{1}{2}(\mathbf{u}_\perp\cdot\mathbf{v}_\perp)\delta_{\mu\nu}$ with $\mu,\nu=x,y$ only. The $\alpha_{n,\mu}$ and $\alpha_{n,m,\mu}$ are material-specific coefficients.
Finally, we find the following [*low-energy*]{} (long distance and time) [*leading*]{} contributions (see Appendix \[sec:vect-chir-bond\]) to the RIXS structure factor: $$\begin{aligned} \label{eq:27} \mathcal{I}_{\omega,q}^{\rm nematic}&\propto&\sum_{\epsilon=\pm1}\frac{\Theta(\omega^2-v_+^2(q-\epsilon\pi)^2)}{\sqrt{\omega^2-v_+^2(q-\epsilon\pi)^2}^{2-1/K_+}},\end{aligned}$$ for, e.g., ${\boldsymbol{\varepsilon}}\times {\boldsymbol{\varepsilon}}'{}^*=\mathbf{0}$ and ${\boldsymbol{\varepsilon}}\perp\mathbf{\hat{z}}$ in the nematic phase, and, around $q=\pm2\pi M\pm\pi$: $$\begin{aligned} \label{eq:28} \mathcal{I}_{\omega,q}^{\rm chiral}&\propto& \sum_{\epsilon,\epsilon'=\pm1}\Theta(\omega^2-v_+^2(q-2\pi\epsilon M-\epsilon'\pi)^2)\\ &&\qquad\qquad\times\sqrt{\omega^2-v_+^2(q-2\pi\epsilon M-\epsilon'\pi)^2}^{4K_+-2}\nonumber\end{aligned}$$ in cross polarizations, with $({\boldsymbol{\varepsilon}}\times {\boldsymbol{\varepsilon}}'{}^*)\parallel\mathbf{\hat{z}}$ in the vector chiral phase. In the expressions above, $K_+=K(1+J_1\frac{K}{\pi v})$ and $v_+=v(1-J_1\frac{K}{\pi v})$ \[note that $K(M=0)=1/2$ and $K(M=1/2)=1$\]. Figure \[fig:2\] displays some examples. ![Color plots of the dominant contributions to the a) $\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket^\perp$ channel (fourth line of Eq. ) in the nematic phase, around $q=\pi$, as given in Eq. , and b) $({\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}})^z$ channel (third line of Eq. ) in the vector-chiral phase, around $q=\pi(2M-1)$, as given in Eq. .[]{data-label="fig:2"}](fig2-chain){width="3.3in"}

Other degrees of freedom: electrons, phonons and orbitals {#sec:other-degr-freed}
=========================================================

The derivation of effective RIXS operators presented above in the context of magnetic models readily extends to systems where other degrees of freedom are important.
Indeed, the symmetry arguments we employed are general enough that they carry over to any type of problem. Modifications arise at the level of the identification and choice of basis for the space of operators which act on the local low-energy manifold. In magnetic insulators, as discussed earlier, the natural degrees of freedom are on-site, and a Hamiltonian is always associated with the specification of what the local degrees of freedom, namely effective “spins,” are. More microscopically, one can see an effective spin degree of freedom “emerge” from the multiplet structure of a single-ion Hamiltonian at each site. Now, similarly, if orbital degrees of freedom are to be treated explicitly in an insulating system in RIXS, one may simply introduce a set of (effective) operators $\mathbf{L}$, $L^\mu L^\nu$ etc., which transform as pseudo-vectors under [*real space*]{} operations, and obtain a table similar to Table \[tab:couplings\], where now each row should be associated with an irreducible representation of the appropriate point group. Now, systems with charge degrees of freedom, or phonons, are usually approached from a more field-theoretic perspective, where one has lost sight of a microscopic model, and operators are labeled by some momentum index (among others). That being said, given a material, one may always, much like for the insulating magnet case, think about how many electrons, and which single-ion orbital (or spin-orbital), a given ion will “contribute/provide” to the valence band of the whole solid. Provided one can determine this, it is reasonable to think of these spin-orbital states and number of electrons as the building blocks for the local low-energy manifold relevant to RIXS, and the basis of operators can be made of those which reshuffle the electrons in the (single-electron) spin-orbital states (even if the electrons interact, such a non-eigenstate basis can be chosen nevertheless). 
As an example, consider an atom A which contributes $n$ on-site states to the valence band(s) of the system, with creation and annihilation operators $\psi_{\mathbf{r}\alpha}^\dagger,\psi_{\mathbf{r}\alpha}$. One may build on-site operators $\psi_{\mathbf{r}\alpha}^\dagger M_{\alpha\beta}\psi_{\mathbf{r}\beta}$, $\psi_{\mathbf{r}\alpha}^\dagger \psi_{\mathbf{r}\beta}^\dagger M_{\alpha\beta\gamma\delta}\psi_{\mathbf{r}\gamma}\psi_{\mathbf{r}\delta}$ etc., where $\alpha,\beta,\gamma,\delta=1,\ldots,n$ (these may be orbital labels, for example), as well as some involving neighbors, $\psi_{\mathbf{r}\alpha}^\dagger M_{\alpha\beta}\psi_{\mathbf{r}'\beta}$, $\psi_{\mathbf{r}\alpha}^\dagger \psi_{\mathbf{r}'\beta}^\dagger M_{\alpha\beta\gamma\delta}\psi_{\mathbf{r}'\gamma}\psi_{\mathbf{r}\delta}$, etc. Despite the more delocalized nature of the electrons in an itinerant system, a quick order-of-magnitude estimate shows that, even in a typical metal, only close-neighbor operators are involved in the RIXS transition operators (see Sec. \[sec:effective-operators\] and footnote therein). An additional constraint in RIXS is charge conservation, since no electrons are kicked out of the sample. Then, much like in the case of magnetic insulators, we may split the tensors $M$ into irreducible representations and obtain the coupling terms to the corresponding combinations of polarizations. In a single-band model, for example, the only on-site operators are the density $\psi^\dagger_\mathbf{r}\psi_\mathbf{r}$ and spin $\psi^\dagger_{\mathbf{r}}{\boldsymbol{\sigma}}\psi_\mathbf{r}$ [@marra2013] (and powers thereof, though the latter should be expected to contribute sub-dominantly).
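The splitting of the tensors $M$ into irreducible representations parallels the trace/antisymmetric/traceless-symmetric decomposition used throughout; a minimal sketch (our own) for a generic real $3\times3$ coupling matrix:

```python
import numpy as np

def split_irreps(M):
    """Split a bilinear coupling matrix M (as in psi^dag M psi) into the
    trace, antisymmetric, and traceless-symmetric parts, mirroring the
    L = 0, 1, 2 decomposition of Table I for a three-component index."""
    n = M.shape[0]
    trace_part = np.trace(M) / n * np.eye(n)
    antisym = (M - M.T) / 2
    sym_traceless = (M + M.T) / 2 - trace_part
    return trace_part, antisym, sym_traceless

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
t, a, s = split_irreps(M)

# The three parts recombine exactly, with dimensions 1 + 3 + 5 = 9.
assert np.allclose(t + a + s, M)
assert np.isclose(np.trace(s), 0)
assert np.allclose(a, -a.T)
```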
As in any endeavor to compare experiment with theory, with this or any other technique, the most delicate step in the [*calculation*]{} of a structure factor in a given ground state will be to understand how the $\psi_{\mathbf{r}\alpha}$ operators from the basis act on this ground state and are related to quasiparticle operators, if any. This is particularly true for metals (but also, of course, for, e.g., quantum spin liquids): even in a Fermi liquid, where the notion of quasiparticles is meaningful, the quasiparticle operators $\Psi^\dagger$ are, in the crudest approximation, related to the electron operators through the square root of the quasiparticle weight $0<Z\leq1$: $\Psi^\dagger\sim\sqrt{Z}\psi^\dagger$. Therefore, a factor of at least $Z^2$ will be involved in the contribution of a quasiparticle-related excitation to the RIXS cross-section. Because $Z$ can be very small, as in a highly correlated metal, it is important to keep track of those factors to estimate the (especially relative) amplitude of a signal of a given origin. For example, upon taking the [*a minima*]{} point of view of a single-band Fermi liquid [@benjamin2014] for the overdoped cuprates, one should keep in mind that factors of $Z$ are likely to greatly suppress the quasiparticle contribution to the RIXS signal. This should be crucial in deciphering the origin of the features seen in RIXS spectra of those materials [@letacon2011; @letacon2013; @dean2013; @wakimoto2015]. The case of phonons is quite similar. At the symmetry level, phonons bear no spin degree of freedom, but are associated with lattice degrees of freedom and their symmetries. There may be several phonon/displacement modes at each site, so that one can introduce several phonon creation operators $c^\dagger_{\mathbf{r},a}$. The symmetries to be considered should be purely spatial, and related to point group symmetries at site $\mathbf{r}$.
Phonons and orbital degrees of freedom are likely to be important in the context of the nematic order seen in the pnictide superconductors, whose microscopic origin is not yet understood. Of course, ultimately, the full signal is given by the contributions from all the relevant degrees of freedom.

Outlook {#sec:conclusions}
=======

As the above examples have shown, the method presented here is very powerful both in scope and in predictive potential. We have, for example, explicitly shown that various hidden orders could be unambiguously identified. Moreover, as we have tried to emphasize, this approach offers the advantage of possibly helping with unbiased data analysis, since all possible contributions to the RIXS signal can in principle be systematically enumerated. With this theory in hand, where should one look next? As proposed here, NiGa$_2$S$_4$ of course appears as a natural material to investigate with RIXS or REXS. In particular, thanks to the $S=1$ moment on Ni$^{2+}$, one expects “direct RIXS” processes to be involved and therefore a strong signal. The current resolution of RIXS instruments (about $130$ meV) is too coarse to allow a sizable signal-to-noise ratio for a material whose exchange has been estimated to lie at around $J\sim7$ meV (as boldly estimated from a Curie-Weiss temperature of $|\Theta_{\rm CW}|\sim80$ K [@nakatsuji2005]). However, since static order is expected (at higher temperatures) [@stoudenmire2009], Bragg peaks should appear in REXS (see Fig. \[fig:1\]d)). Spin chain materials like LiCuVO$_4$ and others [@nawa2014], while perhaps even more promising in terms of confidence in the realization of a nematic state, will have to await the next generation of RIXS instruments, as their exchange energies are also relatively low ($\sim30$ K). Perhaps, at this point, existing high-quality data (like that on the cuprates and iridates) would be worth re-investigating in light of all the possibilities which our work unearthed.
One can, for example, imagine looking for signs of some of the “stranger” correlation functions presented in Table \[tab:couplings\]. Another exciting direction, briefly mentioned in Section \[sec:other-degr-freed\], is that of pnictide materials, as RIXS may help contribute to the effort of pinning down the origin of the observed nematic order. Finally, most electrifying would perhaps be the detection of chiral order in putative spin liquids on the kagomé lattice [@hu2015; @gong2015; @wietek2015] or the possible appearance of spin quadrupolar correlations (in the absence of dipolar ones) in La$_{2-x}$Ba$_x$CuO$_4$, should it display features of a spin density wave glass [@mross2015]. With RIXS taking center stage in various classes of systems, and new resolution-improved machines on the horizon, the future seems bright for refining our understanding of, and discovering yet new physics in, complex materials amenable to RIXS. With these general results [*and*]{} their derivation in this broad setting, we hope to guide experiments as well as theory in this endeavor. It is also our hope to have somewhat demystified the understanding of RIXS for non-experts of microscopic calculations. L.S. would like to thank Peter Armitage, Collin Broholm, Radu Coldea, Natalia Drichko, David Hawthorn, Bob Leheny, Kemp Plumb, Daniel Reich and especially Leon Balents for useful discussions. L.S. was generously supported by the Gordon and Betty Moore Foundation through a postdoctoral fellowship of the EPiQS initiative, Grant No. GBMF4303. T.S. was supported by NSF Grant DMR-1305741. This work was also partially supported by a Simons Investigator award from the Simons Foundation to Senthil Todadri.
Electromagnetic field {#sec:electr-field}
=====================

As mentioned in the main text, the electromagnetic vector potential at point $\mathbf{r}$ may be expanded in plane waves $$\label{eq:2} \mathbf{A}(\mathbf{r})=\sum_\mathbf{k}\sqrt{\frac{\hbar}{2V\epsilon_0\omega_\mathbf{k}}}\sum_{{\boldsymbol{\varepsilon}}}\left({\boldsymbol{\varepsilon}}^*a_{\mathbf{k},{\boldsymbol{\varepsilon}}}^\dagger e^{-i\mathbf{k}\cdot\mathbf{r}}+{\rm h.c.}\right),$$ where $\hbar$ is the reduced Planck constant, $\epsilon_0$ is the vacuum permittivity, $\omega_\mathbf{k}=\omega_{-\mathbf{k}}=c|\mathbf{k}|$, with $c$ the speed of light, $V$ is the volume in which the electromagnetic field is confined, ${\boldsymbol{\varepsilon}}$ has nonzero components only along (real) vectors perpendicular to $\mathbf{k}$, and we define $ \mathbf{A}_{\mathbf{k},{\boldsymbol{\varepsilon}}}(\mathbf{r})={\boldsymbol{\varepsilon}}^*a_{\mathbf{k},{\boldsymbol{\varepsilon}}}^\dagger e^{-i\mathbf{k}\cdot\mathbf{r}}+{\rm h.c.}$ Here $a_{\mathbf{k},{\boldsymbol{\varepsilon}}}^\dagger$ is the creation operator of a photon of momentum $\mathbf{k}$ and polarization (helicity) ${\boldsymbol{\varepsilon}}$. This “expansion” introduces (and defines) the polarization vector ${\boldsymbol{\varepsilon}}$, which encodes the vectorial (in the sense of a tensor of rank one) nature of the $S=1$ field $\mathbf{A}$. We return to the symmetry transformation rules of $\mathbf{A}$ and ${\boldsymbol{\varepsilon}}$ in Appendix \[sec:transformation-rules\].

Transformation rules {#sec:transformation-rules}
====================

The (pseudo-)vector of spin operators $\mathbf{S}_\mathbf{r}$ transforms under a spatial operation $R$ and time reversal (TR) according to $$\label{eq:4} \begin{cases} R:&\mathbf{S}_\mathbf{r}\rightarrow \det(R)R\cdot\mathbf{S}_{R\cdot\mathbf{r}}\\ {\rm TR}:&\mathbf{S}_\mathbf{r}\rightarrow -\mathbf{S}_{\mathbf{r}} \end{cases},$$ regardless of the value of $S(S+1)$ [^6].
The vector potential $\mathbf{A}$ transforms as $$\label{eq:5} \begin{cases} R:&\mathbf{A}(\mathbf{r})\rightarrow R\cdot\mathbf{A}({R\cdot\mathbf{r}})\\ {\rm TR}:&\mathbf{A}(\mathbf{r})\rightarrow -\mathbf{A}({\mathbf{r}}) \end{cases},$$ so that the polarization ${\boldsymbol{\varepsilon}}$ transforms according to $$\label{eq:6} \begin{cases} R:&{\boldsymbol{\varepsilon}}\rightarrow R\cdot{\boldsymbol{\varepsilon}}\\ {\rm TR}:&{\boldsymbol{\varepsilon}}\rightarrow -{\boldsymbol{\varepsilon}}^* \end{cases}.$$ Note that the definition of the polarization sometimes differs by, e.g., a factor of $i$, and the polarization is then “even” (times complex conjugation) under the time reversal operation. If the spatial symmetry group contains all spherical operations (which contain in particular all $SO(3)$ operations), $\mathbf{S}$ and ${\boldsymbol{\varepsilon}}$ transform under the “$L=1$” representation of $SO(3)$ (regardless of the value of $S(S+1)$). Note that here, the name “$L$” is purely formal. Using the notation from Ref.  for the full rotational symmetry group “$D$” ($SO(3)\subset D$), ${\boldsymbol{\varepsilon}}$ and $\mathbf{S}$ transform under the $D_1^-$ and $D_1^+$ representations, respectively, where $\pm$ indicate parity under the inversion transformation. Derivation of Table I {#sec:derivation-table-i} ===================== In the equations below, the numbers are representation labels ($L=0,1,2,...$ associated to $D_L^\pm$), and the superscripts [*schematically*]{} show basis elements (in the form of tensors) in terms of the original terms in the products. 
Products of representations for - zero spins: $$1^{{\boldsymbol{\varepsilon}}'}\times 1^{{\boldsymbol{\varepsilon}}} =\left(0^{{\boldsymbol{\varepsilon}}'\cdot {{\boldsymbol{\varepsilon}}}}+1^{{\boldsymbol{\varepsilon}}'\times {{\boldsymbol{\varepsilon}}}}+2^{\llbracket{\boldsymbol{\varepsilon}}', {{\boldsymbol{\varepsilon}}}\rrbracket}\right);$$ - one spin: $$1^{{\boldsymbol{\varepsilon}}'}\times 1^{{\boldsymbol{\varepsilon}}}\times1^{\mathbf{S}_i} =\left(0^{{\boldsymbol{\varepsilon}}'\cdot {{\boldsymbol{\varepsilon}}}}+1^{{\boldsymbol{\varepsilon}}'\times {{\boldsymbol{\varepsilon}}}}+2^{\llbracket{\boldsymbol{\varepsilon}}', {{\boldsymbol{\varepsilon}}}\rrbracket}\right)\times 1^{\mathbf{S}_i};$$ - two spins: $$\begin{aligned} 1^{{\boldsymbol{\varepsilon}}'}\times 1^{{\boldsymbol{\varepsilon}}}\times1^{\mathbf{S}_i}\times1^{\mathbf{S}_j}&=&\left(0^{{\boldsymbol{\varepsilon}}'\cdot {{\boldsymbol{\varepsilon}}}}+1^{{\boldsymbol{\varepsilon}}'\times {{\boldsymbol{\varepsilon}}}}+2^{\llbracket{\boldsymbol{\varepsilon}}', {{\boldsymbol{\varepsilon}}}\rrbracket}\right)\\ &&\quad\times \left(0^{\mathbf{S}_i\cdot\mathbf{S}_j}+1^{\mathbf{S}_i\times\mathbf{S}_j}+2^{\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket}\right);\nonumber\end{aligned}$$ - three spins: $$\begin{aligned} && 1^{{\boldsymbol{\varepsilon}}'}\times 1^{{\boldsymbol{\varepsilon}}}\times1^{\mathbf{S}_i}\times1^{\mathbf{S}_j}\times1^{\mathbf{S}_k}\\ &=&\left(0^{{\boldsymbol{\varepsilon}}'\cdot {{\boldsymbol{\varepsilon}}}}+1^{{\boldsymbol{\varepsilon}}'\times {{\boldsymbol{\varepsilon}}}}+2^{\llbracket{\boldsymbol{\varepsilon}}', {{\boldsymbol{\varepsilon}}}\rrbracket}\right)\nonumber\\ &&\qquad\qquad\times \left(0^{\mathbf{S}_i\cdot\mathbf{S}_j}+1^{\mathbf{S}_i\times\mathbf{S}_j}+2^{\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket}\right)\times1^{\mathbf{S}_k}\nonumber\\ &=&\left(0^{{\boldsymbol{\varepsilon}}'\cdot {{\boldsymbol{\varepsilon}}}}+1^{{\boldsymbol{\varepsilon}}'\times 
{{\boldsymbol{\varepsilon}}}}+2^{\llbracket{\boldsymbol{\varepsilon}}', {{\boldsymbol{\varepsilon}}}\rrbracket}\right)\nonumber\\ &&\times \left(1^{(\mathbf{S}_i\cdot\mathbf{S}_j)\mathbf{S}_k}+0^{(\mathbf{S}_i\times\mathbf{S}_j)\cdot\mathbf{S}_k}+1^{(\mathbf{S}_i\times\mathbf{S}_j)\times\mathbf{S}_k}\right.\nonumber\\ &&\qquad\qquad+2^{\llbracket(\mathbf{S}_i\times\mathbf{S}_j),\mathbf{S}_k\rrbracket}+1^{\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket\cdot\mathbf{S}_k}+2^{\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket\times\mathbf{S}_k}\nonumber\\ &&\left.\qquad\qquad\qquad\qquad+3^{\llbracket\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket,\mathbf{S}_k\rrbracket}\right),\nonumber\end{aligned}$$ where the definition of the double brackets has been extended to: $$\label{eq:18} \begin{cases} (\llbracket\mathbf{u},\mathbf{v}\rrbracket\cdot\mathbf{w})_\mu=\sum_\nu\llbracket\mathbf{u},\mathbf{v}\rrbracket_{\mu\nu}w_\nu\\ (\llbracket\mathbf{u},\mathbf{v}\rrbracket\times\mathbf{w})_{\mu\rho}=\sum_{\nu,\lambda}\epsilon_{\nu\lambda\rho}\llbracket\mathbf{u},\mathbf{v}\rrbracket_{\mu\nu}w_\lambda\\ (\llbracket\llbracket\mathbf{u},\mathbf{v}\rrbracket,\mathbf{w}\rrbracket)_{\mu\nu\lambda}=\llbracket\mathbf{u},\mathbf{v}\rrbracket_{\mu\nu}w_\lambda \end{cases}$$ Only products of terms belonging to the same representation will have a contribution in the “final” $0$ representation (by contracting all the indices). 
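A concrete numerical check of this bookkeeping is possible if one assumes the double bracket denotes the symmetric traceless ($L=2$) combination $\llbracket\mathbf{u},\mathbf{v}\rrbracket_{\mu\nu}=u_\mu v_\nu+u_\nu v_\mu-\frac{2}{3}\delta_{\mu\nu}\,\mathbf{u}\cdot\mathbf{v}$, the form that appears in $\kappa^{(2)}$ of the cross-section derivation; under that assumption, the full contraction of two $L=2$ tensors is indeed a rotational scalar:

```python
import numpy as np

# Assumed form of the double bracket: symmetric traceless (L = 2) combination
# [[u,v]]_{mu nu} = u_mu v_nu + u_nu v_mu - (2/3) delta_{mu nu} (u . v).
def dbracket(u, v):
    return np.outer(u, v) + np.outer(v, u) - (2.0 / 3.0) * (u @ v) * np.eye(3)

rng = np.random.default_rng(1)
u, v, w, x = rng.normal(size=(4, 3))
T1, T2 = dbracket(u, v), dbracket(w, x)

# Symmetric and traceless, as a rank-2 irreducible tensor should be.
print(np.allclose(T1, T1.T), abs(np.trace(T1)) < 1e-12)

# Contracting all indices of two L=2 tensors yields an SO(3) invariant.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]          # ensure a proper rotation, det Q = +1
scalar = np.sum(T1 * T2)
scalar_rot = np.sum(dbracket(Q @ u, Q @ v) * dbracket(Q @ w, Q @ x))
print(np.isclose(scalar, scalar_rot))
```

Only such same-representation contractions survive in the final $0$ representation, which is what the products above encode.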
Explicitly, the operator obtained for all the terms in Table \[tab:couplings\] reads: $$\begin{aligned} \label{eq:13} && T=\left({\boldsymbol{\varepsilon}}'{}^*\cdot{\boldsymbol{\varepsilon}}\right)\left[a_{0,1}\mathbf{S}_i\cdot\mathbf{S}_j+a_{0,2}(\mathbf{S}_i\times\mathbf{S}_j)\cdot\mathbf{S}_k\right]\\ &&\quad+ \left({\boldsymbol{\varepsilon}}'{}^*\times{\boldsymbol{\varepsilon}}\right)\cdot\left[a_{1,1}\mathbf{S}_i+a_{1,2}\mathbf{S}_i\times\mathbf{S}_j+a_{1,3}\left(\mathbf{S}_i\cdot\mathbf{S}_j\right)\mathbf{S}_k\right.\nonumber\\ &&\left.\qquad\qquad\qquad+a_{1,4}\left(\mathbf{S}_i\times\mathbf{S}_j\right)\times\mathbf{S}_k+a_{1,5}\left\llbracket\mathbf{S}_i,\mathbf{S}_j\right\rrbracket\cdot\mathbf{S}_k\right]\nonumber\\ &&\quad+\llbracket{\boldsymbol{\varepsilon}}'{}^*,{\boldsymbol{\varepsilon}}\rrbracket \left(a_{2,1}\llbracket\mathbf{S}_i,\mathbf{S}_j\rrbracket+a_{2,2}\left\llbracket\mathbf{S}_i\times\mathbf{S}_j,\mathbf{S}_k\right\rrbracket\right.\nonumber\\ &&\left.\qquad\qquad\qquad+a_{2,3}\left\llbracket\mathbf{S}_i,\mathbf{S}_j\right\rrbracket\times\mathbf{S}_k\right).\nonumber\end{aligned}$$ Lower symmetry {#sec:lower-symmetry} ============== It has been pointed out [@haverkort2010b] that, even when spin-orbit coupling is negligible in the low-energy manifold, spin-orbit is always very strong in core levels, and may lead to anisotropies in the RIXS signal. The derivation provided in the main text is readily generalized to the case of discrete “spin” symmetries. With the help of the tables found in Refs. , one may build bases for the representations, generalizing those of rotationally-invariant systems. The formula which generalizes Eq.  
is: $$\begin{aligned} \label{eq:21} T_{\mathbf{R}}&=&\sum_\Gamma\sum_l\alpha_{\Gamma,l}\mathcal{E}^{\Gamma,l}\cdot\mathcal{S}^{\Gamma,l},\end{aligned}$$ where the sum proceeds over all irreducible representations $\Gamma$ of the point symmetry group at site $\mathbf{R}$ (that of the core hole), $l$ indexes the multiplicity of the representation $\Gamma$, and the dot product represents a symmetric contraction of all indices. Higher multipoles {#sec:higher-multipoles} ================= As mentioned in the main text, the Hamiltonian at first order in $\mathbf{A}$ is actually $$\label{eq:7} H'=\sum_\mathbf{r}\left[\hat{\psi}_\mathbf{r}^\dagger \,\frac{e\mathbf{p}}{m}\,\hat{\psi}_\mathbf{r}\cdot\mathbf{A}+\hat{\psi}_\mathbf{r}^\dagger\,\frac{e\hbar{\boldsymbol{\sigma}}}{2m}\,\hat{\psi}_\mathbf{r}\cdot\left({\boldsymbol{\nabla}}\times\mathbf{A}_\mathbf{r}\right)\right].$$ In the main text, only the first term was considered. In the spirit of the derivation provided in the main text, where experimental parameters are kept explicit, to treat the second term, one should consider the couplings to $\mathbf{k}\times{\boldsymbol{\varepsilon}}$ and $\mathbf{k}'\times{\boldsymbol{\varepsilon}}'$, much as we did to ${\boldsymbol{\varepsilon}}$ and ${\boldsymbol{\varepsilon}}'$ upon considering the term linear in $\mathbf{p}$. One should also note that “higher multipoles” will also arise from the expansion of the exponential, $e^{i\mathbf{k}\cdot\delta\mathbf{r}}=1+i\mathbf{k}\cdot\delta\mathbf{r}-\frac{1}{2}(\mathbf{k}\cdot\delta\mathbf{r})^2+\cdots$. Details of the cross-section derivations {#sec:deta-cross-sect} ======================================== Spin nematic in $S=1$ triangular magnets {#sec:spin-nematic} ---------------------------------------- Following Ref.  
(the calculation is performed there in the antiferroquadrupolar phase), we introduce the bosonic operators $\alpha_\mathbf{r}$ and $\beta_\mathbf{r}$, and the Fock space vacuum such that $$\label{eq:26} \begin{cases} |S^z_\mathbf{r}=0\rangle=|{\rm vac}\rangle\\ |S^z_\mathbf{r}=\pm1\rangle=\frac{1}{\sqrt{2}}(\alpha^\dagger_\mathbf{r}\pm i\beta^\dagger_\mathbf{r})|{\rm vac}\rangle \end{cases},$$ and $$\label{eq:32} \begin{cases} S^x_\mathbf{r}=\alpha^\dagger_\mathbf{r}+\alpha_\mathbf{r}\\ S^y_\mathbf{r}=\beta^\dagger_\mathbf{r}+\beta_\mathbf{r}\\ S^z_\mathbf{r}=-i(\alpha_\mathbf{r}^\dagger\beta_\mathbf{r}-\beta_\mathbf{r}^\dagger\alpha_\mathbf{r}) \end{cases},$$ with the constraint that there should be no more than one boson per site. This in particular implies, in real space: $$\label{eq:33} \begin{cases} \alpha_\mathbf{r}^2=\beta_\mathbf{r}^2=\alpha_\mathbf{r}\beta_\mathbf{r}=\beta_\mathbf{r}\alpha_\mathbf{r}=0\\ \alpha_\mathbf{r}\beta_\mathbf{r}^\dagger=\beta_\mathbf{r}\alpha_\mathbf{r}^\dagger=0\\ \alpha_\mathbf{r}\alpha_\mathbf{r}^\dagger=\beta_\mathbf{r}\beta_\mathbf{r}^\dagger=1-\alpha_\mathbf{r}^\dagger\alpha_\mathbf{r}-\beta_\mathbf{r}^\dagger\beta_\mathbf{r} \end{cases}.$$ Furthermore, $$\label{eq:34} (\mathbf{S}_i\cdot\mathbf{S}_j)^2=-\frac{1}{2}\mathbf{S}_i\cdot\mathbf{S}_j+\frac{1}{4}\sum_{\mu,\nu}\{S_i^\mu,S_i^\nu\}\{S_j^\mu,S_j^\nu\},$$ and $$\begin{aligned} \label{eq:35} &&\frac{1}{4}\sum_{\mu,\nu}\{S_i^\mu,S_i^\nu\}\{S_j^\mu,S_j^\nu\}\\ &&\qquad\qquad=\sum_\mu(S_i^\mu)^2(S_j^\mu)^2+\frac{1}{2}\sum_{\nu>\mu}\{S_i^\mu,S_i^\nu\}\{S_j^\mu,S_j^\nu\},\nonumber\end{aligned}$$ with $$\label{eq:36} \begin{cases} \{S_\mathbf{r}^x,S_\mathbf{r}^y\}=\alpha_\mathbf{r}^\dagger\beta_\mathbf{r}+\beta_\mathbf{r}^\dagger\alpha_\mathbf{r}\\ \{S_\mathbf{r}^x,S_\mathbf{r}^z\}=-i(\beta_\mathbf{r}-\beta_\mathbf{r}^\dagger)\\ \{S_\mathbf{r}^y,S_\mathbf{r}^z\}=-i(-\alpha_\mathbf{r}+\alpha_\mathbf{r}^\dagger)\\ (S_\mathbf{r}^x)^2=1-\beta_\mathbf{r}^\dagger\beta_\mathbf{r}\\ 
(S_\mathbf{r}^y)^2=1-\alpha_\mathbf{r}^\dagger\alpha_\mathbf{r}\\ (S_\mathbf{r}^z)^2=\alpha_\mathbf{r}^\dagger\alpha_\mathbf{r}+\beta_\mathbf{r}^\dagger\beta_\mathbf{r} \end{cases}$$ Using the rules Eq.  and then keeping only terms quadratic in the boson operators $\alpha_\mathbf{r}$, $\alpha^\dagger_\mathbf{r}$, $\beta_\mathbf{r}$ and $\beta^\dagger_\mathbf{r}$ (i.e. neglecting interactions between the bosons), we arrive at $$\begin{aligned} \label{eq:37} H&=&\frac{1}{2}\left(J_1-J_2\right)\sum_{\eta=\alpha,\beta}\sum_\mathbf{r}\sum_n\left[\eta^\dagger_\mathbf{r}\eta^\dagger_{\mathbf{r}+\mathbf{R}_n}+\eta_\mathbf{r}\eta_{\mathbf{r}+\mathbf{R}_n}\right]\nonumber\\ &&+\frac{J_1}{2}\sum_{\eta=\alpha,\beta}\sum_\mathbf{r}\sum_n\left[\eta^\dagger_\mathbf{r}\eta_{\mathbf{r}+\mathbf{R}_n}+\eta_\mathbf{r}\eta^\dagger_{\mathbf{r}+\mathbf{R}_n}\right]\nonumber\\ &&-\frac{J_2}{2}\sum_{\eta=\alpha,\beta}\sum_\mathbf{r}\sum_n \left[\eta^\dagger_\mathbf{r}\eta_\mathbf{r}+\eta^\dagger_{\mathbf{r}+\mathbf{R}_n}\eta_{\mathbf{r}+\mathbf{R}_n}\right]\nonumber\\ &=&\frac{1}{2}\left(J_1-J_2\right)\sum_{\eta=\alpha,\beta}\sum_\mathbf{k}\gamma_\mathbf{k}\left[\eta^\dagger_\mathbf{k}\eta^\dagger_{-\mathbf{k}}+\eta_\mathbf{k}\eta_{-\mathbf{k}}\right]\nonumber\\ &&+\sum_{\eta=\alpha,\beta}\sum_\mathbf{k}(J_1 \gamma_\mathbf{k}-6J_2)\eta^\dagger_\mathbf{k}\eta_{\mathbf{k}},\end{aligned}$$ where $\gamma_\mathbf{k}=2(\cos k_x+\cos(\frac{1}{2}(k_x+\sqrt{3}k_y))+\cos(\frac{1}{2}(k_x-\sqrt{3}k_y)))$. 
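The operator identity for $(\mathbf{S}_i\cdot\mathbf{S}_j)^2$ quoted above holds for any spin and can be verified directly with explicit spin-1 matrices on the two-site Hilbert space; a short numpy check:

```python
import numpy as np

# Spin-1 matrices (hbar = 1) and two-site operators S_i = S (x) 1, S_j = 1 (x) S.
Sp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
spin = [(Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / 2j,
        np.diag([1.0, 0.0, -1.0]).astype(complex)]
I3 = np.eye(3)
Si = [np.kron(S, I3) for S in spin]
Sj = [np.kron(I3, S) for S in spin]

dot = sum(a @ b for a, b in zip(Si, Sj))
anti = lambda a, b: a @ b + b @ a

# (S_i.S_j)^2 = -1/2 S_i.S_j + 1/4 sum_{mu,nu} {S_i^mu,S_i^nu}{S_j^mu,S_j^nu}
rhs = -0.5 * dot + 0.25 * sum(anti(Si[m], Si[n]) @ anti(Sj[m], Sj[n])
                              for m in range(3) for n in range(3))
print(np.allclose(dot @ dot, rhs))
```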
With the Bogoliubov transformation $\eta_\mathbf{k}^\dagger=\cosh\xi_\mathbf{k}\rho^\dagger_\mathbf{k}+\sinh\xi_\mathbf{k}\rho_{-\mathbf{k}}$, we obtain $$\label{eq:38} H=\sum_{\rho=\rho^\alpha,\rho^\beta}\sum_\mathbf{k}\omega_\mathbf{k}\rho^\dagger_\mathbf{k}\rho_\mathbf{k},$$ where we have defined: $$\label{eq:42} \omega_\mathbf{k}=2[A_\mathbf{k}\cosh2\xi_\mathbf{k}+B_\mathbf{k} \sinh2\xi_\mathbf{k}]=2\sqrt{A_\mathbf{k}^2-B_\mathbf{k}^2},$$ with $$\label{eq:39} \begin{cases} A_\mathbf{k}=\frac{1}{2}(J_1\gamma_\mathbf{k}-6J_2)\\ B_\mathbf{k}=\frac{\gamma_\mathbf{k}}{2}(J_2-J_1) \end{cases}$$ if $$\label{eq:41} \begin{cases} A_\mathbf{k}\sinh2\xi_\mathbf{k}+B_\mathbf{k}\cosh2\xi_\mathbf{k}=0\\ (A_\mathbf{k}\cosh2\xi_\mathbf{k}+B_\mathbf{k}\sinh2\xi_\mathbf{k})^2=A_\mathbf{k}^2-B_\mathbf{k}^2 \end{cases}.$$ This yields: $$\label{eq:40} \begin{cases} \sinh^22\xi_\mathbf{k}=\frac{B_\mathbf{k}^2}{A_\mathbf{k}^2-B_\mathbf{k}^2}\\ \cosh^22\xi_\mathbf{k}=\frac{A_\mathbf{k}^2}{A_\mathbf{k}^2-B_\mathbf{k}^2} \end{cases}.$$ Since $\forall x$ $\cosh x>0$, $$\label{eq:43} \cosh2\xi_\mathbf{k}=\sqrt{\frac{A_\mathbf{k}^2}{A_\mathbf{k}^2-B_\mathbf{k}^2}},\quad \sinh2\xi_\mathbf{k}=-\frac{B_\mathbf{k}}{A_\mathbf{k}}\sqrt{\frac{A_\mathbf{k}^2}{A_\mathbf{k}^2-B_\mathbf{k}^2}}.$$ The transition operator Eq.  
takes the form, in Fourier space: $$\begin{aligned} \label{eq:44} T_\mathbf{k}&=&\kappa^{(0)}\delta(\mathbf{k})-\sum_\mu\sum_{\rho=\rho^\mu}\left[\rho^\dagger_{\mathbf{k}}(\mathcal{A}_\mu\cosh\xi_\mathbf{k}+\mathcal{B}_\mu\sinh\xi_\mathbf{k})\right.\nonumber\\ &&\qquad\qquad+\rho_{-\mathbf{k}}(\mathcal{A}_\mu\sinh\xi_\mathbf{k}+\mathcal{B}_\mu\cosh\xi_\mathbf{k})\Big],\end{aligned}$$ where $$\begin{aligned} \label{eq:45} \kappa^{(0)}&=&\alpha_0({\boldsymbol{\varepsilon}}\cdot{\boldsymbol{\varepsilon}}'{}^*)\\ \kappa_\mu^{(1)}&=&\alpha_1\epsilon_{\mu\sigma\tau}\varepsilon^\sigma\varepsilon'{}^*{}^\tau\\ \kappa_{\mu\nu}^{(2)}&=&\alpha_2\left[-\frac{2}{3}\delta_{\mu\nu}({\boldsymbol{\varepsilon}\cdot{\boldsymbol{\varepsilon}}'{}^*})+\varepsilon^\mu\varepsilon'{}^*{}^\nu+\varepsilon^\nu\varepsilon'{}^*{}^\mu\right]\\ \mathcal{A}_x&=&\kappa_{xy}^{(2)}+i\kappa_z^{(1)}\\ \mathcal{A}_z&=&\kappa_{yz}^{(2)}-i\kappa_x^{(1)}\\ \mathcal{B}_x&=&\kappa_{xy}^{(2)}-i\kappa_z^{(1)}\\ \mathcal{B}_z&=&\kappa_{yz}^{(2)}+i\kappa_x^{(1)}.\end{aligned}$$ Plugging this into the expression for the cross section: $$\begin{aligned} \label{eq:46} \frac{\delta^2\sigma}{\delta\Omega\delta\omega}&\propto&\sum_{\mu=x,z}\left|\mathcal{A}_\mu\cosh\xi_\mathbf{k}+\mathcal{B}_\mu\sinh\xi_\mathbf{k}\right|^2\delta(\omega-\omega_\mathbf{k}),\nonumber\end{aligned}$$ we arrive at the result given in the main text. Vector chirality and bond nematic in $S=1/2$ $J_1-J_2$ chains {#sec:vect-chir-bond} ------------------------------------------------------------- Equal-time and real-space correlation functions are given in Refs. [@mcculloch2008; @hikihara2008]. 
Here, we find the following contributions to the cross section: - in the nematic phase: $$\begin{aligned} \label{eq:47} && \mathcal{I}^{\langle (S^+S^+)_{-\omega,-k}(S^+S^+)_{\omega,k}\rangle}\\ &&\quad\propto\mathcal{A}\sum_{\epsilon=\pm1}\frac{\Theta(\omega^2-v_+^2(k-\epsilon\pi)^2)}{\sqrt{\omega^2-v_+^2(k-\epsilon\pi)^2}^{2-1/K_+}}\nonumber\\ &&\qquad+\mathcal{B}\sum_{\epsilon,\epsilon'}\Theta\left[\omega^2-v_+^2\left(k-\pi(\epsilon\frac{1}{2}-\epsilon'M)\right)^2\right]\nonumber\\ &&\qquad\qquad\times\sqrt{\omega^2-v_+^2\left(k-\pi(\epsilon\frac{1}{2}-\epsilon'M)\right)^2}^{K_++1/K_+-2}\nonumber\end{aligned}$$ $$\begin{aligned} \label{eq:48} \mathcal{I}^{\langle S^+_{-\omega,-k}S^-_{\omega,k}\rangle}&=&{\rm gapped}\\ \label{eq:49} \mathcal{I}^{\langle\chi^{z}_{-\omega,-k}\chi^z _{\omega,k}\rangle}&\propto&\omega\left(\delta(\omega+v_+k)+\delta(\omega-v_+k)\right)\end{aligned}$$ $$\begin{aligned} \label{eq:50} \mathcal{I}^{\langle S^z_{-\omega,-k}S^z_{\omega,k}\rangle} &\propto&\omega\left(\delta(\omega+v_+k)+\delta(\omega-v_+k)\right)\\ &&+\sum_{\epsilon=\pm1}\frac{\Theta(\omega^2-v_+^2(k-\epsilon\pi(\frac{1}{2}-M))^2)}{\sqrt{\omega^2-v_+^2(k-\epsilon\pi(\frac{1}{2}-M))^2}^{2-K_+}}\nonumber\end{aligned}$$ - in the vector chiral phase: $$\begin{aligned} \label{eq:51} &&\mathcal{I}^{\langle\chi^{z}_{-\omega,-k}\chi^z _{\omega,k}\rangle}\\ &&\quad\propto \mathcal{A}\omega^3\left(\delta(\omega+v_+k)+\delta(\omega-v_+k)\right)\nonumber\\ &&\qquad+\mathcal{B}\sum_{\epsilon,\epsilon'=\pm1}\left[\Theta(\omega^2-v_+^2(k-\epsilon2\pi M-\epsilon'\pi)^2)\right]\nonumber\\ &&\qquad\qquad\qquad\times\sqrt{\omega^2-v_+^2(k-\epsilon2\pi M-\epsilon'\pi)^2}^{4K_+-2}\nonumber\end{aligned}$$ $$\begin{aligned} \label{eq:52} \mathcal{I}^{\langle S^z_{-\omega,-k}S^z_{\omega,k}\rangle}&\propto& \omega\left(\delta(\omega+v_+k)+\delta(\omega-v_+k)\right)\\ \mathcal{I}^{\langle S^x_{-\omega,-k}S^x_{\omega,k}\rangle} &\propto&\sum_{\epsilon=\pm1}\frac{\Theta(\omega^2-v_+^2(k-\epsilon 
Q)^2)}{\sqrt{\omega^2-v_+^2(k-\epsilon Q)^2}^{2-1/(4K_+)}}\nonumber\end{aligned}$$ $$\begin{aligned} \label{eq:53} &&\mathcal{I}^{\langle (S^+S^+)_{-\omega,-k}(S^+S^+)_{\omega,k}\rangle}\\ &&\qquad\qquad\propto\sum_{\epsilon=\pm1}\frac{\Theta(\omega^2-v_+^2(k-\epsilon2Q)^2)}{\sqrt{\omega^2-v_+^2(k-\epsilon2Q)^2}^{2-1/K_+}}.\nonumber \end{aligned}$$ In all the above, $\mathcal{A}$ and $\mathcal{B}$ are constants, and $Q=\frac{\pi}{2}-\frac{1}{2}\sqrt{\frac{\pi}{2}}\langle\partial_x\theta_+\rangle$. Note, in particular, that, since $K$ increases monotonically between $1/2$ and $1$ ($K(M=0)=1/2$ and $K(M=1/2)=1$), and $K_+=K(1+K\frac{J_1}{\pi v})$, $K_+\geq 1/2$. Moreover, the bosonization approach is valid only “not too close” to the saturation limit $M=1/2$, and in the weak coupling regime $v\sim J_2$. So, in particular: $$\label{eq:54} \begin{cases} 2-\frac{1}{K_+}\geq3/2\\ K_++\frac{1}{K_+}-2\geq0\\ 2-K_+\geq0&\mbox{if }K\leq\pi v\frac{-1+\sqrt{1+8J_1/(\pi v)}}{2J_1}\\ 4K_+-2\geq0\\ 2-\frac{1}{4K_+}\geq\frac{15}{8} \end{cases}.$$ Note that $K\leq\pi v\frac{-1+\sqrt{1+8J_1/(\pi v)}}{2J_1}$ is always true for $\frac{J_1}{\pi v}\leq1$. [^1]: The “diamagnetic” term $\mathbf{A}^2$, of second order in $\mathbf{A}$, is involved in the [*first*]{}-order contribution to the scattering amplitude, which is negligible close to resonance. [^2]: For $\omega^{\rm x-ray}\sim10$ keV, $|\mathbf{k}|\sim1$ Å$^{-1}$ and $|\mathbf{k}\cdot\delta\mathbf{r}|\approx0$ can seem hardly valid. In practice, however, it has been shown to usually be a good approximation. Regardless, we discuss how to go beyond this approximation in Appendix \[sec:higher-multipoles\]. [^3]: Indeed, $\tau\sim10^{-15}$ s corresponds to an energy of order $4$ eV, which is typically that of a metal’s bandwidth $W$. Estimating the electron’s velocity as $v=aW$, with $a$ the lattice spacing, we find that the distance travelled during time $\tau$ is of order a lattice spacing. 
[^4]: More rigorously, one should derive the transition operators in terms of spin-orbit coupled effective spins, and [ *then*]{} possibly neglect those which are not rotationally symmetric. [^5]: Note that Ref.  additionally provides a relation between some of the coefficients $\alpha_\beta$ and absorption spectroscopy coefficients. [^6]: One may also write the $R$ transformation as $S^\mu\rightarrow U^\dagger_R S^\mu U_R$, where $U_R$ acts in spin space ($U_R$ is the operator that $R$ maps into through the appropriate representation of the symmetry group).
--- abstract: 'Space charge effects can be very important for the dynamics of intense particle beams, as they repeatedly pass through nonlinear focusing elements, aiming to maximize the beam’s luminosity properties in the storage rings of a high energy accelerator. In the case of hadron beams, whose charge distribution can be considered as “frozen" within a cylindrical core of small radius compared to the beam’s dynamical aperture, analytical formulas have been recently derived [@BenTurc] for the contribution of space charges within first order Hamiltonian perturbation theory. These formulas involve distribution functions which, in general, do not lead to expressions that can be evaluated in closed form. In this paper, we apply this theory to an example of a charge distribution, whose effect on the dynamics can be derived explicitly and in closed form, both in the case of 2–dimensional as well as 4–dimensional mapping models of hadron beams. We find that, even for very small values of the “perveance" (strength of the space charge effect) the long term stability of the dynamics changes considerably. In the flat beam case, the outer invariant “tori" surrounding the origin disappear, decreasing the size of the beam’s dynamical aperture, while beyond a certain threshold the beam is almost entirely lost. Analogous results in mapping models of beams with 2-dimensional cross section demonstrate that in that case also, even for weak tune depressions, orbital diffusion is enhanced and many particles whose motion was bounded now escape to infinity, indicating that space charges can impose significant limitations on the beam’s luminosity.' 
author: - Tassos Bountis - Charalampos Skokos title: Space Charges Can Significantly Affect the Dynamics of Accelerator Maps --- Introduction {#INTRO} ============ One of the fundamental problems concerning the dynamics of particle beams in the storage rings of high energy accelerators is the determination of the beam’s [*dynamical aperture*]{}, i.e. the maximal domain containing the particles closest to their ideal circular path and for the longest possible time. For example, “flat" hadron beams (experiencing largely horizontal betatron oscillations) can be described by 2–dimensional (2D) area–preserving maps, where the existence of invariant curves around the ideal stable path at the origin, guarantees the stability of the beam’s dynamics for infinitely long times [@BTTS94; @GT96]. The reason for this is that the chaotic motion between these invariant curves is always bounded and the particles never escape to infinity. Other important phenomena that arise in this context are the presence of a major resonance in the form of a chain of islands through which the beam may be collected, and the existence of an outer invariant curve surrounding these islands, which serves as a boundary of the motion and thus allows an estimate of the beam’s dynamical aperture. On the other hand, hadron beams with a 2-dimensional cross section require the use of $4$–dimensional (4D) symplectic mappings for the study of their dynamics [@BT91; @BK94; @VBK96; @VIB97; @BS05]. In fact, if longitudinal (or synchrotron) oscillations also need to be included the mappings become 6–dimensional. In such cases, the problems of particle loss are severe, as chaotic regions around different resonances are connected, providing a network of paths through which particles can move away from the origin, and eventually escape from the beam after sufficiently long times. 
In this Letter, we add to these issues the presence of space charges within a core radius $r_c$, which is small compared to the beam’s dynamical aperture. In other words, we will assume that our proton (or antiproton) beam is intense enough so that the effect of a charge distribution concentrated within this core radius cannot be neglected. Furthermore, we will consider this distribution as cylindrically symmetric and “frozen" (i.e. time independent, so that it may be self consistent with the linear lattice) and study the dynamics of a hadron beam as it passes repeatedly through nonlinear magnetic focusing elements of the FODO cell type. This system has been studied extensively in the absence of space charge effects in [@BTTS94; @GT96; @BT91; @BK94; @VBK96; @VIB97; @BS05] and the question we raise now is whether its dynamics can be seriously affected if space charges are also taken into consideration. Space charge presents a fundamental limitation to high intensity circular accelerators. Its effects are especially important in the latest designs of high-intensity proton rings, which require beam losses much smaller than presently achieved in existing facilities. It is therefore necessary to estimate the major space charge effects which could lead to emittance growth and associated beam loss [@Fedotov]. The interplay between nonlinear effects, typical of single-particle dynamics, and space charge, typical of multi-particle dynamics induced by the Coulomb interaction, represents a difficult challenge. To better understand these phenomena, an intense experimental campaign was launched at the CERN Proton Synchrotron [@Franchetti_2003]. It is very important, therefore, to develop analytical techniques which could be utilized to study and localize the associated web of resonances (see e.g. [@PAC2001]) and to obtain an analytical estimate of the dynamic aperture, as suggested e.g. in [@Benedetti]. 
In a recent paper, Benedetti and Turchetti [@BenTurc] used first order canonical perturbation theory to obtain analytical expressions for the jump in the position and momenta due to the multipolar kicks in such maps, showing that the space charges effectively modulate the tune at every passage of the particle through a nonlinear element of the lattice. In particular, they derived the new position and momentum coordinates after the $n$th passage through a FODO cell in the thin lens approximation, as the iterates of the 2D map $$\left( \begin{array}{c} X_{n+1} \\P_{n+1} \end{array} \right) = \left( \begin{array}{cc} \cos \Omega(J) & -\sin \Omega(J) \\ \sin \Omega(J) & \ \cos \Omega(J) \\ \end{array} \right) \times \left( \begin{array}{c} X_{n} \\ P_{n}+X_{n}^{k-1} \\ \end{array} \right), \ \ n=0,1,2,..., \label{eq:map}$$ where $$J = \frac{P_n^2+X_n^2}{2}, \label{eq:action}$$ for $k=3$, i.e. in the case of sextupole nonlinearities and $$\Omega(J) = \omega + \frac{\omega_0^2-\omega^2}{2\omega} \left(1-\frac{R_c^2}{J} \frac{1}{2\pi}\int_0^{2\pi}g_1\left(\frac{2J\cos^2\theta}{R_c^2}\right)d\theta \right). 
\label{eq:Omega}$$ The variables $P$, $X$ and the parameters entering in the above expressions are related to the corresponding ones of the original Hamiltonian $$H=\frac{p^2}{2}+\omega_0^2\frac{x^2}{2}-\frac{x^3}{3}\sum_{l=1,2,..}\delta(s-l)-\frac{\xi}{2} g_2\left( \frac{x^2}{r_c^2}\right), \label{eq:Ham1}$$ by the formulas $$P = p/ \omega^2 \ , \ X=x/\omega \ , \ \ R_c=r_c \omega^{-3/2} \ , \ \ \omega^2 = \omega_0^2-\frac{\xi}{r_c^2}, \label{eq:param}$$ where $\omega$ is the depressed phase advance at the center of the charge distribution, $p=dx/ds$, $s$ is the coordinate along the ideal circular orbit, $g_2(t)$ is given by $$g_2(t)=\int_0^tu^{-1}g_1(u)du \ , \ g_1(t)=\int_0^tg(u)du \ , \ g\left( \frac{r^2}{r_c^2} \right)=\pi r_c^2\rho(r) \label{eq:dens}$$ where $\rho(r)$ satisfies $\int_0^{\infty}\pi \rho(r) dr^2=1$, $\pi r_c^2\rho(0)=1$ and $Q\rho(r)$ represents the radial charge density. Note that if $m$, $q$, and $v_0$ denote the mass, charge and velocity of our non-relativistic particles, the “perveance" parameter $\xi=2qQ/mv_0^2$ determines the tune depression in (\[eq:param\]), which must be small for the above analysis to be valid [@BenTurc]. The stage is now set for the investigation of space charge effects on the dynamics. However, the phase advances $\Omega(J)$ needed in (\[eq:map\]) at every iteration depend on integrals of the distribution function $g(u)$ that may not be available analytically. To overcome this difficulty, we choose in section II a particular form of $g(u)$ for which these integrals can be explicitly carried out not only for 2D maps of the “flat" beam case, but also for 4D maps describing vertical as well as horizontal deflections of the beam’s particles. Thus, in section III we perform numerical experiments to examine the influence of space charges on the dynamics and find indeed that even for small perveance values the long term stability of the beam is significantly affected. 
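The iteration scheme of (\[eq:map\]) with the phase advance (\[eq:Omega\]) is straightforward to implement; in the sketch below the angular integral is computed by direct quadrature for a user-supplied $g_1$, and all numerical values (tunes, core radius, initial condition) are illustrative choices, not parameters from the text:

```python
import numpy as np

def omega_of_J(J, g1, omega, omega0, Rc, ntheta=512):
    """Phase advance Omega(J); the theta-average of g1 is taken on a uniform grid."""
    th = np.linspace(0.0, 2 * np.pi, ntheta, endpoint=False)
    avg = np.mean(g1(2.0 * J * np.cos(th) ** 2 / Rc ** 2))
    return omega + (omega0**2 - omega**2) / (2 * omega) * (1.0 - (Rc**2 / J) * avg)

def map_step(X, P, g1, omega, omega0, Rc, k=3):
    """One passage through the FODO cell: thin-lens kick (k = 3: sextupole), then rotation."""
    J = 0.5 * (P**2 + X**2)
    Om = omega_of_J(J, g1, omega, omega0, Rc)
    c, s = np.cos(Om), np.sin(Om)
    Pk = P + X ** (k - 1)
    return c * X - s * Pk, s * X + c * Pk

# Illustrative parameters: weak tune depression, core radius small
# compared to the orbit amplitude of interest.
omega0, omega, Rc = 2 * np.pi * 0.21, 2 * np.pi * 0.208, 0.05
g1 = lambda t: t / (1.0 + t)   # one admissible profile (cf. Section II)

X, P = 0.05, 0.0
for _ in range(50):
    X, P = map_step(X, P, g1, omega, omega0, Rc)
print(X, P)
```

A production dynamic-aperture scan would simply vectorize this loop over a grid of initial conditions.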
In particular, as $\xi$ grows (or the tune depression $\omega/\omega_0$ decreases from 1), perturbations of 2D as well as 4D maps show that the outer invariant “surfaces" surrounding the ideal circular path at the origin disappear and the beam’s dynamical aperture is seriously limited. Only the major unperturbed stable resonances survive, with their “boundaries" clearly diminished by the presence of new resonances due to space charge effects. In our 2D mapping model, a threshold value of $\xi$ (or $\omega/\omega_0$) was found, beyond which the beam is practically destroyed. The paper ends by describing our concluding remarks and work in progress in section IV. Space charge effects on beam stability became a relevant topic in the years when the construction of medium-to-low-energy, high-current accelerators such as the SNS, and the design of the FAIR rings at GSI, were started [@Jeon; @PAC2001; @Franchetti_2005] (see also many articles in the SNS Accelerator Physics Proceedings of the last few years). The role of collective effects and resonances has attracted considerable attention, since they can cause significant beam quality deterioration and losses [@Hofmann]. Another relevant issue is the coupling with the longitudinal motion, which modulates the transverse tune and induces losses by resonance crossing as shown by recent experiments [@Franchetti_2003]. High intensity rings, where the bunches can circulate over one million turns, require a careful analysis of the long term stability of the beam. Since the commonly used codes require large CPU times and exhibit an emittance growth due to numerical noise, they are not suited for long term dynamic aperture studies and the use of faster methods is necessary [@Franchetti_2005]. 
The method proposed in [@BenTurc] allows us to introduce space charge effects in one single evaluation of the map, when a thin sextupole or octupole is present, just as one does in the absence of space charge, and is thus especially well suited for dynamical aperture calculations. Exact Results for a Specific Charge Distribution {#Exact} ================================================ The One - dimensional Beam {#One_Dim} -------------------------- Let us choose for our space charge distribution function in the 1–dimensional case the form $$g\left( \frac{X^2}{R_c^2}\right)= \frac{1}{(X^2/R_c^2+1)^2}\,\,. \label{eq:distr1}$$ The generalization to 2 dimensions is evident by replacing $X^2$ by $X^2+Y^2$ in (\[eq:distr1\]). Observe that this function satisfies the requirements that $g(0)=1$, $g_1(t)\propto t$ as $t\rightarrow 0$ and $g_1(t)\rightarrow 1$ as $t\rightarrow \infty$, (using (\[eq:dens\])), as expected from the theory [@BenTurc]. Evaluating now by elementary manipulations the integral in (\[eq:Omega\]), using (\[eq:distr1\]) and (\[eq:dens\]), we find that it is given by the closed form expression $$\int_0^{2\pi}g_1\left(\frac{2J\cos^2\theta}{R_c^2}\right) d\theta=2\pi-\frac{2\pi}{(\frac{2J}{R_c^2}+1)^{1/2}}\,\,. \label{eq:int}$$ Thus, the phase advance at every iteration becomes $$\Omega(J) = \omega + \frac{\omega_0^2-\omega^2}{2\omega} \left(1-\frac{R_c^2}{J}+ \frac{R_c^2}{J(\frac{2J}{R_c^2}+1)^{1/2}} \right), \label{eq:Omega1}$$ where $J$ is given by (\[eq:action\]). Note that, in the limit $J\rightarrow 0$, eq. (\[eq:Omega1\]) implies that $\Omega\rightarrow \omega$ as expected. In fact, expanding the square root in that limit we find $$\Omega(J) = \omega + \frac{\omega_0^2-\omega^2}{2\omega} \left(\frac{3}{2}\frac{J}{R_c^2}- \frac{5}{2}\frac{J^2}{R_c^4}+... \right), \label{eq:Omega1b}$$ from which we can estimate the deviation of $\Omega$ from the depressed tune $\omega$ near the origin. 
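The closed form (\[eq:int\]) is easy to cross-check against direct quadrature, using $g_1(t)=t/(1+t)$, which follows from (\[eq:distr1\]) through (\[eq:dens\]); a small numerical sketch (the values of $J$ and $R_c$ are arbitrary):

```python
import numpy as np

# g1(t) = t/(1+t) follows by integrating g(u) = 1/(1+u)^2 once.
g1 = lambda t: t / (1.0 + t)

J, Rc = 0.7, 0.3
# Smooth periodic integrand: a uniform-grid average converges very fast.
th = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
numeric = 2.0 * np.pi * np.mean(g1(2.0 * J * np.cos(th) ** 2 / Rc ** 2))
closed = 2.0 * np.pi - 2.0 * np.pi / np.sqrt(2.0 * J / Rc ** 2 + 1.0)
print(numeric, closed)
```

The two numbers agree to high accuracy, confirming the elementary integration behind (\[eq:int\]).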
In section III below we pick an $\omega_0$ such that for $\xi=0$ we have a major resonance and a relatively large dynamical aperture in the $X_n, P_n$ plane, select $r_c$ small compared with this aperture and vary $\xi$ to study the space charge effect on the dynamics. The Two - dimensional Beam {#Two_Dim} -------------------------- Let us now observe that in two space dimensions the original Hamiltonian of the system, (\[eq:Ham1\]), becomes $$H=\frac{p_1^2}{2}+\omega_{01}^2\frac{x^2}{2}+\frac{p_2^2}{2}+ \omega_{02}^2\frac{y^2}{2}+\left(-\frac{x^3}{3}+xy^2\right) \sum_{l=1,2,..}\delta(s-l)-\frac{\xi}{2} g_2\left(\frac{x^2+y^2}{r_c^2}\right), \label{eq:Ham2}$$ where sextupole nonlinearities involve, of course, both $x$ and $y$ variables. Since there are now two tune depressions $$\omega_1=\left(\omega_{01}^2-\frac{\xi}{r_c^2}\right)^{1/2} \ \ , \ \ \omega_2=\left(\omega_{02}^2-\frac{\xi}{r_c^2}\right)^{1/2}, \label{eq:omegas}$$ after transforming to new variables $X=x \omega_1^{1/2}$, $P_1=p_1 \omega_1^{-1/2}$ and $Y=y \omega_2^{1/2}$, $P_2=p_2 \omega_2^{-1/2}$ defined by $$X=(2J_1)^{1/2} \cos \theta_1 \ , \ P_1=-(2J_1)^{1/2} \sin \theta_1 \ , \ Y=(2J_2)^{1/2}\cos \theta_2 \ , \ P_2=-(2J_2)^{1/2}\sin \theta_2 , \label{eq:newvar}$$ differentiating the Hamiltonian with respect to $J_1$ and $J_2$ and integrating over $\theta_1$ and $\theta_2$, we find the two phase advances $$\Omega_1 = \omega_1 + \frac{\omega_{01}^2-\omega_1^2}{2\omega_1} \left(1- \frac{1}{(2\pi)^2} \int_0^{2\pi}\int_0^{2\pi}\frac{2\cos^2\theta_1}{A+1}d\theta_1d\theta_2 \right) \label{eq:Omega2}$$ and $\Omega_2$, with 1 replaced by 2 in (\[eq:Omega2\]), while $A$ is defined by $$A = \frac{2J_1\cos^2\theta_1}{r_1^2}+\frac{2J_2\cos^2\theta_2}{r_2^2} \ , \ \label{eq:A}$$ where $r_1=r_c \omega_1^{1/2}$, $r_2=r_c \omega_2^{1/2}$. 
Observe that we have used for the $g_1$ function under the integral sign (see (\[eq:Omega\])) the expression $g_1(A)=A/(1+A)$, derived from our simple choice of the distribution function (\[eq:distr1\]) using (\[eq:dens\]). The above $\Omega_1$ and $\Omega_2$ are to be used in the iterations of the 4D mapping: $$\begin{aligned} \left( \begin{array}{c} X(n+1) \\P_1(n+1) \\Y(n+1) \\P_2(n+1) \end{array} \right) & = & \left( \begin{array}{cccc} \cos \Omega_1 & -\sin \Omega_1 & 0 & 0 \\ \sin \Omega_1 & \cos \Omega_1 & 0 & 0 \\ 0 & 0 & \cos \Omega_2 & -\sin \Omega_2 \\ 0 & 0 & \sin \Omega_2 & \cos \Omega_2 \end{array} \right) \nonumber \\ & \times & \left( \begin{array}{c} X(n) \\ P_1(n)+X^2(n)-Y^2(n) \\ Y(n) \\ P_2(n)-2 X(n) Y(n) \end{array} \right), \label{eq:map2}\end{aligned}$$ whose dynamics has already been studied extensively in [@BT91; @BK94; @VBK96; @VIB97; @BS05] in the absence of space charge effects, i.e. for $\omega_1=\omega_{01}$ and $\omega_2=\omega_{02}$. In these papers, it was observed that for the tune values $q_x=0.61903$ and $q_y=0.4152$, in $$\omega_{01} = 2 \pi q_x , \,\,\omega_{02} = 2 \pi q_y, \label{eq:tunes}$$ a large dynamical aperture is achieved, with interesting chains of resonant “tori" surrounding the origin. In section III we will study what happens to these structures when $\xi > 0$ (i.e. $\omega_1 <\omega_{01}$, $\omega_2 <\omega_{02}$) and space charge effects are taken into account. Before doing this, however, it is necessary to describe how the integrals in (\[eq:Omega2\]) are to be evaluated: Let us first perform the integration over $\theta_2$, writing $$K_1=\int_0^{2\pi}\int_0^{2\pi}\frac{2\cos^2\theta_1}{A+1}d\theta_1d\theta_2 = 2\int_0^{2\pi}d\theta_1\cos^2\theta_1I(\theta_1), \label{eq:IntK}$$ where $$I(\theta_1)=\int_0^{2\pi}\frac{d\theta_2}{2A_1\cos^2\theta_2+B_1} \ , \ A_1=\frac{J_2}{r_2^2} \ , \ B_1=1+\frac{2J_1\cos^2\theta_1}{r_1^2}\,\,.
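One iteration of the 4D map (\[eq:map2\]) amounts to a sextupole kick followed by a rotation in each plane. A minimal sketch in plain Python (illustrative names; the phase advances $\Omega_1,\Omega_2$ are passed in as precomputed numbers, e.g. the unperturbed values (\[eq:tunes\]) when $\xi=0$):

```python
import math

def map4d(X, P1, Y, P2, Om1, Om2):
    # One iteration of the 4D sextupole map (eq:map2): apply the kick,
    # then rotate each (X, P) plane by its phase advance.
    P1k = P1 + X * X - Y * Y
    P2k = P2 - 2.0 * X * Y
    c1, s1 = math.cos(Om1), math.sin(Om1)
    c2, s2 = math.cos(Om2), math.sin(Om2)
    return (c1 * X - s1 * P1k, s1 * X + c1 * P1k,
            c2 * Y - s2 * P2k, s2 * Y + c2 * P2k)
```

Since the kick is a shear and the rotations are area preserving in each plane, the map is symplectic; with space charge present, $\Omega_1$ and $\Omega_2$ are recomputed from (\[eq:Omega2\]) at every turn using the current $J_1$, $J_2$.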
\label{eq:IntI1}$$ The integral (\[eq:IntI1\]) can be evaluated as before with elementary functions to yield $$I(\theta_1)=\frac{2\pi}{[B_1(B_1+2A_1)]^{1/2}} = 2\pi\left[ \left(1+\frac{2J_1\cos^2\theta_1}{r_1^2}\right) \left(1+\frac{2J_1\cos^2\theta_1}{r_1^2}+ \frac{2J_2}{r_2^2}\right)\right]^{-1/2}\,\, . \label{eq:IntI2}$$ Inserting now expression (\[eq:IntI2\]) into the integral (\[eq:IntK\]) and changing the integration variable to $\phi=2\theta_1$, we easily arrive, after some simple manipulations, at the expression $$K_1=\frac{2\pi r_1^2}{J_1}\int_0^{2\pi}d\phi\frac{\cos\phi+1} {\left[(\cos\phi+C_1)(\cos\phi+D_1)\right]^{1/2}}\,\, , \label{eq:IntK2}$$ where $$C_1=1+\frac{r_1^2}{J_1} \ , \ D_1=1+\frac{r_1^2}{J_1}\left(1+\frac{2J_2}{r_2^2}\right). \label{eq:CD}$$ We finally make the substitution $u=\tan(\phi/2)$ and rewrite the above integral in the form $$K_1=\frac{16\pi r_1^2}{J_1}\int_0^{\infty} \frac{du}{(u^2+1) \left\{ \left[(C_1-1)u^2+1+C_1\right] \left[(D_1-1)u^2+1+D_1\right] \right\}^{1/2}}\,\, . \label{eq:IntK1}$$ This is clearly not an elementary integral. Notice, however, that all terms in the denominator of (\[eq:IntK1\]) are positive and as $u\rightarrow\infty$ the integrand vanishes as $u^{-4}$. It is, therefore, expected to converge very rapidly and may be computed, at every iteration of the map, using standard routines. For practical purposes, however, in section III below, we prefer to compute instead its equivalent form (\[eq:IntK2\]). Of course, as explained above, a similar integral, $K_2$, also needs to be computed (with $1\rightarrow2$ in (\[eq:IntK\]), (\[eq:IntK2\]) and (\[eq:CD\])), whence $\Omega_1$ and $\Omega_2$ are found and the next iteration of the 4D map (\[eq:map2\]) can be evaluated.
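The equivalence of the double integral (\[eq:IntK\]) and its reduced form (\[eq:IntK2\]) can be verified numerically; both integrands are smooth and periodic, so simple midpoint sums converge rapidly. A sketch in plain Python (illustrative names):

```python
import math

def K1_double(J1, J2, r1, r2, n=400):
    # direct midpoint evaluation of the double integral (eq:IntK)
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t1 = (i + 0.5) * h
        a = 2.0 * J1 * math.cos(t1) ** 2 / r1 ** 2
        for j in range(n):
            t2 = (j + 0.5) * h
            A = a + 2.0 * J2 * math.cos(t2) ** 2 / r2 ** 2
            total += 2.0 * math.cos(t1) ** 2 / (A + 1.0)
    return total * h * h

def K1_single(J1, J2, r1, r2, n=4000):
    # reduced single integral (eq:IntK2), after the theta_2 integration
    C1 = 1.0 + r1 ** 2 / J1
    D1 = 1.0 + (r1 ** 2 / J1) * (1.0 + 2.0 * J2 / r2 ** 2)
    h = 2.0 * math.pi / n
    s = sum((math.cos((k + 0.5) * h) + 1.0)
            / math.sqrt((math.cos((k + 0.5) * h) + C1)
                        * (math.cos((k + 0.5) * h) + D1))
            for k in range(n))
    return (2.0 * math.pi * r1 ** 2 / J1) * s * h
```

Note that $C_1>1$, so the square root in (\[eq:IntK2\]) never vanishes and no special quadrature is needed.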
Numerical Results ================= Let us now turn to some practical applications of the above theory to specific problems concerning the stability of hadron beams passing through FODO cell magnetic focusing elements and experiencing sextupole nonlinearities, as described in [@BTTS94; @GT96; @BT91]. First, we shall consider the flat beam case (\[eq:map\]), for the specific tune value $q_x=0.21$ corresponding to frequency $\omega_0=2\pi q_x=1.32$, exhibiting, in the absence of space charge perturbations, the phase space picture shown in Figure \[graph\_1\](a) below. As we see in this figure, the region of bounded particle motion extends to a radius of about 0.54 units from the origin. There are also 5 islands of a major resonance surrounded by invariant curves (or 1D “tori"), whose outermost boundary delimits the so-called dynamical aperture of the beam. Outside that domain there are chains of smaller islands (representing higher order resonances) and chaotic regions through which particles eventually escape to infinity. This escape occurs, of course, at different speeds due to the well-known phenomenon of “stickiness", depending on how close the orbits are to the invariant curves surrounding the islands. Let us now consider a space charge distribution of the form (\[eq:distr1\]) with a “frozen core" of radius $r_c=0.1$, which is small compared with the radius of the beam’s dynamical aperture. Our purpose is to vary the value of the perveance $\xi>0$, cf. (\[eq:Ham1\]), starting from $\xi=0$, to estimate the effects of space charge on the dynamics. Setting $\xi=0.001$ ($\omega/\omega_0 \simeq 0.97$), for example, which is quite small compared with $r_c^2=0.01$, we observe in Figure \[graph\_1\](b) that the picture has significantly changed.
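The escape experiment described in this section can be sketched in a simplified form. Since the flat-beam map (\[eq:map\]) and the action (\[eq:action\]) are not reproduced in this excerpt, the sketch below assumes the standard Hénon-like form, a sextupole kick $P \to P + X^2$ followed by a rotation by the phase advance (\[eq:Omega1\]) with $J=(X^2+P^2)/2$ taken after the kick, and it launches orbits on the $X$ axis only, a much cruder scan than the paper's full grid with $N=10^5$-$10^6$ turns:

```python
import math

def flat_map(X, P, omega0, xi, rc):
    # One turn of the flat-beam map: sextupole kick, then rotation by the
    # amplitude-dependent phase advance Omega(J) of (eq:Omega1).  The
    # Henon-like form of the kick and the definition of J are assumptions
    # made here, since (eq:map) and (eq:action) are not in this excerpt.
    Pk = P + X * X
    J = 0.5 * (X * X + Pk * Pk)
    omega = math.sqrt(omega0 ** 2 - xi / rc ** 2)
    if J == 0.0:
        Om = omega
    else:
        Om = omega + (omega0 ** 2 - omega ** 2) / (2.0 * omega) * (
            1.0 - rc ** 2 / J
            + rc ** 2 / (J * math.sqrt(2.0 * J / rc ** 2 + 1.0)))
    c, s = math.cos(Om), math.sin(Om)
    return c * X - s * Pk, s * X + c * Pk

def escape_radius(omega0, xi, rc, nmax=2000, rmax=5.0, dr=0.02):
    # Crude aperture estimate: launch orbits on the X axis at increasing
    # radii and return the first launch radius whose orbit escapes within
    # nmax turns.
    r = dr
    while r < 1.5:
        X, P = r, 0.0
        for _ in range(nmax):
            X, P = flat_map(X, P, omega0, xi, rc)
            if X * X + P * P > rmax * rmax:
                return r
        r += dr
    return r
```

With $\xi=0$ the phase advance is constant and the map reduces to the pure sextupole (Hénon) map, so orbits near the origin remain bounded while large-amplitude ones escape quickly.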
In particular, the 3-dimensional character of the dynamics (due to the variation of the phase advance $\Omega(J)$) has turned the invariant curves into “surfaces" and has led to the dissolution of the outer ones surrounding the five major islands. Furthermore, most of the chains of smaller islands of Figure \[graph\_1\](a) have disappeared due to the new resonances caused by the presence of space charges. To see how all this affects the dynamical aperture of the beam as a function of the tune depression $\omega/\omega_0$ we now perform the following experiment: Forming a grid of initial conditions of step size $\Delta x=\Delta p=0.01$ within a square $[-1,1]\times[-1,1]$ about the origin ($X_n=P_n=0$), we use (\[eq:map\]) to iterate for different $\xi > 0$ (or $\omega/\omega_0 <1$) all points falling within circular rings of width $\Delta r=0.01$ for $N=10^5$ and $N=10^6$ iterations and plot in Figure \[graph\_2\] the last $r=(X_n^2+P_n^2)^{1/2}=r_{esc}$ value, at which at least one orbit was found to escape from the next outer ring. The results demonstrate that already at $\xi=0.001$ ($\omega/\omega_0 \simeq 0.97$) our estimate of the dynamical aperture $r_{esc}$ has fallen from $0.54$ to about $0.43$. In fact, it remains close to that value (rising somewhat to about $0.5$) until $\xi\simeq 0.006$ ($\omega/\omega_0 \simeq 0.81$), where it experiences a sudden drop to $r_{esc}\simeq 0.03$, and the beam is effectively destroyed. Of course, once one orbit escapes most of them quickly follow within the next one or two circular rings. Note also that increasing the number of iterations from $N=10^5$ to $N=10^6$ does not appreciably change the results, until the sudden drop occurs. This dramatic change at $\omega/\omega_0 \simeq 0.81$ is most probably due to the presence of a major new resonance caused by the space charge perturbation.
It may be an important effect, however, since it occurs at a $\xi$ value which is still smaller than $r_c^2=0.01$. Of course, long before this happens, already at $\xi\simeq 0.0002$ (or $\omega/\omega_0 \simeq 0.994$), the effective aperture of the beam has been significantly reduced, by about 20 percent from its value at $\xi=0$. Finally, let us turn to the case of the 4D map (\[eq:map2\]), describing the more realistic case of a beam whose particles experience horizontal as well as vertical displacements from the ideal path, see (\[eq:Ham2\]). For comparison purposes, we choose the same parameter values as in our earlier papers [@BK94; @VBK96; @VIB97; @BS05], i.e. horizontal and vertical tunes $q_x=0.61903$, $q_y=0.4152$ respectively, yielding the unperturbed frequencies (\[eq:tunes\]) used in the mapping equations. In Figure \[graph\_3\](a), we iterate many initial conditions $X(0),P_1(0),Y(0),P_2(0)$ around the origin and plot on a $X(n),P_1(n)$ projection a picture of the dynamics, for $|Y(n)|\leq 0.04$, in the absence of space charges, i.e. with $\omega_{i}=\omega_{0i}$, $i=1,2$. Note the region of invariant tori and a chain of 6 “islands" corresponding to a stable resonance. Strictly speaking, the motion between these tori need not be bounded, as 2D surfaces do not separate 4D space and Arnol’d diffusion phenomena [@Licht-Lieb] could in principle carry orbits far away from the origin. However, as has been explicitly shown for this model in [@BK94; @VBK96; @VIB97], such phenomena are extremely slow and hence a domain with radius of the order of 0.5 can be effectively considered as the dynamical aperture of the beam. Repeating this experiment in the presence of space charges, i.e.
with $\xi=0.0002$ (or $\omega_1/\omega_{01} \simeq 0.9993, \omega_2/\omega_{02} \simeq 0.9985$) in (\[eq:Ham2\]), we observe in Figure \[graph\_3\](b) that the outer invariant curves (together with the islands) have been destroyed and the dynamical aperture of the beam has been significantly reduced. Studying this reduction as a function of $\xi$, we proceed to choose initial conditions from a grid of step size 0.05, extending from -0.65 to 0.65 in all 4 directions about the origin, represented by $X(0),P_1(0),Y(0),P_2(0)$. Iterating the resulting orbits from points within spherical shells of width $\Delta r=0.01$, we plot in Figure \[graph\_4\], for each $\xi$, the $r_{esc}$ value of the inner radius of the shell from which at least one orbit escapes to infinity. Our results demonstrate that the beam’s dynamical aperture steadily decreases as $\xi$ grows. At $\xi=0.006$ (or $\omega_1/\omega_{01} \simeq 0.98, \omega_2/\omega_{02} \simeq 0.955$) its radius has fallen by more than 50 percent from its original value, while at higher perveance values the approximation $\xi\ll r_c^2$ no longer applies. In fact, it is worth noting that the size of the dynamical aperture falls drastically even for small values of $\xi$, as our calculations with $N=10^5$ iterations show. For example, even at $\xi=0.0002$ (or $\omega_1/\omega_{01} \simeq 0.9993, \omega_2/\omega_{02} \simeq 0.9985$) our estimate of the dynamical aperture has dropped from 0.54 to 0.37. Conclusions =========== High intensity effects have long been studied in connection with the so-called beam-beam interaction and were a relevant topic in the design of many hadron colliders like ISABELLE and the SSC (see articles in [@Beam-Beam; @Month_1986; @Month_1987]). However, the effects of high currents on the beam stability have become especially crucial only in recent times, when the design and construction of medium energy high current accelerators has started.
We have reported in this Letter our results on the possible importance of space charge effects for the global stability of intense hadron beams, experiencing the sextupole nonlinearities of an array of magnetic focusing elements, through which the particles pass $N=10^{5-6}$ times in a typical “medium term" experiment of intense beam dynamics. We have used a recently developed analytical approach [@BenTurc] to model the space charges by a “frozen core" distribution, valid to first order in canonical perturbation theory. By proposing a simple example of such a distribution, which leads to explicit and convenient formulas, we have been able to carry out detailed numerical investigations on perturbations of 2D and 4D mapping models, describing the dynamics of flat (horizontal) and elliptic beams (with horizontal and vertical displacements) respectively. These charge distributions are in effect periodic modulations of the tunes (and phase advance frequencies) of the motion and are therefore expected to introduce new resonances, raising the phase space dimensionality of the dynamics. Thus, outer invariant tori of the unperturbed case start to disappear and “island" chains of higher order resonances far from the origin eventually drift away, leading to a significant decrease of the region of bounded betatron oscillations of the particles about their ideal path (i.e. the beam’s dynamical aperture, or luminosity). In our experiments, we have been able to measure this reduction of the beam’s dynamical aperture, for several small values of the perveance parameter $\xi$, representing the strength of the space charge distribution. We found that, within the range of validity of our approximations, the domain of bounded orbits decreases by a significant percentage and hence space charge effects should be taken into consideration, as they can be important for the long term survival of the beam.
In the flat beam case, we observed a near total loss of the beam at some $\xi$ value, which is most likely caused by the onset of a major new resonance introduced by the space charge modulations. On the other hand, in the more general case of a beam with 2-dimensional cross section modelled by a 4-dimensional map, we also discovered a sudden drop in the dynamical aperture, occurring already at very small tune depressions. We, therefore, believe that space charges are important enough to merit further investigation in mapping models of intense proton beams [@BS06]. The occurrence of new low order resonances poses, of course, a major threat to the dynamics, if the perveance parameter is large enough. However, even at small values of this parameter, weak (Arnol’d) diffusion effects and the slow drift of high order resonances may significantly alter the long term picture of the motion, after a sufficiently large number of iterations. It would also be useful to compare the one turn map with the full integration of the space charge effect over one beam revolution, to appreciate the validity limits of our approximation. Indeed, since the high computation efficiency of the one turn map is a key issue of this approach, an estimate of the errors in some reference cases would contribute additional useful information in realistic applications. Acknowledgments =============== We are particularly grateful to the two referees for their very valuable comments, which helped significantly in improving the exposition of our results. T. Bountis acknowledges many interesting discussions on the topics of this paper with Professor G. Turchetti, Dr. H. Mais, Dr. I. Hofmann and Dr. C. Benedetti at a very interesting Accelerator Workshop in Senigallia, in September 2005. He and Ch.
Skokos are thankful to the European Social Fund (ESF), Operational Program for Educational and Vocational Training II (EPEAEK II) and particularly the Programs HERAKLEITOS and PYTHAGORAS II, for partial support of their research in physical applications of Nonlinear Dynamics. References ========== Benedetti C. and Turchetti G. 2005, [*An Analytic Map for Space Charge in a Nonlinear Lattice*]{}, [*Phys. Lett.*]{} [**A 340**]{}, 461. Bazzani A., Todesco E., Turchetti G., Servizi G. 1994, [*A Normal Form Approach to the Theory of Nonlinear Betatronic Motion*]{}, CERN, Yellow Reports 94 - 02. Giovannozzi M. and Todesco E. 1996, Part. Accel. [**54**]{}, 203. Bountis T C and Tompaidis S 1991, [*Future Problems in Nonlinear Particle Accelerators*]{}, eds G. Turchetti and W. Scandale (Singapore: World Scientific), 112. Bountis T and Kollmann M 1994, [*Physica D*]{} [**71**]{}, 122. Vrahatis M N, Bountis T C and Kollmann M 1996 [*Int. J. Bifur. & Chaos*]{} [**6**]{}(8), 1425. Vrahatis M N, Isliker H and Bountis T C 1997 [*Int. J. Bifur. & Chaos*]{} [**7**]{}(12), 2707. Bountis T C and Skokos Ch 2006, [*Application of the SALI Chaos Detection Method to Accelerator Mappings*]{}, preprint, physics/0512115, to appear in Nucl. Instr. and Meth. Sect. A. Fedotov A V, Holmes J A and Gluckstern R L 2001, [*Instabilities of High-Order Beam Modes Driven by Space-Charge Coupling Resonances*]{}, Physical Review ST, Accel. Beams 4, 084202. Franchetti G, Hofmann I, Giovannozzi M, Martini M, Metral E, 2003, [*Study of Space Charge Driven Beam Halo and Loss Observed at the CERN Proton Synchrotron*]{}, Phys. Rev. ST Accel. Beams 6, 124201. Fedotov A V, Malitsky N, Papaphilippou Y, Wei J and Holmes J 2001, [*Excitation of Resonances due to Space Charge and Magnet Errors in the SNS Ring*]{}, Particle Accelerator Conference 2001, Proceedings, Ed. P. Lucas, S. Webber (IEEE Operations Center).
Benedetti C, Rambaldi S and Turchetti G 2005, [*Collisional Effects and Dynamic Aperture in High Intensity Storage Rings*]{}, Nucl. Instr. and Meth. A 544, 465-471. Franchetti G, Hofmann I, Orzhekhovskaya A, Spiller P, 2005, [*Intensity and Bunch-Shape Dependent Beam Loss Simulation for the SIS100*]{}, Particle Accelerator Conference 2005 Proceedings, Knoxville, Tennessee, USA, ed. C. Horak, Joint Accelerator Conferences Website, 3807. Jeon D, Danilov V V, Galambos J D, Holmes J A, and Olsen D K 1999, [*SNS Accumulator Ring Collimator Optimization with Beam Dynamics*]{}, Nuclear Instruments and Methods in Physics Research A 435, p. 308. Hofmann I, Franchetti G, Boine-Frankenheim O, Qiang J and Ryne R D 2003, [*Space Charge Resonances in Two and Three Dimensional Anisotropic Beams*]{}, Phys. Rev. Special Topics 6, 024202. Lichtenberg A. and Lieberman M. 1988, [*Regular and Chaotic Motion*]{}, Springer Verlag, 2nd ed. 1980, [*Conference on the Beam - Beam Interaction*]{}, ed. M. Month, J. Herrera, A.I.P. Conf. Proc. [**57**]{}. 1986, [*Nonlinear Dynamics Aspects of Particle Accelerators*]{}, J.M. Jowett, M. Month and S. Turner, eds., Springer Lecture Notes in Physics [**247**]{}. 1987, [*Physics of Particle Accelerators*]{}, eds M. Month, M. Dienes, A.I.P. Conf. Proc. [**153**]{} v. 1. Bountis T C and Skokos Ch 2006, [*Orbital Diffusion in Space Charge Modulated Models of Accelerator Dynamics*]{}, in preparation.
--- abstract: 'Recently, TersoffCG, a coarse grain potential for graphene based on the Tersoff potential, has been developed. In this work, we explore this potential, applying it to the case study of a single wall carbon nanotube. We performed a series of molecular dynamics simulations of longitudinal tension and compression on armchair carbon nanotubes, comparing two full atomistic models, described by means of the Tersoff and AIREBO potentials, and the coarse grained model described by means of TersoffCG. We followed each stage and mode of deformation, finding good agreement among the stress strain curves under tension independently of the potential used, with a small difference in the pre-fracture zone. Conversely, under compression the coarse grain model presents a buckling stress almost double that of the full atomistic models, and a post-buckling stress more than double. With the increase of the nanotube diameter, the capturing of the buckling modes is enhanced; however, the stress overestimation remains. Decreasing the three-body angular term in the potential can be a rough way to recover the buckling stress, with small losses in capturing the post-buckling behavior. In spite of the good agreement under compression, the fracture behavior of the nanotube is strongly affected, suggesting that this modification be applied only when no fractures are present. The findings reported in this work underline the necessity of accurately evaluating the use of a coarse grain model when compressive loads are applied to the system during the simulation.' author: - 'A. Pedrielli' bibliography: - 'Bibliography.bib' title: 'On deformation of carbon nanotubes with TersoffCG: a case study' --- Nanomaterials,Molecular Dynamics,Graphene,Coarse Grain Introduction ============ In recent years, an increasing interest has been devoted to 3D graphene structures [@Tylianakis2011; @Wang2014] as means to deliver the notable mechanical properties of graphene to the macroscale.
Among these structures, graphene nanofoams [@Wu2013; @Qin2017; @Pedrielli2018] and carbon nanotube networks [@Hall2008] have emerged as chemically stable, lightweight porous materials. Some efforts have been made to pass from the study of the general properties of these structures to their optimization as functional materials [@Xie2011; @Pedrielli2017]. Bridging the analysis at the nanoscale and that at the macroscale requires methods capable of dealing with a high number of atoms while still capturing the main mechanical features of these materials. Recently, new potentials for graphene were developed with a coarse grain approach [@Cranford2009; @Cranford2011; @Ruiz2015]. Some of these potentials are also defined so as to be recursively extended to higher orders of coarse graining [@Zhu2014]. However, each time we use a coarse grain model, we lose some information on the system with respect to the full atomistic description. In this work we evaluate the influence of using a recently developed coarse grain potential for graphene [@Shang2017], based on the Tersoff potential and named TersoffCG, on the case study of a single wall carbon nanotube. We computed the stress strain curves under tension and compression along the longitudinal direction, comparing the coarse grain model and two full atomistic models described by means of the Tersoff [@Tersoff1988; @Tersoff1988A] and AIREBO [@Stuart2000] potentials. Following step by step the tension and the compression of the nanotube, we also evaluated the buckling modes, and their possible suppression due to the use of the coarse grain model. The findings of this work show that the coarse grain model works accurately under tension, while a loss of accuracy is found under compression.
  m   $\gamma$   $\lambda_3$   c       d        $\cos \theta_0$   n         $\beta$                  $\lambda_2$   B        R     D     $\lambda_1$   A
  --- ---------- ------------- ------- -------- ----------------- --------- ------------------------ ------------- -------- ----- ----- ------------- --------
  3   1          0             38049   4.3484   $-0.57058$        0.72751   $1.572\times 10^{-7}$    1.10595       1386.8   4.1   0.6   1.74395       5574.4

  : Parameters used for the TersoffCG potential. \[tab:Parameters\]

Computational model ==================== The case study we use here is a $5$ nm $(20,20)$ armchair nanotube. For the coarse grain model we use instead a $(10,10)$ armchair nanotube with doubled bond length. In this way, as required by the coarse grain model, we have, at the first order, one coarse grain atom in place of four atoms of the full atomistic model. At the same time, the coarse grain atoms have a mass four times that of a carbon atom. The unit cell sides were fixed to $6$ nm along the directions perpendicular to the nanotube axis. We used for the TersoffCG [@Shang2017], Tersoff [@Tersoff1988; @Tersoff1988A] and AIREBO [@Stuart2000] potentials the standard parametrizations, without taking into account the typical internal cutoffs for the near-fracture regimes [@Shenderova2000]. This, however, has no influence on the results presented here. Molecular dynamics simulations were performed within the LAMMPS code [@Plimpton1995]. The parameters used for TersoffCG [@Shang2017] are reported in Tab. \[tab:Parameters\]. We imposed periodic boundary conditions in the longitudinal direction and applied the deformation in the same direction. All the samples were initially fully relaxed, then equilibrated to zero pressure and the target temperature of $1$ or $300$ K. All the mechanical tests were performed with successive $0.01$ deformation steps, each followed by $2$ ps of isothermal ensemble (NVT) equilibration by means of the Nosé–Hoover thermostat. The stress was computed and averaged during an NVT run of $1$ ps. In all the simulations the equations of motion were solved with the velocity-Verlet integration method using a time step of $1$ fs.
The engineering strain parallel to the direction of deformation is defined as $$\varepsilon = \frac{L-L_0}{L_0} = \frac{\Delta L}{L_0}$$ where $L_0$ and $L$ are the starting and current length of the sample in the direction of loading. To determine the stress, the pressure stress tensor components in response to the external deformation are computed as [@Thompson2009] $$\label{pressure} P_{ij} = \frac{\sum_k^N{m_k v_{k_i} v_{k_j}}}{V}+ \frac{\sum_k^N{r_{k_i} f_{k_j}}}{V}$$ where $i$ and $j$ label the coordinates $x$, $y$, $z$; $k$ runs over the atoms; $m_k$ and $v_k$ are the mass and velocity of the $k$-th atom; $r_{k_i}$ is the position of the $k$-th atom; $f_{k_j}$ is the $j$-th component of the total force on the $k$-th atom due to the other atoms; and, finally, $V$ is the volume of the simulation box. We note that the stress reported throughout this paper refers to the simulation cell sectional area of $36$ nm$^2$. The pressure in Eq. \[pressure\] includes both the kinetic energy (temperature) term and the virial term. In the same way, a $10$ nm $(20,20)$ nanotube and $5$ nm and $10$ nm $(40,40)$ nanotubes were prepared and tested. Mechanical tests and buckling modes =================================== We report in Fig. \[fig:Tens1K\] the stress strain curves under tension of the considered $5$ nm $(20,20)$ armchair nanotube for the full atomistic and coarse grain potentials, at a temperature of $1$ K. We found an overall agreement of the stress strain curves for the three potentials, with a similar fracture strain. Regarding the compressive case, we report in Fig. \[fig:Comp1K\] the stress strain curves of the considered $5$ nm $(20,20)$ armchair nanotube for the full atomistic and coarse grain potentials, at a temperature of $1$ K. The stress strain curves under compression are very sensitive to the use of the coarse grain model. Indeed, apart from the slope of the elastic part, i.e. the Young modulus, the features are different.
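Eq. \[pressure\] can be transcribed almost verbatim. The sketch below (plain Python, for illustration only; in a production run this quantity is provided directly by LAMMPS) accumulates the kinetic and virial terms atom by atom:

```python
def pressure_tensor(masses, velocities, positions, forces, volume):
    """Pressure tensor of Eq. (pressure): kinetic term plus virial term.

    masses: list of N masses; velocities, positions, forces: lists of
    N 3-vectors; volume: simulation box volume.  Returns a 3x3 nested
    list P[i][j] = sum_k (m_k v_ki v_kj + r_ki f_kj) / V.
    """
    P = [[0.0] * 3 for _ in range(3)]
    for m, v, r, f in zip(masses, velocities, positions, forces):
        for i in range(3):
            for j in range(3):
                P[i][j] += (m * v[i] * v[j] + r[i] * f[j]) / volume
    return P
```

The longitudinal stress reported in the paper corresponds to the component of this tensor along the nanotube axis, referred to the fixed sectional area of the cell.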
Buckling strain and buckling stress are almost double those obtained with the full atomistic potentials, while the post-buckling strain is more than double. The difference between the two full atomistic potentials is instead limited to the value of the buckling stress, the buckling strain and the post-buckling stress strain curve being the same. In order to evaluate the influence of the temperature both on the different behavior of the coarse grain model with respect to the full atomistic ones and on the difference in buckling stress between the two atomistic models, we performed the same compressive tests at a temperature of $300$ K (Fig. \[fig:Comp300K\]). Buckling strain, buckling stress and the post-buckling stress with the coarse grain model are almost three times those obtained with the full atomistic potentials. The effect of the temperature is to level out the buckling stress for the atomistic models, whereas the coarse grain model is essentially unaffected (Fig. \[fig:Comp300K\]). We report in Fig. \[fig:Comp40\], \[fig:Comp70\], \[fig:Comp120\], \[fig:Comp350\] the nanotube samples at $4$, $7$, $12$ and $35$% compressive strain. In each figure we show in the upper panel a longitudinal view of the nanotube cell, and in the lower panel a front view of the nanotube. These particular strain values were chosen to show the differences in the buckling modes, and the strains at which they appear. We have seen from the stress strain curves in Fig. \[fig:Comp1K\] that the first buckling mode is obtained for the full atomistic models between $2$ and $5$% strain; we report in Fig. \[fig:Comp40\] the three nanotubes at $4$% strain showing the first buckling mode of the full atomistic models, whereas the coarse grain model is still in pre-buckling. A second stage is that presented at $7$% strain, reported in Fig. \[fig:Comp70\], for which a second buckling mode is obtained for the full atomistic models, corresponding to the second small peaks in the stress strain curves reported in Fig. \[fig:Comp1K\].
For the coarse grain model, at $7$% strain we have the appearance, with a certain delay, of the first mode, noted for the atomistic models at $4$% strain. In the third stage, presented in Fig. \[fig:Comp120\] at $12$% strain, the buckling mode of the atomistic models is essentially the same as at $7$% strain, whereas the coarse grain model passes directly to the last buckling mode, which it will keep at the subsequent strain increases. In the last stage, presented in Fig. \[fig:Comp350\] at $35$% strain, the buckling mode of the atomistic models and that of the coarse grain model are the same. The coarse grain model presents only two of the three buckling modes presented by both atomistic models, the first mode and the third. The buckling mode presented by the atomistic samples in Fig. \[fig:Comp70\] and \[fig:Comp120\] is eliminated using the coarse grain potential. The same tests were performed on a $10$ nm $(20,20)$ carbon nanotube and its coarse grain analogue, with similar findings. In particular, as shown in Fig. \[fig:CompLong\], also in this case the stress strain curves under compression are very sensitive to the use of the coarse grain model. The main features are present also in the coarse grain model; however, all the peaks corresponding to subsequent buckling instabilities are found at higher stress and higher strain than those for the full atomistic models. A further test was done on a $5$ nm $(40,40)$ carbon nanotube (Fig. \[fig:CompLarge\]). As for the double-length nanotube, whose stress strain curves are reported in Fig. \[fig:CompLong\], the main features are present also in the coarse grain model; in this case the peaks corresponding to subsequent buckling instabilities are found at higher stress but at almost the same strain as for the full atomistic models.
A rough modification of TersoffCG for the compressive regime ======================================================== For the $5$ nm $(40,40)$ nanotube, the main peaks in the stress strain curves are found in good agreement with those obtained with the full atomistic models. In this case, increasing the nanotube diameter enhances the capturing of the peak positions in the stress strain curves in Fig. \[fig:CompLarge\] without improving the accuracy of the stress values. Here we indicate a rough modification of the TersoffCG potential that can improve the matching of the stress values under compression. As with any modification of a complex potential, it should be verified accurately with respect to the aim of the simulations. The parameter of TersoffCG that we modified is $\gamma$, the multiplicative factor of the angular term (Tab. \[tab:Parameters\]). In Fig. \[fig:Modify\] we report the stress strain curve under compression, obtained for the $5$ nm $(40,40)$ nanotube at $1$ K with the $\gamma$ parameter set to $0.16$. This modification strongly affects the near- and post-fracture regime. Under compression, however, the change proves effective. A further test was performed on a $10$ nm $(40,40)$ nanotube, with good agreement between the modified TersoffCG and the full atomistic potentials. The stress strain curves are reported in Fig. \[fig:Modify1\]. Conclusions =========== In this work, we applied the TersoffCG potential to the case study of a single wall carbon nanotube. The general performance of the potential was tested under tension and compression, comparing the results with those obtained using two full atomistic models. Under tension the results for the various potentials are overall similar, and a straightforward application of the coarse grain model can be considered safe. Conversely, under compression the only features of the stress strain curves that resemble those of the atomistic models are the Young modulus and the general trend.
However, buckling strain, buckling stress and post-buckling stress are overestimated by a factor of two. With regard to the buckling modes, one of those presented by the $5$ nm $(20,20)$ nanotube in the atomistic models is suppressed. Furthermore, the strain at which the modes appear is different when we use the coarse grain model. With increasing nanotube length ($10$ nm $(20,20)$ nanotube), the main buckling modes are captured also by the coarse grain model. Increasing instead the nanotube diameter ($5$ nm $(40,40)$ nanotube), the critical strain at which the buckling modes appear better matches that of the full atomistic models. The overestimation of the buckling stress can be roughly cured by modifying the parameter $\gamma$ in the TersoffCG potential. This is effective under compression, and we successfully reproduced the stress strain curves for a $5$ nm $(40,40)$ nanotube. However, the choice of the value of $\gamma$ could be system dependent. Furthermore, under tensile stress, in the near- and post-fracture regime the modification produces unphysical results. The modification of $\gamma$ is then limited to specific compressive problems. From this case study we have seen that the tension case can be easily treated with TersoffCG whereas, under compression, the presence of instabilities and the high deformation of the nanotube structure at the wrinkles suggest more caution in the use of coarse grain potentials. Finally, the compressive test of a carbon nanotube could be taken as a useful benchmark for coarse grain graphene potentials.
--- abstract: 'It has been common to argue or imply that a regularizer can be used to alter a statistical property of a hidden layer’s representation and thus improve generalization or performance of deep networks. For instance, dropout has been known to improve performance by reducing co-adaptation, and representational sparsity has been argued to be a good characteristic because many data-generation processes have only a small number of independent factors. In this work, we analytically and empirically investigate popular characteristics of learned representations, including correlation, sparsity, dead units, rank, and mutual information, and disprove much of the *conventional wisdom*. We first show that infinitely many Identical Output Networks (IONs) can be constructed for any deep network with a linear layer, where any invertible affine transformation can be applied to alter the layer’s representation characteristics. The existence of IONs proves that the correlation characteristics of a representation are irrelevant to performance. Extensions to ReLU layers are provided, too. Then, we consider sparsity, dead units, and rank to show that only loose relationships exist among the three characteristics. It is shown that a higher sparsity or additional dead units do not imply a better or worse performance when the rank of the representation is fixed. We also develop a rank regularizer and show that neither representation sparsity nor lower rank is helpful for improving performance even when the data-generation process has a small number of independent factors. Mutual information $I(\operatorname{\mathbf{z}}_l;\operatorname{\mathbf{x}})$ and $I(\operatorname{\mathbf{z}}_l;\operatorname{\mathbf{y}})$ are investigated, and we show that regularizers can affect $I(\operatorname{\mathbf{z}}_l;\operatorname{\mathbf{x}})$ and thus indirectly influence the performance. Finally, we explain how a rich set of regularizers can be used as a powerful tool for performance tuning.'
author: - | Daeyoung Choi[^1] & Kyungeun Lee\ Department of Transdisciplinary Studies\ Seoul National University\ Seoul, 08826, South Korea\ `{choid, ruddms0415}@snu.ac.kr`\ Changho Shin\ Encored Technologies\ Seoul, 06109, South Korea\ `chshin@encoredtech.com` Wonjong Rhee\ Department of Transdisciplinary Studies\ Seoul National University\ Seoul, 08826, South Korea\ `wrhee@snu.ac.kr` bibliography: - 'iclr2019\_conference.bib' title: 'On the Statistical and Information-theoretic Characteristics of Deep Network Representations' --- [^1]: Authors contributed equally.
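The ION construction described in the abstract can be sketched concretely for a two-layer network: any invertible affine map $\operatorname{\mathbf{z}} \mapsto T\operatorname{\mathbf{z}} + c$ applied after a linear layer can be undone by absorbing $T^{-1}$ into the following layer, so the network output is unchanged while the hidden representation's statistics change arbitrarily. A minimal numpy sketch (our illustration, with made-up dimensions; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: y = W2 @ (W1 @ x + b1) + b2
d_in, d_h, d_out, n = 5, 8, 3, 100
W1, b1 = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, 1))
W2, b2 = rng.normal(size=(d_out, d_h)), rng.normal(size=(d_out, 1))
X = rng.normal(size=(d_in, n))

def forward(W1, b1, W2, b2, X):
    Z = W1 @ X + b1              # hidden (linear) representation
    return Z, W2 @ Z + b2

# An arbitrary invertible affine transform z -> T z + c of the hidden layer...
T = rng.normal(size=(d_h, d_h))  # a generic random matrix is invertible a.s.
c = rng.normal(size=(d_h, 1))
Tinv = np.linalg.inv(T)

# ...is absorbed into the adjacent weights, yielding an Identical Output Network
W1p, b1p = T @ W1, T @ b1 + c
W2p, b2p = W2 @ Tinv, b2 - W2 @ Tinv @ c

Z, Y = forward(W1, b1, W2, b2, X)
Zp, Yp = forward(W1p, b1p, W2p, b2p, X)

assert np.allclose(Y, Yp)        # identical outputs...
assert not np.allclose(Z, Zp)    # ...from a very different hidden representation
```

Since $T$ is arbitrary up to invertibility, statistics such as the correlation matrix of the hidden representation can be reshaped at will without affecting the input-output map, which is the sense in which correlation is argued to be irrelevant to performance.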
--- abstract: 'We apply a simple decomposition to the energy of a moving particle. Based on this decomposition, we identify the potential and kinetic energies, then use them to give general definitions of momentum and the various kinds of forces exerted on the particle by fields, followed by the generalization of Newton’s second law to accommodate these generally defined forces. We show that our generalization implies the Lorentz force law as well as Lagrange’s equation, along with the usually accepted Lagrangian and the associated velocity dependent potential of a moving charged particle.' author: - | Artice M. Davis\ Professor Emeritus\ San Jose State University title: 'Energy, Forces, Fields and the Lorentz Force Formula' --- Introduction {#introduction .unnumbered} ------------ The motivation for this paper is to present a rigorous derivation of the Lorentz force law in a nonrelativistic context. The provenance of the law is somewhat obscure. Lorentz’s original paper,[^1] written in French and apparently not translated into English, assumed the ether as a medium and contained a number of unwarranted assumptions and vague definitions. Lorentz apparently then rejected his own derivation, choosing in his later monograph “The Theory of Electrons”[^2] to simply say the law was “...got by generalizing the results of electromagnetic experiments."[^3] He did not specify which experiments, but he clearly saw fit not to refer to his own earlier paper. Others have derived the formula by assuming a generalized Lagrangian for a moving charged particle,[^4] while still others have derived the Lagrangian presuming the Lorentz force formula.[^5] This type of analysis results in a generalized potential, namely $\psi=\phi-\vec{A}\cdot\vec{v}$, for the moving particle, which we will derive.
The quantity $\vec{A}$ is the vector potential, which some authors (including Maxwell) referred to as the electromagnetic momentum; but others insist that the electromagnetic momentum is $\epsilon_0\left[\vec{E}\times\vec{B}\right]$.[^6] We feel that much of the controversy in this matter, as well as in many others, is due to the lack of general definitions. We believe we have supplied such definitions in this work.[^7] The work presented here assumes that the energy of a moving particle is a known quantity. The difficulty of defining energy has been discussed by others,[^8] but we will deem it to be a primitive notion. Otherwise, we will adopt the operational point of view.[^9] This means that each fundamental quantity is defined by the description of a measuring instrument and a recipe for its use to measure the associated variable, while a derived quantity is defined by an equation expressing it in terms of previously defined fundamental quantities and/or previously defined derived quantities. We require, of course, that the latter never be self-referential; that is, a flow diagram of all the definitions should contain no loops; it must be a tree structure. Particles: Mass and Accelerational Force {#particles-mass-and-accelerational-force .unnumbered} ---------------------------------------- We assume the usual definition of a particle, namely a vanishingly small region of space having certain properties through which it interacts with other similarly defined regions of space. We assume that such interactions are mediated by energy transfer through the empty spaces between them, these interactions depending upon the parameters mass and charge.
Let us select two particles, remove them from any outside influence, place them at a given location in space with zero velocity, and measure their accelerations due to mutual influence.[^10] We will take it as a fact that in any such test the accelerations are always oppositely directed; furthermore, that the ratio of the magnitudes of the accelerations is always the same. Selecting one of these particles as a reference and denoting its acceleration by $\vec{a}_0$ and that of the other by $\vec{a}$,[^11] we define the mass of the other particle by $$\label{defn of mass} m=\frac{a_0}{a},$$ where $\mid\vec{a}\mid = a$ and $\mid\vec{a}_0\mid=a_0$. If the two particles are identical then $a_0=a$ so $m=1$; hence, our reference particle has unit mass. Taking note of the opposing directions of the accelerations caused by the interaction, we can write $$\label{defn of accelerational force} \vec{f}_a=-\vec{a}_0=m\vec{a},$$ which we will define to be the accelerational force on the nonreference particle. This procedure for defining mass and force is due to Mach.[^12] There exist forces, however, that are not associated with a moving object, for example the force of a spring on a weight it supports. We will offer more general definitions in a subsequent section of this paper. Charge {#charge .unnumbered} ------ Now let’s select an arbitrary particle and test it against all the other particles in the universe, measuring the force between each pair. If no other particle repels it, we will say it is uncharged. If, on the other hand, at least one other particle repels it, we will say our original particle is positively charged and assign it a charge of one unit, thus making it our reference charge. Now we segregate all particles in the universe into two classes: each of those in the first class repels our reference charge and is said to have a positive charge, while each of those in the second class attracts the particle having our reference charge.
Next, divide the second class into two subcategories: those which repel any other particle in the second class and those which attract every other particle in the second class. We will say that those in the repelling subcategory are negatively charged and those in the attracting subcategory uncharged. Thus, each and every particle in the universe is positively charged, negatively charged, or uncharged.[^13] Finally, we invoke the Coulomb law to determine the magnitude of a given charge. Thus, we have defined charge, like mass, in an operational manner. Energy Considerations {#energy-considerations .unnumbered} --------------------- Consider a particle moving freely through space, that is, a particle with no mechanical constraints, and let $U(\vec{r},\vec{v},t)$ be its energy.[^14] Write $$\label{decomp1} U(\vec{r},\vec{v},t)=U(\vec{r},0,t)+\left[U(\vec{r},\vec{v},t)-U(\vec{r},0,t)\right]=\phi(\vec{r},t)+T(\vec{r},\vec{v},t),$$ where $\phi$ and $T$ have obvious definitions. The former is called the potential energy and the latter the kinetic energy. We define the generalized momentum by $$\label{momentum defn} \vec{p}(\vec{r},\vec{v},t)=\nabla_{\vec{v}}T(\vec{r},\vec{v},t)=\nabla_{\vec{v}}U(\vec{r},\vec{v},t),$$ where the subscript $\vec{v}$ on the gradient operator refers to differentiation with respect to the components of velocity: $$\label{velocity gradient} \nabla_{\vec{v}}=\hat{e}_i\frac{\partial}{\partial v_i}.$$ Then we have $$\label{T integral} T(\vec{r},\vec{v},t)=\int_0^{\vec{v}}\vec{p}(\vec{r},\vec{\alpha},t)\cdot d\vec{\alpha},$$ where the integration is that of a line integral in velocity space with both position and time held fixed. Next, let us apply a similar decomposition to the generalized momentum, writing $$\label{decomp2} \vec{p}(\vec{r},\vec{v},t)=\vec{p}(\vec{r},0,t)+\left[\vec{p}(\vec{r},\vec{v},t)-\vec{p}(\vec{r},0,t)\right]=\vec{A}(\vec{r},t)+\vec{Q}(\vec{r},\vec{v},t),$$ where $\vec{A}$ and $\vec{Q}$ have obvious definitions.
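The decomposition of $U$ and the recovery of $T$ from the line integral in \[T integral\] can be checked symbolically for a concrete example. Assuming, purely for illustration, an energy of the form $U = \phi + \vec{A}\cdot\vec{v} + \frac{1}{2}mv^2$ with the field values $\phi$ and $\vec{A}$ treated as constants (position and time are held fixed in the integral anyway), a short sympy sketch verifies both the integral and the split $\vec{p} = \vec{A} + m\vec{v}$:

```python
import sympy as sp

v1, v2, v3, m, s = sp.symbols('v1 v2 v3 m s', real=True)
A1, A2, A3, phi0 = sp.symbols('A1 A2 A3 phi0', real=True)

v = sp.Matrix([v1, v2, v3])
A = sp.Matrix([A1, A2, A3])

# Illustrative example energy: U = phi + A.v + (1/2) m v^2
U = phi0 + A.dot(v) + sp.Rational(1, 2) * m * v.dot(v)

# phi = U at v = 0; the kinetic energy is the remainder
phi_part = U.subs({v1: 0, v2: 0, v3: 0})
T = U - phi_part

# Generalized momentum p = grad_v U
p = sp.Matrix([sp.diff(U, vi) for vi in (v1, v2, v3)])

# Line integral of p along the straight path alpha(s) = s v, s in [0, 1]
p_on_path = p.subs({v1: s * v1, v2: s * v2, v3: s * v3})
T_from_integral = sp.integrate(p_on_path.dot(v), (s, 0, 1))
assert sp.simplify(T - T_from_integral) == 0

# Momentum decomposition p = A + Q, with A = p at v = 0 and Q = m v
A_part = p.subs({v1: 0, v2: 0, v3: 0})
Q = p - A_part
assert A_part == A and Q == m * v
```

The straight-line path is only a convenience: since $\vec{p}$ is a gradient in velocity space, the integral is path independent.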
We will call $\vec{A}$ the potential momentum due to the field and $\vec{Q}$ the inertial momentum. Letting $\vec{Q}=Q_j\hat{e}_j,$ define $$\vec{m}_j(\vec{r},\vec{v},t)=\nabla_{\vec{v}}Q_j(\vec{r},\vec{v},t)=\hat{e}_i\frac{\partial Q_j}{\partial v_i}=m_{ij}\hat{e}_i.$$ Then $$Q_j(\vec{r},\vec{v},t)=\int_0^{\vec{v}}\vec{m}_j(\vec{r},\vec{\alpha},t)\cdot d\vec{\alpha}=\int_0^{\vec{v}}d\vec{\alpha}^T\vec{m}_j=\int_0^{\vec{v}}m_{ij}d\alpha_i.$$ Finally, define the matrix $$M=[m_{ij}]=\left[\frac{\partial Q_j}{\partial v_i}\right].$$ We will call $M=M(\vec{r},\vec{v},t)$ the generalized mass tensor. Using it, we have $$\vec{Q}(\vec{r},\vec{v},t)=\int_0^{\vec{v}}d\vec{\alpha}^TM(\vec{r},\vec{\alpha},t),$$ where $d\vec{\alpha}^T$ is the transpose of the differential of the integration variable and $d\vec{\alpha}^TM$ denotes the matrix product of the row matrix $d\vec{\alpha}^T$ and the square matrix $M$.[^15] The Classical Case {#the-classical-case .unnumbered} ------------------ Classically, the mass tensor becomes $$M=mI_3=m\begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix},$$ where $m$ is a constant which we have already operationally defined. Then the particle momentum is $$\vec{Q}(\vec{r},\vec{v},t)=m\vec{v},$$ the kinetic energy is $$T(\vec{r},\vec{v},t)=\vec{A}(\vec{r},t)\cdot\vec{v}+\frac{1}{2}mv^2,$$ and the total particle energy is $$\label{classical decomposition} U(\vec{r},\vec{v},t)=\phi(\vec{r},t)+T(\vec{r},\vec{v},t)=\phi(\vec{r},t)+\vec{A}(\vec{r},t)\cdot\vec{v}+\frac{1}{2}mv^2.$$ Following Maxwell, we will call the term $\vec{A}\cdot\vec{v}$ the electrokinetic energy. In what follows, we will restrict ourselves to the classical case. Generalized Forces {#generalized-forces .unnumbered} ------------------ We will now define three generalized forces. The first will be “positional force,” the force on the particle caused by change of position. 
It will be defined by $$\label{positional force} \vec{f}_P=-\nabla_{\vec{r}}\left[\phi(\vec{r},t)-\vec{A}(\vec{r},t)\cdot\vec{v}\right],$$ where $\nabla_{\vec{r}}=\hat{e}_i\partial/\partial x_i$. Here are the reasons for our choice of signs. If the field exerts force on the particle, the particle moves from a region of higher potential energy to a region of lower potential energy. On the other hand, a force exerted by the field on a particle tends to increase its kinetic energy, and $\vec{A}\cdot\vec{v}$ is part of the kinetic energy. The second generalized force we will define is the “inertial force,” given by the time rate of change of the generalized momentum: $$\label{inertial force} \vec{f}_I(\vec{r},\vec{v},t)=\frac{d}{dt}\vec{p}(\vec{r},\vec{v},t)=\frac{d}{dt}\vec{A}(\vec{r},t)+\frac{d}{dt}\vec{Q}(\vec{r},\vec{v},t)=\frac{d}{dt}\vec{A}(\vec{r},t)+m\vec{a}.$$ We now generalize Newton’s second law by postulating that $$\label{basic form of newton 2} \vec{f}_P=\vec{f}_I.$$ Suppressing arguments for simplicity of notation and applying standard vector identities to equations , , and , we obtain $$\label{second law} -\nabla_{\vec{r}}\left[\phi-\vec{A}\cdot\vec{v}\right]=\nabla_{\vec{r}}\left[\vec{A}\cdot\vec{v}\right]-\vec{v}\times\left[\nabla_{\vec{r}}\times\vec{A}\right]+\partial_t\vec{A}+m\vec{a}.$$ Our third and last generalized force is that part of the inertial force which we have already defined to be the “accelerational force,” $\vec{f}_a=m\vec{a}$.
Using it in equation gives $$\label{second second law} \vec{f}_a=-\nabla_{\vec{r}}\phi-\partial_t\vec{A}+\vec{v}\times\left[\nabla_{\vec{r}}\times\vec{A}\right].$$ If we define the “positional field” by $$\vec{E}=-\nabla_{\vec{r}}\phi-\partial_t\vec{A}$$ and the “motional field” by $$\vec{B}=\nabla_{\vec{r}}\times\vec{A},$$ we can rewrite equation in the extremely simple form $$\label{field force} \vec{f}_a=\vec{E}+\vec{v}\times\vec{B}.$$ We see at once that $\vec{E}$ is the accelerational force on a particle at rest and $\vec{v}\times\vec{B}$ the added accelerational force due to its motion. These interpretations clearly serve as operational definitions of these two fields. The Lorentz Force Law {#the-lorentz-force-law .unnumbered} --------------------- We now recognize that the energy of a particle might depend upon both mass and charge. At first suppressing the position, velocity, and time arguments and then reintroducing them, we write $$U=U(m,q)=U(m,0)+\left[U(m,q)-U(m,0)\right]=V_m(\vec{r},\vec{v},t)+W_q(\vec{r},\vec{v},t),$$ with obvious definitions of $V_m$ and $W_q$. Next, we perform, for each of $V_m$ and $W_q$, the decomposition in equation , using obvious notation and assuming that each component is normalized to unit mass or charge as appropriate: $$\label{V decomposition} V_m(\vec{r},\vec{v},t)=m\phi_m(\vec{r},t)+\frac{1}{2}mv^2$$ and $$\label{W decomposition} W_q(\vec{r},\vec{v},t)=q\phi_q(\vec{r},t)+q\vec{A}(\vec{r},t)\cdot\vec{v}.$$ We have made two key assumptions here, namely 1. $V_m$ has no component due to field momentum. 2. $W_q$ is independent of particle mass. These assumptions can, of course, be removed at the expense of a more complex resulting theory. It is also convenient in working strictly with electrodynamics to assume that $m\phi_m\ll q\phi_q\approx q\phi$.
These assumptions, taken together, permit us to remove all subscripts and rewrite the total particle energy in equation as $$\label{classical decomposition 2} U(\vec{r},\vec{v},t)=q\phi(\vec{r},t)+q\vec{A}(\vec{r},t)\cdot\vec{v}+\frac{1}{2}mv^2$$ and our field force equation in as $$\label{lorentz} \vec{f}_a=q\vec{E}+q\vec{v}\times\vec{B},$$ where all terms except $\vec{f}_a$ are purely electrical in nature. We now see that $\vec{E}$ is clearly the electrical field intensity and $\vec{B}$ the magnetic field as these quantities are normally defined in electromagnetic field theory, the former being the per-unit force on a stationary charged particle and the second term in equation being the incremental force added by the magnetic field. Lagrange’s Equation {#lagranges-equation .unnumbered} ------------------- Let’s return to Newton’s generalized second law, $\vec{f}_P=\vec{f}_I$, and write it in terms of the basic definitions: $$\label{general newtons second law} -\nabla_{\vec{r}}\left[q\phi-q\vec{A}\cdot\vec{v}\right]=\frac{d}{dt}\nabla_{\vec{v}}\left[q\vec{A}\cdot\vec{v}+\frac{1}{2}mv^2\right].$$ Note that $$\label{defn of em T} T=\frac{1}{2}mv^2+q\vec{A}\cdot\vec{v}.$$ In component form becomes $$\label{lagrange components} \frac{d}{dt}\frac{\partial T}{\partial v_i}+q\frac{\partial\phi}{\partial x_i}=\frac{\partial}{\partial x_i}\left[q\vec{A}\cdot\vec{v}\right].$$ Next, we define $$\label{defn of L} L=T-q\phi=\frac{1}{2}mv^2-q\left[\phi-\vec{A}\cdot\vec{v}\right]$$ and note that $\phi$ is not a function of $\vec{v}$ to obtain $$\label{lagranges equation} \frac{d}{dt}\frac{\partial L}{\partial v_i}-\frac{\partial L}{\partial x_i}=0.$$ We now see that $$\label{generalized potential} \psi(\vec{r},\vec{v},t)=\phi(\vec{r},t)-\vec{A}(\vec{r},t)\cdot\vec{v}$$ has the character of a velocity dependent potential.
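The claimed equivalence, that Lagrange's equation with $L = \frac{1}{2}mv^2 - q\left[\phi - \vec{A}\cdot\vec{v}\right]$ reproduces $\vec{f}_a = q\vec{E} + q\vec{v}\times\vec{B}$, can be verified symbolically. The sympy sketch below substitutes concrete polynomial fields of our own choosing (purely illustrative, not from the text) and checks that the Euler-Lagrange expressions coincide term by term with the Lorentz force form:

```python
import sympy as sp

t = sp.symbols('t')
m, q = sp.symbols('m q', positive=True)
x, y, z = sp.symbols('x y z')

# Trajectory components as unspecified functions of time
x1, x2, x3 = (sp.Function(n)(t) for n in ('x1', 'x2', 'x3'))
r = sp.Matrix([x1, x2, x3])
v = r.diff(t)

# Concrete illustrative fields (an arbitrary choice for the check)
phi = x*y*t + z**2
A = sp.Matrix([y*z, t*z*x, x*y**2])

def on_path(expr):
    # Evaluate a field expression on the trajectory r(t)
    return expr.subs({x: x1, y: x2, z: x3})

# E = -grad phi - dA/dt (partial),  B = curl A
E = -sp.Matrix([phi.diff(c) for c in (x, y, z)]) - A.diff(t)
B = sp.Matrix([A[2].diff(y) - A[1].diff(z),
               A[0].diff(z) - A[2].diff(x),
               A[1].diff(x) - A[0].diff(y)])

# L = (1/2) m v^2 - q [phi - A.v], evaluated on the path
L = sp.Rational(1, 2)*m*v.dot(v) - q*on_path(phi) + q*on_path(A).dot(v)

# Euler-Lagrange expressions d/dt (dL/dv_i) - dL/dx_i
EL = sp.Matrix([sp.diff(L, v[i]).diff(t) - sp.diff(L, r[i]) for i in range(3)])

# Lorentz form m a - q E - q v x B, fields evaluated on the path
lorentz = m*r.diff(t, 2) - q*on_path(E) - q*v.cross(on_path(B))

# Setting EL = 0 is the same equation as the Lorentz force law
assert (EL - lorentz).expand() == sp.zeros(3, 1)
```

The cancellation relies on the identity $\nabla(\vec{A}\cdot\vec{v}) - (\vec{v}\cdot\nabla)\vec{A} = \vec{v}\times(\nabla\times\vec{A})$ for $\vec{v}$ independent of position, which is exactly the vector identity invoked in the derivation above.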
We also see that $L$ is the commonly accepted Lagrangian for a particle in the electromagnetic field, usually either assumed on an *ad hoc* basis or derived from the Lorentz force law taken as an assumption.[^16] Equation can, of course, be immediately extended by expressing the Cartesian position variables in terms of generalized coordinates. Standard procedures can then be applied to show that Lagrange’s equation is invariant under this transformation. Thus, the theory just outlined produces both the Lorentz force equation and Lagrange’s equation as results, rather than as assumptions. Finally, we note that the steps leading from the generalized Newton’s second law expressed by equation with its associated force definitions to Lagrange’s equation in are all reversible; in other words, the two equations are equivalent mathematical assertions. Hence, if one accepts the latter, one must accept the former. We feel, therefore, that the theory outlined in this paper offers a solid, theoretically sound approach to introducing both topics in introductory courses in fields and classical mechanics. Acknowledgements {#acknowledgements .unnumbered} ---------------- I would like to express my appreciation to Vladimir Onoochin for a number of productive discussions about the topics treated here. [^1]: H. A. Lorentz, *La Théorie Eléctromagnétique de Maxwell et son Application aux Corps Mouvants* (Leiden, E. J. Brill, 1892). [^2]: H. A. Lorentz, *The Theory of Electrons and its Applications to the Phenomena of Light and Radiant Heat*, 2^nd^ ed., (Teubner, Leipzig, 1916). [^3]: See the sentence immediately after equation (23), page 14 in reference \[2\]. [^4]: See, for example, J. R. Taylor, *Classical Mechanics*, (University Science Books, 2005), page 273; Goldstein, Poole, and Safko, *Classical Mechanics*, (Addison-Wesley, San Francisco, 2002), sec. 1.5, or M. G. Calkin, *Lagrangian and Hamiltonian Mechanics*, (World Scientific, Singapore, 1996), p. 46. [^5]: O. D.
Johns, *Analytical Mechanics for Relativity and Quantum Mechanics*, (Oxford University Press, 2005), sec. 2.17. See also E. J. Konopinski, “What the Electromagnetic Vector Potential Describes,” Am. J. Phys., **46**, 499-502 (1978). [^6]: See D. Griffiths, “Resource Letter EM-1: Electromagnetic Momentum,” Am. J. Phys., **80**, 7-18, (2012) for a thorough discussion of this issue as well as for an extensive list of references. [^7]: Our definitions do not generally resolve the aforementioned bone of contention, though it does imply that $\vec{A}$ is the appropriate quantity insofar as the motion of a single charged particle is concerned. [^8]: See, for example, Feynman, Leighton, and Sands, *The Feynman Lectures on Physics*, (Addison-Wesley, Reading, 1964), v1, sec. 4.1. But see Falk, Hermann, and Schmid, “Energy Forms or Energy Carriers?”, Am. J. Phys., **51**, 1074-1076, (1983). [^9]: P. Bridgman, *The Logic of Modern Physics*, (McMillan, New York, 1958), chapter 1. [^10]: Because their velocities are initially zero we are defining rest mass and ignoring relativistic effects. We will also assume “near” interactions so that retardation effects may be neglected. [^11]: We will assume throughout the following work that a vector can be thought of as a $3\times 1$ column matrix. Hence, we can use matrix manipulations where needed. [^12]: E. Mach, *The Science of Mechanics*, (The Open Court Publishing Company, 1983) 4^th^ ed., chapter 2, section 5. There are additional consistency requirements which were pointed out by others after Mach advanced this definition. See L. Eisenbud, “On the Classical Laws of Motion,” Am. J. Phys., **26**, 144-159, (1958). [^13]: Note that the uncharged particles attract every other particle (weakly!) because of gravitational forces. [^14]: We are assuming the particle has no energy of rotation (so no explicit dependence upon rotational angles or velocities).
[^15]: We are mixing matrix and vector notation here, but as all our vectors are column matrices this should not create confusion. [^16]: See the paper by Konopinski in reference \[5\].
--- abstract: 'Starting as highly relativistic collimated jets, gamma-ray burst outflows gradually decelerate and become non-relativistic spherical blast waves. Although detailed analytical solutions describing the afterglow emission received by an on-axis observer during both the early and late phases of the outflow evolution exist, a calculation of the received flux during the intermediate phase and for an off-axis observer requires either a more simplified analytical model or direct numerical simulations of the outflow dynamics. In this paper we present light curves for off-axis observers covering the long-term evolution of the blast wave, calculated from a high resolution two-dimensional relativistic hydrodynamics simulation using a synchrotron radiation model. We compare our results to earlier analytical work and calculate the consequences of the observer angle with respect to the jet axis both for the detection of orphan afterglows and for jet break fits to the observational data. We find that observable jet breaks can be delayed for up to several weeks for off-axis observers, potentially leading to overestimation of the beaming corrected total energy. When using our off-axis light curves to create synthetic Swift X-ray data, we find that jet breaks are likely to remain hidden in the data. We also confirm earlier results in the literature finding that only a very small number of local Type Ibc supernovae can harbor an orphan afterglow.' author: - 'Hendrik van Eerten, Weiqun Zhang and Andrew MacFadyen' bibliography: - 'oa.bib' title: 'Off-Axis Gamma-Ray Burst Afterglow Modeling Based On A Two-Dimensional Axisymmetric Hydrodynamics Simulation' --- Introduction {#sec:intro} ============ According to the standard fireball shock model, gamma-ray burst (GRB) afterglows are the result of the interaction between a decelerating relativistic jet and the surrounding medium.
Synchrotron radiation is produced by shock-accelerated electrons interacting with a shock-generated magnetic field. The radiation will peak at progressively longer wavelengths and the observed light curve will change shape whenever the observed frequency crosses into different spectral regimes, when the flow becomes non-relativistic and when lateral spreading of the initially strongly collimated outflow becomes significant (see e.g. @Zhang2004 [@Piran2005; @Meszaros2006] for recent reviews). Analytical models have greatly enhanced our understanding of GRB afterglows. Such models rely on a number of simplifications of the fluid properties and radiation mechanisms involved. Both at early relativistic and late non-relativistic stages spherical symmetry can be assumed. At first lateral spreading of the jet has not yet set in and the beaming is so strong that a collimated outflow is still observationally indistinguishable from a spherical flow. Eventually the outflow really has become approximately spherical. Self-similar solutions for a strong explosion can be applied, the Blandford-McKee (BM, @Blandford1976) solution in the relativistic regime, and the Sedov-Taylor-von Neumann (ST, @Sedov1959 [@Taylor1950]) solution in the non-relativistic regime. However, in order to include lateral spreading of the jet and to calculate the light curve for an observer not located on the axis of the jet, the downstream fluid profile has usually been approximated by a homogeneous slab (e.g. @Rhoads1999 [@Kumar2000; @Granot_etal_2002_ApJ; @Waxman2004; @Oren2004]). Structured jet models exist where the effect of lateral expansion is estimated and fluid quantities like Lorentz factor and density depend on the angle of the flow with respect to the jet axis (e.g. @Rossi_PD_2008_MNRAS, also @Granot2007 and references therein). 
However, to gain an understanding of afterglow light curves during all stages of jet evolution and for off-axis observers, large scale multi-dimensional simulations are required. Over the past ten years various groups have combined one-dimensional relativistic hydrodynamics (RHD) simulations with a radiation calculation (e.g. @Kobayashi1999 [@Downes2002; @Mimica2009; @vanEerten2010]). Thanks to specialized techniques such as adaptive-mesh-refinement (AMR), jet simulations in more than one dimension have also become feasible (e.g. @Granot2001 [@Zhang_MacFadyen_2006_ApJS; @Meliani2007]). In this work we present the results of a high-resolution two-dimensional RHD simulation covering the full transition from relativistic to non-relativistic flow. We use the simulation results from @Zhang2009 (hereafter ZM09), but we now calculate for the first time detailed light curves for off-axis observers. The simulations have been performed with the <span style="font-variant:small-caps;">ram</span> code [@Zhang_MacFadyen_2006_ApJS]. Off-axis observations of GRB afterglows are observationally relevant for a variety of reasons. Following the first observations of GRB afterglows, it was immediately realized that, if the afterglow emission was less strongly beamed than the prompt emission, many *orphan* afterglows should in principle still be observable even when the prompt emission was not visible because the observer was positioned too far away from the jet axis [@Rhoads1997; @Rhoads1999]. Detailed light curves from simulations help constrain the expected rate of occurrence of orphan afterglows and help determine to what extent orphan afterglows can possibly remain hidden in observations of type Ibc supernovae. This in turn helps to constrain the fraction of type Ibc supernovae that can in principle be linked to GRBs. 
For observers that are not too far off-axis but are still located within the jet opening angle, the prompt emission still remains visible and they will observe an afterglow light curve that shows a jet break when the jet edges become visible, an effect which is enhanced when lateral spreading becomes significant. The shape of this jet break is expected to depend strongly on the observer angle. If the observer is not on-axis, each edge of the jet becomes visible at a different time and the corresponding break is split in two, or at least becomes smoother. This effect may account for the difficulty in detecting a jet break for many GRBs [@Racusin2009; @Evans2009]. This paper is structured as follows. In section \[hydro\_section\] we briefly review the RHD simulation setup and methods we have applied in ZM09 and that form the basis for this paper as well. In section \[radiation\_section\] we describe how the radiation is calculated for an observer at an arbitrary angle with respect to the jet axis. The resulting light curves are presented in section \[results\_section\]. They are put into context first by a comparison to light curves calculated from the relativistic BM solution for on-axis observers, followed by a comparison to light curves from a simplified homogeneous slab model for off-axis observers. We then apply the simulation to two different observational issues. In section \[jetbreaks\_section\] we use our computed light curves to generate synthetic Swift data, to which we then fit broken and single power laws in order to probe the extent to which X-ray jet breaks can be hidden in the Swift data due to an off-axis observer angle. In section \[orphan\_afterglows\_section\] we confirm the result from @Soderberg2006 that only a very small number of local Ibc supernovae can possibly harbor an orphan GRB afterglow, now using our simulation instead of a simplified analytical model for comparison with the observations.
We discuss the results presented in this paper in section \[discussion\_section\]. The mathematical details of the analytical model with which we have compared our simulation results are summarized in the Appendix. Methods ======= Hydrodynamic Model {#hydro_section} ------------------ We have used the two-dimensional RHD simulation first presented in ZM09 as the basis for our calculations. This simulation was performed using the <span style="font-variant:small-caps;">ram</span> adaptive mesh refinement code [@Zhang_MacFadyen_2006_ApJS]. R<span style="font-variant:small-caps;">am</span> employs the fifth-order weighted essentially non-oscillatory (WENO) scheme [@Jiang_Shu_1996_JCP] and uses the PARAMESH AMR tools [@Macneice_etal_2000] from FLASH 2.3 [@Fryxell_etal_2000_ApJS]. The simulation takes a conic section of the Blandford-McKee (BM) analytical solution [@Blandford1976] as the initial condition, starting from a fluid Lorentz factor directly behind the shock front equal to 20. The isotropic energy of the explosion was set at $E_{iso} = 10^{53}$ erg. The jet half opening angle $\theta_0 = 0.2$ rad ($11.5^\circ$), leading to a total energy in the twin jets of $E_j \approx 2.0 \times 10^{51}$ erg. The circumburst proton number density is taken to be homogeneous and set at $n = 1$ cm$^{-3}$. The pressure $p$ of the surrounding medium is set at a very low value compared to the density $\rho$ ($p = 10^{-10} \rho c^2$, with $c$ the speed of light) and will therefore not be dynamically important. Under these conditions, the starting radius of the blast wave is equal to $R_0 \approx 3.8 \times 10^{17}$ cm. A spherical grid $(r, \theta)$ was used with $0 \le r \le 1.1 \times 10^{19}$ cm and $0 \le \theta \le \pi / 2$. At first, 16 levels of refinement are used, with the finest cell having a size of $\Delta r \approx 5.6 \times 10^{13}$ cm and $\Delta \theta \approx 9.6 \times 10^{-5}$ rad. 
The maximum refinement level is gradually decreased (but always kept at least at 11) during the simulation, making use of the fact that the blast wave widens proportionally to $t^4$ in lab frame time to keep the number of cells radially resolving the blast wave approximately constant. The most important dynamical results from ZM09 are the following. They find that very little sideways expansion takes place for the ultrarelativistic material near the forward shock, while the mildly relativistic and Newtonian jet material further downstream undergoes more sideways expansion. When taking a fixed fraction of the total energy contained within an opening angle as a measure of the jet collimation, it is found that sideways expansion is logarithmic (and not exponential, as used by some early analytic models such as that of @Rhoads1999). This sideways expansion sets in at approximately the time $t_\theta$ calculated by plugging $\gamma = 1 / \theta_j$ into the BM solution, where $\gamma$ is the fluid Lorentz factor directly behind the forward shock and $\theta_j$ is the original jet opening angle. For the simulation settings described above, $t_\theta \approx 373$ days, measured in the frame of the burster. The jet becomes nonrelativistic and the BM solution breaks down at $t \sim t_{NR} \approx 970$ days. The time $t_{NR}$ is estimated by equating the isotropic equivalent energy in the jet to the rest mass energy of the material swept up by a spherical explosion, assuming the jet moves at approximately the speed of light. The transition to spherical flow was found from the simulation to be a slow process, taking until $5 t_{NR}$ to complete. After that time the outflow can be described by the Newtonian Sedov-von Neumann-Taylor (ST) solution. Calculation of Off-Axis Afterglow Emission {#radiation_section} ------------------------------------------ A large number of data dumps (2800) from the hydrodynamic simulation are stored.
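As an aside, the dynamical numbers quoted in this section ($E_j \approx 2.0\times10^{51}$ erg, $R_0 \approx 3.8\times10^{17}$ cm, $t_\theta \approx 373$ days, $t_{NR} \approx 970$ days) can be reproduced with a few lines of arithmetic. The conventions below, in particular the BM relation $\Gamma^2 = 17 E_{iso} / (8\pi n m_p c^5 t^3)$ for the shock Lorentz factor with $\gamma = \Gamma/\sqrt{2}$ for the post-shock fluid, are our assumptions about how the estimates were evaluated; they recover the quoted values to within about a percent:

```python
import math

# cgs constants
c = 2.998e10       # speed of light [cm/s]
m_p = 1.673e-24    # proton mass [g]
day = 86400.0

# Simulation parameters from the text
E_iso = 1e53       # isotropic equivalent energy [erg]
n = 1.0            # circumburst number density [cm^-3]
theta0 = 0.2       # jet half opening angle [rad]

# Energy in the twin jets: E_j = E_iso (1 - cos theta0)
E_j = E_iso * (1.0 - math.cos(theta0))                # ~2.0e51 erg

# Lab-frame time at which the post-shock fluid Lorentz factor equals gamma
def t_of_gamma(gamma):
    Gamma2 = 2.0 * gamma**2                           # shock: Gamma^2 = 2 gamma^2
    return (17.0 * E_iso
            / (8.0 * math.pi * n * m_p * c**5 * Gamma2)) ** (1.0 / 3.0)

R_0 = c * t_of_gamma(20.0)                            # ~3.8e17 cm
t_theta = t_of_gamma(1.0 / theta0) / day              # ~373 days

# t_NR from E_iso = (4 pi / 3) (c t)^3 n m_p c^2
t_NR = (3.0 * E_iso
        / (4.0 * math.pi * n * m_p * c**2)) ** (1.0 / 3.0) / c / day  # ~970 days
```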
These data dumps are then used to calculate the synchrotron emission from all the individual fluid elements at the lab frame time of each data dump. A single emission time corresponds to different arrival times at the observer for different parts of the fluid due to light travel time differences. The radiation from the data dumps is binned over a number of observer times. The size of each fluid element is given by $\Delta V = r^2 \sin{\theta} \Delta r \Delta \theta \Delta \phi$ in 3D. In the 2D axisymmetric simulation the flow is independent of $\phi$ and, if the observer is positioned on-axis, the $\phi$ symmetry allows for considering just the 2D elements $\Delta V = 2 \pi r^2 \sin{\theta} \Delta r \Delta \theta$ provided by the data dumps. When calculating emission for off-axis observers, however, fully 3D data must be created from the 2D fluid elements by extending the data in the $\phi$ direction. The fluid elements are split into smaller elements along the $\phi$ direction of angular size $\Delta \phi$ to account for differences in relativistic beaming and Doppler shifts of emission observed from different angles. In practice we have started with angular resolution such that $r \sin{\theta} \Delta \phi$ is comparable to $\Delta r$ and $r \Delta \theta$, taking into account that the arrival time difference between the near and far edges of the fluid element in the $\phi$ direction should stay very small. We found that the light curves are not very sensitive to the resolution in the $\phi$ direction, even when it is decreased tenfold. The synchrotron emission itself is calculated following @Sari1998. We sum over the contributions of the individual fluid elements. 
In the frame comoving with the fluid element, the spectral power peaks at $$P' = 0.88 \left( \frac{16}{3} \right)^2 \frac{p-1}{3p-1} \frac{\sigma_T m_e c^2}{8 \pi q_e} n' B',$$ where $p$ is the slope of the power law accelerated electron distribution, $\sigma_T$ is the Thomson cross section, $m_e$ is the electron mass, $c$ is the speed of light, $q_e$ is the electron charge, $n'$ is the comoving number density and $B'$ is the comoving magnetic field strength. The field strength $B'$ is determined from the comoving internal energy density $e'_i$ using $B' = \sqrt{ \epsilon_B e'_i 8 \pi}$. Here $\epsilon_B$ is a free parameter that determines how much energy is converted into magnetic energy near and behind the shock front. The shape of the spectrum is determined by the synchrotron critical frequency $\nu'_m$ and the cooling frequency $\nu'_c$. These frequencies are set according to $$\nu'_m = \frac{3}{16} \left( \frac{p-2}{p-1} \frac{\epsilon_e e'_i}{n' m_e c^2} \right)^2 \frac{q_e B'}{m_e c},$$ where $\epsilon_e$ parameterizes the fraction of the internal energy density in the shock-accelerated electrons, and $$\nu'_c = \frac{3}{16} \left( \frac{3 m_e c}{4 \sigma_T \epsilon_B e'_i t / \gamma} \right)^2 \frac{q_e B'}{m_e c}.$$ The cooling break is therefore estimated by using the duration of the explosion as a measure of the cooling time: $t$ is the lab-frame time and $\gamma$ is the Lorentz factor of the fluid element in the lab frame (i.e. the frame of the unshocked medium) used to translate to the time comoving with the fluid element. The emitted power at a given frequency depends on the position of that frequency in the spectrum with respect to $\nu'_m$ and $\nu'_c$ and the relative position of $\nu'_m$ and $\nu'_c$ (*fast cooling* when $\nu'_c < \nu'_m$ and *slow cooling* when $\nu'_m < \nu'_c$). The spectrum consists of connected power law regimes. 
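As a concrete illustration, the comoving quantities defined above ($P'$, $\nu'_m$, $\nu'_c$) can be evaluated directly. The following Python sketch is our own illustration, not the paper's code; the constants are standard CGS values and the function signature is an assumption. It implements the three equations as written and classifies the cooling regime:

```python
import math

# Standard CGS constants
C = 2.99792458e10        # speed of light [cm/s]
M_E = 9.1093897e-28      # electron mass [g]
Q_E = 4.8032068e-10      # electron charge [esu]
SIGMA_T = 6.6524587e-25  # Thomson cross section [cm^2]

def comoving_synchrotron(e_int, n, t_lab, gamma, p=2.5, eps_e=0.1, eps_B=0.1):
    """Comoving peak spectral power and break frequencies of one fluid element,
    implementing the three equations in the text. e_int is the comoving
    internal energy density, n the comoving number density, t_lab the
    lab-frame time and gamma the lab-frame Lorentz factor of the element."""
    B = math.sqrt(8.0 * math.pi * eps_B * e_int)   # comoving field strength
    P_peak = (0.88 * (16.0 / 3.0) ** 2 * (p - 1.0) / (3.0 * p - 1.0)
              * SIGMA_T * M_E * C**2 / (8.0 * math.pi * Q_E) * n * B)
    nu_m = (3.0 / 16.0 * ((p - 2.0) / (p - 1.0) * eps_e * e_int / (n * M_E * C**2)) ** 2
            * Q_E * B / (M_E * C))
    nu_c = (3.0 / 16.0 * (3.0 * M_E * C / (4.0 * SIGMA_T * eps_B * e_int * t_lab / gamma)) ** 2
            * Q_E * B / (M_E * C))
    regime = "fast" if nu_c < nu_m else "slow"
    return P_peak, nu_m, nu_c, regime
```

A useful consistency check on these expressions: doubling $e'_i$ at fixed $n'$ raises $\nu'_m$ by a factor $2^{5/2}$, since $\nu'_m \propto e_i'^2 B' \propto e_i'^{5/2}$.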
For example, for a fluid element the emitted power at observer frequency $\nu$ ($\nu'$ in the comoving frame), for $\nu'_m < \nu' < \nu'_c$ in the slow cooling case, is given by $P' ( \nu' / \nu'_m )^{(1-p)/2}$. The complete shape of the spectrum is given in @Sari1998 and can also be found in the Appendix. The power in the observer frame is then obtained by applying the appropriate beaming factors and Doppler shifts to power and frequency. Finally, the received flux is calculated by taking into account the luminosity distance and redshift (the latter is also used to translate between observer frequency and comoving frequency). In our calculations we have set $p = 2.5$, $\epsilon_e = \epsilon_B = 0.1$. These are typical values found for afterglows. Different redshifts have been calculated, but in this paper we only present results where we have ignored redshift (i.e. $z \equiv 0$) and set the observer luminosity distance at $d_L = 10^{28}$ cm. If the redshift is increased the features of the light curves stretch out to later observer times. Light curves computed for a range of observer redshifts will be presented in an upcoming publication. The main radiation results from ZM09 for an on-axis observer are the following. The jet break due to lateral expansion was found to be weaker than analytically argued, while the jet break due to jet edges becoming visible was stronger than expected from the simplest analytical models, although not unexpected from calculations taking limb-brightening into account. The weaker jet break due to lateral spreading can be understood from the lateral spreading being logarithmic instead of exponential, as often assumed in analytical models (see ZM09, Fig. 3). The long transition time to the nonrelativistic regime for the blast wave was already mentioned in the previous section. When the blast wave has become nonrelativistic the counterjet is no longer beamed away from the observer. 
It becomes distinctly visible around $t_{cj} = 2 ( 1+z) t_{NR}$, with the ratio of counterjet to forward jet flux at 1 GHz peaking at a value of 6 around 3800 days for the simulation settings (at $z = 1$). Numerical Results {#results_section} ================= ![image](fig1.eps){width="\textwidth"} ![Temporal decay index $\alpha$ assuming $F \propto t^{-\alpha}$. The observer frequency is $10^9$ Hz. The vertical lines at 3.5 days indicate the jet break time estimates for observers at 0, 0.1 and 0.2 radians (from left to right), using equation \[jetbreak\_time\_equation\]. []{data-label="slopes_low_figure"}](fig2.eps){width="1.0\columnwidth"} ![Temporal decay index $\alpha$ assuming $F \propto t^{-\alpha}$. The observer frequency is $10^{17}$ Hz. The vertical lines at 3.5 days indicate the jet break time estimates for observers at 0, 0.1 and 0.2 radians (from left to right), using equation \[jetbreak\_time\_equation\].[]{data-label="slopes_high_figure"}](fig3.eps){width="1.0\columnwidth"} We first present the main results for both small and large observer angles, looking both at the light curves and the corresponding temporal slopes. An overview of light curves is shown in Fig. \[collective\_figure\], where we have plotted multi-frequency light curves spanning from $10^9$ to $10^{17}$ Hz. The temporal slopes for the lowest frequency $10^9$ Hz and the highest frequency $10^{17}$ Hz are separately plotted in Figs. \[slopes\_low\_figure\] and \[slopes\_high\_figure\]. For the on-axis results we estimate the jet break, a combination of lateral spreading and jet edges becoming visible, to occur around 3.5 days (the 7 days mentioned in ZM09 is for $z = 1$). Direct comparisons between different observer angles at the same frequency are shown in Figs. \[model2\_figure\] and \[supernovae\_figure\], at 8.46 GHz. As the observer angle increases, the jet break splits into two for observers still inside the jet. 
Once the observer is positioned at the jet edge, only one break remains, and it is significantly postponed. This effect is similar across all frequencies. The steepest drop in slope is the one associated with the edge of the jet furthest from the observer and can therefore be estimated to occur around $$t_j = 3.5 (1+z) E_{iso,53}^{1/3} n_1^{-1/3} \left(\frac{\theta_0 + \theta_{obs}}{0.2}\right)^{8/3} \textrm{ days}, \label{jetbreak_time_equation}$$ where $E_{iso,53}$ is the isotropic equivalent energy in units of $10^{53}\,{{\mathrm{erg}}}$, $n_1$ is the density of the medium in units of ${{\mathrm{cm}}}^{-3}$, $\theta_0$ is the jet half opening angle, and $\theta_{obs}$ is the observer angle relative to the jet axis. Jet breaks can be used to estimate the opening angles of GRB jets. It is usually assumed in GRB afterglow modeling that the observer is on the jet axis. However, if the observer is near the edge of the jet, the jet opening angle can be overestimated by a factor of up to 2, and the beaming-corrected total energy can be overestimated by a factor of up to 4. For a typical observer at $\theta_{obs} \approx 2\theta_0/3$, the beaming-corrected energy can be overestimated by a factor of $\sim 3$. The observational implications of this effect will be further discussed in Section \[discussion\_section\]. At high observer angles, the rise of the light curve is postponed until the point where relativistic beaming has weakened sufficiently for the observer to be in the light cone of the radiating fluid. Due to limb-brightening, the drop in temporal slope following a jet break initially overshoots its asymptotic value. After that it starts to change again due to the onset of the transition into the nonrelativistic regime and the rise of flux from the counterjet, before it finally settles into its asymptotic value for the nonrelativistic regime. 
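The characteristic times quoted in this paper ($t_\theta \approx 373$ days and $t_{NR} \approx 970$ days from the Methods section, and the jet break estimate of equation \[jetbreak\_time\_equation\]) follow directly from the stated parameters. The Python sketch below is our own check, not the paper's code; it assumes the standard BM shock relation $\Gamma_{sh}^2 = 17 E_{iso} / (8 \pi n m_p c^5 t^3)$ with $\gamma^2 = \Gamma_{sh}^2 / 2$, and standard CGS constants:

```python
import math

C = 2.99792458e10     # speed of light [cm/s]
M_P = 1.6726231e-24   # proton mass [g]
DAY = 86400.0

E_ISO = 1.0e53        # isotropic equivalent energy [erg]
N0 = 1.0              # circumburst number density [cm^-3]
THETA0 = 0.2          # jet half opening angle [rad]

# t_theta: lab-frame time at which the fluid Lorentz factor behind the
# shock drops to 1/theta_0 (assumed BM scalings, see lead-in).
gamma_sh_sq = 2.0 / THETA0**2
t_theta = (17.0 * E_ISO / (8.0 * math.pi * N0 * M_P * C**5 * gamma_sh_sq)) ** (1.0 / 3.0)

# t_NR: equate E_iso to the rest-mass energy swept up by a spherical
# explosion expanding at roughly the speed of light.
t_nr = (3.0 * E_ISO / (4.0 * math.pi * N0 * M_P * C**5)) ** (1.0 / 3.0)

def t_jet_days(theta_obs, z=0.0, e_iso_53=1.0, n1=1.0, theta_0=0.2):
    """Off-axis jet break time estimate of equation [jetbreak_time_equation]."""
    return (3.5 * (1.0 + z) * e_iso_53 ** (1.0 / 3.0) * n1 ** (-1.0 / 3.0)
            * ((theta_0 + theta_obs) / 0.2) ** (8.0 / 3.0))

print(t_theta / DAY)    # ~373 days
print(t_nr / DAY)       # ~970 days
print(t_jet_days(0.2))  # break for an observer at the jet edge: ~22 days
```

Since an on-axis fit would attribute a break at $t_j$ to a jet of half opening angle $\theta_0 + \theta_{obs}$, an observer at the jet edge infers an opening angle too large by a factor of 2 and a beaming-corrected energy too large by a factor of 4, consistent with the statement above.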
In order to put the simulation results in context and to differentiate between the break due to lateral spreading and the break due to the edges becoming visible, we will compare the simulation results against the BM solution for a hard-edged jet without lateral spreading in the next subsection. Afterglow emission for on-axis observer – hydrodynamic simulation versus analytic model --------------------------------------------------------------------------------------- ![Direct comparison between the on-axis spectrum from the simulation (solid curve) and the BM solution with locally calculated cooling times (dashed curve), omitting self-absorption. The spectra are taken at 1 day in observer time, well before the jet break. The leftmost vertical dotted line denotes the analytically calculated position of $\nu_m$, the rightmost that of $\nu_c$ (calculated for the BM solution).[]{data-label="exactspectrum_figure"}](fig4.eps){width="1.0\columnwidth"} In Fig. \[exactspectrum\_figure\] we show a comparison between on-axis spectra at 1 day in observer time, calculated from the simulation and from an analytical description using the BM solution plus synchrotron emission [@vanEerten2009]. The observed time is well before the jet break and before significant lateral spreading or slowing down of the jet has occurred. With the dynamics for both the simulation and the BM solution still being nearly equal, the figure therefore mainly shows the difference between the two approaches to synchrotron radiation. The differences below the cooling break $\nu_c$ are marginal and can be attributed to the absolute scaling of the emitted power. The difference beyond $\nu_c$ is significantly larger. 
The reason for this is that the simulation follows the approach to electron cooling from @Sari1998, where the cooling time is globally estimated by setting it equal to the duration of the explosion, whereas @vanEerten2009 build upon @Granot2002 and calculate the local cooling time for each fluid element, which is given by the time passed since the fluid element has crossed the front of the shock. When the cooling time is calculated locally, the transition between pre- and post-cooling is smooth, with areas of the fluid further downstream making the transition before those at the front. Having the cooling time set globally results in a sudden global transition between the pre- and post-cooling regimes instead, with the same asymptotic spectral slope but a different value for the cooling break frequency and therefore a different absolute scaling of the flux for $\nu > \nu_c$. Note that even for a global cooling time, the sudden transition in the emission frame will still get smeared out in the observer frame. ![Direct comparison of simulation results and heuristic description based on the Blandford-McKee exact solution. Simulated light curves are shown for observer frequencies $10^{13}$ and $10^{17}$ Hz (top and bottom solid line respectively). Analytical light curves are drawn for global ([*dashed*]{}) and local ([ *dotted*]{}) cooling respectively. The top line of each pair is for a spherical model whereas the lower line is for a conic section only. The 3.5 days estimate for the jet break is indicated with a vertical dotted line.[]{data-label="exactlightcurve_figure"}](fig5.eps){width="1.0\columnwidth"} Figure \[exactlightcurve\_figure\] shows a direct comparison between the on-axis light curves obtained from the simulation and light curves from the same analytical model as before. 
If no lateral spreading is assumed, a conic section of the spherically symmetric BM solution can be used to show the difference between the jet break due to the edges becoming visible and the combination of this break and the jet break due to lateral spreading. The exact light curves shown in Fig. \[exactlightcurve\_figure\] do not cover the entire observer time span because the BM solution ceases to be valid around a shock Lorentz factor of $\gamma \backsim 2$ (and slightly earlier for a local cooling calculation). The light curves are truncated at the point where this would start to affect the observed emission. From the figure it can be seen that including lateral spreading has the effect that the jet break becomes steeper and starts slightly earlier (which confirms the low resolution comparison shown in @vanEerten2010b). The figure also shows the strong overshoot in steepening of the light curve following the jet break, also seen in Fig. \[slopes\_high\_figure\]. This overshoot has also been discussed in @Granot2001 and ZM09. The difference between the detailed local treatment of electron cooling and the global treatment of cooling is important when applying simulation results to actual data: it should be kept in mind that the simulation light curves systematically underestimate the flux beyond the cooling break. Electron cooling aside, the long term qualitative behavior of the light curve is fully captured by the simulation, with the results covering not only the relativistic regime but also the non-relativistic regime and the transition in between. Afterglow emission for off-axis observer – hydrodynamic simulation versus analytic model {#off_axis_comparison_section} ---------------------------------------------------------------------------------------- No exact solution exists that fully includes lateral spreading of the jet. 
In the Appendix we describe a simplified analytical model that approximates the behavior of the jet and allows us to calculate the observed flux for an observer at an arbitrary angle. Many such models exist in the literature (see e.g. @Oren2004 [@Waxman2004; @Soderberg2006; @Huang2007]) and our model does not differ strongly from these. Its distinguishing features are that it smoothly connects the relativistic BM solution to the nonrelativistic ST solution and that a conservative approach to lateral spreading is used where the jets start to spread at the speed of sound (and therefore logarithmically) upon approaching the nonrelativistic regime. ![Direct comparison between simulation results (solid lines) and analytical model (dashed lines) for different observer angles, at a radio frequency of 8.46 GHz.[]{data-label="model2_figure"}](fig6.eps){width="1.0\columnwidth"} ![Light curves at 8.46 GHz for an observer at $\theta_{obs} = 90^\circ$. The *simulation* and *simplified model* curves are repeated from Fig. \[model2\_figure\]. If we ignore the fluid velocity in the lateral direction for the purpose of the radiation calculation, we get the slightly lower curve labeled *no lateral beaming*. The emission is still well in excess of a light curve from a radiation calculation that takes the BM profile instead of the simulated fluid profile as input, labeled *Blandford-McKee*. Only by completely omitting the emission contribution from matter that has spread sideways out of the original jet opening angle do we get a flux level that is initially comparable to that of the exact BM solution and the simplified model. This is shown by the *truncated cone* curve.[]{data-label="beaming_figure"}](fig7.eps){width="1.0\columnwidth"} In Fig. \[model2\_figure\] we show a comparison between off-axis light curves generated using the analytical model and light curves calculated from the simulation. Qualitatively both simulation and model show the same features. 
Quantitatively, however, the differences are substantial. The simulation light curves peak earlier than the model light curves and do so at lower peak luminosity. At early times, the emission from off-axis simulation light curves is higher than that from the corresponding model light curves. The early time slopes for simulation and model light curves are similar. However, the further off-axis the observer, the later the observer time at which the simulation provides full coverage. In the figure, we have truncated the light curves at the observer time before which radiation from the blast wave with $\gamma > 20$ would have been required. For two blast wave jets viewed sideways ($\theta_{obs} = 90^\circ$), for example, full coverage starts beyond 100 days. This means that, even though the initial slopes agree between simulation and model, the early time shape of off-axis light curves in reality will be largely dictated by the initial shape of the blast wave, which does not need to be anything like the BM solution. Collapsar jet simulations, for example, indicate the existence of a cocoon around the emerging jet (see e.g. @Zhang_WM_2003_ApJ [@Zhang_WH_2004_ApJ; @2007ApJ...665..569M; @Mizuta2009]). We can understand why the off-axis light curves from the simulation are initially brighter than those from the simplified model by looking at one of the angles in more detail. In Fig. \[beaming\_figure\] we have again plotted the simulation and model light curves for an observer at $\theta_{obs} = 90^\circ$ (1.57 rad), together with a number of variations. We have now also included a light curve where we continue using the BM solution to determine the local fluid conditions, instead of the dynamical simulation results, but otherwise proceed as if we were reading the fluid quantities from disc (because of this, the curve also serves as a consistency check on the radiation calculation itself). The same approach has been used to generate the *BM global cooling* light curve in Fig. 
\[exactlightcurve\_figure\]. The curve initially lies significantly below the simulation light curve. It also lies above the simplified model curve. The flux level of this BM curve is determined in part by the numerical resolution that we assume. In the plot we have used a resolution similar to that for the simulation curve, which initially resolves the radial profile with approximately 17 cells. Increasing the resolution moves the BM curve closer to the simplified slab curve, and not to the simulation curve. The difference between the BM curve and the simulation curve is real and we have added to the plot two hybrid simulation / model curves to make clear the cause of this difference. First, when we completely ignore the velocity $v_\theta$ in the angular direction for the purpose of calculating the emission but otherwise still use the simulation dynamics, we find that the resulting light curve, labeled *no lateral beaming* in the figure, initially lies somewhat below the simulation curve before the two eventually merge. This tells us that part of the observed flux level is caused by beaming towards the observer of material spreading sideways, but that this is not the main cause of the difference between simulation and the hard edge jet models. At late times beaming no longer plays any role and the two curves are indeed no longer expected to be different. The main reason for the difference is shown by the second additional curve. When calculating the light curve labeled *truncated cone* in the figure we have omitted the contribution to the radiation of any material that has spread sideways outside the original jet opening angle. The resulting curve lies very close to the BM light curve at first, before becoming orders of magnitude lower than all other curves. 
The late time behavior is as expected, for then only a small fraction of the energy and particle density is still contained within the original opening angle; the actual simulation flow has become roughly spherical. The early time behavior and the similarity between the truncated cone and BM light curves are more relevant. They demonstrate that the light curve for an off-axis observer is dominated by the emission from material that has spread sideways out of the original jet opening angle, even though this material carries very little energy and the sideways spreading is not yet dynamically important. In hindsight, the fact that the material on the side of the jet dominates the observed radio flux can easily be understood. It is not so much due to the fact that this material moves a little faster towards the observer, for the $v_\theta$ contribution to the beaming is not that strong (as we have shown above). It is instead due to the fact that the radial velocity component $v_r$ drops quickly outside of the original jet opening angle and as a result the material outside the original jet opening angle is not beamed away from the observer as much as the material in the original jet cone. By contrast, for an on-axis observer the opposite is true and for a long time the received flux is dominated by emission from material inside the original jet opening angle. This has been demonstrated explicitly by @vanEerten2010b. In the above sections we did not discuss the effects of synchrotron self-absorption. We will postpone addressing synchrotron self-absorption, which is not included in the simulation, until section \[orphan\_afterglows\_section\]. Application: Hidden Jet Breaks? {#jetbreaks_section} =============================== A large number of X-ray afterglow light curves have been obtained by the *Swift* satellite since it was launched in 2004 [@Gehrels2004]. 
In a surprisingly large number of cases, these light curves fail to show a clearly discernible jet break [@Racusin2009; @Evans2009]. Using our simulation results as a basis to generate synthetic *Swift* data sets for observers positioned at different angles from the jet axis, we show that the effect of observer position on the temporal evolution inferred from the data can be profound and sometimes render the jet break difficult to detect. Procedure for creating synthetic data ------------------------------------- The synthetic data sets that we produce should be comparable to those produced by the on-line *Swift* repository [@Evans2007]. They should also have data points at sufficiently late times that, if this were actual data, the jet break would be considered missing and not merely delayed. We therefore make sure that we have data up to at least 10 days, in accordance with the criteria for their ‘complete’ sample set by @Racusin2009. The observed on-axis jet break for our simulation occurs roughly three days. Synthetic light curves have also been created from an underlying model by @Curran2008 (who find that even broken power law models observed on-axis can occasionally be mistaken for a single power law decline) and we follow the same procedure as described in that paper, changing only the time span and adding an additional late time data point if necessary. We then have: - Constant counts and 1 $\sigma$ fractional error of 0.25 per data point. - 94 minute orbits (47 min on/off due to *Swift*’s low-Earth orbit). - Fractional exposure drops from 1.0 to 0.1 after one day (when *Swift* is usually no longer dedicated completely to observing the burst). - Rate cut off at $5 \times 10^{-4}$ cts/s. - Observed number of cts/s is scaled to 0.1 at 1.0 day. ![Synthetic Swift data generated from simulation light curve for an on-axis observer and $p = 2.5$. 
The simulation flux has been scaled to a count rate of 0.1 cts/s at 1 day observer time.[]{data-label="synswiftonaxis_figure"}](fig8.eps){width="1.0\columnwidth"} ![Synthetic Swift data generated from simulation light curve for an on-edge observer (i.e. at 0.2 rad) and $p = 2.5$. The simulation flux has been scaled to a count rate of 0.1 cts/s at 1 day observer time.[]{data-label="synswiftonedge_figure"}](fig9.eps){width="1.0\columnwidth"} We start generating data points from $3 \times 10^4$ s and continue until the rate drops below $5 \times 10^{-4}$ cts/s. This starting point is chosen such that we have full coverage from the simulation at all observer angles under consideration. Like @Curran2008, we increase the number of counts per bin to a number well in excess of the numbers mentioned by @Evans2007. This has no physical significance but is used to generate synthetic light curves containing around 30 data points (the synthetic curves from Curran et al. contain more data points because they use an earlier starting time). If needed, a late time data point is added to ensure that at least one data point is observed after ten days. The last one or two data points, where the count rates are less than $10^{-3}$ cts/s, get a larger fractional error of 0.5. Out of the thousands of synthetic curves that have been generated, two randomly selected example synthetic light curves are shown in Figs. \[synswiftonaxis\_figure\] and \[synswiftonedge\_figure\]. We generate light curves for observer angles 0.00, 0.02, 0.04, 0.06, 0.08, 0.1, 0.12, 0.14, 0.16, 0.18 and 0.2 radians. Note that by fixing the count rate at 0.1 cts/s after one day, we end up comparing on-axis observations to off-axis observations that are relatively brighter (i.e. corresponding to closer GRBs). As can be seen in Fig. \[collective\_figure\], off-axis light curves are less bright than on-axis curves for the same physical parameters. 
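The recipe above can be condensed into a short script. The sketch below is our own illustration, not the paper's code: `model_rate` is a hypothetical broken power law stand-in for the actual simulation light curve, and the constant-counts binning is simplified to one data point per usable orbit (one per ten orbits after the exposure drop). All numerical values simply restate the bullet points.

```python
import math
import random

DAY = 86400.0
ORBIT = 94 * 60.0   # Swift orbital period [s]
CUTOFF = 5.0e-4     # rate cutoff [cts/s]

def model_rate(t):
    """Hypothetical stand-in for the simulation light curve: a sharply
    broken power law breaking at 3.5 days, normalized to 0.1 cts/s at 1 day."""
    t_b, a1, a2 = 3.5 * DAY, 1.4, 2.1
    norm = 0.1 / (DAY / t_b) ** (-a1)   # enforces model_rate(1 day) = 0.1
    return norm * (t / t_b) ** (-(a1 if t < t_b else a2))

def synthetic_swift_data(seed=42):
    """One synthetic data set following the bullet points in the text."""
    random.seed(seed)
    data = []
    t = 3.0e4   # first data point at 3 x 10^4 s
    while model_rate(t) >= CUTOFF:
        rate = model_rate(t)
        frac_err = 0.25 if rate >= 1.0e-3 else 0.5   # larger errors for faint points
        observed = random.gauss(rate, frac_err * rate)
        data.append((t, observed, frac_err * rate))
        # fractional exposure 1.0 -> 0.1 after one day: one point per orbit
        # early on, one per ten orbits afterwards (our simplification)
        t += ORBIT if t < DAY else 10.0 * ORBIT
    return data
```

With these stand-in slopes the generated set contains a few tens of points and extends past ten days, comparable to the synthetic curves described in the text.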
Fitting procedure and results ----------------------------- We follow @Curran2008 again when fitting the synthetic data automatically, using the *simulated annealing* method to minimize the $\chi^2$ of the residuals. The data are first fit to a single power law, then to a sharply broken power law, $$F(t) = N \left\{ \begin{array}{rl} (t / t_b)^{-\alpha_1} & \text{if } t < t_b, \\ (t / t_b)^{-\alpha_2} & \text{if } t > t_b. \end{array} \right.$$ Curran et al. use a smoothly broken power law, but a sharply broken power law is also used by @Racusin2009. After each fit the count rates of the data points are re-perturbed from their original on-model values and a Monte Carlo analysis using 1000 trials is used to obtain average values and $1 \sigma$ Gaussian deviations of the best fit parameters and F-test probabilities. This process is repeated for the list of observer angles mentioned previously. We have set $p = 2.5$. The F-test is a measure of the probability $F_{prob}$ that the decrease in $\chi^2$ associated with the addition of the two extra parameters of the broken power law, $\alpha_2$ and $t_b$, arises by chance. When $F_{prob} \gtrsim 10^{-2}$ a single power law is commonly favored; when $10^{-5} \lesssim F_{prob} \lesssim 10^{-2}$ neither is favored, but a single power law is usually presumed as the simpler model; and when $F_{prob} \lesssim 10^{-5}$ a broken power law is favored. ![Results for F-test as function of observer angle. 
Note that a typical observer angle is 0.13.[]{data-label="Fproblog_figure"}](fig10.eps){width="1.0\columnwidth"}

  $\theta_{obs}$   $\alpha$           $\alpha_1$        $\alpha_2$        $t_b$ ($10^5$ s)
  ---------------- ------------------ ----------------- ----------------- ------------------
  0.00             $1.69 \pm 0.040$   $0.81 \pm 0.16$   $3.32 \pm 0.79$   $2.3 \pm 0.80$
  0.02             $1.68 \pm 0.037$   $0.76 \pm 0.17$   $3.14 \pm 0.73$   $2.1 \pm 0.83$
  0.04             $1.63 \pm 0.038$   $0.81 \pm 0.15$   $2.96 \pm 0.61$   $2.1 \pm 0.72$
  0.06             $1.78 \pm 0.036$   $0.84 \pm 0.16$   $3.05 \pm 0.58$   $2.1 \pm 0.70$
  0.08             $1.52 \pm 0.040$   $0.78 \pm 0.19$   $2.26 \pm 0.37$   $1.4 \pm 0.54$
  0.10             $1.51 \pm 0.040$   $0.80 \pm 0.20$   $2.07 \pm 0.26$   $1.2 \pm 0.45$
  0.12             $1.49 \pm 0.042$   $0.91 \pm 0.26$   $1.95 \pm 0.29$   $1.2 \pm 0.55$
  0.14             $1.65 \pm 0.035$   $1.07 \pm 0.16$   $2.36 \pm 0.52$   $2.0 \pm 1.1$
  0.16             $1.46 \pm 0.042$   $1.00 \pm 0.28$   $1.95 \pm 0.47$   $1.6 \pm 0.89$
  0.18             $1.40 \pm 0.040$   $0.91 \pm 0.36$   $1.85 \pm 0.50$   $1.6 \pm 0.97$
  0.20             $1.42 \pm 0.036$   $0.86 \pm 0.21$   $2.01 \pm 0.45$   $1.9 \pm 1.0$

  : Average temporal power law slopes and jet break times $t_b$ for 1000 Monte Carlo iterations per observer angle $\theta_{obs}$. Here $\alpha$ denotes the single power law slope, $\alpha_1$ the pre-break broken power law slope and $\alpha_2$ the post-break broken power law slope.[]{data-label="fit_results_table"}

  $\theta_{obs}$   $\log( F_{prob} )$   single   ambiguous   broken
  ---------------- -------------------- -------- ----------- --------
  0.00             $-7.1 \pm 1.7$       0        130         870
  0.02             $-6.9 \pm 1.6$       0        148         852
  0.04             $-6.6 \pm 1.6$       0        202         798
  0.06             $-7.2 \pm 1.6$       0        60          940
  0.08             $-5.2 \pm 1.4$       6        510         484
  0.10             $-4.5 \pm 1.4$       22       698         280
  0.12             $-3.6 \pm 1.4$       129      767         104
  0.14             $-4.6 \pm 1.5$       21       641         338
  0.16             $-2.8 \pm 1.3$       274      675         51
  0.18             $-3.1 \pm 1.4$       259      686         55
  0.20             $-4.4 \pm 1.5$       37       663         300

  : Average $\log(F_{prob})$ values and classifications based on the criteria described in the text. For each angle, the classifications add up to 1000 Monte Carlo runs. 
[]{data-label="break_times_results_table"} The results of the fitting procedure are shown in Tables \[fit\_results\_table\] and \[break\_times\_results\_table\] and in Fig. \[Fproblog\_figure\]. In Table \[fit\_results\_table\] a slope $\alpha$ with error $\Delta \alpha$ means that if an observer were to fit a *Swift* data set with similar count rate and duration, and if the *Swift* data was produced by an explosion that is accurately described by our numerical model, then that observer would find a slope that lies within $\Delta \alpha$ of $\alpha$ (the errors therefore do not reflect the accuracy of the Monte Carlo run, which has converged to far greater precision). The analytically expected slopes for an on-axis observer and $p=2.5$ are 1.375 before and 2.125 after the jet break. These values are not even reproduced within $1 \sigma$, which for the post-break slope can largely be attributed to the overshoot in slope directly after the break (see also section \[results\_section\]). Although the inaccuracy of the pre-break slope inferred from (synthetic) light curves will be smaller when earlier time data is available, which is often the case for *Swift* light curves, a systematic difference will remain (see also @Johannesson2006). This emphasizes the importance of numerical modeling for the proper interpretation of *Swift* data. The situation gets worse when we move the observer off-axis. As Fig. \[Fproblog\_figure\] shows, even though a jet break is clearly detected on-axis for our physics settings, it can become hard to distinguish from a single power law for an observer positioned at $\theta_j / 2$. The average observer angle one would expect when observing jets oriented randomly in the sky is $2\theta_j /3$ (for small jet opening angles, and assuming that the observer angle lies between zero and the jet opening angle). Larger observer angles will lead to *orphan afterglows* that we discuss separately below. 
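The fitted model and the classification criteria can be written compactly. The sketch below is our own summary of the procedure described in this section; the actual fitting machinery (simulated annealing plus Monte Carlo re-perturbation) is omitted, and the function names are ours:

```python
def broken_power_law(t, N, t_b, a1, a2):
    """Sharply broken power law fitted to the synthetic data sets."""
    return N * (t / t_b) ** (-(a1 if t < t_b else a2))

def classify(f_prob):
    """Classification criteria described in the text."""
    if f_prob < 1e-5:
        return "broken"     # broken power law favored
    if f_prob < 1e-2:
        return "ambiguous"  # neither favored; single power law presumed
    return "single"         # single power law favored
```

The "single", "ambiguous" and "broken" counts in the classification table correspond to these three outcomes over 1000 Monte Carlo iterations per observer angle.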
It therefore follows that a significant number of jet breaks may remain hidden in the data due to the jets not being observed directly on-axis. In Table \[break\_times\_results\_table\] the classifications for the individual Monte Carlo iterations are also counted. Assuming again that jets are oriented randomly in the sky and that jets are only observed up to observer angles equal to $\theta_j$, we can calculate how often an afterglow with the physical parameters of the simulation would be classified as showing a jet break. For each angle $\theta_i$ we have classified $n_i$ out of 1000 synthetic curves as showing a jet break. This means that we will classify the afterglow described by the simulation as showing a jet break only $\sum_i n_i \sin( \theta_i ) / \sum_i 1000 \sin( \theta_i ) \times 100 \% \approx 29 \%$ of the time. This value is only a very rough estimate, for it depends on the $F_{prob} \lesssim 10^{-5}$ criterion, which is to some extent arbitrary. Also, as noted before, the off-axis curves are relatively brighter due to the fixed count rate. The requirement of having data up to ten days introduces a selection effect as well. For all results above, it should be kept in mind that they have been obtained for a single half opening angle of 0.2 radians (approx. $11.5^\circ$), which is relatively large (although not extremely so and within the range of jet opening angles observationally inferred from *Swift* data). This results in a later jet break than a smaller opening angle would lead to. On the other hand, we have set $z = 0$, which again moves the jet break to earlier observer times and thereby compensates for the large opening angle. The general effect of higher observer angles is that both jet edges become observable at different times. 
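The solid-angle weighted detection fraction above can be checked directly against the counts in the classification table. A minimal verification in Python (the counts are copied from the table):

```python
import math

# "broken" counts (out of 1000 Monte Carlo runs per observer angle),
# copied from the classification table.
theta = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.14, 0.16, 0.18, 0.20]
n_broken = [870, 852, 798, 940, 484, 280, 104, 338, 51, 55, 300]

# Solid-angle weighting for randomly oriented jets observed within theta_j:
frac = (sum(n * math.sin(th) for n, th in zip(n_broken, theta))
        / sum(1000 * math.sin(th) for th in theta))
print(100 * frac)  # roughly 28-29 percent
```

With the tabulated counts this evaluates to roughly 28-29%, consistent with the estimate quoted in the text.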
The reason that a broken power law did not always produce a significantly better fit than a single power law is mainly that the full drop in count rate associated with the farther edge was pushed out beyond ten days for off-axis observers (Section \[results\_section\]). For strongly collimated jets with small opening angles this effect is therefore expected to be less severe. The often large difference between physical parameters such as $p$ (affecting the slope of the light curve) and $\theta_j$ (affecting the break time) as used in a model and the values inferred for these parameters from synthetic data created from that model has been discussed in detail by @Johannesson2006. They also include the observer angle as a model parameter, but do not discuss it further in their paper. Application: Orphan Afterglow Searches {#orphan_afterglows_section} ====================================== The existence of orphan afterglows is an important and general prediction of current afterglow theories. Regardless of the GRB launching mechanism and the initial baryon content of the jet, eventually synchrotron emission from a decelerating baryonic blast wave should be observable for any observer angle. For this reason, various groups have looked for orphan afterglows, both at the optical and radio frequencies (e.g. @Levinson_etal_2002_ApJ [@Gal-Yam_etal_2006_ApJ; @Soderberg2006; @Malacrino_etal_2007_AA]). Few positive detections have been reported, and surveys and archival studies have mainly served to establish constraints on GRB rates and beaming factors. @Soderberg2006, for example, conclude from late time radio observations of 68 local type Ibc supernovae (SNe) that less than $\backsim 10 \%$ of such SNe are associated with GRBs, and constrain the GRB beaming factor to be $\left< (1 - \cos{\theta_j})^{-1} \right> \lesssim 10^4$. A lower limit to the beaming factor of $\left< (1 - \cos{\theta_j})^{-1} \right> \gtrsim 13$ is provided by @Levinson_etal_2002_ApJ. 
![VLA late time radio limits ($3 \sigma$) for 66 local type Ibc supernovae compared against simulation results. All supernova redshifts have been ignored (the largest redshift, that of SN 1991D, is $\backsim 0.04$). The fluxes have been rescaled to luminosities. All VLA observations were done at 8.46 GHz; the afterglow light curve is calculated at the same frequency. As in the rest of this paper, the simulation jet half opening angle is $11.5^\circ$.[]{data-label="supernovae_figure"}](fig11.eps){width="1.0\columnwidth"} Such estimates require a model describing the shape of off-axis light-curves, and are therefore sensitive to model assumptions. Eventually, comparing observations and detailed simulations like the one described in this paper will place the most accurate observational limits on orphan afterglow characteristics. A large number of simulations is required to fully explore the afterglow parameter space. We can, however, use the single simulation of this paper that has typical values for the explosion parameters to confirm the result from @Soderberg2006 that all 68 supernova observations in their sample are significantly fainter than a standard afterglow viewed off-axis. This confirmation is shown in fig. \[supernovae\_figure\], where we have plotted 66 supernovae radio upper limits (omitting SN 1984L and SN 1954A, which were not observed at 8.46 GHz, from the original 68) together with our off-axis simulated light curves. Note that the jet half opening angle in our simulation is $11.5^\circ$, whereas @Soderberg2006 use $5^\circ$. The fact that the early time flux received by an off-axis observer is actually stronger than analytically expected (as shown in section \[off\_axis\_comparison\_section\], where model and simulation are compared directly) only strengthens the case made by Soderberg et al. ![Analytically calculated light curves for different observer angles, with and without self-absorption. The observer frequency is set at 8.46 GHz. 
Self-absorption influences the light curves only until a few hundred days at most. At 8.46 GHz, the light curve for an observer at 90 degrees is the same with and without self-absorption enabled. The jet half opening angle is $11.5^\circ$. The VLA late time radio limits for the local Ibc SNe are included as well.[]{data-label="selfabsorption_figure"}](fig12.eps){width="1.0\columnwidth"} A possible caveat to the above is that our simulation light curves do not include the effect of synchrotron self-absorption. Although we cannot completely rule out that this plays a role without actually calculating it, we can nevertheless look at the effect of self-absorption on the model light curves, having already established that model and simulation lead to at least qualitatively similar light curves in section \[off\_axis\_comparison\_section\]. In Fig. \[selfabsorption\_figure\] we show model light curves with and without synchrotron self-absorption, calculated as explained in the appendix. The figure shows that the effect of self-absorption is initially significant for an on-axis observer but becomes less pronounced for observers further off-axis. For an observer at $90^\circ$, the light curves with and without self-absorption are effectively identical. Aside from the minimal differences due to the analytical model assumptions, the main difference between this figure and fig. 1 from Soderberg et al. is due to the different jet opening angles. For our wider jet opening angle, only the two earliest supernovae lie clearly above the $90^\circ$ curve. Summary and Discussion {#discussion_section} ====================== In this paper we present broadband GRB afterglow light curves calculated assuming synchrotron emission from a high-resolution relativistic jet simulation in 2D. We have expanded the work presented in @Zhang2009 to include observers positioned off the jet symmetry axis, both at small and large angles. 
For the jet simulation we have used the <span style="font-variant:small-caps;">ram</span> adaptive-mesh-refinement code, starting from the Blandford-McKee analytical solution and letting the jet evolve until it has reached the Sedov-Taylor stage and has decollimated into a nearly spherical outflow. We have implemented synchrotron radiation as described in @Sari1998. When put in the context of analytical light curve estimates, our simulations show the following: - For an on-axis observer, the jet break from a 2D simulation including lateral spreading of the jet is seen earlier than that of a hard-edged jet. However, the break due to the jet edges becoming visible still dominates the shape of the light curve. - We compared a description of electron cooling that takes into account the local cooling time since the shocked electrons passed the shock front to a description that uses a global cooling time estimate (as we have done in the simulation). The latter approach underestimates the observed cooling break frequency and therefore the post-break flux as well. - The simulation light curves show that simplified homogeneous slab analytical models are qualitatively correct as long as they include a clear contribution from the counterjet for observers at all angles. - Contrary to what has thus far been assumed in simplified analytical models and even though lateral spreading of the jet is initially not dynamically important, the received flux for an off-axis observer is strongly dominated by emission from material that has spread laterally outside the original jet opening angle. This is due to the fact that material outside the original jet cone has slowed down considerably in the radial direction and is therefore not beamed away from the observer as much as material closer to the jet axis. - Moving the observer off-axis splits the jet break in two. 
The steep drop in slope only occurs after the break associated with the farthest edge and this break can thus be postponed until several weeks after the burst, even for observers positioned still within the jet cone. The late break time can be estimated by using the sum of the observer angle and the jet half opening angle instead of just the jet half opening angle. In addition to these direct numerical results, we have presented two applications of our numerical work, one with observers positioned at small and moderate observer angles and one with observers at large observer angles. - X-ray light curves for observers at small observer angles are relevant for satellites such as *Swift*. Recent authors (e.g. @Racusin2009 [@Evans2009]) have noted a lack of jet breaks visible in the data. In order to check whether the observer angle can cause the jet break to remain hidden in the data we have performed a Monte Carlo analysis where we created synthetic *Swift* data out of simulation light curves for different observer angles. Observational biases and a Gaussian observational error were included. Broken and single power laws were then fit to the synthetic data sets. We found that it is not difficult to bury a jet break in the data for an off-axis observer. For our explosion parameters, even an observer at an angle of $\theta_j / 2$ will not find a significantly better fit for a broken power law than for a single power law, leading to a missing jet break. For a random observer angle within the cone of the jet, a synthetic light curve created from our simulation will only show a discernible jet break 29 % of the time. Although our simulation is somewhat atypical in its above-average initial opening angle of the jet, these results nevertheless imply that the observer angle has a strong influence on the interpretation of X-ray data. 
This holds even for observers still within the cone of the jet, observer angles that have usually been ignored and considered practically on-axis. - As a second application we have confirmed the result of @Soderberg2006 that a sample of 68 nearby type Ibc SNe cannot harbor an off-axis GRB, at least for the typical afterglow parameters that we have used for our simulation. This confirms the observational restrictions placed on orphan afterglow rate and beaming factor by these and other authors. The fact that at early times the light curves for off-axis observers are actually brighter than analytically expected, even strengthens the conclusions of @Soderberg2006. Recent deep late-time optical observations by @Dai2008 have detected jet breaks in several bursts and they suggest that the lack of jet breaks in *Swift* bursts is due to the lack of well sampled light curves at late times. Therefore they conclude that the collimated outflow model for GRBs is still valid. However, the non-detection of a jet break or a break at very late times in several GRBs (050904, @Cenko2010; 070125, @Cenko2010; 080319B, @Cenko2010; 080721, @Starling2009; 080916C, @Greiner2009; 090902B, @Pandey2010; 090926A, @Cenko2010b) seems to imply a huge amount of released energy ($\gtrsim 10^{52}\,\mathrm{erg}$) in these bursts. This leads @Cenko2010 [@Cenko2010b] to propose a class of *hyper-energetic* GRBs and challenge the magnetar model for GRBs [@Usov92; @Duncan92; @Thompson04; @Uzdensky07; @Kom07; @Buc09]. Although *hyper-energetic* GRBs might exist, we propose an alternate explanation, in which the observer is simply off-axis. Note that an observer is more likely to be off-axis than on-axis. A typical observer sees the burst from $\theta_{obs} \approx 2\theta_0/3 $ and is therefore closer to the jet edge than to the jet axis. 
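The energy overestimate incurred by analyzing such an off-axis burst with an on-axis model can be sketched in two lines, assuming the standard scaling of the inferred jet half opening angle with jet break time, $\theta_j \propto t_j^{3/8}$, and $E_j \propto \theta_j^2$ (the numbers below are illustrative, not fitted values):

```python
# If an off-axis observer angle delays the jet break by up to a factor ~6,
# an on-axis analysis inflates the inferred opening angle and hence the
# beaming-corrected energy. Assumes theta_j ∝ t_j^(3/8) and E_j ∝ theta_j^2.
delay = 6.0                          # jet break delay factor (up to ~6)
theta_factor = delay ** (3.0 / 8.0)  # inferred theta_j inflated by ~1.96
energy_factor = theta_factor ** 2    # inferred E_j inflated by ~3.8
print(round(energy_factor, 1))
```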
Applying an on-axis model to a jet that is seen off-axis will *overestimate* the total energy release by a factor of up to 4 because the jet break can be delayed by a factor of up to $\sim 6$ (see Eq. \[jetbreak\_time\_equation\]). We emphasize that when the jet opening angle is corrected for off-axis observer angle, the inferred energy release of those *hyper-energetic* events can be revised downwards by factors of several, lessening the tension with magnetar models for the GRB central engine. These are the main conclusions of the work presented in this paper. In addition to this, the current results also raise a number of issues that need to be addressed in future work. Our simulation results can be generalized by performing additional simulations, and given the quantitative differences between simulation and simplified analytical models (such as the homogeneous slab model described in the Appendix), this is expected to result in different constraints on orphan afterglow rate and beaming factor. We have so far ignored self-absorption when calculating light curves. A simplified homogeneous slab approximation indicates that self-absorption is not expected to play a large role for observers at high angles. The applicability of our analytical model is however limited, especially in view of our finding that the early flux received by an off-axis observer is dominated by emission from material that has spread sideways and has slowed down more than material on the jet axis. It is conceivable that emission from this material has different spectral properties. Finally, the significant difference in observed flux between the two approaches to synchrotron cooling that we have discussed emphasizes the importance of a detailed model for the microphysics and radiation mechanisms involved. We thank Peter A. Curran for allowing us the use of his computer code for synthesizing and fitting *Swift* data and for helpful discussion. This work was supported in part by NASA under Grant No. 
09-ATP09-0190 issued through the Astrophysics Theory Program (ATP). The software used in this work was in part developed by the DOE-supported ASCI/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. Summary of analytical model {#summary_model_section} =========================== Here we provide a short summary of our analytical model. In this model, light curves are calculated by numerical integration over an infinitesimally thin homogeneous blast wave front. The received flux is given by $$F = \frac{1}{4 \pi d^2_L} \int d^3 \mathbf{r} \frac{\epsilon'_{\nu'}}{\gamma^2 (1-\beta \mu)^2},$$ when we ignore the redshift $z$. Here $\epsilon'_{\nu'}$ is the comoving frame emissivity and $\mu$ the cosine of the angle between observer direction and local velocity. The dependence of the beaming and emissivity on the observer time $t_{obs}$ is kept implicit (see also eqn. \[te2tobs\_equation\]; the volume integral needs to be taken across different emission times). Assuming the radiation is produced by an infinitesimally thin shell with width $\Delta R$ we get $$F = \frac{1}{4 \pi d^2_L} \int d \theta d \phi R^2 \sin{\theta} \Delta R \frac{\epsilon'_{\nu'}}{\gamma^2 (1-\beta \mu)^2}. \label{flux_model2_equation}$$ For every observer time we integrate over jet angles $\theta$ and $\phi$ (we define $\theta$ such that it is *not* the angle between observer and local fluid velocity, but that between local fluid velocity and jet axis), while taking into account that radiation from different angles was emitted at different emission times. The emissivity can be calculated from the local fluid conditions, which we know in turn in terms of emission time $t_e$. For the blast wave radius we have by definition: $$R = c \int \beta_{sh} d t_e,$$ where the subscript $sh$ indicates *shock* velocity. 
From the shock jump conditions it follows for arbitrarily strong shocks that $$e'_{th} = ( \gamma - 1) n' m_p c^2.$$ The comoving downstream number density $n'$ in both the relativistic and nonrelativistic regime is given by $$n' = 4 n_0 \gamma,$$ with $\gamma \to 1$ in the nonrelativistic limit. We assume this equation to remain valid in the intermediate regime as well. This is not implied by the expression above, where we have kept implicit the dependence on the fluid adiabatic index (which changes from $4/3$ to $5/3$ over the course of the blast wave evolution). We set the width of the shell at a single emission time by demanding that the shell contains all swept-up particles, leading to: $$4 \pi \left[ R_f (t_f) - R_b(t_f) \right] R^2 n = 4/3 \pi R^3 n_0 \to \left[ R_f (t_f) - R_b(t_f) \right] = \frac{R}{12 \gamma^2},$$ where we have used $n = \gamma n' = 4 n_0 \gamma^2$, again assumed valid throughout the entire evolution of the fluid. The subscript $f$ denotes the front of the shock and the subscript $b$ denotes the back of the shock. Setting the shock width through the number of particles is to some extent an arbitrary choice, and we could also have used the total energy, which would have yielded a different width (since the downstream energy density profile is different from the downstream number density profile). The width of the shell $\Delta R$ in equation \[flux\_model2\_equation\] has to take into account the emission time difference between the front and back of the shell and is given by $$\Delta R = | R_f (t_f) - R_b (t_b) | = | R_f (t_f) - R_f( t_b ) \left[ 1 - \frac{1}{12 \gamma(t_b)^2} \right] |.$$ Because the shell is very thin, $R_f (t_b) \approx R_f (t_f) - \beta_{sh} c \Delta t$. We integrate over emission arriving at a single observer time, and for given values of $\mu$ and $t_{obs}$ we have $$t_{obs} = t_f - \mu R_f (t_f) / c = t_b - \mu R_b ( t_b) / c, \label{te2tobs_equation}$$ which yields $\Delta R = \Delta t c / \mu$ when differentiated. 
Combining the above, we eventually find $$\Delta R = \frac{1}{1 - \beta_{sh} \mu} \cdot \frac{R}{12 \gamma^2}.$$ For the shock velocity we have $$(\beta_{sh} \gamma_{sh})_{BM} = \left( \frac{17 \cdot E_\mathrm{iso}}{8 \pi n_0 m_p c^5} \right)^{1/2} t_e^{-3/2} \equiv C_{BM} t_e^{-3/2}; \qquad (\beta_{sh} \gamma_{sh})_{ST} = \frac{2}{5} \cdot 1.15 \cdot \left( \frac{E_j}{n_0 m_p c^5} \right)^{1/5} \cdot t_e^{-3/5} \equiv C_{ST} \cdot t_e^{-3/5},$$ in the BM and ST regime respectively. We artificially combine the two simply by adding them (after squaring): $$\beta_{sh}^2 \gamma_{sh}^2 = C^2_{BM} t_e^{-3} + C^2_{ST} t_e^{-6/5}.$$ Note that the BM quantities depend on $E_{iso}$, while the ST quantities depend on $E_{j}$. The two are related via $E_j = E_{iso} \theta_j^2 / 2 $. Here $E_j$ is the total energy in *both* jets, and $\theta_j$ the *half* opening angle of a jet. The fluid Lorentz factor in the relativistic regime is related to the shock Lorentz factor via $\gamma^2 = \gamma_{sh}^2 / 2$, while the fluid velocity in the non-relativistic regime is related to the shock velocity via $\beta = 3/4 \beta_{sh}$. We therefore construct a relationship between emission time and fluid velocity similar to that between emission time and shock velocity: $$\beta^2 \gamma^2 = \frac{1}{2} C_{BM}^2 t_e^{-3} + \frac{9}{16} C_{ST}^2 t_e^{-6/5}.$$ We assume that the jet does not spread sideways throughout the relativistic phase of its evolution. However, at some point the blast wave *must* become spherical: if we kept the opening angle fixed but took $E_j$, instead of $E_{iso}$, to dictate the ST solution, we would underestimate the final flux by integrating over a domain that is too small. 
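A minimal numerical rendering of this smoothly joined shock four-velocity and its limiting behavior (the values of $C_{BM}$ and $C_{ST}$ below are arbitrary placeholders, not derived from the physical parameters in the text):

```python
import math

def beta_gamma_sh(t_e, C_BM=10.0, C_ST=2.0):
    """(beta_sh * gamma_sh) from the combined BM + ST prescription:
    beta_sh^2 gamma_sh^2 = C_BM^2 t_e^-3 + C_ST^2 t_e^-(6/5).
    C_BM and C_ST are hypothetical placeholder values."""
    return math.sqrt(C_BM**2 * t_e**-3 + C_ST**2 * t_e**(-6.0 / 5.0))

def beta_sh(t_e):
    """Shock velocity from the four-velocity u: beta = u / sqrt(1 + u^2)."""
    u = beta_gamma_sh(t_e)
    return u / math.sqrt(1.0 + u**2)

# Early times: the BM term dominates and beta_sh -> 1 (ultrarelativistic).
# Late times: the ST term dominates and beta_sh ~ C_ST * t_e^(-3/5).
```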
We will assume that the jet starts spreading sideways when it has reached the nonrelativistic phase and we take this moment to be given by $$(\beta \gamma)_{BM} = 1 \to t_{NR} = 2^{1/3} C_{BM}^{2/3}.$$ At this point, the jet starts spreading sideways with the speed of sound $c_s$, leading to $$R \frac{d \theta}{d t_{obs}} = c_s.$$ In the nonrelativistic regime $d t_{obs}$ and $d t_e$ are identical. The ST solution for the speed of sound for adiabatic index $5/3$ is given by $c_s = r/\sqrt{20} t$, leading to $$\theta = \theta_j + \frac{1}{\sqrt{20}} \ln \frac{t_{obs}}{t_{NR}},$$ until spherical symmetry is reached. By contrast, Rhoads ’99 takes $\Omega \approx \pi ( \theta_j + c_s t' / c t_e )^2$ (where $t'$ is the time in the comoving frame and $\Omega$ is a solid angle) as the starting point. From the local fluid conditions the local emissivity can be calculated. In the case of slow cooling, we define $$\begin{aligned} \epsilon'_{\nu'} & = & \epsilon'_m \left( \frac{\nu'}{\nu'_m} \right)^{1/3}, \qquad \nu' < \nu'_m \nonumber \\ \epsilon'_{\nu'} & = & \epsilon'_m \left( \frac{\nu'}{\nu'_m} \right)^{(1-p)/2}, \qquad \nu'_m < \nu' < \nu'_c \nonumber \\ \epsilon'_{\nu'} & = & \epsilon'_m \left( \frac{\nu'_c}{\nu'_m} \right)^{(1-p)/2} \left( \frac{\nu'}{\nu'_c} \right)^{-p/2}, \qquad \nu'_c < \nu'.\end{aligned}$$ The definition for fast cooling is analogous (see also @Sari1998). The peak emissivity is given by $$\epsilon'_m \backsim \frac{p-1}{2} \frac{\sqrt{3}q_e^3}{m_e c^2}n' B'.$$ The synchrotron break frequency $\nu'_m$ depends on its corresponding critical electron Lorentz factor $\gamma'_m$, leading to $$\nu'_m = \frac{3}{4\pi} \frac{q_e}{m_e c} (\gamma'_m)^2 B', \qquad \gamma'_m = \left( \frac{p-2}{p-1} \right) \frac{\epsilon_e e'_{th}}{n'm_e c^2} .$$ For the cooling break frequency an identical relation between frequency and electron Lorentz factor holds. 
The critical Lorentz factor is now given by $$\gamma'_c = \frac{ 6 \pi m_e \gamma c}{ \sigma_T (B')^2 t_e},$$ which follows from the electron kinetic equation when synchrotron losses dominate over adiabatic expansion and the cooling time is approximated by the lab frame time since the explosion. Synchrotron self-absorption is included in the model using the assumption that emission and absorption occur in a homogeneous shell. The solution to the linear equation of radiative transfer then dictates that we need to replace eq. \[flux\_model2\_equation\] by $$F = \frac{1}{4 \pi d^2_L} \int d \theta d \phi R^2 \sin{\theta} \frac{\epsilon_\nu}{\alpha_\nu} (1 - \mathrm{e}^{-\tau}),$$ where the optical depth $\tau \approx - \alpha_\nu \Delta R$ [^1]. Emissivity and absorption translate between frames using $\epsilon'_{\nu'} = \gamma^2 (1 - \beta \mu)^2 \epsilon_\nu$ and $\alpha'_{\nu'} = \alpha_\nu / \gamma (1 - \beta \mu)$ respectively. In our simplified model we calculate the absorption coefficient $\alpha'_{\nu'}$ under the assumption that electron cooling does not influence it. This assumption is justified when the self-absorption break frequency $\nu_a$ lies well below the cooling break frequency $\nu_c$, which is the case for all applications of the model in this paper. Also approximating the synchrotron spectral shape by just two sharply connected power laws we then find for the self-absorption coefficient: $$\alpha'_{\nu'} = (p-1) (p+2) n' \frac{ \sqrt{3} q_e^3 B'}{\gamma'_m 16 \pi m_e^2 c^2} (\nu')^{-2} \left( \frac{ \nu'}{\nu'_m} \right)^{\kappa},$$ where $\kappa = 1/3$ if $\nu' < \nu'_m$ and $\kappa = -p/2$ otherwise. Numerically, the integration procedure is as follows. First we tabulate $R(t_e)$ for a given set of physical parameters, so that we do not need to estimate it analytically but can use its exact dependence on the fluid Lorentz factor instead. 
We integrate over $\theta$ before we integrate over $\phi$, and for each $\theta$, $\phi$ the angle between observer and fluid element is given by $$\mu = \sin{\theta} \cos{\phi} \sin{\theta_{obs}} + \cos{\theta} \cos{\theta_{obs}}.$$ Having tabulated $R(t_e)$, we tabulate $\mu( t_e, R(t_e))$ as well for a given value of $t_{obs}$. Since $\mu(t_e)$ is a monotonically increasing function of $t_e$, we can unambiguously determine $t_e( \mu)$ from this table. When determining the value of the integrand at a given value of $\theta$, $\phi$, we can now calculate the local fluid conditions and emissivity via $t_e(\mu(\theta, \phi))$. [^1]: This is an approximation that does not take into account that not all rays cross the homogeneous slab along the radial direction. However, significantly increasing the optical depth does not alter our finding that self-absorption does not play a role for off-axis VLA light curves generated from this model.
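The tabulate-and-invert procedure just described can be sketched as follows; the constants are illustrative and we work in units with $c = 1$. Here we invert $t_{obs} = t_e - \mu R(t_e)/c$ directly at fixed $\mu$, which is equivalent to inverting the tabulated monotonic $\mu(t_e)$:

```python
import numpy as np

C_BM, C_ST = 10.0, 2.0           # placeholder constants (see text)
t_e = np.logspace(-2, 4, 4000)   # emission-time grid

u = np.sqrt(C_BM**2 * t_e**-3 + C_ST**2 * t_e**(-6.0 / 5.0))
beta_sh = u / np.sqrt(1.0 + u**2)

# Tabulate R(t_e) = c * integral of beta_sh dt_e (trapezoid rule, c = 1).
R = np.concatenate(
    ([0.0], np.cumsum(0.5 * (beta_sh[1:] + beta_sh[:-1]) * np.diff(t_e)))
)

def t_e_of_mu(mu, t_obs):
    """Solve t_obs = t_e - mu * R(t_e) for t_e at fixed mu.
    The left-hand side is monotonically increasing in t_e for mu <= 1
    (its derivative is 1 - mu * beta_sh > 0), so the table can be
    inverted unambiguously."""
    lhs = t_e - mu * R
    return np.interp(t_obs, lhs, t_e)
```

With $t_e(\mu(\theta,\phi))$ in hand, the local fluid conditions and emissivity at each $(\theta, \phi)$ follow as described above.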
--- abstract: 'We generalize the generating formula for plane partitions known as MacMahon’s formula as well as its analog for strict plane partitions. We give a 2-parameter generalization of these formulas related to Macdonald’s symmetric functions. The formula is especially simple in the Hall-Littlewood case. We also give a bijective proof of the analog of MacMahon’s formula for strict plane partitions.' author: - 'Mirjana Vuletić$^{*}$' title: 'A GENERALIZATION OF MACMAHON’S FORMULA' --- [^1] Introduction ============ [s1]{} A plane partition is a Young diagram filled with positive integers that form nonincreasing rows and columns. Each plane partition can be represented as a finite two-sided sequence of ordinary partitions $(\dots,\lambda^{-1},\lambda^0,\lambda^1,\dots)$, where $\lambda^0$ corresponds to the ordinary partition on the main diagonal and $\lambda^k$ corresponds to the diagonal shifted by $k$. A plane partition all of whose diagonal partitions are strict ordinary partitions (i.e. partitions with all distinct parts) is called a [*strict*]{} plane partition. Figure \[PlanePartition\] shows two standard ways of representing a plane partition. Diagonal partitions are marked on the figure on the left. ![A plane partition[]{data-label="PlanePartition"}](PlanePartitionFinN4.eps){height="5.8cm"} For a plane partition $\pi$ one defines the weight $|\pi|$ to be the sum of all entries. A [*connected component*]{} of a plane partition is the set of all connected boxes of its Young diagram that are filled with the same number. We denote the number of connected components of $\pi$ by $k(\pi)$. For the example from Figure \[PlanePartition\] we have $k(\pi)=10$ and its connected components are shown in Figure \[PlanePartition\] (left: bold lines represent the boundaries of these components; right: white terraces are the connected components). 
Denote the set of all plane partitions by $\mathcal{P}$, and by $\mathcal{P}(r,c)$ those whose $(i,j)$th entry is zero whenever $i>r$ or $j>c$. Denote the set of all strict plane partitions by $\mathcal{SP}$. A generating function for plane partitions is given by the famous MacMahon’s formula (see e.g. 7.20.3 of [@S]): $${\label}{MacMahonac} \sum_{\pi \in \mathcal{P}}s^{|\pi|}= \prod_{n=1}^{\infty} \left(\frac{1}{1-s^{n}}\right)^n.$$ Recently, a generating formula for the set of strict plane partitions was found in [@FW] and [@V]: $${\label}{ShiftedMacMahonac} \sum_{\pi \in \mathcal{SP}}2^{k(\pi)}s^{|\pi|}= \prod_{n=1}^{\infty} \left(\frac{1+s^n}{1-s^{n}}\right)^n.$$ We refer to it as the shifted MacMahon’s formula. In this paper we generalize both formulas (\[MacMahonac\]) and (\[ShiftedMacMahonac\]). Namely, we define a polynomial $A_\pi(t)$ that gives a generating formula for plane partitions of the form $$\sum_{\pi \in \mathcal{P}(r,c)}A_\pi(t)s^{|\pi|}= \prod_{i=1}^{r}\prod_{j=1}^{c} \frac{1-ts^{i+j-1}}{1-s^{i+j-1}}$$ with the property that $A_\pi(0)=1$ and $$A_\pi(-1)= \begin{cases} 2^{k(\pi)},&\pi \text{ is a strict plane partition}, \\ 0,& \text{otherwise}. \end{cases}$$ We further generalize this and find a rational function $F_{\pi}(q,t)$ that satisfies $$\sum_{\pi \in \mathcal{P}(r,c)}F_\pi(q,t)s^{|\pi|}=\prod_{i=1}^{r}\prod_{j=1}^c \frac{(ts^{i+j-1};q)_\infty}{(s^{i+j-1};q)_{\infty}},$$ where $$(s;q)_\infty=\prod_{n=0}^{\infty}(1-sq^n)$$ and $F_\pi(0,t)=A_\pi(t)$. We describe $A_\pi(t)$ and $F_\pi(q,t)$ below. In order to describe $A_\pi(t)$ we need more notation. If a box $(i,j)$ belongs to a connected component $C$ then we define its [*level*]{} $h(i,j)$ as the smallest positive integer $h$ such that $(i+h,j+h)$ does not belong to $C$. A [*border component*]{} is a connected subset of a connected component where all boxes have the same level. We also say that this border component is of this level. 
For the example above, border components and their levels are shown in Figure \[BorderComponents\]. ![Border Components[]{data-label="BorderComponents"}](BC3.eps "fig:"){height="6cm"} For each connected component $C$ we define a sequence $(n_1,n_2,\dots)$ where $n_i$ is the number of $i$-level border components of $C$. We set $$P_C(t)=\prod_{i\geq1}(1-t^{i})^{n_i}.$$ Let $C_1,C_2,\dots,C_{k(\pi)}$ be the connected components of $\pi$. We define $$A_\pi(t)=\prod_{i=1}^{k(\pi)}P_{C_i}(t).$$ For the example above $A_\pi(t)=(1-t)^{10}(1-t^2)^3(1-t^3)^2$. $F_{\pi}(q,t)$ is defined as follows. For nonnegative integers $n$ and $m$ let $$f(n,m)= \begin{cases} \displaystyle \prod_{i=0}^{n-1}\frac{1-q^{i}t^{m+1}}{1-q^{i+1}t^{m}},&\;\;\;n\geq1,\\ \;1,&\;\;\;n=0. \end{cases}$$ Here $q$ and $t$ are parameters. Let $\pi\in \mathcal{P}$ and let $(i,j)$ be a box in its support (where the entries are nonzero). Let $\lambda$, $\mu$ and $\nu$ be ordinary partitions defined by $${\label}{lmn} \begin{tabular}{l} $\lambda=(\pi(i,j),\pi(i+1,j+1),\dots)$,\\ $\mu=(\pi(i+1,j),\pi(i+2,j+1),\dots)$,\\ $\nu=(\pi(i,j+1),\pi(i+1,j+2),\dots).$\\ \end{tabular}$$ To the box $(i,j)$ of $\pi$ we associate $$F_\pi(i,j)(q,t)=\prod_{m=0}^{\infty}\frac{f(\lambda_1-\mu_{m+1},m)f(\lambda_1-\nu_{m+1},m)} {f(\lambda_1-\lambda_{m+1},m)f(\lambda_1-\lambda_{m+2},m)}.$$ Only finitely many terms in this product are different from 1. 
To a plane partition $\pi$ we associate a function $F_\pi(q,t)$ defined by $$F_\pi(q,t)=\prod_{(i,j)\in \pi}F_\pi(i,j)(q,t).$$ For the example above $$F_\pi(0,0)(q,t)=\frac{1-q}{1-t}\cdot\frac{1-q^3t^2}{1-q^2t^3}\cdot\frac{1-q^5t^4}{1-q^4t^5}\cdot\frac{1-q^3t^5}{1-q^4t^4}.$$ The two main results of our paper are the following. Theorem A (Generalized MacMahon’s formula; Macdonald’s case): $$\sum_{\pi \in \mathcal{P}(r,c)}F_\pi(q,t)s^{|\pi|}=\prod_{i=1}^{r}\prod_{j=1}^c \frac{(ts^{i+j-1};q)_\infty}{(s^{i+j-1};q)_{\infty}}.$$ In particular, $$\sum_{\pi \in \mathcal{P}}F_\pi(q,t)s^{|\pi|}=\prod_{n=1}^{\infty} \left[\frac{(ts^{n};q)_\infty}{(s^{n};q)_{\infty}}\right]^n.$$ Theorem B (Generalized MacMahon’s formula; Hall-Littlewood’s case): $$\sum_{\pi \in \mathcal{P}(r,c)}A_\pi(t)s^{|\pi|}= \prod_{i=1}^{r}\prod_{j=1}^{c} \frac{1-ts^{i+j-1}}{1-s^{i+j-1}}.$$ In particular, $$\sum_{\pi \in \mathcal{P}}A_\pi(t)s^{|\pi|}= \prod_{n=1}^{\infty} \left(\frac{1-ts^n}{1-s^{n}}\right)^n.$$ Clearly, the second formulas (with summation over $\mathcal{P}$) are limiting cases of the first ones as $r,c \to \infty$. The proof of Theorem A was inspired by [@OR] and [@V]. It uses a special class of symmetric functions called skew Macdonald functions. For each $\pi \in \mathcal{P}$ we introduce a weight function depending on several specializations of the algebra of symmetric functions. For a suitable choice of these specializations the weight functions become $F_\pi(q,t)$. We first prove Theorem A; Theorem B is then obtained as a corollary after we show that $F_\pi(0,t)=A_\pi(t)$. Proofs of formula (\[ShiftedMacMahonac\]) appeared in [@FW] and [@V]. Both these proofs rely on skew Schur functions and a Fock space corresponding to strict plane partitions. In this paper we also give a bijective proof of (\[ShiftedMacMahonac\]) that does not involve symmetric functions. The paper is organized as follows. Section 2 consists of two subsections. In Subsection \[s2.1\] we prove Theorem A. 
In Subsection \[s2.2\] we prove Theorem B by showing that $F_\pi(0,t)=A_\pi(t)$. In Section \[MM\] we give a bijective proof of (\[ShiftedMacMahonac\]). This work is a part of my doctoral dissertation at California Institute of Technology and I thank my advisor Alexei Borodin for all his help. Generalized MacMahon’s formula ============================== [s2]{} Macdonald’s case ---------------- [s2.1]{} We recall the definition of a plane partition. For basics, such as ordinary partitions and Young diagrams, see Chapter 1 of [@Mac]. A plane partition $\pi$ can be viewed in different ways. One way is to fix a Young diagram, the support of the plane partition, and then to associate a positive integer to each box in the diagram such that the integers form nonincreasing rows and columns. Thus, a plane partition is a diagram filled with integers that are nonincreasing along rows and columns. It can also be viewed as a finite two-sided sequence of ordinary partitions, since each diagonal in the support diagram represents a partition. We write $ \pi=(\ldots, \lambda^{-1},\lambda^{0},\lambda^{1}, \ldots ),$ where the partition $\lambda^{0}$ corresponds to the main diagonal and $\lambda^{k}$ corresponds to the diagonal that is shifted by $k$, see Figure \[PlanePartition\]. Every such two-sided sequence of partitions represents a plane partition if and only if $${\label}{condpp} \begin{array}{c} \cdots \subset \lambda^{-1} \subset \lambda^{0} \supset \lambda^{1} \supset \cdots \text{ and} \medskip \\ \text{$[\lambda^{n-1}/\lambda^n]$ is a horizontal strip for every $n$,} \end{array}$$ where $$[\lambda/\mu]= \begin{cases} \lambda/\mu& \text{if } \lambda \supset \mu,\\ \mu/\lambda& \text{if } \mu \supset \lambda. \end{cases}$$ The weight of $\pi$, denoted by $|\pi|$, is the sum of all entries of $\pi$. We denote the set of all plane partitions by $\mathcal{P}$ and its subset containing all plane partitions with at most $r$ nonzero rows and $c$ nonzero columns by $\mathcal{P}(r,c)$. 
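Condition (\[condpp\]) is mechanical to check. A small sketch (the function name is ours) testing whether the skew of two partitions is a horizontal strip, i.e. whether the larger partition interlaces the smaller:

```python
def is_horizontal_strip(lam, mu):
    """True iff mu is contained in lam and lam/mu is a horizontal strip,
    which is equivalent to the interlacing condition
    lam_1 >= mu_1 >= lam_2 >= mu_2 >= ... (pad with zeros)."""
    n = max(len(lam), len(mu)) + 1
    L = list(lam) + [0] * (n - len(lam))
    M = list(mu) + [0] * (n - len(mu))
    return all(L[i] >= M[i] >= L[i + 1] for i in range(n - 1))

# The 2x2 plane partition with rows (3, 2) and (2, 1) has diagonal
# partitions lambda^{-1} = (2), lambda^0 = (3, 1), lambda^1 = (2);
# both consecutive skews are horizontal strips, so it is valid.
assert is_horizontal_strip((3, 1), (2,))
assert not is_horizontal_strip((3, 3), (1,))  # (3,3)/(1) has a 2-box column
```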
Similarly, we denote the set of all ordinary partitions (Young diagrams) with $\mathcal{Y}$ and those with at most $r$ parts with $\mathcal{Y}(r)=\mathcal{P}(r,1).$ We use the definitions of $f(n,m)$ and $F_\pi(q,t)$ from the Introduction; the rational function $F_\pi(q,t)$ associated to a plane partition $\pi$ is related to Macdonald symmetric functions (for reference see Chapter 6 of [@Mac]). In this section we prove Theorem A. The proof consists of a few steps. We first define weight functions on sequences of ordinary partitions (Section \[s2.1.1\]). These weight functions are defined using Macdonald symmetric functions. Second, for suitably chosen specializations of these symmetric functions we obtain that the weight functions vanish for every sequence of partitions unless the sequence corresponds to a plane partition (Section \[s2.1.2\]). Finally, we show that for $\pi \in \mathcal{P}$ the weight function of $\pi$ is equal to $F_\pi(q,t)$ (Section \[s2.1.3\]). Before carrying out these steps, we first comment on a corollary of Theorem A. Fix $c=1$. Then, Theorem A gives a generating formula for ordinary partitions since $\mathcal{P}(r,1)=\mathcal{Y}(r)$. For $\lambda=(\lambda_1,\lambda_2,\dots)\in \mathcal{Y}(r)$ we define $d_i=\lambda_i-\lambda_{i+1}$, $i=1,\dots,r$. Then $$F_\lambda(q,t)=\prod_{i=1}^{r}f(d_i,0)=\prod_{i=1}^{r}\prod_{j=1}^{d_i}\frac{1-tq^{j-1}}{1-q^j}.$$ Note that $F_{\lambda}(q,t)$ depends only on the set of distinct parts of $\lambda$. [ObicneParticije]{} $$\sum_{\lambda \in \mathcal{Y}(r)}F_\lambda(q,t)s^{|\lambda|}=\prod_{i=1}^{r} \frac{(ts^{i};q)_\infty}{(s^{i};q)_{\infty}}.$$ In particular, $$\sum_{\lambda \in \mathcal{Y}}F_\lambda(q,t)s^{|\lambda|}=\prod_{i=1}^{\infty} \frac{(ts^{i};q)_\infty}{(s^{i};q)_{\infty}}.$$ This corollary is easy to show directly. First, we expand $(ts;q)_\infty/(s;q)_\infty$ into a power series in $s$. Let $a_d(q,t)$ be the coefficient of $s^d$. 
Observe that $$\frac{(ts;q)_\infty}{(s;q)_\infty}:=\sum_{d=0}^{\infty}a_d(q,t)s^d=\frac{1-ts}{1-s}\sum_{d=0}^{\infty}a_d(q,t)s^dq^d.$$ This implies that $$a_d(q,t)=f(d,0).$$ Every $\lambda=(\lambda_1,\dots,\lambda_r)\in \mathcal{Y}(r)$ is uniquely determined by $d_i\in {{\mathbb{N}}}\cup\{0\}$, $i=1,\dots,r$, where $d_i=\lambda_i-\lambda_{i+1}$. Then $\lambda_i=\sum_{j\geq0} d_{i+j}$ and $|\lambda|=\sum_{i=1}^{r}id_i$. Therefore, $$\begin{aligned} \prod_{i=1}^{r} \frac{(ts^{i};q)_\infty}{(s^{i};q)_{\infty}}&=&\prod_{i=1}^{r}\sum_{d_i=0}^{\infty}a_{d_i}(q,t)s^{id_i}\\ &=&\sum_{d_1,\dots,d_r}\left[\prod_{i=1}^{r}a_{d_i}(q,t)\right]\cdot \left[s^{\sum_{i=1}^rid_i}\right]=\sum_{\lambda \in \mathcal{Y}(r)}F_{\lambda}(q,t)s^{|\lambda|}.\end{aligned}$$ ### **[The weight functions]{}** [s2.1.1]{} The weight function is defined as a product of Macdonald symmetric functions $P$ and $Q$. We follow the notation of Chapter 6 of [@Mac]. Let $\Lambda$ be the algebra of symmetric functions. A specialization of $\Lambda$ is an algebra homomorphism $\Lambda \to {{\mathbb{C}}}$. If $\rho$ and $\sigma$ are specializations of $\Lambda$ then we write $P_{\lambda/\mu}(\rho;q,t)$, $Q_{\lambda/\mu}(\rho;q,t)$ and $\Pi(\rho,\sigma;q,t)$ for the images of $P_{\lambda/\mu}(x;q,t)$, $Q_{\lambda/\mu}(x;q,t)$ and $\Pi(x,y;q,t)$ under $\rho$, respectively $\rho \otimes \sigma$. Every map $\rho:(x_1,x_2,\dots)\to (a_1,a_2,\dots)$ where $a_i\in {{\mathbb{C}}}$ and only finitely many $a_i$’s are nonzero defines a specialization. Let $\rho=(\rho_0^{+},\rho_1^{-},\rho_1^{+}, \ldots, \rho_T^{-})$ be a finite sequence of specializations. For two sequences of partitions $ \lambda=(\lambda^{1},\lambda^{2}, \ldots ,\lambda^{T})$ and $ \mu=(\mu^{1},\mu^{2}, \ldots ,\mu^{T-1})$ we set the weight function $W(\lambda, \mu;q,t)$ to be $$W(\lambda, \mu;q,t)=\prod_{n=1}^{T}Q_{\lambda^{n}/\mu^{n-1}} (\rho_{n-1}^{+};q,t)P_{\lambda^{n}/\mu^{n}}({\rho_n^{-}};q,t),$$ where $\mu^0=\mu^T=\emptyset$. 
Note that $W(\lambda , \mu;q,t)=0$ unless $$\emptyset \subset \lambda^{1} \supset \mu^{1} \subset \lambda^{2} \supset \mu^{2} \subset \ldots \supset \mu^{T-1} \subset \lambda^{T} \supset \emptyset.$$ Recall that ((6.2.5) and (6.4.13) of [@Mac]) $$\Pi(x,y;q,t)=\sum_{\lambda \in \mathcal{Y}}Q_\lambda(x;q,t)P_\lambda(y;q,t)=\prod_{i,j}\frac{(tx_iy_j;q)_\infty}{(x_iy_j;q)_\infty}.$$ [Z]{} The sum of the weights $W(\lambda, \mu;q,t)$ over all sequences of partitions $ \lambda=(\lambda^{1},\lambda^{2}, \ldots ,\lambda^{T})$ and $ \mu=(\mu^{1},\mu^{2}, \ldots ,\mu^{T-1})$ is equal to $${\label}{FormulaForZ} Z(\rho;q,t)=\prod_{0 \leq i < j \leq T}\Pi(\rho_i^{+},\rho_j^{-};q,t).$$ We use $$\sum_{\lambda \in \mathcal{Y}}Q_{\lambda / \mu}(x)P_{\lambda / \nu}(y)=\Pi(x,y)\sum_{\tau \in \mathcal{Y}}Q_{\nu / \tau}(x)P_{\mu / \tau}(y).$$ The proof of this is analogous to the proof of Proposition 5.1 that appeared in our earlier paper [@V]. Also, see Example 26 of I.5 of [@Mac]. We prove (\[FormulaForZ\]) by induction on $T$. Using the formula above we substitute sums over $\lambda^{i}$’s with sums over $\tau^{i-1}$’s as in the proof of Proposition 2.1 of [@BR]. This gives $$\prod_{i=0}^{T-1}\Pi({\rho_i^{+}},{\rho_{i+1}^{-}}) \sum_{\mu, \tau}Q_{\mu^{1}} (\rho_0^{+})P_{\mu^{1}/\tau^{1}} ({\rho_2^{-}}) Q_{\mu^{2}/\tau^{1}} (\rho_1^{+}) \ldots P_{\mu^{T-1}} ({\rho_{T}^{-}}).$$ This is the sum of $W(\mu,\tau)$ with $ \mu=(\mu^{1}, \ldots ,\mu^{T-1})$ and $ \tau=(\tau^{1}, \ldots ,\tau^{T-2})$. Inductively, we obtain (\[FormulaForZ\]). ### **[Specializations]{}** [s2.1.2]{} For $\pi=(\dots,\lambda^{-1},\lambda^0,\lambda^{1},\dots) \in \mathcal{P}$ we define a function $\Phi_{\pi}(q,t)$ by $${\label}{alternation} \Phi_{\pi}(q,t)=\frac{1}{b_{\lambda^0}(q,t)}\prod_{n=-\infty}^{\infty}\varphi_{[\lambda^{n-1}/\lambda^{n}]}(q,t),$$ where $b$ and $\varphi$ are given with (6.6.19) and (6.6.24)(i) on p.341 of [@Mac]. 
Only finitely many terms in the product are different from 1 because only finitely many $\lambda^{n}$ are nonempty partitions. We show that for suitably chosen specializations the weight function vanishes for every sequence of ordinary partitions unless this sequence represents a plane partition, in which case it becomes (\[alternation\]). This, together with Proposition \[Z\], implies [pomocna1]{} $$\sum_{\pi \in \mathcal{P}(r,c)}\Phi_\pi(q,t)s^{|\pi|}=\prod_{i=1}^{r} \prod_{j=1}^{c} \frac{(ts^{i+j-1};q)_\infty}{(s^{i+j-1};q)_\infty}.$$ If $\rho$ is a specialization of $\Lambda$ where $x_1=s,\,x_2=x_3=\ldots=0$ then by (6.7.14) and (6.7.14$^\prime$) of [@Mac] $$\begin{array}{lcc} Q_{\lambda/\mu}(\rho)= \begin{cases} \varphi_{\lambda/\mu} s^{|\lambda|-|\mu|} & \text{$\;\;\;\;\;\;\;\;\;\lambda \supset \mu$, $\lambda/\mu$ is a horizontal strip},\\ 0 & \text{$\;\;\;\;\;\;\;\;\;$otherwise}, \end{cases}\\ P_{\lambda/\mu}(\rho)= \begin{cases} \varphi_{\lambda/\mu}b_\mu/b_\lambda s^{|\lambda|-|\mu|} & \text{ $\lambda \supset \mu$, $\lambda/\mu$ is a horizontal strip},\\ 0 & \text{ otherwise}. \end{cases} \end{array}$$ Let $$\begin{array}{llll} \rho_n^+:x_1=s^{-n-1/2},\,x_2=x_3=\ldots=0 &&&-r \leq n \leq -1,\\ \rho_n^-:x_1=x_2=\ldots=0 &&& -r+1 \leq n \leq -1,\\ \rho_n^-:x_1=s^{n+1/2},\,x_2=x_3=\ldots=0 &&&\;\;\,0 \leq n \leq c-1,\\ \rho_n^+:x_1=x_2=\ldots=0 &&&\;\;\,0 \leq n \leq c-2. 
\end{array}$$ Then for any two sequences $\lambda=(\lambda^{-r+1}\dots,\lambda^{-1},\lambda^0,\lambda^1,\dots\lambda^{c-1})$ and $\mu=(\mu^{-r+1}\dots,\mu^{-1},\mu^0,\mu^{1},\dots\mu^{c-2})$ the weight function is given with $$\begin{aligned} W(\lambda, \mu)&=&\prod_{n=-r+1}^{c-1} Q_{\lambda^{n}/\mu^{n-1}} (\rho_{n-1}^{+})P_{\lambda^{n}/\mu^{n}} ({\rho_n^{-}}),\end{aligned}$$ where $\mu^{-r}=\mu^{c-1}=\emptyset.$ Then $W(\lambda,\mu)=0$ unless $$\mu^{n}=\begin{cases} \lambda^n&n<0,\\ \lambda^{n+1}&n\geq0, \end{cases}$$ $$\cdots\lambda^{-1} \subset \lambda^0 \supset \lambda^1\supset\cdots,$$ $$\begin{array}{c} [\lambda^{n-1} /\lambda ^{n}]\text{ is a horizontal strip for every } n , \end{array}$$ i.e. $\lambda \in \mathcal{P}$ and in that case $$\begin{aligned} W(\lambda, \mu)&=&\prod_{n=-r+1}^{0}\varphi_{\lambda^{n}/\lambda^{n-1}}(q,t)s^{(-2n+1)(|\lambda^{n}|-|\lambda^{n-1}|)/2}\\ &&\cdot\prod_{n=1}^{c}\frac{b_{\lambda^n}(q,t)}{b_{\lambda^{n-1}}(q,t)}\varphi_{\lambda^{n-1}/\lambda^{n}}(q,t)s^{(2n-1)(|\lambda^{n-1}|-|\lambda^{n}|)/2}\\ &=&\frac{1}{b_{\lambda^0}(q,t)}\prod_{n=-r+1}^{c}\varphi_{[\lambda^{n-1}/\lambda^{n}]}(q,t)s^{|\lambda|}=\Phi_{\lambda}(q,t)s^{|\lambda|}.\end{aligned}$$ If $\rho^+$ is $x_1=s,\,x_2=x_3=\ldots=0$ and $\rho^-$ is $x_1=r,\,x_2=x_3=\ldots=0$ then $$\Pi(\rho^+,\rho^-)=\prod_{i,\,j} \left. \frac{(tx_iy_j;q)_\infty}{(x_iy_j;q)_\infty} \right| _{x=\rho^+,\, y=\rho^-}=\frac{(tsr;q)_\infty}{(sr;q)_\infty}.$$ Then, by Proposition \[Z\], for the given specializations of $\rho_i^+$’s and $\rho_j^-$’s we have $$Z=\prod_{i=-1}^{-r} \prod_{j=0}^{c-1} \Pi(\rho_i^+,\rho_j^-)=\prod_{i=1}^{r} \prod_{j=1}^{c} \frac{(ts^{i+j-1};q)_\infty}{(s^{i+j-1};q)_\infty}.$$ ### **[Final step]{}** [s2.1.3]{} We show that $F_\pi(q,t)=\Phi_\pi(q,t)$. Then Proposition \[pomocna1\] implies Theorem A. Let $\pi \in \mathcal{P}$. Then $$F_{\pi}(q,t)=\Phi_{\pi}(q,t).$$ We show this by induction on the number of boxes in the support of $\pi$. 
Denote the last nonzero part in the last row of the support of $\pi$ by $x$. Let $\lambda$ be a diagonal partition containing it and let $x$ be its $k$th part. Because of the symmetry with respect to the transposition we can assume that $\lambda$ is one of the diagonal partitions on the left. Let $\pi'$ be the plane partition obtained from $\pi$ by removing $x$. We want to show that $F_{\pi}$ and $F_{\pi'}$ satisfy the same recurrence relation as $\Phi_{\pi}$ and $\Phi_{\pi'}$. The verification uses the explicit formulas for $b_\lambda$ and $\varphi_{\lambda/\mu}$ given by (6.6.19) and (6.6.24)(i) on p.341 of [@Mac]. We divide the problem into several cases depending on the position of the box containing $x$. Let I, II and III be the cases shown in Figure \[Cases\]. \[htp!\] ![Cases I, II and III[]{data-label="Cases"}](CasesN1.eps "fig:"){height="4cm"} Let $\lambda^L$ and $\lambda^R$ be the diagonal partitions of $\pi$ containing $x_L$ and $x_R$, respectively. Let $\lambda'$ be a partition obtained from $\lambda$ by removing $x$. If III then $k=1$ and one checks easily that $$\frac{F_{\pi'}}{F_{\pi}}=\frac{\Phi_{\pi'}}{\Phi_{\pi}} =\frac{f(\lambda_1^R,0)}{f(\lambda^R_1-\lambda_1,0)f(\lambda_1,0)}.$$ Assume I or II. Then $$\begin{aligned} \Phi_{\pi'}=\Phi_{\pi}\cdot \frac{\varphi_{[\lambda'/\lambda^L]}}{\varphi_{[\lambda/\lambda^L]}} \cdot\frac{\varphi_{[\lambda'/\lambda^R]}}{\varphi_{[\lambda/\lambda^R]}} \cdot\frac{b_{\lambda^0(\pi)}}{b_{\lambda^0(\pi')}}=\Phi_{\pi}\cdot \Phi_L \cdot \Phi_R \cdot \Phi_0.\end{aligned}$$ Thus, we need to show that $${\label}{vazi} \Phi_L \cdot \Phi_R \cdot \Phi_0=F:=\frac{F_{\pi'}}{F_{\pi}}.$$ If I then $\lambda^L_{k-1}=x_L$ and $\lambda^R_{k}=x_R$. 
From the definition of $\varphi$ we have that $$\label{phiL} \Phi_L= {\prod_{i=0}^{k-1}\frac{f(\lambda_{k-i}-\lambda_k,i)}{f(\lambda_{k-i},i)}} \cdot {\prod_{i=0}^{k-2}\frac{f(\lambda^L_{k-1-i},i)}{f(\lambda^L_{k-1-i}-\lambda_k,i)}}.$$ Similarly, $$\Phi_R={\prod_{i=0}^{k-2}\frac{f(\lambda_{k-1-i}-\lambda_k,i)}{f(\lambda_{k-1-i},i)}} \cdot {\prod_{i=0}^{k-1}\frac{f(\lambda^R_{k-i},i)}{f(\lambda^R_{k-i}-\lambda_k,i)}}.$$ If II then $\lambda^L_{k-1}=x_L$ and $\lambda^R_{k-1}=x_R$ and both $\Phi_L$ and $\Phi_R$ are given with (\[phiL\]), substituting $L$ with $R$ for $\Phi_R$, while $$\Phi_0=\prod_{i=0}^{k-1}\frac{f(\lambda_{k-i},i)}{f(\lambda_{k-i}-\lambda_k,i)}\cdot {\prod_{i=0}^{k-2}\frac{f(\lambda_{k-1-i}-\lambda_k,i)}{f(\lambda_{k-1-i},i)}}.$$ From the definition of $F$ one can verify that (\[vazi\]) holds. Hall-Littlewood’s case ---------------------- [s2.2]{} We analyze the generalized MacMahon’s formula in Hall-Littlewood’s case, i.e. when $q=0$, in more detail. Namely, we describe $F_\pi(0,t)$. We use the definition of $A_\pi(t)$ from the Introduction. In Proposition \[HLcasePol\] we show that $F_{\pi}(0,t)=A_{\pi}(t)$. This, together with Theorem A, implies Theorem B. Note that the result implies the following simple identities. If $\lambda\in\mathcal{Y}=\bigcup_{r \geq 1}\mathcal{P}(r,1)$ then $k(\lambda)$ becomes the number of distinct parts of $\lambda$. $$\sum_{\lambda \in \mathcal{Y}(r)}(1-t)^{k(\lambda)}s^{|\lambda|}=\prod_{i=1}^{r} \frac{1-ts^i}{1-s^{i}}.$$ In particular, $$\sum_{\lambda \in \mathcal{Y}}(1-t)^{k(\lambda)}s^{|\lambda|}=\prod_{i=1}^{\infty} \frac{1-ts^{i}}{1-s^{i}}.$$ These formulas are easily proved by the argument used in the proof of Corollary \[ObicneParticije\]. We now prove [HLcasePol]{}Let $\pi\in \mathcal{P}$. Then $$F_{\pi}(0,t)=A_{\pi}(t).$$ Let $B$ be a $h$-level border component of $\pi$. Let $F(i,j)=F_\pi(i,j)(0,t)$. 
It is enough to show that $${\label}{ProdPoBK} \prod_{(i,j)\in B}F(i,j)=1-t^h.$$ Let $$c(i,j)=\chi_B(i+1,j)+\chi_B(i,j+1),$$ where $\chi_B$ is the characteristic function of $B$ taking value 1 on the set $B$ and 0 elsewhere. If there are $n$ boxes in $B$ then $${\label}{C} \sum_{(i,j)\in B}c(i,j)=n-1.$$ Let $(i,j)\in B$. We claim that $${\label}{phic} F(i,j)=(1-t^h)^{1-c(i,j)}.$$ Then (\[C\]) and (\[phic\]) imply (\[ProdPoBK\]). To show (\[phic\]) we observe that $$f(l,m)(0,t)= \begin{cases} 1&l=0\\ 1-t^{m+1}&l\geq1. \end{cases}$$ With the same notation as in (\[lmn\]) we have that $\mu_{m}$, $\nu_{m}$, $\lambda_{m}$, $\lambda_{m+1}$ are all equal to $\lambda_1$ for every $m<h$, while for every $m>h$ they are all different from $\lambda_1$. Then $$\begin{aligned} F(i,j)&=&\prod_{m=0}^{\infty}\frac{f(\lambda_1-\mu_{m+1},m)(0,t)f(\lambda_1-\nu_{m+1},m)(0,t)} {f(\lambda_1-\lambda_{m+1},m)(0,t)f(\lambda_1-\lambda_{m+2},m)(0,t)}\\ &&=\frac{f(\lambda_1-\mu_{h},h-1)(0,t)f(\lambda_1-\nu_{h},h-1)(0,t)} {f(\lambda_1-\lambda_{h},h-1)(0,t)f(\lambda_1-\lambda_{h+1},h-1)(0,t)}\\ &&= \frac{(1-t^h)^{1-\chi_{B}(i+1,j)}(1-t^h)^{1-\chi_{B}(i,j+1)}} {1\cdot (1-t^h)}=(1-t^h)^{1-c(i,j)}.\\\end{aligned}$$ A bijective proof of the shifted MacMahon’s formula =================================================== [MM]{} In this section we are going to give another proof of the shifted MacMahon’s formula (\[ShiftedMacMahonac\]). More generally, we prove [genshifMM]{} $$\sum_{\pi \in \mathcal{SP}(r,c)}2^{k(\pi)}x^{\operatorname{tr}(\pi)}s^{|\pi|}=\prod _{i=1}^{r}\prod_{j=1}^{c}\frac{1+xs^{i+j-1}}{1-xs^{i+j-1}}.$$ Here $\mathcal{SP}(r,c)$ is the set of strict plane partitions with at most $r$ rows and $c$ columns. Trace of $\pi$, denoted with $\operatorname{tr}(\pi)$, is the sum of diagonal entries of $\pi$. The proof is mostly independent of the rest of the paper. It is similar in spirit to the proof of MacMahon’s formula given in Section 7.20 of [@S]. It uses two bijections. 
One correspondence is between strict plane partitions and pairs of shifted tableaux. The other one is between pairs of marked shifted tableaux and marked matrices and it is obtained by the shifted Knuth’s algorithm. We recall the definitions of a marked tableau and a marked shifted tableau (see e.g. Chapter 13 of [@HH]). Let $P$ be the totally ordered set $$P=\{1<1'<2<2'< \cdots\}.$$ We distinguish elements in $P$ as marked and unmarked, the former being those with a prime. We use $|p|$ for the unmarked number corresponding to $p \in P$. A marked (shifted) tableau is a (shifted) Young diagram filled with row and column nonincreasing elements from $P$ such that any given unmarked element occurs at most once in each column whereas any marked element occurs at most once in each row. Examples of a marked tableau and a marked shifted tableau are given in Figure \[Tableaux\]. \[htp!\] ![ A marked tableau and a marked shifted tableau []{data-label="Tableaux"}](TableauxNN2.eps "fig:"){height="3cm"} An unmarked (shifted) tableau is a tableau obtained by deleting primes from a marked (shifted) tableau. We can also define it as a (shifted) diagram filled with row and column nonincreasing positive integers such that no $2 \times 2$ square is filled with the same number. Unmarked tableaux are strict plane partitions. We define connected components of a marked or unmarked (shifted) tableau in a similar way as for plane partitions. Namely, a connected component is a maximal set of connected boxes filled with $p$ or $p'$. By the definition of a tableau all connected components are border strips. Connected components for the examples above are shown in Figure \[Tableaux\] (bold lines represent boundaries of these components). We use $k(S)$ to denote the number of components of a marked or unmarked (shifted) tableau $S$. For every marked (shifted) tableau there is a corresponding unmarked (shifted) tableau obtained by deleting all the primes. 
The number of marked (shifted) tableaux corresponding to the same unmarked (shifted) tableau $S$ is equal to $2^{k(S)}$ because there are exactly two possible ways to mark each border component. For a tableau $S$, we use $\text{sh}(S)$ to denote the shape of $S$, which is the ordinary partition whose parts are the lengths of the rows of $S$. We define $\ell(S)=\ell(\text{sh}(S))$ and $\max (S)=|p_{\max}|$, where $p_{\max}$ is the maximal element in $S$. For both examples $\text{sh}(S)=(5,3,2)$, $\ell(S)=3$ and $\max (S)=5$. A marked matrix is a matrix with entries from $P \cup \{0\}$. Let $\mathcal{ST}^M(r,c)$, respectively $\mathcal{ST}^U(r,c)$, be the set of ordered pairs $(S,T)$ of marked, respectively unmarked, shifted tableaux of the same shape where $\max (S)=c$, $\max (T)=r$ and $T$ has no marked elements on its main diagonal. Let $\mathcal{M}(r,c)$ be the set of $r \times c$ matrices over $P \cup \{0\}$. The shifted Knuth’s algorithm (see Chapter 13 of [@HH]) establishes the following correspondence. [Bij1]{} There is a bijective correspondence between matrices $A=[a_{ij}]$ over $P \cup \{0\}$ and ordered pairs $(S,T)$ of marked shifted tableaux of the same shape such that $T$ has no marked elements on its main diagonal. The correspondence has the property that $\sum _i a_{ij}$ is the number of entries $s$ of $S$ for which $|s|=j$ and $\sum_j a_{ij}$ is the number of entries $t$ of $T$ for which $|t|=i$.\ In particular, this correspondence maps $\mathcal{M}(r,c)$ onto $\mathcal{ST}^M(r,c)$ and $$\begin{aligned} |\operatorname{sh}(S)|=\sum_{i,j}|a_{ij}|,\;\;\;\;\; |S|=\sum_{i,j}j|a_{ij}|,\;\;\;\;\; |T|=\sum_{i,j}i|a_{ij}|.\end{aligned}$$ The shifted Knuth’s algorithm described in Chapter 13 of [@HH] establishes a correspondence between marked matrices and pairs of marked shifted tableaux with row and column [*nondecreasing*]{} elements. This algorithm can be adjusted to work for marked shifted tableaux with row and column [*nonincreasing*]{} elements. 
Namely, one needs to change the encoding of a matrix over $P \cup \{0\}$ and two algorithms BUMP and EQBUMP, while INSERT, UNMARK, CELL and [*unmix*]{} remain unchanged. One encodes a matrix $A \in \mathcal{M}(r,c)$ into a two-line notation $E$ with pairs $\begin{array}{c}i\\j\end{array}$ repeated $|a_{ij}|$ times, where $i$ goes from $r$ to $1$ and $j$ from $c$ to $1$. If $a_{ij}$ was marked, then we mark the leftmost $j$ in the pairs $\begin{array}{c}i\\j\end{array}$. The example from p. 246 of [@HH]: $$A=\left( \begin{array}{ccc} 1'&0&2\\ 2&1&2'\\ 1'&1'&0 \end{array} \right)$$ would be encoded as $$E=\begin{array}{cccccccccc} 3&3&2&2&2&2&2&1&1&1\\ 2'&1'&3'&3&2&1&1&3&3&1' \end{array}.$$ Algorithms BUMP and EQBUMP insert $x \in P\cup \{0\}$ into a vector $v$ over $P\cup \{0\}$. By BUMP (resp. EQBUMP) one inserts $x$ into $v$ by removing (bumping) the leftmost entry of $v$ that is less than (resp. less than or equal to) $x$ and replacing it by $x$; if there is no such entry then $x$ is placed at the end of $v$. For the example from above this adjusted shifted Knuth’s algorithm would give $$S= \begin{array}{ccccc} 3'&3&3&3&1'\\ &2'&2&1&1\\ & &1'& &\\ \end{array} \;\;\; \text{and}\;\;\; T= \begin{array}{ccccc} 3&3&2'&2&2\\ &2&2&1&1\\ & &1& &\\ \end{array}$$ The other correspondence between pairs of shifted tableaux of the same shape and strict plane partitions is described in the following theorem. It is parallel to the correspondence from Section 7.20 of [@S]. [Bij2]{} There is a bijective correspondence $\Pi$ between strict plane partitions $\pi$ and ordered pairs $(S,T)$ of shifted tableaux of the same shape. 
This correspondence maps $\mathcal{SP}(r,c)$ onto $\mathcal{ST}^U(r,c)$ and if $(S,T)=\Pi(\pi)$ then $$|\pi|=|S|+|T|-|\operatorname{sh}(S)|,$$ $$\operatorname{tr}(\pi)=|\operatorname{sh}(S)|=|\operatorname{sh}(T)|,$$ $$k(\pi)=k(S)+k(T)-l(S).$$ Every $\lambda \in \mathcal{Y}$ is uniquely represented by Frobenius coordinates $(p_1,\dots,p_d\,|\,q_1,\dots,q_d)$ where $d$ is the number of diagonal boxes in the Young diagram of $\lambda$ and $p$’s and $q$’s correspond to the arm length and the leg length, i.e. $p_i=\lambda_i-i+1$ and $q_i=\lambda'_i-i+1$, where $\lambda'\in \mathcal{Y}$ is the transpose of $\lambda$. Let $\pi\in \mathcal{SP}$. Let $(\mu_1,\mu_2,\dots)$ be a sequence of ordinary partitions whose diagrams are obtained by horizontal slicing of the 3-dimensional diagram of $\pi$ (see Figure \[3DDiagram\]). The Young diagram of $\mu_1$ corresponds to the first slice and is the same as the support of $\pi$, $\mu_2$ corresponds to the second slice etc. More precisely, the Young diagram of $\mu_i$ consists of all boxes of the support of $\pi$ filled with numbers greater or equal to $i$. For example, if $$\pi= \begin{array}{ccccc} 5&3&2&1&1\\ 4&3&2&1&\\ 3&3&2&&\\ 2&2&1&&\\ \end{array},$$ then $(\mu_1,\mu_2,\mu_3,\mu_4,\mu_5)$ are $$\mu_1= \begin{array}{ccccc} *&*&*&*&*\\ *&*&*&*&\\ *&*&*&&\\ *&*&*&&\\ \end{array},\;\; \mu_2= \begin{array}{ccc} *&*&*\\ *&*&*\\ *&*&*\\ *&*&\\ \end{array},\;\; \mu_3= \begin{array}{cc} *&*\\ *&*\\ *&*\\ \end{array},\;\; \mu_4= \begin{array}{c} *\\ *\\ \end{array},\;\; \mu_5= \begin{array}{c} *\\ \end{array}.$$ Let $S$, respectively $T$, be an unmarked shifted tableau whose $i$th diagonal is equal to $p$, respectively $q$, Frobenius coordinate of $\mu_i$. 
For the example above $$S= \begin{array}{ccccc} 5&3&2&1&1\\ &3&2&1&\\ & &1&1&\\ \end{array} \;\;\; \text{and}\;\;\; T= \begin{array}{ccccc} 4&4&3&2&1\\ &3&3&2&\\ & &2&1&\\ \end{array}$$ It is not hard to check that $\Pi$ is a bijection between pairs of unmarked shifted tableaux of the same shape and strict plane partitions. We only verify that $${\label}{okok} k(\pi)=k(S)+k(T)-l(S).$$ Other properties are straightforward implications of the definition of $\Pi$. ![3-dimensional diagram of a plane partition[]{data-label="3DDiagram"}](3DDiagramFin2.eps){height="7cm"} Consider the 3-dimensional diagram of $\pi$ (see Figure \[3DDiagram\]) and fix one of its vertical columns on the right (with respect to the main diagonal). A rhombus component consists of all black rhombi that are either directly connected or that have one white space between them. For the columns on the left we use gray rhombi instead of black ones. The number at the bottom of each column in Figure \[3DDiagram\] is the number of rhombus components for that column. Let $b$, respectively $g$, be the number of rhombus components for all right, respectively left, columns. For the given example $b=4$ and $g=6$. One can obtain $b$ by a different counting. Consider edges on the right side. Mark all the edges with 0 except the following ones. Mark with 1 a common edge of a white rhombus and a black rhombus where the black rhombus is below the white one. Mark with $-1$ a common edge of two white rhombi that is perpendicular to the plane of the black rhombi. See Figure \[3DDiagram\]. One obtains $b$ by summing these numbers over all edges on the right side of the 3-dimensional diagram. One recovers $g$ in a similar way by marking edges on the left. Now, we restrict to a connected component (one of the white terraces, see Figure \[3DDiagram\]) and sum all the numbers associated with its edges. If a connected component does not intersect the main diagonal then the sum is equal to 1. Otherwise this sum is equal to 2. 
This implies that $$k(\pi)=b+g-l(\lambda^0).$$ Since $l(S)=l(\lambda^0)$, it is enough to show that $k(S)=b$ and $k(T)=g$, and (\[okok\]) follows. Each black rhombus in the right $i$th column of the 3-dimensional diagram corresponds to an element of a border strip of $S$ filled with $i$ and each rhombus component corresponds to a border strip component. If two adjacent boxes from the same border strip are in the same row then the corresponding rhombi from the 3-dimensional diagram are directly connected and if they are in the same column then there is exactly one white space between them. This implies $k(S)=b$. Similarly, we get $k(T)=g$. Now, using the described correspondences sending $\mathcal{SP}(r,c)$ to $\mathcal{ST}^U(r,c)$ and $\mathcal{ST}^M(r,c)$ to $\mathcal{M}(r,c)$ we can prove Theorem \[genshifMM\]. $$\begin{aligned} \sum_{\pi \in \mathcal{SP}(r,c)}2^{k(\pi)}x^{\operatorname{tr}(\pi)}s^{|\pi|}&\stackrel{\text{Thm }\ref{Bij2}}{=}&\sum _{{(S,T) \in \mathcal{ST}^{{U}}(r,c)}}2^{k(S)+k(T)-l(S)}x^{|\text{sh}S|}s^{|S|+|T|-|\text{sh}S|}\\ &=&\sum _{{(S,T) \in \mathcal{ST}^{{M}}(r,c)}}x^{|\text{sh}S|}s^{|S|+|T|-|\text{sh}S|}\\ &\stackrel{\text{Thm }\ref{Bij1}}{=}&\sum_{A \in \mathcal{M}(r,c)}x^{\sum_{i,j}|a_{ij}|}s^{\sum_{i,j} (i+j-1)|a_{ij}|}\\ &=&\prod_{i=1}^r\prod_{j=1}^c \sum_{a_{ij} \in P\cup\{0\}}x^{|a_{ij}|}s^{(i+j-1)|a_{ij}|}\\ &=&\prod _{i=1}^{r}\prod_{j=1}^{c}\frac{1+xs^{i+j-1}}{1-xs^{i+j-1}}.\end{aligned}$$ Letting $r \to \infty$ and $c \to \infty$ we get $$\sum_{\substack {\pi \in \mathcal{SP}}} 2^{k(\pi)}x^{\text{tr}(\pi)}s^{|\pi|}=\prod_{n=1}^{\infty}\left(\frac{1+xs^{n}}{1-xs^{n}}\right)^n.$$ At $x=1$ we recover the shifted MacMahon’s formula. [100]{} \[BR\][BR]{} A. Borodin and E. M. Rains, *Eynard-Mehta theorem, Schur process, and their Pfaffian analogs*; J. Stat. Phys. 121 (2005), no. 3-4, 291–317, arXiv:math-ph/0409059 \[FW\][FW]{} O. Foda and M. Wheeler, *BKP Plane Partitions*; J. High Energy Phys. 
JHEP01(2007)075; arXiv:math-ph/0612018 \[HH\][HH]{} P. N. Hoffman and J. F. Humphreys, *Projective representations of the symmetric groups: Q-functions and shifted tableaux*, Clarendon Press, Oxford, 1992 \[Mac\][Mac]{} I. G. Macdonald, *Symmetric functions and Hall polynomials*; 2nd edition, Oxford University Press, New York, 1995 \[OR\][OR]{} A. Okounkov and N. Reshetikhin, *Correlation function of Schur process with application to local geometry of a random 3-dimensional Young diagram*; J. Amer. Math. Soc. 16 (2003), no. 3, 581–603, arXiv:math/0107056 \[S\][S]{} R. Stanley, *Enumerative combinatorics*, Cambridge University Press, Cambridge, 1999 \[V\][V]{} M. Vuletić, *Shifted Schur process and asymptotics of large random strict plane partitions*; to appear in Int. Math. Res. Not., arXiv:math-ph/0702068 [^1]: $^*$ Mathematics 253-37, California Institute of Technology, Pasadena, CA 91125. E-mail: vuletic@caltech.edu
--- author: - 'D. Porquet, M. Arnaud, A. Decourchelle' date: 'Received March, 6 2001; accepted April 2001' title: 'Impacts of a power–law non–thermal electron tail on the ionization and recombination rates' --- Introduction ============ The ionization and recombination rates for astrophysical plasmas have usually been calculated for a Maxwellian electron distribution (e.g., Arnaud & Rothenflug [@Arnaud85], Arnaud & Raymond [@Arnaud92], Mazzotta [@Mazzotta1998]). However, in many low-density astrophysical plasmas, electron distributions may differ from the Maxwellian distribution. The degree of ionization of a plasma depends on the shape of the electron distribution, as well as on the electronic temperature. This has been studied for the solar corona (e.g. Roussel-Dupr[é]{} [@Roussel-Dupre80], Owocki & Scudder [@Owocki83], Dzifc[á]{}kov[á]{} [@Dzifcakova92], Dzifc[á]{}kov[á]{} [@Dzifcakova98]) and for evaporating interstellar clouds (Ballet et al. [@Ballet1989]), where a non–thermal electron distribution occurs in places where there are high gradients of density or temperature. A non–thermal electron population is expected in various astrophysical plasmas. Strong shocks can convert a large fraction of their energy into the acceleration of relativistic particles by the diffusive shock acceleration process (e.g., Drury [@Drury1983], Blandford & Eichler [@Blandford1987], Jones & Ellison [@Jones1991], Kang & Jones [@Kang1991]). Direct evidence for the presence of accelerated electrons up to relativistic energies ($\simeq~1$GeV) comes from the observations of radio synchrotron emission in supernova remnants and in clusters of galaxies. More recently, non–thermal X-ray emission has been reported in several shell-like supernova remnants and interpreted as synchrotron radiation from cosmic-ray electrons up to $\simeq~100$ TeV (Koyama et al. [@Koyama1995], Allen et al. [@Allen1997], Koyama et al. [@Koyama1997], Slane et al. [@Slane1999], Slane et al. [@Slane2001]). 
A number of recent works have focused on the non–thermal emission from supernova remnants (e.g., Laming 2001, Ellison et al. [@Ellison2000], Berezhko & V[ö]{}lk [@Berezhko2000], Bykov et al. [@Bykov2000b], Baring et al. [@Baring1999], Gaisser et al. [@Gaisser1998], Reynolds [@Reynolds1996; @Reynolds1998], Sturner et al. [@Sturner1997]) and clusters of galaxies (e.g., Sarazin [@Sarazin1999], Bykov et al. [@Bykov2000a], Sarazin & Kempner [@Sarazin2000]). The impact of efficient acceleration on the hydrodynamics and thermal X-ray emission has been investigated (Decourchelle, Ellison, & Ballet [@Decourchelle2000], Hughes, Rakowski, & Decourchelle [@Hughes2000]). When the acceleration is efficient, the non–thermal population is expected to modify directly the ionization rates in the plasma as well as the line excitation (e.g. Dzifc[á]{}kov[á]{} [@Dzifcakova2000], Seely et al. [@Seely87]). A hybrid electron distribution (Maxwellian plus power–law tail) is expected from diffusive shock acceleration (e.g., Berezhko & Ellison [@Berezhko1999], Bykov & Uvarov [@Bykov1999]). The low energy end of the power–law electron distribution (which connects to the Maxwellian thermal population) is likely to enhance the ionization rates and to significantly modify the degree of ionization of the plasma, which is used as a diagnostic of the plasma electron temperature. In this paper, we shall examine the influence of a power–law non–thermal electron distribution (connecting to the falling Maxwellian thermal population) on the ionization and recombination rates for C, N, O, Ne, Mg, Si, S, Ar, Ca, Fe and Ni. For different characteristic values of the power–law electron distribution, the mean electric charge of these elements has been determined as a function of the temperature at ionization equilibrium and for different values of the ionization timescale. 
We intend, in this paper, to give a comprehensive study of the dependence of these quantities on the parameters of the non–thermal population, illustrated by simple examples. We do not provide tables, which would be too numerous as the ionization equilibrium depends in our model on four parameters (element, temperature of the thermal component, index and low energy break of the non–thermal population). In the appendix or directly in the text, we give the formulae needed for the calculation of the rates, which can easily be inserted in computer codes. In Section \[sec:Electrondistributionshapes\], we define the Hybrid electron distribution used in this work. The calculation of the new collisional ionization rates and (radiative and dielectronic) recombination rates is discussed in Section \[sec:rates\]. In Section \[sec:Ionizationequilibria\], we present the derived mean electric charge of the elements in ionization equilibrium as well as in ionizing plasmas. The electron distribution shapes {#sec:Electrondistributionshapes} ================================ The Maxwellian distribution, generally considered for the electron distribution in astrophysical plasmas, $N_{\rm e}(E)$, is defined as: $$\begin{aligned} \label{eq:fMaxw} dN_{\rm e}(E) &= &n_{\rm e}\ f^{\rm M}_{\rm E}(E)\ dE \\ f^{\rm M}_{\rm E}(E)&=&\frac{2}{\sqrt{\pi}}\ ({\rm k}T)^{-3/2}\ E^{1/2}\ e^{-\frac{E}{{\rm k}T}}\end{aligned}$$ where $E$ is the energy of the electron, $T$ is the electronic temperature and $n_{\rm e}$ the total electronic density. In this expression the Maxwellian function $f^{\rm M}_{\rm E}(E)$ is normalised so that $\int^{\infty}_{0}~f^{\rm M}_{\rm E}(E)~dE=1$. 
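As a quick numerical sanity check (a sketch of ours, not part of the paper; the `integrate` helper is a plain midpoint rule), the normalisation of $f^{\rm M}_{\rm E}$ can be verified for an arbitrary value of ${\rm k}T$, along with the standard fact that the most probable energy of a Maxwellian is ${\rm k}T/2$:

```python
import math

def f_M_E(E, kT):
    """Maxwellian energy distribution f^M_E(E) of Eq. (eq:fMaxw)."""
    return (2 / math.sqrt(math.pi)) * kT ** -1.5 * math.sqrt(E) * math.exp(-E / kT)

def integrate(f, a, b, n=100000):
    """Midpoint rule; adequate for this smooth, rapidly decaying integrand."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

kT = 2.0  # arbitrary temperature, in the same energy units as E

# the distribution integrates to 1 (the tail beyond 60 kT is negligible)
assert abs(integrate(lambda E: f_M_E(E, kT), 0.0, 60 * kT) - 1.0) < 1e-5

# the most probable energy of a Maxwellian is kT/2
assert f_M_E(0.5 * kT, kT) > max(f_M_E(0.4 * kT, kT), f_M_E(0.6 * kT, kT))
```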
It is convenient to express this distribution in terms of the reduced energy $x=E/\kT$: $$\begin{aligned} \label{eq:fM} dN_{\rm e}(x) &=& n_{\rm e}\ f^{\rm M}(x)\ dx \\ f^{\rm M}(x)&=&\frac{2}{\sqrt{\pi}}\ \ x^{\frac{1}{2}}\ e^{-x} \label{eq:distM}\end{aligned}$$ The corresponding scaled (non–dimensional) distribution $f^{\rm M}(x)$ is a universal function of fixed shape.\ Non–Maxwellian electron distributions expected in the vicinity of shock waves, as in young supernova remnants, seem to be reasonably described by a Maxwellian distribution at low energy, up to a break energy $\Eb$, and by a power–law distribution at higher energy (e.g., Berezhko & Ellison [@Berezhko1999], Bykov & Uvarov [@Bykov1999]). We hereafter call this “Maxwellian/Power–law” type of electron distribution the [**Hybrid electron distribution**]{} ($f^{\rm H}$). It is defined, in reduced energy coordinates, as: $$dN_{\rm e}(x) = n_{\rm e}f^{\rm H}(x) dx$$ $$\begin{aligned} \label{eq:distH} f^{\rm H}(x)&=&C(x_{\rm b},\a)\ \frac{2}{\sqrt{\pi}}\ x^{1/2}\ e^{-x} ~~~~~~~~~~~~~~~ x \leq x_{\rm b} \\ f^{\rm H}(x)&=&C(x_{\rm b},\a)\ \frac{2}{\sqrt{\pi}}\ x_{\rm b}^{1/2}\ e^{-x_{\rm b}}\ \left(\frac{x}{x_{\rm b}}\right)^{-\a}~~x \geq \xb, \nonumber\end{aligned}$$ where $\xb =\Eb/\kT$ is the reduced break energy, and $\a$ is the energy index of the power–law ($\a>1$). Note that for $\a \leq 2$ the total energy diverges (in practice a cutoff occurs at very high energy). Since the very high energies ($\geq 20\,\kT$) have a negligible effect on the calculations of the ionization and recombination rates, for simplicity we use here a power–law defined from $\xb$ to infinity.\ The normalisation factor of the power–law part is defined so that the electron distribution is continuous at $\xb$.
The factor $C(\xb,\a)$ is a normalisation constant, so that $\int^{\infty}_{0}~f^{\rm H}(x)~dx=1$: $$C(x_{\rm b},\a) = \frac{\sqrt{\pi}}{2}\ \frac{1}{\gamma(\frac{3}{2},x_{\rm b}) + (\a-1)^{-1} \ x_{\rm b}^{3/2}\ e^{-x_{\rm b}}}$$ where $\gamma(a,x)$ is the lower incomplete gamma function, defined as $\gamma(a,x)=\int^{x}_{0}t^{a-1}\ e^{-t}\ dt$. For $x \leq \xb$, the Hybrid distribution only differs from a Maxwellian distribution by this multiplicative factor. The scaled distribution $f^{\rm H}(x)$ only depends on the two non–dimensional parameters, $\xb$ and $\a$. The dependence on $\kT$ of the corresponding physical electron distribution is $f^{\rm H}_{\rm E}(E)=(\kT)^{-1} f^{\rm H}(E/\kT)$.\ The Hybrid distributions $f^{\rm H}(x)$, obtained for several values of the energy break $\xb$, are compared to the Maxwellian distribution in Fig. \[fig:fdist\]. The slope has been fixed to $\a=2$, a typical value found in the models referenced above. The variation of the reduced median energy of the distribution with $\xb$, for $\a=1.5,2.,3.$, is plotted in Fig. \[fig:fCEmed\], as well as the variation of the normalisation factor $C(\xb,\a)$. As apparent in the figures, there is a critical value of $\xb$, for each $\a$ value, corresponding to a qualitative change in the behavior of the Hybrid distribution. This can be understood by looking at the distribution at the break energy $\xb$. While the distribution itself is continuous, its slope changes there. The logarithmic slope is $1/2-\xb$ on the Maxwellian side and $-\a$ on the power–law side. Only for the critical value $\xb=\a+1/2$ is there no break in the shape of the Hybrid distribution (full line in Fig. \[fig:fdist\]). For $\xb>\a+1/2$, the power–law always decreases less rapidly with energy than a Maxwellian distribution and does correspond to an [*enhanced*]{} high energy tail. The contribution of this tail increases with decreasing $\xb$ (and $\a$).
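These properties are easy to verify numerically. The sketch below (illustrative Python, not the paper's codes; it uses the standard closed form $\gamma(3/2,x)=\frac{\sqrt{\pi}}{2}\,{\rm erf}(\sqrt{x})-\sqrt{x}\,e^{-x}$) evaluates $C(\xb,\a)$, checks the normalisation through the analytic cumulative distribution, and locates the reduced median energy by bisection:

```python
import math

def lower_gamma_32(x):
    # gamma(3/2, x) = sqrt(pi)/2 * erf(sqrt(x)) - sqrt(x) * exp(-x)
    return 0.5 * math.sqrt(math.pi) * math.erf(math.sqrt(x)) - math.sqrt(x) * math.exp(-x)

def norm_C(xb, a):
    # Normalisation constant C(x_b, alpha) of the Hybrid distribution (alpha > 1)
    return 0.5 * math.sqrt(math.pi) / (lower_gamma_32(xb) + xb**1.5 * math.exp(-xb) / (a - 1.0))

def cdf_hybrid(x, xb, a):
    # Analytic cumulative distribution of f^H: each branch integrates in closed form
    c = norm_C(xb, a) * 2.0 / math.sqrt(math.pi)
    if x <= xb:
        return c * lower_gamma_32(x)
    tail = xb**1.5 * math.exp(-xb) / (a - 1.0) * (1.0 - (x / xb) ** (1.0 - a))
    return c * (lower_gamma_32(xb) + tail)

def median(xb, a, lo=0.0, hi=1e6):
    # Reduced median energy: bisection for F(x) = 1/2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf_hybrid(mid, xb, a) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\a=2$ the median rises above its Maxwellian value ($\approx 1.2$) as $\xb$ decreases towards $\a+1/2$, reproducing the trend of Fig. \[fig:fCEmed\].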
Thus the median energy increases and the normalisation parameter, which scales the Maxwellian part, decreases (see Fig. \[fig:fCEmed\]). On the other hand, when $\xb<\a+1/2$, there is an intermediate region above the energy break where the power–law decreases more steeply than a Maxwellian (see dotted line in Fig. \[fig:fdist\]). This results in a deficit of electrons at these energies as compared to a Maxwellian distribution, more and more pronounced as $\xb$ decreases. The median energy thus starts to decrease with decreasing $\xb$ and can even be lower than the median energy of a Maxwellian distribution (Fig. \[fig:fCEmed\]). In this paper we only consider the regime where $\xb\geq\a+1/2$. It corresponds to clear cases where the high energy part of the distribution is indeed increased, as expected when electrons are accelerated in shocks. Furthermore, the distribution used here is only an approximation, valid when the hard tail can be considered as a perturbation of the original Maxwellian distribution. The simulations of Bykov and Uvarov ([@Bykov1999], see their Figure 2) clearly show that the low energy part of the distribution is less and less well approximated by a Maxwellian distribution, as the ‘enhanced’ high energy tail extends to lower and lower energy (lower ‘break’). Although we cannot rigorously define a corresponding quantitative lower limit on $\xb$, the cases presented by Bykov and Uvarov ([@Bykov1999]) suggest a limit similar to the one considered here, i.e. a few times the Maxwellian peak energy.\ The distribution considered here differs from the so-called “kappa-distribution” or the “power distribution”, relevant for other physical conditions (see e.g. Dzifc[á]{}kov[á]{} 2000 and references therein).
These two distributions have been used to model deviations from a Maxwellian distribution caused by strong plasma inhomogeneities, as in the solar corona, and their impact on the ionization balance has been extensively studied (e.g. Roussel-Dupr[é]{} [@Roussel-Dupre80], Owocki & Scudder [@Owocki83], Dzifc[á]{}kov[á]{} [@Dzifcakova92], Dzifc[á]{}kov[á]{} [@Dzifcakova98]). Although the effect of the Hybrid distribution is expected to be qualitatively similar, it has never been quantitatively studied. In the next section we discuss how the ionization and recombination rates are modified, as compared to a pure Maxwellian distribution, depending on the parameters $\xb$ and $\a$. Calculations of the collisional ionization and recombination rates {#sec:rates} ================================================================== Let us consider a collisional process of cross section $\sigma(E)$, varying with the energy $E$ of the incident electron. The corresponding rate coefficient (cm$^{3}$s$^{-1}$), either for a Maxwellian distribution or a Hybrid distribution, $f(x)$, is given by: $$\begin{aligned} \mathrm{Rate} &=&\left(\frac{2 {\rm k}T}{\me}\right)^{\frac{1}{2}}\int_{x_{\rm th}}^{\infty}\ x^{\frac{1}{2}}\ \sigma(x {\rm k}T)\ f(x)\ dx \label{eq:C(kT)}\end{aligned}$$ with $x_{\rm th}=E_{\rm th}/\kT$. $E_{\rm th}$ corresponds to the threshold energy of the considered process (for $E<E_{\rm th}$, $\sigma(E)=0$). For the recombination processes, no threshold energy is involved and $x_{\rm th}=0$. The rates for the Hybrid distribution depend on $\kT$, $\xb$ and $\a$ and are denoted $\CIH$, $\rrH$ and $\drH$ for the ionization, radiative and dielectronic recombination processes respectively. The corresponding rates for the Maxwellian distribution, which only depend on $\kT$, are $\CIM$, $\rrM$ and $\drM$.
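The effect of the tail on a threshold process can be illustrated by evaluating the integral of Equation \[eq:C(kT)\] for a hypothetical constant cross section above a reduced threshold $x_{\rm th}$ (the real cross sections rise to a maximum and fall as $\ln(E)/E$; the step function in this Python sketch is only a crude stand-in):

```python
import math

def lower_gamma_32(x):
    return 0.5 * math.sqrt(math.pi) * math.erf(math.sqrt(x)) - math.sqrt(x) * math.exp(-x)

def norm_C(xb, a):
    return 0.5 * math.sqrt(math.pi) / (lower_gamma_32(xb) + xb**1.5 * math.exp(-xb) / (a - 1.0))

def f_maxwell(x):
    return 2.0 / math.sqrt(math.pi) * math.sqrt(x) * math.exp(-x)

def f_hybrid(x, xb, a):
    if x <= xb:
        return norm_C(xb, a) * f_maxwell(x)
    return norm_C(xb, a) * 2.0 / math.sqrt(math.pi) * math.sqrt(xb) * math.exp(-xb) * (x / xb) ** (-a)

def enhancement(x_th, xb, a, xmax=400.0, n=200000):
    # Ratio of the rate integrals (Hybrid over Maxwellian) for sigma = const
    # above the reduced threshold x_th; the prefactor (2kT/me)^(1/2) cancels.
    h = (xmax - x_th) / n
    num = den = 0.0
    for i in range(n):
        x = x_th + (i + 0.5) * h
        num += math.sqrt(x) * f_hybrid(x, xb, a) * h
        den += math.sqrt(x) * f_maxwell(x) * h
    if a > 1.5:
        # analytic power-law tail beyond xmax (the integral converges for alpha > 3/2)
        k = norm_C(xb, a) * 2.0 / math.sqrt(math.pi) * xb ** (a + 0.5) * math.exp(-xb)
        num += k * xmax ** (1.5 - a) / (a - 1.5)
    return num / den
```

For $x_{\rm th}=8$ (roughly $\EI/\kT$ for O$^{+6}$ at $T^*$, see below), $\xb=5$ and $\a=2$, the ratio approaches two orders of magnitude, and it drops sharply as $\xb$ increases.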
The ionization data are taken from Arnaud & Rothenflug ([@Arnaud85]) and Arnaud & Raymond ([@Arnaud92]), as adopted by Mazzotta et al. ([@Mazzotta1998]) for the most abundant elements considered here. The recombination data are taken from the updated calculations of Mazzotta et al. ([@Mazzotta1998]). In the next sections we outline the general behavior of the rates with the electron distribution parameters, using mostly oxygen ions (but also iron) as illustration. The electronic collisional ionization rates {#sec:ionis} ------------------------------------------- The ionization cross sections present a threshold at the first ionization potential of the ionizing ion, $\EI$. The cross sections always present a maximum, at $E_{\rm m}$, and decrease as $\ln(E)/E$ at very high energies (e.g., Tawara, Kato, & Ohnishi 1985). The ionization rate is very sensitive to the proportion of electrons above the threshold and the modification of the ionization rate for the Hybrid distribution depends on how the high energy tail affects this proportion. Parametric formulae for the ionization cross sections are available from the literature and it is easy to derive the corresponding rates for the Hybrid distribution. This is detailed in Appendix \[app:Ionization\]. To understand the influence of the presence of a high energy power–law tail in the electron distribution, we computed the ratio $\bi=\CIH/\CIM$ of the ionization rate in a Hybrid distribution over that in a Maxwellian with the same temperature. This ratio is plotted in Fig. \[fig:fIonisO7a\] to Fig. \[fig:fIonisratio\] for different ions and values of the parameters $\xb$ and $\a$.\ Let us first consider O$^{+6}$. Its ionization potential is $E_{\rm I}=739~{\rm eV}$ and the cross section is maximum at about $3~E_{\rm I}$. Its abundance, for a Maxwellian electron distribution, is maximum at $T^* \simeq 10^{6}$ K under ionization equilibrium (Arnaud & Rothenflug 1985).
At this temperature, the threshold energy is well above the thermal energy ($\EI/\kT\sim 8$) and only the very high energy tail of the Maxwellian contributes to $\CIM$, i.e. a small fraction of the electron distribution. This fraction is dramatically increased in the Hybrid distribution as soon as the break energy is not too far above the threshold, $\xb\sim 15$ for O$^{+6}$ (Fig. \[fig:fIonisO7a\]). The enhancement factor $\bi$ naturally increases with decreasing break $\xb$ and slope $\a$ parameters (Fig. \[fig:fIonisO7a\]), since the distribution median energy increases when these parameters are decreased (Fig. \[fig:fCEmed\]).\ This behavior versus $\xb$ and $\a$ is general at all temperatures as illustrated in Fig. \[fig:fIonisO7T\], provided that the thermal energy is not too close to $E_{\rm m}$, i.e. that the majority of the contribution to the ionization rate is from electrons with energies corresponding to the increasing part of the ionization cross section. If this is no longer the case, the ionization rate starts to decrease with increasing distribution median energy. Thus, for high enough values of the temperature (see the curve at $T = 10^{8}$ K in Fig. \[fig:fIonisO7T\]), the factor $\bi$ becomes less than unity and decreases with decreasing $\xb$. In that case, however, the correction factor is small (around $10\%$).\ More generally, the enhancement factor $\bi$ at fixed values of $\xb$ and $\a$ depends on the temperature (Fig. \[fig:fIonisO7T\]). It decreases with increasing temperature: the peak of the distribution is shifted to higher energy as the ratio $\kT/\EI$ increases and the enhancement due to the contribution of the hard energy tail decreases.\ The qualitative behavior outlined above does not depend on the ion considered. We plotted in Fig. \[fig:fIonisOTs\] and in Fig.
\[fig:fIonisFeTs\] the enhancement factor for the different ions of oxygen and a choice of iron ions at $T^{*}$ (the temperature of maximum ionization fraction of the ion for a Maxwellian electron distribution under ionization equilibrium). $\EI/\kT^{*}$ is always greater than unity and the ionization rates are increased by the Hybrid distribution, the enhancement factor $\bi$ increasing with decreasing $\xb$. However, this enhancement factor differs from ion to ion; it generally increases with increasing $\EI/\kT^{*}$ value (approximately with an exponential dependence), as shown in Fig. \[fig:fIonisratio\]. This is again due to the relative position of the peak of the distribution with respect to the threshold energy. Note that $\EI/\kT^{*}$ is generally smaller for more ionized ions (but this is not strictly true), so that low charge species are generally more affected by the Hybrid distribution.\ In summary, the Hybrid rates are increased with respect to the Maxwellian rates except at very high temperature. The enhancement factor depends on the temperature, mostly via the factor $\EI/\kT$. It increases dramatically with decreasing temperature and is always important at $T^{*}$, where it can reach several orders of magnitude. The ionization balance is thus likely to be affected significantly, whereas the effect should be smaller in ionizing plasmas but important in recombining plasmas. For $\xb$ typically lower than 10–20 (with this upper limit higher for lower temperature, see Fig. \[fig:fIonisO7T\]), the impact of the Hybrid rate increases with decreasing $\xb$ and $\a$.\ The ionization rates for a Hybrid distribution are less dependent on the temperature than the Maxwellian rates, as illustrated in Fig. \[fig:fIonisOT10\] and in Fig. \[fig:fIonisOT2p5\].
This is a direct consequence of the temperature dependence of the enhancement factor: as this factor increases with decreasing temperature, the Hybrid ionization rate decreases less steeply with temperature than the Maxwellian rates. More precisely, as derived from the respective expressions of the rates at low temperature (respectively Equation \[eq:CDIM\] and Equation \[eq:GDIH2\]), the Maxwellian rate falls off exponentially (as $e^{-\EI/\kT}$) with decreasing temperature, whereas the Hybrid rate only decreases as a power–law. As expected, one also notes that the modification of the rates is more pronounced for lower values of $\xb$ (compare the two figures corresponding to $\xb=10$ and $\xb=2.5$). The recombination rates {#sec:recombination} ----------------------- ### The radiative recombination rates The radiative recombination rates are expected to be less affected by the Hybrid distribution, since the cross sections for recombination decrease with energy and no threshold exists. As the net effect of the high energy tail present in the Hybrid distribution is to increase the median energy of the distribution (cf. Fig. \[fig:fCEmed\]), as compared to a Maxwellian, the radiative recombination rates are decreased.\ To estimate the corresponding damping factor, $\bRR = \rrH/\rrM$, we follow the method used by Owocki & Scudder ([@Owocki83]). We assume that the radiative recombination cross section varies as a power–law in energy: $$\sRR\propto{E^{-a}} \label{eq:sRR}$$ which corresponds to a recombination rate (Equation \[eq:C(kT)\]), for a Maxwellian distribution (Equation \[eq:distM\]), varying as: $$\rrM\propto{T^{-\eta}}$$ with $\eta = a - \frac{1}{2}$.\ The damping factor computed for such a power–law cross section (Equation \[eq:C(kT)\] with Equation \[eq:sRR\]) is: $$\bRR = \frac{ \int_{0}^{\infty} x^{-\eta}\ f^{\rm H}(x)\ dx} {\int_{0}^{\infty} x^{-\eta}\ f^{\rm M}(x)\ dx}$$ Note that the damping factor is independent of the temperature.
It depends on the ion considered via the $\eta$ parameter. Replacing the Maxwellian and Hybrid distribution functions by their expressions (respectively Equations \[eq:distM\] and \[eq:distH\]) we obtain: $$\bRR = \frac{C(\xb,\a)}{\!\Gamma\!\left(\frac{3}{2}\!-\eta\right)}\! \left[\gamma\!\left(\frac{3}{2}\!-\eta,\xb\right) + \frac{\xb^{\frac{3}{2}\!-\eta}~e^{-\xb}}{\a+\!\eta\!-1}\right] \label{eq:bRR}$$ This estimate of the damping factor is only an approximation, since the radiative recombination has to be computed by summing over the various possible states of the recombined ions, taking into account the respective different cross sections. Furthermore, even if the radiative recombination rate can often be approximated by a power–law in a given temperature range, this does not mean that the underlying cross section is well approximated by a unique power–law. However, as we will see, the correction factor is small, and we can reasonably assume that it allows a fair estimate of the true Hybrid radiative recombination rates. To minimize the errors, the Hybrid radiative recombination rate has to be calculated from the best estimate of the Maxwellian rates, multiplied by this approximation of the damping factor: $$\rrH = \bRR\ \rrM \label{eq:aRRH}$$ where $\rrM$ is as given in Mazzotta et al. ([@Mazzotta1998]). The parameters $\eta$ for the various ions are taken from Aldrovandi & P[é]{}quignot ([@Aldrovandi73]), when available. For other ions we used a mean value of $\eta=0.8$ corresponding to the mean value $\langle\eta\rangle$ reported in Arnaud & Rothenflug ([@Arnaud85]). The exact value has a negligible effect on the estimation of the radiative recombination rates.\ The damping factor is plotted in Fig. \[fig:fRROa\] for the various ions of oxygen. In that case a common $\eta$ value is used. The damping factor decreases with decreasing values of $\xb$ and $\a$, following the increase of the distribution median energy.
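Equation \[eq:bRR\] is straightforward to evaluate numerically; the Python sketch below (illustrative only) computes $\gamma(a,x)$ from its power series and adopts the mean value $\eta=0.8$ quoted above:

```python
import math

def lower_gamma(a, x, terms=200):
    # Lower incomplete gamma function via its power series:
    # gamma(a, x) = x^a e^{-x} sum_{n>=0} x^n / (a (a+1) ... (a+n))
    s, term = 0.0, 1.0 / a
    for n in range(terms):
        s += term
        term *= x / (a + n + 1)
    return x**a * math.exp(-x) * s

def norm_C(xb, alpha):
    # Normalisation constant of the Hybrid distribution
    return 0.5 * math.sqrt(math.pi) / (lower_gamma(1.5, xb) + xb**1.5 * math.exp(-xb) / (alpha - 1.0))

def beta_RR(xb, alpha, eta=0.8):
    # Damping factor of Eq. (bRR); eta = 0.8 is the mean value used in the text
    p = 1.5 - eta
    return norm_C(xb, alpha) / math.gamma(p) * (
        lower_gamma(p, xb) + xb**p * math.exp(-xb) / (alpha + eta - 1.0))
```

For $\a=2$ the factor remains within $\sim 15\%$ of unity and approaches 1 as $\xb$ grows, consistent with the modest modification discussed in the text.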
The modification is, however, always modest, at most $15\%$ for $\a=2$. For iron, plotted in Fig. \[fig:fRRFea\] for $\a=2$, the value of $\eta$ slightly changes with the considered ions, but this only yields negligible variations in the damping factor. ### The dielectronic recombination rates The dielectronic recombination is a resonant process involving bound states at discrete energies $E_{i}$ and the rates have to be computed by summing the contribution of many such bound states. According to Arnaud & Raymond ([@Arnaud92]) and Mazzotta et al. ([@Mazzotta1998]), the dielectronic recombination rates for a Maxwellian distribution can be fitted accurately by the formula: $$\label{eq:aDRM} \drM =T_{\rm eV}^{-3/2}\ \sum_{\rm i}c_{\rm i}~e^{-x_{\rm i}}~~~~~~~{\rm cm^{3}\ s^{-1}}$$ where $T_{\rm eV}$ is the temperature expressed in eV and $x_{\rm i}= E_{\rm i}/\kT$. The numerical values for $c_{\rm i}$ and $E_{\rm i}$ are taken from Mazzotta et al. ([@Mazzotta1998]). Only a few terms (typically 1 to 4) are introduced in this fitting formula. They roughly correspond to the dominant transitions for the temperature range considered.\ Following again the method used by Owocki & Scudder ([@Owocki83]), we thus assume that the corresponding dielectronic recombination cross section can be approximated by: $$\begin{aligned} \sDR&=&\sum_{\rm i} C_{\rm i}~\delta(E-E_{\rm i})~~{\rm with}~~ C_{\rm i}=\frac{c_{\rm i}\ (2\pi \me)^{\frac{1}{2}}}{4~E_{\rm i}} \label{eq:sDR}\end{aligned}$$ The relation between $C_{\rm i}$ and $c_{\rm i}$ is obtained by comparing Equation (\[eq:aDRM\]) with the equation obtained by integrating (Equation \[eq:C(kT)\]) the above cross section over a Maxwellian distribution (Equation \[eq:distM\]).
The dielectronic rates can then be computed from Equation (\[eq:C(kT)\]), with the cross section given by Equation (\[eq:sDR\]) and the distribution function given by Equation (\[eq:distH\]): $$\begin{aligned} \label{eq:aDRH} \drH& =&C(x_{\rm b},\a)\ T_{\rm eV}^{-3/2} \sum_{i,x_{\rm i} \leq x_{\rm b}}\!c_{\rm i}\ e^{-x_{\rm i}} \\ &+& C(x_{\rm b},\a)\ T_{\rm eV}^{-3/2} e^{-x_{\rm b}}\!\sum_{i,x_{\rm i} >x_{\rm b}}\!c_{i}\left(\frac{x_{\rm i}}{x_{\rm b}}\right)^{-\a-\frac{1}{2}}\nonumber\end{aligned}$$ Note that this estimate of $\drH$ is only an approximation, for the same reasons outlined above for the radiative recombination rates.\ To understand the effect of the Hybrid distribution, let us assume that only a single energy $\EDR$ is dominant, corresponding to a simple Dirac cross section at this energy. In that case, from Equation (\[eq:C(kT)\]), the ratio of the dielectronic recombination rate in a Hybrid distribution over that in a Maxwellian with the same temperature, $\bDR = \drH/\drM$, is simply the ratio of the Hybrid to the Maxwellian function at the resonance energy. Its expression depends on the position of the resonance energy with respect to the energy break. In reduced energy coordinates, we obtain from Equation (\[eq:distM\]) and Equation (\[eq:distH\]): $$\begin{aligned} \bDR & = &C(\xb,\a) ~~~~~~~~~\mathrm{for}~\frac{\EDR}{\kT} \leq \xb \\ {\bDR} & = & C(\xb,\a)\ {\rm e}^{\left(\frac{\EDR}{\kT}-\xb\right)}\left(\frac{\EDR}{\xb \kT}\right)^{\!-(\frac{1}{2} +\a)}\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~\mathrm{for}~\frac{\EDR}{\kT} \geq \xb \nonumber\end{aligned}$$ For $(\xb~\kT) > \EDR$ (i.e. at high temperature or high values of $\xb$), the resonance lies in the Maxwellian part of the distribution. $\bDR$ is independent of the temperature and the dielectronic recombination rates are decreased, following the variation of the normalisation factor, $C(\xb,\a)$, i.e. the decrease is modest (see Fig. \[fig:fCEmed\]).
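The single-resonance expressions above can be evaluated directly. In the Python sketch below (illustrative only), the O$^{+6}$ numbers ($\EDR=529~{\rm eV}$ at $T^*=10^{6}$ K, i.e. $\kT\simeq 86$ eV) are those quoted in the text:

```python
import math

def lower_gamma_32(x):
    return 0.5 * math.sqrt(math.pi) * math.erf(math.sqrt(x)) - math.sqrt(x) * math.exp(-x)

def norm_C(xb, a):
    return 0.5 * math.sqrt(math.pi) / (lower_gamma_32(xb) + xb**1.5 * math.exp(-xb) / (a - 1.0))

def beta_DR(y, xb, a):
    # Single-resonance ratio beta_DR = f^H / f^M evaluated at y = E_DR / kT
    if y <= xb:
        return norm_C(xb, a)
    return norm_C(xb, a) * math.exp(y - xb) * (y / xb) ** (-(0.5 + a))

# O+6 example from the text: E_DR = 529 eV at T* = 1e6 K, i.e. kT ~ 86.2 eV
y = 529.0 / 86.2
```

At $\xb=10$ the resonance lies in the Maxwellian part and $\bDR=C(\xb,\a)$ is slightly below unity, while at $\xb=2.5$ with $\a=1.5$ the rate is enhanced by a factor of several, matching the behavior plotted in Fig. \[fig:fDRO7a\].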
For $(\xb~\kT) < \EDR$ the resonance lies in the power–law part of the distribution. The increase of the dielectronic recombination rate can be dramatic, increasing with decreasing $\xb$ and $\a$.\ These effects of the Hybrid distribution on the dielectronic recombination rates are illustrated in Fig. \[fig:fDRO7a\] to Fig. \[fig:fDRFeTs\], where we plotted the factor $\bDR$ for various ions and values of the parameters. The factors are computed exactly from Equations (\[eq:aDRM\]) and (\[eq:aDRH\]). In Fig. \[fig:fDRO7a\] we consider O$^{+6}$ at the temperature of its maximum ionization fraction, $T^*=10^{6}~{\rm K}$. For this ion only one term is included in the rate estimate, with $\EDR = 529~{\rm eV}$, and $\EDR/\kT=6.1$ at the temperature considered. We plotted the variation of $\bDR$ with $\xb$ for $\a=3$, $\a=2$ and $\a=1.5$. For $\xb > 6.1$ the ‘resonance’ energy $\EDR$ lies in the Maxwellian part of the distribution and the dielectronic recombination rate is decreased as compared to a Maxwellian, but by less than $10\%$, following the variation of the normalisation factor $C(\xb,\a)$. For smaller values of $\xb$, the rate is increased significantly, up to a factor of 5 for $\a=1.5$. We consider other temperatures, fixing $\a=2$, in Fig. \[fig:fDRO7T\]. Since we only consider the parameter range $\xb > \a+1/2$, there is a threshold temperature, $kT = \EDR/(\a+1/2)$, above which the resonance always falls in the Maxwellian part. The dielectronic recombination rate is decreased via the factor $C(\xb,\a)$. This factor slightly decreases with decreasing $\xb$ (cf. Fig. \[fig:fCEmed\]). At lower temperature, the resonance energy can fall above the break, provided that $\xb$ is small enough ($\xb<\EDR/\kT$). This occurs at smaller $\xb$ for higher temperature and the enhancement at a given $\xb$ increases with decreasing temperature. We display in Fig. \[fig:fDROTs\] and Fig.
\[fig:fDRFeTs\] the variation of the factor $\bDR$ with $\xb$ (for $\a=2$), for the different ions of oxygen and iron, at the temperature of maximum ionization fraction for a Maxwellian distribution under ionization equilibrium. For most of the ions this temperature is above the threshold temperature, $kT = \EDR/(\a+1/2)$, for all the resonances and the dielectronic rate is decreased. For the ions for which this is not the case (O$^{+1}$, O$^{+6}$ and from Fe$^{+1}$ to Fe$^{+5}$), the dielectronic rate can be increased significantly (by a factor between 2 and 5) provided $\xb$ is small enough (typically $\xb=2.5-5$). The increase starts as soon as $\xb<\EDR/\kT^*$ for the oxygen ions. The behavior of $\bDR$ is more complex for the iron ions (two breaks in the variation of $\bDR$), due to the presence of more than one dominant resonance energy (more than one term), taken into account in the computation of the dielectronic rate.\ In conclusion, the effect of the Hybrid distribution on the dielectronic rate depends on the position of the resonance energy as compared to the power–law energy break. It can only be increased if $\kT < \EDR/(\a+1/2)$. At high temperature, the dielectronic recombination rate is slightly decreased. ### The total recombination rates At $\xb =10$, the total rates are basically unchanged by the Hybrid distribution. For $\xb=2.5=\a+1/2$ (Fig. \[fig:fRtotOT2p5\]), the total rates are more significantly changed. The radiative recombination rate increases with decreasing temperature and it usually dominates the total recombination rate in the low temperature range. As the dielectronic rate is increased by the Hybrid distribution only at low temperature, there are very few ions for which the total recombination rate can actually be increased. This only occurs in a small temperature range, in the rising part of the dielectronic rate.
One also notes the expected slight decrease of the radiative recombination rates (when they dominate at low temperature) and of the dielectronic rate at high temperature. Ionization equilibria {#sec:Ionizationequilibria} ===================== Collisional ionization equilibrium (CIE) ---------------------------------------- The ionization equilibrium fractions can be computed from the rates described in the previous sections. In the low density regime (coronal plasmas), the steady state ionic fractions do not depend on the electron density and the population density ratio $N_{\rm Z,z+1}$/$N_{\rm Z,z}$ of two adjacent ionization stages $Z^{+(z+1)}$ and $Z^{+z}$ of element $Z$ can be expressed as: $$\frac{N_{\rm Z,z+1}}{N_{\rm Z,z}}=\frac{C^{\rm Z,z}_{\rm I}}{\alpha^{\rm Z,z+1}_{\rm R}} \label{eq:frac}$$ where $C^{\rm Z,z}_{\rm I}$ and $\alpha^{\rm Z,z+1}_{\rm R}$ are the ionization and total recombination rates of ions $Z^{+z}$ and $Z^{+(z+1)}$ respectively. To assess the impact of the Hybrid rates on the ionization balance, we computed the variation with temperature of the mean electric charge of the plasma. This variation is compared with the variation obtained for a Maxwellian electron distribution in Fig. \[fig:fZmeanOFe\] for oxygen and iron and for different values of the parameters $\xb$ and $\a$.\ As expected, the plasma is always more ionized for a Hybrid electron distribution than for a Maxwellian distribution. The mean charge at a given temperature is increased, since the enhancement of the ionization rate is always much more important than a potential increase of the dielectronic rate (e.g. compare Fig. \[fig:fIonisOTs\] and Fig. \[fig:fDROTs\]). The effect of the Hybrid distribution on the plasma ionization state is thus governed by the enhancement of the ionization rates. The enhancement of the plasma mean charge is more pronounced for smaller values of $\xb$ and smaller values of $\a$ (Fig.
\[fig:fZmeanOFe\]), following the same behavior observed for the ionization rates (due to the increasing influence of the high energy tail). Similarly, the effect is more important at low temperature, and a clear signature of the Hybrid distribution is the disappearance of the lowest ionization stages, which cannot survive even at very low temperature. For instance, for $\a=2$ and the extreme corresponding value of $\xb=\a+1/2$, the mean charge is already +4 for oxygen and +6 for iron at $T = 10^{4}$ K. At high temperature, the mean charge can typically be changed by a few units, the effect being more important in the temperature range where the mean charge changes rapidly with temperature in the Maxwellian case. The same behavior is seen for all elements (Fig. \[fig:fZmean2\]). One notes that the effect of the Hybrid distribution generally decreases with Z. Again this is a consequence of the same behavior observed for the ionization rates (see Fig. \[fig:fIonisratio\]).\ A remarkable effect of the Hybrid distribution is that the mean charge is not always a monotonic function of temperature, in the low temperature regime. This is clearly apparent in Fig. \[fig:fZmeanOFe\] and Fig. \[fig:fZmean2\] for $10^4~{\mathrm K} \le T \le 10^5$ K and $\xb=10$. This phenomenon can only occur when the dielectronic rate dominates the total recombination rate and in the temperature range where this rate increases with temperature. In that case, the density ratio of two adjacent ions, $N_{\rm Z,z+1}$/$N_{\rm Z,z}$, can decrease with temperature provided that the ionization rate of $Z^{+z}$ increases less rapidly with temperature than the recombination rate of the adjacent ion $Z^{+(z+1)}$ (Eq. \[eq:frac\]). This usually does not occur in the Maxwellian case, but can occur in the Hybrid case, due to the flatter temperature dependence of the ionization rates for this type of distribution.
For instance, for $ 3~10^{4}~\K \le T \le 7~10^{4}~\K$, the ionization rate of O$^{+2}$ is increased by a factor of 2.5 for a Hybrid distribution with $\xb=10$ (Fig. \[fig:fIonisOT10\]), whereas the total recombination rate of O$^{+3}$ is increased by a slightly larger factor of 2.7 (see the corresponding grey line in Fig. \[fig:fRtotOT2p5\]; as seen above, for $x_{\rm b}=10$ the total rate is basically unchanged compared to the Maxwellian case). The mean charge, which is around $\langle z \rangle = 2.5$, is thus dominated by the behavior of these ions and decreases in that temperature range. Non–equilibrium ionization (NEI) -------------------------------- Collisional Ionization Equilibrium (CIE) is not always achieved. For example, in adiabatic supernova remnants, the ionization timescale is longer than the dynamical timescale, so that the plasma is underionized compared to the equilibrium case. In non–equilibrium conditions, the ionization state of the gas depends on the thermodynamic history of the shocked gas (temperature, density) and the time elapsed since it has been shocked.\ The time evolution of the ionic fractions is given by: $$\begin{aligned} \frac{dX_{\rm Z,z}}{dt} &=& n_{\mathrm e} [C_{\rm I}^{\rm Z,z-1}X_{\rm Z,z-1} + {\alpha^{\rm Z,z+1}_{\rm R}}X_{\rm Z,z+1} \\ \nonumber &-& (C_{\rm I}^{\rm Z,z} + {\alpha^{\rm Z,z}_{\rm R}})X_{\rm Z,z} ] \\ {\rm with}~&X_{\rm Z,z}& = \frac{N_{\rm Z,z}}{\!\sum_{\rm i}\! N_{\rm Z,i}} \nonumber \label{eq:fracNEI}\end{aligned}$$ To estimate the effects of a Hybrid electron distribution on the ionization in non–equilibrium ionization conditions, we assume that the gas has been suddenly heated to a given temperature, which stays constant during the evolution. The ionization timescale then depends on $\int{n_{\mathrm e}}~dt$, where $n_{\mathrm e}$ is the number density of electrons and $t$ the time elapsed since the gas has been heated.
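The coupled rate equations above can be sketched numerically. The Python fragment below is purely illustrative (made-up rate coefficients for a hypothetical three-stage ion, and simple explicit stepping rather than the exponentiation method used in the paper); it evolves the fractions in the scaled time $\int n_{\mathrm e}\,dt$ and recovers, at large ionization timescale, the equilibrium ratios of Eq. \[eq:frac\]:

```python
def nei_evolve(x0, ion, rec, tau, steps=200000):
    # Explicit integration of the NEI rate equations in the scaled time
    # tau = integral n_e dt (s cm^-3). ion[z]: rate z -> z+1; rec[z]: z -> z-1.
    # A toy stand-in for the exponentiation method cited in the text.
    n = len(x0)
    x = list(x0)
    h = tau / steps
    for _ in range(steps):
        dx = [0.0] * n
        for z in range(n):
            out = (ion[z] if z < n - 1 else 0.0) + (rec[z] if z > 0 else 0.0)
            dx[z] -= out * x[z]
            if z < n - 1:
                dx[z + 1] += ion[z] * x[z]
            if z > 0:
                dx[z - 1] += rec[z] * x[z]
        x = [xi + h * di for xi, di in zip(x, dx)]
    return x

# Hypothetical rate coefficients (cm^3 s^-1) for a 3-stage toy ion;
# the real C_I and alpha_R depend on kT, x_b and alpha as described above.
ion = [2e-11, 1e-11, 0.0]
rec = [0.0, 1e-12, 2e-12]
x = nei_evolve([1.0, 0.0, 0.0], ion, rec, tau=1e12)
```

The fractions remain normalised at every step, and increasing `tau` drives them toward the CIE ratios $C_{\rm I}/\alpha_{\rm R}$, mirroring the approach to equilibrium shown in Fig. \[fig:fZmeanNei\].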
Within this assumption, the coupled system of rate equations can be solved using an exponentiation method (e.g., Hughes & Helfand 1985). For different ionization timescales (up to equilibrium), we computed the variation with temperature of the mean electric charge of oxygen and iron in two extreme cases of the electron distribution: Maxwellian and Hybrid with $\xb=\a+1/2$ and $\a = 2$. For small ionization timescales ($n_{\mathrm e}~t \simeq 10^{8}-10^9$ s cm$^{-3}$), the effect of the Hybrid distribution on the mean electric charge is small; it increases with the ionization timescale and is maximum at equilibrium, as illustrated for oxygen and iron in Fig. \[fig:fZmeanNei\]. As in the equilibrium case, the effect from non–thermal electrons is always more important at low temperature and vanishes at high temperature. Note that the mean electric charge is slightly larger at high temperature for the thermal population than for the non–thermal one, as a consequence of the decrease of the ionization cross section at very high energy. Conclusions =========== We have studied the effect on the ionization and recombination rates, as well as on the ionization balance, of a non–thermal electron distribution, as expected in the vicinity of strong shocks. The electron distribution is modelled by a Maxwellian distribution at low energy up to a break energy, and by a power–law distribution at higher energy. It is characterised by the three parameters $\kT$ (the temperature of the Maxwellian part), $\xb$ the reduced energy break, and $\a$ the slope of the power–law component. We only considered the parameter range where $\xb > \a+1/2$, which corresponds to an enhanced high energy tail. All the behaviors outlined are only valid for this range of parameters. We provide exact formulae for the ionization rates for this Hybrid electron distribution in the Appendix, and approximate estimates of the radiative recombination rates (Eq. \[eq:bRR\] and Eq.
\[eq:aRRH\]) and of the dielectronic recombination rates (Eq. \[eq:aDRH\]). The Hybrid rates depend on the ion considered and on the parameters $\kT$, $\a$ and $\xb$. Computer codes are available on request. For the parameter range considered, the proportion of electrons at high energies and the mean energy of the distribution are monotonic functions of $\xb$ and $\a$. As expected, the modification of the rates for the Hybrid distribution, as compared to the Maxwellian distribution of the same temperature, increases with decreasing $\xb$ (with a threshold at about $\xb \sim 10-20$, higher for lower temperature) and decreasing $\a$. The impact of the Hybrid electron distribution on the ionization rates depends on how the high energy tail affects the proportion of electrons above the ionization potential $\EI$. The Hybrid rates are increased with respect to the Maxwellian rates, except at very high temperature. The enhancement factor depends on the temperature, mostly via the factor $\EI/\kT$, and increases dramatically with decreasing temperature. For a given ion, it is always important at $T^{*}$, the temperature of maximum ionization fraction for a Maxwellian distribution under ionization equilibrium, where it can reach several orders of magnitude. The effect of the Hybrid distribution on the dielectronic rate depends on the position of the resonance energies $\EDR$ as compared to the power–law energy break. The dielectronic rate can only be increased if $\kT < \EDR/(\a+1/2)$. At $T^{*}$ the enhancement factor is typically less than an order of magnitude. At high temperature, the dielectronic recombination rate is slightly decreased (by typically $10\%$ at most). The effect of the Hybrid distribution on the radiative recombination rates is only of the order of a few $10\% $ at most. The ionization balance is affected significantly, whereas the effect is smaller in ionizing NEI plasmas.
The plasma is always more ionized for a Hybrid electron distribution than for a Maxwellian distribution. The effect is more important at low temperature, and a clear signature of the Hybrid distribution is the disappearance of the lowest ionization stages, which cannot survive even at very low temperature. We would like to thank Jean Ballet for a careful reading of the manuscript. Aldrovandi, S. M. V. & Péquignot, D. 1973, A&A, 25, 137 Allen, G. E., Keohane, J. W., Gotthelf, E. V., Petre, R., Jahoda, K., Rothschild, R. E., Lingenfelter, R. E., Heindl, W. A., Marsden, D., Gruber, D. E., Pelling, M. R., Blanco, P. R. 1997, ApJ, 487, L97 Arnaud, M. & Rothenflug, R. 1985, A&AS, 60, 425 Arnaud, M. & Raymond, J. 1992, ApJ, 398, 394 Ballet, J., Luciani, J. F., Mora, P. 1989, A&A, 218, 292 Baring, M. G., Ellison, D. C., Reynolds, S. P., Grenier, I. A., Goret, P. 1999, ApJ, 513, 311 Berezhko, E. G., Ellison, D. C. 1999, ApJ, 526, 385 Berezhko, E. G., Völk, H. J. 2000, APh, 14, 201 Blandford, R. D., Eichler, D. 1987, Phys. Rep., 154, 1 Bykov, A. M., Uvarov, Yu. A. 1999, JETP, 88, 465 Bykov, A. M., Bloemen, H., Uvarov, Yu. A. 2000a, A&A, 362, 886 Bykov, A. M., Chevalier, R. A., Ellison, D. C., Uvarov, Yu. A. 2000b, ApJ, 538, 203 Decourchelle, A., Ellison, D. C., Ballet, J. 2000, ApJ, 543, L57 Drury, L. O’C. 1983, Rep. Prog. Phys., 46, 973 Dzifcáková, E. 1992, Solar Physics, 140, 247 Dzifcáková, E. 1998, Solar Physics, 178, 317 Dzifcáková, E. 2000, Solar Physics, 196, 113 Ellison, D. C., Berezhko, E. G., Baring, M. G. 2000, ApJ, 540, 292 Gaisser, T. K., Protheroe, R. J., Stanev, T. 1998, ApJ, 492, 219 Hughes, J. P., Helfand, D. J. 1985, ApJ, 291, 544 Hughes, J. P., Rakowski, C. E., Decourchelle, A. 2000, ApJ, 543, L61 Jones, F. C., Ellison, D. C. 1991, Space Sci. Rev., 58, 259 Kang, H., Jones, T. W. 1991, ApJ, 249, 439 Koyama, K., Petre, R., Gotthelf, E. V., Hwang, U., Matsura, M., Ozaki, M., Holt, S. S. 
1995, Nature, 378, 255 Koyama, K., Kinugasa, K., Matsuzaki, K., Nishiuchi, M., Sugizaki, M., Torii, K., Yamauchi, S., Aschenbach, B. 1997, PASJ, 49, L7 Laming, J. M. 2001, ApJ, 546, 1149 Mazzotta, P., Mazzitelli, G., Colafrancesco, S., Vittorio, N. 1998, A&AS, 133, 403 Owocki, S. P. & Scudder, J. D. 1983, ApJ, 270, 758 Reynolds, S. P. 1996, ApJ, 459, L13 Reynolds, S. P. 1998, ApJ, 493, 375 Roussel-Dupré, R. 1980, Solar Physics, 68, 243 Sarazin, C. L. 1999, ApJ, 520, 529 Sarazin, C. L. & Kempner, J. C. 2000, ApJ, 533, 73 Seely, J. F., Feldman, U., Doschek, G. A. 1987, ApJ, 319, 541 Slane, P., Gaensler, B. M., Dame, T. M., Hughes, J. P., Plucinsky, P. P., Green, A. 1999, ApJ, 525, 357 Slane, P., Hughes, J. P., Edgar, R. J., Plucinsky, P. P., Miyata, E., Tsunemi, H., Aschenbach, B. 2001, ApJ, 548, 814 Sturner, S. J., Skibo, J. G., Dermer, C. D., Mattox, J. R. 1997, ApJ, 490, 619 Tawara, H., Kato, T., Ohnishi, M. 1985, IPPJ-AM-37, Institute of Plasma Physics, Nagoya University Younger, S. M. 1981, J. Quant. Spectrosc. Radiat. Transfer, 26, 329 Ionization rates {#app:Ionization} ================ Direct ionization (DI) {#app:DI} ---------------------- For the direct ionization (DI) cross sections we chose the fitting formula proposed by Arnaud & Rothenflug ([@Arnaud85]) from the work of Younger ([@Younger1981]): $$\begin{aligned} \label{eq:QDI} \sigma_{\DI}(E)\!&=&\!\sum_{j}\!\frac{1}{\uj \Ij^{2}}\!\left[\Aj~\Uj+ \Bj~\Uj^{2} +\Cj~\ln(\uj)+\Dj~\frac{\ln(\uj)}{\uj}\right]\nonumber \\ &&{\rm with}~~\uj=\frac{E}{\Ij}~; ~~~~\Uj=1-\frac{1}{\uj} \end{aligned}$$ The sum is performed over the subshells $j$ of the ionizing ion. $E$ is the incident electron energy and $\Ij$ is the collisional ionization potential for the level $j$ considered.\ The parameters $\Aj$, $\Bj$, $\Cj$, $\Dj$ (in units of 10$^{-14}$cm$^{2}$eV$^{2}$) and $\Ij$ (in eV) are taken from the works of Arnaud & Raymond ([@Arnaud92]) for iron, and of Arnaud & Rothenflug ([@Arnaud85]) for the other elements. 
The parameters for elements not considered in these works are given in Mazzotta ([@Mazzotta1998]). ### The Maxwellian electron distribution For a Maxwellian electron distribution, Arnaud & Rothenflug ([@Arnaud85]) obtained, from Equations (\[eq:fMaxw\]), (\[eq:C(kT)\]) and (\[eq:QDI\]), the rate: $$\label{eq:CDIM} \CDIM =\frac{6.692\times 10^{7}}{(\kT)^{3/2}} \sum_{j}\frac{{\rm e}^{-\xj}}{\xj} F^{\rm M}_{\DI}(\xj)~~~{\rm cm^3s^{-1}} \label{eq:eq2a}$$ where $$\begin{aligned} \label{eq:FDI} \xj & =&\frac{\Ij}{\kT}\\ F^{\rm M}_{\DI}(\xj)&=& \Aj~[1-\xj f_{1}(\xj)]\nonumber \\ &+&\Bj~[1+\xj -\xj(2+\xj)f_1(\xj)]\\ &+&\Cj~f_1(\xj)+\Dj~\xj f_2(\xj)\nonumber\end{aligned}$$ where $\kT$ and $\Ij$ are in eV. The summation is performed over the subshells $j$ of the ionizing ion. The mathematical functions, $f_1(x)={\rm e}^x \int_{1}^{\infty} \frac{{\rm e}^{-tx}}{t}~dt$, and $f_2(x) ={\rm e}^x \int_{1}^{\infty} \frac{{\rm e}^{-tx}}{t}~\ln (t)~dt$ can be computed from the analytical approximations given by Arnaud & Rothenflug ([@Arnaud85]) in their Appendix B. 
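The cross-section fit of Eq. (\[eq:QDI\]) is straightforward to evaluate once the subshell coefficients are in hand. A minimal sketch (the tuple layout for the shell parameters is an assumption of this example, not a published data format):

```python
import math

def sigma_DI(E, shells):
    """Direct-ionization cross section (cm^2) from the Younger-type fit.
    E: incident electron energy in eV.
    shells: list of (A, B, C, D, I) per subshell, with A..D in units of
    1e-14 cm^2 eV^2 and the ionization potential I in eV."""
    total = 0.0
    for A, B, C, D, I in shells:
        u = E / I
        if u <= 1.0:          # below threshold: this subshell does not contribute
            continue
        U = 1.0 - 1.0 / u
        total += (A * U + B * U**2 + C * math.log(u)
                  + D * math.log(u) / u) / (u * I**2)
    return 1.0e-14 * total
```

For a single toy subshell with $A_j=1$, $B_j=C_j=D_j=0$ and $I_j=10$ eV, an electron at $E = 20$ eV gives $u=2$, $U=1/2$, hence $\sigma = 10^{-14}\times 0.5/(2\times100) = 2.5\times10^{-17}$ cm$^2$.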
### The Hybrid electron distribution For the Hybrid electron distribution, the direct ionization rate $\CDIH$ is given by: $$\begin{aligned} \label{eq:CDIH} \CDIH&=&C(\xb,\a)~\frac{6.692 \times 10^{7}}{(\kT)^{3/2}}\nonumber \\ &\times &\sum_{j}G^{\rm H}_{\DI}(\xj,\xb, \a)~~~{\rm cm^3s^{-1}} \end{aligned}$$ The function $G^{\rm H}_{\DI}(\xj,\xb, \a)$ depends on the position of the power–law break energy as compared to the ionization potential: - For $\xb>\xj$: $G^{\rm H}_{\DI}(\xj,\xb, \a)$ is the sum of the contribution of the truncated Maxwellian component and the power–law component: $$\begin{aligned} G^{\rm H}_{\DI}(\xj,\xb,\a)&=&\frac{{\rm e}^{-\xj}}{\xj}\ F^{\rm M}_{\DI}(\xj)- \frac{{\rm e}^{-\xb}}{\xj}\ F^{'}_{\DI}(\xj,\xb) \nonumber\\ &+& {\rm e}^{-\xb}\ F_{\DI}^{\rm PL}(\ubj,\a)\end{aligned}$$ where: $$\begin{aligned} \ubj& =& \frac{\xb}{\xj} \\ F^{'}_{\DI}(\xj,\xb)&=&\Aj~\left[1-\xj f_{1}(\xb)\right]\nonumber \\ &+&\Bj~\left[1+\frac{\xj}{\ubj}-\xj (2+\xj) f_1(\xb)\right]\nonumber\\ &+&\Cj~\left[f_1(\xb) + \ln\left(\ubj\right)\right]\\ &+&\Dj~\left[\xj f_2(\xb)+\ln\left(\ubj\right) \xj f_1(\xb)\right] \nonumber\\ F_{\DI}^{\rm PL}(\ubj,\a)~&=& \Aj\left[ \frac{\ubj}{\a -1/2} - \frac{1}{\a +1/2} \right]\nonumber \\ &+&\Bj\left[ \frac{\ubj}{\a -1/2} - \frac{2}{\a +1/2} +\frac{\ubj^{-1}}{\a +3/2} \right]\nonumber\\ &+&\frac {\Cj \ubj }{\a-1/2} \left[ \ln(\ubj) + \frac{1}{\a -1/2}\right] \\ &+& \frac{D_j}{\a+1/2} \left[\ln(\ubj) + \frac{1}{\a+1/2}\right] \nonumber\end{aligned}$$ - For $\xb<\xj$: Only the power–law component of the electron distribution contributes to the rate: $$\begin{aligned} \label{eq:GDIH2} G^{\rm H}_{\DI}(\xj,\xb, \a)&=&{\rm e}^{-\xb}\ \left(\frac{\xb}{\xj}\right)^{\a+\frac{1}{2}}\ f_{\DI}(\a) \end{aligned}$$ where $$\begin{aligned} f_{\DI}(\a)&=&\frac{\Aj}{\a^{2}-1/4} + \frac{2 \Bj} {\left(\a^{2}-1/4\right)\left(\a+3/2\right)}\nonumber \\ && + \frac{\Cj}{\left(\a-1/2\right)^{2}} + \frac{\Dj}{\left(\a+1/2\right)^{2}}\end{aligned}$$ 
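In the $\xb<\xj$ branch only the power-law tail contributes, and the factor $f_{\DI}(\a)$ is elementary to evaluate. A minimal sketch (the fit coefficients $A_j \ldots D_j$ are passed in directly; valid only for $\a > 1/2$):

```python
def f_DI(A, B, C, D, alpha):
    """Power-law factor f_DI(alpha) entering G^H_DI when x_b < x_j.
    A, B, C, D are the subshell fit coefficients; requires alpha > 1/2."""
    a2 = alpha**2 - 0.25
    return (A / a2
            + 2.0 * B / (a2 * (alpha + 1.5))
            + C / (alpha - 0.5)**2
            + D / (alpha + 0.5)**2)
```

For example, with $A_j=1$, $B_j=C_j=D_j=0$ and $\a=2$, the factor reduces to $1/(\a^2-1/4) = 1/3.75$.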
Excitation autoionization (EA) {#app:EA} ------------------------------ For the excitation autoionization (EA) cross sections, we used the generalized formula proposed by Arnaud & Raymond ([@Arnaud92]): $$\begin{aligned} \label{eq:QEA} \sigma_{\EA}(E)\!&=&\!\frac{1}{u \IEA}\!\left[\!A+B~U+C~U_{2}+D~U_{3}+E~\ln(u)\right]\nonumber \\ &&{\rm with}~~u=\frac{E}{\IEA}~; ~~~~U_{n}=1-\frac{1}{u^n} \end{aligned}$$ where $\IEA$ is the excitation autoionization threshold and E is the incident electron energy.\ The parameters $A$, $B$, $C$, $D$, $E$ (in units of 10$^{-16}$cm$^{2}$eV) and $\IEA$ (in eV) are taken from the works of Arnaud & Rothenflug ([@Arnaud85]) and Arnaud & Raymond ([@Arnaud92]). The parameters for elements not considered in these works are given in Mazzotta ([@Mazzotta1998]). ### The Maxwellian electron distribution The excitation autoionization rate for a Maxwellian distribution is: $$\begin{aligned} \label{eq:CEAMaxw} \CEAM&=&\frac{6.692 \times 10^{7}~{\rm e}^{-\xEA}}{(\kT)^{1/2}}~ F^{\rm M}_{\EA}(\xEA)~~~{\rm {cm}^3\,s^{-1}}\end{aligned}$$ where $$\begin{aligned} \label{eq:FEA} \xEA&=&\frac{\IEA}{\kT}\\ F^{\rm M}_{\EA}(\xEA)&=& A+B[1-\xEA f_1(\xEA)]\nonumber \\ &+&C[1-\xEA+\xEA^{2} f_1(\xEA)]\\ &+&D\left[1-\frac{\xEA}{2}+\frac{\xEA^{2}}{2}-\frac{\xEA^3}{2} f_1(\xEA)\right]\nonumber\\ &+& E f_1(\xEA)\nonumber\end{aligned}$$ ### The Hybrid electron distribution For the Hybrid electron distribution, the excitation autoionization rate $\CEAH$ is given by: $$\begin{aligned} \label{eq:CEA_H} \CEAH&=&C(\xb,\a)~\frac{6.692 \times 10^{7}}{(\kT)^{1/2}}\nonumber \\ & \times &G^{\rm H}_{\EA}(\xEA,\xb, \a)~~~~~~~~{\rm cm^3s^{-1}} \end{aligned}$$ The function $G^{\rm H}_{\EA}(\xEA,\xb, \a)$ depends on the position of the power–law break energy as compared to the ionization potential:\ - For $\xb>\xEA$: $G^{\rm H}_{\EA}(\xEA,\xb, \a)$ is the sum of the contribution of the truncated Maxwellian component and the power–law component: $$\begin{aligned} G^{\rm 
H}_{\EA}(\xEA,\xb,\a)&=&{\rm e}^{-\xEA}~F^{\rm M}_{\EA}(\xEA)\nonumber \\ &-& {\rm e}^{-\xb}~F^{'}_{\EA}(\xEA,\xb)\\ &+&\xb~{\rm e}^{-\xb}~F^{\rm PL}_{\EA}(\ucEA,\a)\nonumber\end{aligned}$$ where: $$\begin{aligned} \ucEA& =& \frac{\xb}{\xEA} \\ F^{'}_{\EA}(\xEA,\xb)&=& A+B\left[1-\xEA~f_1(\xb)\right]\nonumber \\ &+&C\left[1-\frac{\xEA}{\ucEA}+\xEA^{2}~f_1(\xb)\right] \\ &+&D\left[1-\frac{\xEA}{2 \ucEA^{2}}+ \frac{\xEA^{2}}{2 \ucEA}-\frac{\xEA^3}{2}~f_1(\xb)\right]\nonumber\\ &+&E\left[ \ln(\ucEA)+f_1(\xb)\right]\nonumber \\ F_{\EA}^{\rm PL}(\ucEA,\a)&=&A\left[\frac{1}{\a-1/2}\right]\nonumber \\ &+&B\left[\frac{1}{\a -1/2} - \frac{\ucEA^{-1}}{\a +1/2}\right]\nonumber\\ &+&C\left[\frac{1}{\a -1/2} - \frac{\ucEA^{-2}}{\a +3/2}\right]\\ &+&D\left[\frac{1}{\a -1/2} - \frac{\ucEA^{-3}}{\a +5/2}\right]\nonumber \\ &+&E\left[\frac{1}{(\a -1/2)^2} +\frac{\ln(\ucEA)}{\a -1/2}\right]\nonumber\end{aligned}$$ - For $\xb< \xEA$: Only the power–law component of the electron distribution contributes to the rate: $$\begin{aligned} G^{\rm H}_{\EA}(\xEA,\xb,\a)\!&=&\!\xb\ {\rm e}^{-\xb}\left(\frac{\xb}{\xEA} \right)^{\a-\frac{1}{2}}\!f_{\EA}(\a)\end{aligned}$$ where $$\begin{aligned} f_{\EA}(\a)&=&\frac{A}{\a -1/2}+\frac{B}{\a^{2} -1/4}\nonumber\\ &+&\frac{2 ~C}{(\a -1/2)(\a +3/2)}\\ &+&\frac{3~ D}{(\a -1/2)(\a+5/2)}\nonumber\\ &+&\frac{E}{(\a-1/2)^2}\nonumber\end{aligned}$$ Total ionization rates (DI + EA) -------------------------------- The total ionization rate $\CIH$ is obtained by: $$\begin{aligned} \CIH&=&\CDIH+\CEAH\end{aligned}$$
--- abstract: 'Probing extended polyene systems with energy in excess of the bright state ([$1^1B_u^+$]{}/ $S_2$) band edge generates triplets via singlet fission. This process is not thought to involve the [$2^1A_g^-$]{} / $S_1$ state, suggesting that other states play a role. Using density matrix renormalisation group (DMRG) calculations of the Pariser-Parr-Pople-Peierls Hamiltonian, we investigate candidate states that could be involved in singlet fission. We find that the relaxed [$1^1B_u^-$]{} and [$3^1A_g^-$]{} singlet states and $1^5A_g^-$ quintet state lie below the $S_2$ state. The [$1^1B_u^-$]{}, [$3^1A_g^-$]{} and $1^5A_g^-$ states are all thought to have triplet-triplet character, which is confirmed by our calculations of bond dimerization, spin-spin correlation and wavefunction overlap with products of triplet states. We thus show that there is a family of singlet excitations (i.e., [$2^1A_g^-$]{}, [$1^1B_u^-$]{}, [$3^1A_g^-$]{}, $\cdots$), composed of both triplet-pair and electron-hole character, which are fundamentally the same excitation but have different center-of-mass energies. The lowest energy member of this family, the $2^1A_g^-$ state, cannot undergo singlet fission. But higher energy members (e.g., the [$3^1A_g^-$]{} state), owing to their increased kinetic energy and reduced electron-lattice relaxation, can undergo singlet fission for certain chain lengths.' author: - 'D. J. Valentine' - 'D. Manawadu' - 'W. 
Barford' bibliography: - 'library.bib' title: 'Higher energy triplet-pair states in polyenes and their role in intramolecular singlet fission' --- \[sec:intro\]Introduction ========================= Current commercially available solar cell technology is impeded by the Shockley–Queisser limit, which means that higher energy photons are not efficiently utilized for electricity generation [@Shockley1961]. When a high energy photon is absorbed, the energy greater than the device’s band gap is lost as heat. There are a number of different ways to better utilize the solar spectrum, one such option is singlet fission. Singlet fission is a process in which a singlet exciton generated by photoexcitation evolves into two separate triplets [@Smith2013b]. 
Many polyene systems have been shown to exhibit this phenomenon [@Lanzani1999; @Lanzani2001; @Antognazza2010; @Musser2013; @Kasai2015; @Busby2015; @Kraabel1998; @Musser2019; @Huynh2018; @Huynh2017; @Hu2018]. If the two separate triplets have energy greater than or equal to the band gap of a photovoltaic carrier material, they can each generate a free electron-hole pair in the carrier material [@Rao2017]. Singlet fission is often assumed to involve three processes or steps. The first step is state interconversion from the initial photoexcited state to a singlet state with triplet-pair character. After state interconversion the triplet-pair is coherent and correlated. In the next step, the triplets migrate away from one another; this process can be described as a loss of electronic interaction. During this step, which is spin allowed, the triplets retain their spin coherence forming a geminate triplet-pair in an overall singlet state. The final step involves the loss of spin coherence and leads to two independent triplets (or a non-geminate triplet-pair) [@Marcus2020]. This step is not spin allowed and is expected to be slower than the preceding steps. Marcus and Barford have recently investigated this step using a Heisenberg spin chain model. They show how spin-orbit coupling and dephasing from the environment determine this process [@Marcus2020]. Polyene systems are often modelled as having $C_{2h}$ symmetry [@Bursill1999; @Barford2013a; @Barford2001; @Ren2017; @Aryanpour2015; @Aryanpour2015a; @Schmidt2012a; @Tavan1987; @Hu2015]. In this framework, the first excited singlet state, $S_1$, has the same symmetry as the ground state, $1^1A_g^-$, and is therefore optically inactive. The strongly optically absorbing singlet state is the $1^1B_u^+$ state. Although this is not generally the second excited singlet state in polyenes, it is typically labelled $S_2$. In polyenes some low energy excited states have multiple triplet excitation character [@Tavan1987]. 
This is the case for the $2^1A_g^-$ state, which is sometimes considered as a bound pair of triplet excitations [@Bursill1999; @Barford2013a; @Barford2001; @Ren2017; @Aryanpour2015; @Aryanpour2015a; @Schmidt2012a; @Tavan1987]. In polyene systems it remains unclear if singlet fission proceeds via the [$2^1A_g^-$]{} state, a vibrationally hot variant of the [$2^1A_g^-$]{} state, or a different state [@Musser2013; @Musser2019; @Antognazza2010]. It is also unclear whether singlet fission in polyene-type materials is an inter- or intra-molecular process [@Musser2013; @Wang2011; @Yu2017a]. It has been observed that in long isolated chains no singlet fission occurs after photoexcitation *at* the band edge. Instead, the system relaxes non-radiatively via the [$2^1A_g^-$]{} state. The $S_2$ to $S_1$ transition occurs via internal conversion between the two potential energy surfaces [@Taffet2019], taking place on a time-scale of 100s of fs [@Musser2013; @Antognazza2010; @Musser2019]. Upon excitation with energy in excess of the band edge, however, triplets are detected, with isolated triplet signatures appearing in transient absorption spectroscopy measurements [@Musser2013; @Antognazza2010]. The occurrence of these triplet signals is attributed to singlet fission. Experiments suggest that this mid-band excited singlet fission does not proceed via the [$2^1A_g^-$]{} state. It is claimed that there are two relaxation pathways: one to the ground state (which proceeds via the [$2^1A_g^-$]{} state) and a different singlet fission pathway with no [$2^1A_g^-$]{} involvement [@Musser2013; @Antognazza2010], as illustrated in Fig. \[fig:Scheme\]. If singlet fission in polyenes does not involve the [$2^1A_g^-$]{} state, but does require excess energy to overcome a barrier, it is natural to ask whether any higher energy states contribute. 
Upon vertical excitation it has been found that the $1^1B_u^-$ and $3^1A_g^-$ states exist above the $1^1B_u^+$, although as the chain length increases the $1^1B_u^-$ energy falls below the $1^1B_u^+$ [@Tavan1987; @Hashimoto2018]. It is also thought that the $1^1B_u^-$ and $3^1A_g^-$ states have triplet-triplet character [@Tavan1987]. In addition to the singlet triplet-pair state, a quintet triplet-pair fission intermediate, $^5(T_1 T_1)$, has been observed in acene materials [@Tayebjee2017; @Weiss2017a; @Sanders2019]. Spin mixing is possible between the $^1(T_1T_1)$ and $^5(T_1T_1)$ states [@Merrifield1971], meaning the quintet could be involved in the singlet fission process or offer an alternative relaxation pathway for the excited molecule. In this paper we present our calculations of the properties of the key excited states of polyenes, i.e., the [$2^1A_g^-$]{}, [$1^1B_u^+$]{}, $1^1B_u^-$ and $3^1A_g^-$ singlet states, the [$1^5A_g^-$]{} quintet state and the [$1^3B_u^-$]{} triplet state. We use the Pariser-Parr-Pople-Peierls model to describe interacting $\pi$-electrons coupled to the nuclei, which is solved using the density matrix renormalization group (DMRG) method. We investigate the relaxed geometries of these states within a soliton framework. Excitations in polyene systems contain spin-density-wave, bond-order and charge-density-wave contributions. The interplay between these contributions leads to a myriad of phenomena [@Barford2013a]. To gain insight into the nature of the higher energy excited states, we characterize the states using the spin-spin correlation function, and triplet-pair and electron-hole projections. We also investigate the optical transitions from these key states. As we explain in the Discussion Section, we postulate that the [$3^1A_g^-$]{} state (or another member of the ‘$2A_g$ family’) is the spin-correlated $^1(T \cdots T)$ state, sometimes referred to as the geminate triplet-pair, observed in the SF process in polyenes. 
![Schematic of potential relaxation pathways from the bright state in polyenes[]{data-label="fig:Scheme"}](figures/scheme.pdf){width="\linewidth"} The paper is organized as follows. In Sec. \[sec:PPPP\] we introduce the Pariser-Parr-Pople-Peierls model. In Sec. \[sec:energies\] we discuss the results of our vertical and relaxed energy calculations. Sec. \[sec:bond\_dime\] discusses the relaxed geometries of the excited states. In Sec. \[sec:spin\] we use the spin-spin correlation function to characterise the soliton structure of the states. In Section \[sec:wf\] we also characterise the states via their electron-hole wavefunctions and their overlap with products of triplet states. In Section \[sec:spec\] we relate our work to experimental results via a calculation of the excited state spectra and conclude in Section \[sec:con\]. \[sec:PPPP\] Pariser-Parr-Pople-Peierls Model ============================================= We use the Pariser-Parr-Pople-Peierls (PPPP) model to treat the $\pi$-electrons of the conjugated system. This model includes both long-range electronic interactions and electron-nuclear coupling. It is defined as[@Barford2013a] $$H_{PPPP} = H_{PPP} + H_{el-ph} + H_{elastic},$$ where $H_{PPP}$ is the Pariser-Parr-Pople Hamiltonian, defined by $$H_{PPP} = -2t_0 \sum_n \hat{T}_n + U \sum_n \big( N_{n \uparrow} - \frac{1}{2} \big) \big( N_{n \downarrow} - \frac{1}{2} \big) + \frac{1}{2} \sum_{n \neq m} V_{n, m} \big( N_n -1 \big) \big( N_m -1 \big).$$ Here, $\hat{T}_ n = \frac{1}{2}\sum_{\sigma} \left( c^{\dag}_{n , \sigma} c_{n + 1 , \sigma} + c^{\dag}_{n+ 1 , \sigma} c_{n , \sigma} \right)$ is the bond order operator, $t_0$ is the hopping integral for a uniform, undistorted chain, $U$ is the Coulombic interaction of two electrons in the same orbital and $V_{nm}$ is the long range Coulombic repulsion. We use the Ohno potential given by $V_{nm} = U / \sqrt{ 1 + (U \epsilon_r r_{nm} / 14.397 )^2 }$, with bond lengths $r_{nm}$ in Å. 
$H_{el-ph}$ is the electron-phonon coupling, given by $$H_{el-ph} = 2 \alpha \sum_n \left( u_{n+1} - u_n \right) \hat{T}_n - 2 \alpha W \sum_n \left( u_{n+1} - u_n \right) \left( N_{n+1} -1 \right) \left( N_n - 1 \right),$$ where $\alpha$ is the electron-nuclear coupling parameter and $u_n$ is the displacement of nucleus $n$ from its undistorted position. Through this term, changes in bond length cause changes in both the hopping integrals and the Coulomb interactions. Because the density-density correlator, $(N_n -1 )(N_m - 1)$, decays rapidly with distance, only first-order changes in the Coulomb potential are considered. Therefore, $W$ is determined by $$W = \frac{U\gamma r_0}{\left( 1+\gamma r_0^2 \right)^{3/2}},$$ where $\gamma = \left( U \epsilon_r / 14.397 \right)^2$ and $r_0$ is the undistorted average bond length in Å. The elastic energy of the nuclei contributes through $H_{elastic}$, defined as $$H_{elastic} = \frac{\alpha^2}{\pi t_0 \lambda} \sum_n \left( u_{n+1} - u_n \right)^2 + \Gamma \sum_n \left( u_{n+1} - u_n \right),$$ where $\lambda $, the dimensionless electron-nuclear coupling parameter, is $\frac{2\alpha^2}{\pi K t_0}$ and $K$ is the nuclear spring constant. $\Gamma$ is a Lagrange multiplier which ensures a constant chain length. The requirement that the force per bond vanishes at equilibrium gives a self-consistent equation for the bond distortion, namely $$\left( u_{n+1} - u_n \right) = \frac{ \pi t_0 \lambda }{\alpha} \left(\Gamma - \Braket{ \hat{T_n} } + W \Braket{ \hat{D_n} } \right),$$ where $\hat{D_n} = \left( N_n -1\right) \left( N_{n+1} -1 \right )$ is the nearest neighbor density-density correlator. We follow a parameterization of the PPP Hamiltonian by Mazumdar and Chandross for *screened* polyacetylene, namely $U=8$ eV, $\epsilon_r=2$ and $t_0 = 2.4$ eV [@Chandross1997]. We use the electron-nuclear coupling constants of Barford and co-workers, namely, $\lambda=0.115$ and $\alpha=0.4593$ eV Å$^{-1} $   [@Barford2001]. 
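With the quoted parameterization, the screened Ohno potential interpolates between the on-site $U$ at $r=0$ and an unscreened Coulomb tail $\sim 14.397/(\epsilon_r r)$ eV at large $r$. A minimal sketch:

```python
import math

def ohno(r_nm, U=8.0, eps_r=2.0):
    """Screened Ohno potential V_nm in eV; r_nm in Angstrom.
    Defaults are the Mazumdar-Chandross screened-polyacetylene parameters."""
    return U / math.sqrt(1.0 + (U * eps_r * r_nm / 14.397) ** 2)
```

At $r=0$ the potential reduces to $U$, and at large separations it approaches the point-charge limit $14.397/(\epsilon_r r)$ eV, so the single fit function covers both regimes.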
\[sec:energies\] Vertical and Relaxed Energies ============================================== ![Vertical excitation energy of low-lying singlet and quintet states. Also shown is twice the vertical excitation energy of the triplet for chain lengths $N/2$. $N$ is the number of C-atoms. The insert shows the energies in the asymptotic limit.[]{data-label="fig:vert"}](figures/vertical_energy_singlets_insert.pdf){width="0.95\linewidth"} ![Relaxed excitation energy of low-lying singlet and quintet states. Also shown is twice the relaxed excitation energy of the triplet for chain lengths $N/2$. $N$ is the number of C-atoms. The insert shows the energies in the asymptotic limit.[]{data-label="fig:relax"}](figures/relaxed_energy_insert.pdf){width="0.95\linewidth"} Using the Hellmann-Feynman iterative procedure, the relaxed geometries of the ground state, $1^1A_g^-$, for a range of chain lengths up to 102 carbon atoms (or sites) are calculated using the DMRG method [@Schollwock2005a; @Barford2001; @White1992]. The vertical excitation energies for the lowest energy singlets are shown in Fig. \[fig:vert\]. For short chains we see the usual energetic ordering of $2^1A_g^- < 1^1B_u^+ < 1^1B_u^- < 3^1A_g^-$ [@Tavan1987; @Kurashige2004]. For chain lengths greater than 26 sites the vertical [$1^1B_u^-$]{} energy becomes lower than the $1^1B_u^+$ energy, while at chain lengths greater than 46 sites the [$3^1A_g^-$]{} vertical energy falls below the [$1^1B_u^+$]{} energy. The $1^5A_g^-$ quintet state, however, remains above the bright state at all chain lengths. Its energy converges to twice the triplet energy evaluated at half the chain length, implying that it corresponds to an unbound triplet-pair. This assumption will be confirmed by an analysis of the bond-dimerization and spin-spin correlation in the following sections. 
(This observation of the triplet-pair character of the quintet state is often invoked for simpler models, but remains valid with interacting electrons and electron-nuclear coupling [@Musser2019].) The inset of Fig. \[fig:vert\] shows that the vertical energies of the $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states converge to the same value in the asymptotic limit, being $\sim 0.3$ eV lower than the vertical quintet state. This result indicates that these vertical singlet states are different pseudo-momentum members of the same family of excitations, as described in more detail in Section \[sec:wf\]. They have different energies because of their different center-of-mass kinetic energies, which vanish in the long-chain limit. Turning now to the relaxed energies, as shown in Fig. \[fig:relax\] we find that the relaxed $2^1A_g^-$ state is always lower in energy than the relaxed $1^1B_u^+$ state. For chain lengths greater than 10 and 20 sites the [$1^1B_u^-$]{} and [$3^1A_g^-$]{} states, respectively, also fall below the $1^1B_u^+$ state. The quintet state undergoes a considerable geometry relaxation compared to the [$1^1B_u^+$]{} state and its energy falls below the bright state for $N>16$. Comparing the vertical and relaxed energies, we find that between 10 and 26 sites the vertical [$1^1B_u^-$]{} state lies above the vertical [$1^1B_u^+$]{} state, but the relaxed [$1^1B_u^-$]{} state is below the relaxed [$1^1B_u^+$]{} state; and similarly for the [$3^1A_g^-$]{} state between 20 and 46 site chains. Thus, our calculations suggest that there might exist internal conversion pathways to these states from the optically excited [$1^1B_u^+$]{} state. In addition, if spin mixing is allowed, relaxation pathways could also involve the $1^5A_g^-$ quintet state. Based on the experimental observations that certain relaxation pathways become available only for mid-band or higher excitation [@Antognazza2010; @Musser2019], these pathways are likely to have a barrier. 
The $2A_g^-$ state is found to be a bound state compared to two free (relaxed) triplets. Whilst endothermic singlet fission is possible [@Wilson2013; @Swenberg1968], this state is unlikely to be involved in singlet fission and instead offers an alternative relaxation pathway, as has been observed experimentally [@Musser2013; @Antognazza2010]. Similarly, for realistic chain lengths the relaxed $1^1B_u^-$ energy lies below the relaxed energy of two free triplets, while the relaxed $3^1A_g^-$ energy lies above them for all chain lengths. As for the vertical calculation, for the relaxed states we find that $E(1^5A_g^-) \approx 2 \times E(1^3B_u^-[N/2])$. We note that the relaxation energies increase in the order $3^1A_g^- < 1^1B_u^- < 2^1A_g^-$. Consequently, unlike the vertical energies, the relaxed energies of the $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states do not converge to the same value in the asymptotic limit, and indeed they saturate for $N \gtrsim 50$. This energy saturation occurs because of self-localization of the solitons, which is a consequence of treating the nuclei as classical variables and can be corrected by using a model of fully quantized nuclei [@Barford2002; @Barford2013a]. In practice, however, in realistic systems disorder will also act to localise excited states [@Tozer2014]. \[sec:bond\_dime\]Soliton Structures ==================================== In the even $N$ polyene ground state, nuclei are dimerized along the chain, with a repeated short-long-short bond arrangement. Electronically excited states lower their total energy by distorting from the ground state geometry. In some cases, the ground state bond alternation is reduced or reversed over sections of the chain. The change of dimerisation is characterised by domain walls called solitons [@Heeger1988; @Barford2013a; @Hayden1986; @Barford2001; @Roth2015; @Su1995]. In neutral chains with an even number of sites, each soliton ($S$) is associated with an antisoliton ($\bar{S}$). 
Solitons in linear conjugated systems are of two types: radical or ionic. For a radical soliton associated with covalent states the nuclear distortion is centered around a radical unpaired spin (or a spinon); the soliton has a net spin but is neutral. For an ionic soliton, however, the distortion is associated with an unoccupied or doubly occupied site, so has $S^z = 0$ but is charged [@Barford2013a; @Roth2015]. To investigate the solitonic structure of the excited singlet and quintet states, we calculate the staggered, normalised bond dimerization, $\delta_n$, of their relaxed geometries, defined as $$\delta_n = (-1)^n \frac{(t_n - \bar{t})}{\bar{t}},$$ with $t_n = t_0 + \alpha (u_{n+1}- u_n)$ and $\bar{t}$ being the average of $t_n$. For the ground state, $\delta_n \approx 0.85$ across the chain, with the bond dimerization slightly larger at the ends of the chain. The lowest lying triplet state, [$1^3B_u^-$]{}, is a two-soliton state (i.e., $S\bar{S}$) with each soliton being associated with a radical spin, residing towards the ends of the chain. On the other hand, the $2^1A_g^-$, [$1^1B_u^-$]{} and $1^5A_g^-$ states are four-soliton states. Fig. \[fig:bond\_order\_4\](a) presents the staggered bond dimerization for the $2^1A_g^-$ and $1^5A_g^-$ states, implying that the soliton arrangement is $S\bar{S} S\bar{S}$. The $1^5A_g^-$ bond dimerization strongly resembles that of two triplets residing on either half of the chain, suggesting that the $1^5A_g^-$ state consists of two spatially separated triplets. The bond dimerization of the [$2^1A_g^-$]{} state is well-known [@Hu2015; @Hayden1986; @Tavan1987; @Heeger1988; @Su1995; @Barford2001]: the solitons are more bound, indicating that the [$2^1A_g^-$]{} state is a bound triplet-pair. 
These observations are quantified by fitting the bond dimerization of the [$2^1A_g^-$]{} and $1^5A_g^-$ states by[@Su1995] $$\label{eqn:four_soliton} \begin{split} \delta_n = \delta_0 \Big[ 1 + \tanh \left( \frac{2 n_0 a}{\xi} \right) \Big\lbrace \tanh \left( \frac{2 (n-n_d-n_0) a}{\xi} \right) - \tanh \left( \frac{2 (n-n_d+n_0) a}{\xi} \right) \\ + \tanh \left( \frac{2 (n+n_d-n_0) a}{\xi} \right) - \tanh \left( \frac{2 (n+n_d+n_0) a}{\xi} \right)\Big\rbrace \Big], \end{split}$$ where $\xi$ is the domain wall width, $2n_0$ is the separation of the soliton and antisoliton within a $S\bar{S}$ pair on either side of the chain, while $2n_d$ is the separation of the pairs. The bond dimerization of the [$1^1B_u^-$]{} state, shown in Fig. \[fig:bond\_order\_4\](b), can be explained by the soliton arrangement of $SS\bar{S}\bar{S}$. Its bond dimerization fits the equation $$\label{eqn:four_soliton_alt} \begin{split} \delta_n = \delta_0 \Big[ 1 + \frac{1}{2} \tanh \left( \frac{2 n_0 a}{\xi} \right)\Big\lbrace \tanh \left( \frac{2 (n-n_d-n_0) a}{\xi} \right) + \tanh \left( \frac{2 (n-n_d+n_0) a}{\xi} \right) \\ - \tanh \left( \frac{2 (n+n_d-n_0) a}{\xi} \right) - \tanh \left( \frac{2 (n+n_d+n_0) a}{\xi} \right)\Big\rbrace \Big], \end{split}$$ where $2n_0$ is the separation of the two solitons within a $SS$ pair (and likewise of the two antisolitons within a $\bar{S}\bar{S}$ pair), while $2n_d$ is the separation of these pairs. The [$3^1A_g^-$]{} state bond dimerization can be explained by the six-soliton arrangement of $SS\bar{S}S\bar{S}\bar{S}$. As we will see in Section \[sec:triplet\_triplet\], there are many different triplet-triplet contributions to the [$3^1A_g^-$]{} state, which when summed give rise to a more complicated bond dimerization. \[fig:bondorder3Ag2Ag\] \[fig:bond\_order\_1BuM\] The fitted parameters, $n_0$, $n_d$ and $\xi$, for the three four-soliton states are plotted in Fig. \[fig:fitted\_parameters\] against inverse chain length.
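Equation (\[eqn:four\_soliton\]) is straightforward to implement numerically. The sketch below uses purely illustrative parameter values; in practice $n_0$, $n_d$ and $\xi$ are obtained by least-squares fitting of the calculated bond dimerization (e.g. with `scipy.optimize.curve_fit`):

```python
import numpy as np

def delta_four_soliton(n, delta0, n0, nd, xi, a=1.0):
    """Four-soliton (S-Sbar S-Sbar) staggered bond dimerization,
    Eq. (four_soliton); a is the lattice spacing."""
    t = lambda x: np.tanh(2.0 * x * a / xi)
    return delta0 * (1.0 + t(n0) * (t(n - nd - n0) - t(n - nd + n0)
                                    + t(n + nd - n0) - t(n + nd + n0)))

# Illustrative parameters: domain-wall width xi = 8a, pair separation 2*nd = 30
n = np.arange(-50, 51)
d = delta_four_soliton(n, delta0=0.09, n0=4.0, nd=15.0, xi=8.0)
```

Two checks built into the functional form: the profile is symmetric under $n \to -n$, and far from all four solitons the tanh terms cancel pairwise, so $\delta_n \to \delta_0$ at the chain ends, with dips (locally reversed dimerization) at the soliton centres.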
In Fig. \[fig:fitted\_parameters\](a) we see that the coherence length, $\xi$, converges with chain length for all states. The rapid convergence of $n_0$ with chain length for the [$2^1A_g^-$]{} state, shown in Fig. \[fig:fitted\_parameters\](b), implies that the solitons within a $S\bar{S}$ pair are more strongly bound compared to other states. For the $1^5A_g^- $ state, the change in $n_0$ with chain length, $N$, resembles that of the [$1^3B_u^-$]{} state for a chain of half-length, $N/2$, indicating that the $1^5A_g^- $ state has significant $T_1-T_1$ character with two triplet-like excitations occupying either side of the chain. $n_0$ converges as a function of $N$ for the [$1^1B_u^-$]{} state, implying that the solitons (antisolitons) within a $SS$ ($\bar{S}\bar{S}$) pair are bound. The distances between soliton pairs, $n_d$, are shown in Fig. \[fig:fitted\_parameters\](c). Again, for the [$2^1A_g^-$]{} state there is rapid convergence in the separation of these pairs. In contrast, both the $1^5A_g^-$ and [$1^1B_u^-$]{} states do not show convergence, with the pair separation increasing as the chain length increases. The $S\bar{S}$ pair distance for the $1^5A_g^-$ state follows $n_d \approx N/4$, again indicating that the pairs are unbound. \[fig:fitted\_parameters\_2\] \[fig:fitted\_parameters\_1\] \[fig:fitted\_parameters\_3\] \[sec:spin\]Spin-spin Correlation ================================= In addition to the bond dimerization, further insight into the radical (spinon) character of the triplet-pair states is obtained via the spin-spin correlation function, defined as $$S_{nm} = \Braket{S_n^z S_m^z}.$$ A positive/negative spin-spin correlation value indicates a ferromagnetic/antiferromagnetic alignment between a pair of spins. The spin-spin correlations for the relaxed [$1^3B_u^-$]{} state are shown in Fig. \[fig:spin\_spin\_T\]. 
We see that the radical soliton/antisoliton of the triplet state localize towards the end of the chain and there is a long range spin-spin correlation between them. The solitons are delocalized over a small region well described by the coherence length, $\xi$. Fig. \[fig:spin\_spin\_Q\] shows the spin-spin correlation for the relaxed [$1^5A_g^-$]{} state. We see three correlations between neighbouring solitons and antisolitons, and three further long range correlations. This correlation pattern is consistent with two unbound triplets, i.e., four solitons positioned along the chain as predicted from Eq. \[eqn:four\_soliton\] and presented in Fig. \[fig:bond\_order\_4\]a. A schematic of the soliton interactions that lead to these six correlations is shown in Fig. \[fig:spin\_spin\_corr\]. Fig. \[fig:spin\_spin\_2Ag\] shows $S_{nm}$ for the [$2^1A_g^-$]{} state. It is difficult to discern correlations between individual solitons, because the correlations overlap each other. However, along the anti-diagonal, $m = (N-n)$, long range correlations between sites $\approx 10 $ and $ \approx 40$ can be seen. Overall, the spin-spin correlations of the [$2^1A_g^-$]{} state further confirm its bound triplet-pair character. The triplets are bound in the middle of the chain, and individual solitons contributing to the triplets cannot be identified. $S_{nm}$ for the relaxed [$1^1B_u^-$]{} state shows correlations similar to [$3^1A_g^-$]{}, but with much more delocalized correlations along the antidiagonal. 
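The spin-spin correlation function is easy to evaluate on a small toy model. The paper's results are DMRG calculations for a Pariser-Parr-Pople-Peierls model; purely for illustration (a substitution, not the paper's method), the sketch below computes $S_{nm} = \langle S_n^z S_m^z \rangle$ in the ground state of a short open antiferromagnetic Heisenberg spin-1/2 chain by exact diagonalization:

```python
import numpy as np

# Single-site spin-1/2 operators in the {up, down} basis
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S+
sm = sp.T                                  # S-
I2 = np.eye(2)

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == n else I2)
    return out

N = 8
H = np.zeros((2 ** N, 2 ** N))
for n in range(N - 1):   # open-chain antiferromagnetic Heisenberg model, J = 1
    H += site_op(sz, n, N) @ site_op(sz, n + 1, N)
    H += 0.5 * (site_op(sp, n, N) @ site_op(sm, n + 1, N)
                + site_op(sm, n, N) @ site_op(sp, n + 1, N))

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                           # singlet ground state

def S_nm(n, m):
    """Spin-spin correlation <GS| S^z_n S^z_m |GS>."""
    return gs @ site_op(sz, n, N) @ site_op(sz, m, N) @ gs
```

For this toy model $S_{nn} = 1/4$ on the diagonal and nearest-neighbour correlations are antiferromagnetic (negative), the same qualitative features read off from the correlation maps in the figures.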
[0.45]{} ![image](figures/28_06_relaxed_spin_spin_T_54_mirror.pdf){width="\textwidth"} [0.45]{} ![image](figures/28_06_relaxed_spin_spin_2Ag_54_mirror.pdf){width="\textwidth"} [0.45]{} ![image](figures/28_06_relaxed_spin_spin_1BuM_54_mirror.pdf){width="\textwidth"} [0.45]{} ![image](figures/3Ag_spin_spin.pdf){width="\textwidth"} [0.45]{} ![image](figures/28_06_relaxed_spin_spin_Q_54_mirror.pdf){width="\textwidth"} [0.45]{} ![image](figures/soliton_correlation.pdf){width="\textwidth"} Excited State Wavefunctions {#sec:wf} =========================== The low-lying singlet dark states, i.e., $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$, have negative particle-hole symmetry and are sometimes characterised as being ‘covalent’ or of predominantly spin-density-wave (SDW) character. In contrast, the optically allowed $1^1B_u^+$ state has positive particle-hole symmetry and is characterised as being ‘ionic’ or of electron-hole character. (In this paper we adopt the chemists’ definition of particle-hole symmetry, which is opposite to the physicists’ notation (see ref. 17).) In practice, however, the multi-excitonic $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states have both covalent and ionic character. In addition, as we saw in Section \[sec:energies\], the vertical energies of these states converge to the same value as $N \rightarrow \infty$ suggesting that they are related. In this section we describe the multi-excitonic character of these states and explain how they are members of the same family of excitations. We first discuss the triplet-pair components of these states before describing their excitonic wavefunctions.
\[sec:triplet\_triplet\] Triplet-Triplet Overlap ------------------------------------------------ By comparing the excitation energy of the low-energy singlets of polyenes with the excitation energy of the individual triplets, it has been suggested that the triplet-triplet combinations that contribute to each state are [@Taffet2019; @Tavan1987]: $$\begin{split} 2^1A_g^- \equiv T_1 \otimes T_1 \\ 1^1B_u^- \equiv T_1 \otimes T_2 \\ 3^1A_g^- \equiv T_2 \otimes T_2 \\ \end{split}$$ where $T_1 \equiv 1^3B_u^-$ and $T_2 \equiv 1^3A_g^-$. To quantify the triplet-triplet character of the [$2^1A_g^-$]{}, [$3^1A_g^-$]{} and [$1^1B_u^-$]{} singlet states we compute their overlap with triplet-pair direct product wavefunctions. We calculate the vertical excited state wavefunctions for a chain of 12 sites. Then, by calculating the triplet states for a chain of 6 sites and taking the direct product of different pairs of triplets, we can generate states that have triplets on either half of the chain. The squared overlaps of the triplet-triplet wavefunctions with the [$2^1A_g^-$]{}, [$3^1A_g^-$]{} and [$1^1B_u^-$]{} states are presented in Tab. \[tab:2Ag\_overlap\]. The [$2^1A_g^-$]{} state, whilst primarily consisting of $T_1 \otimes T_1$ components, also contains some $T_1 \otimes T_2$ character. The [$1^1B_u^-$]{} state consists exclusively of $T_1 \otimes T_2$, as only these combinations are symmetry allowed. The [$3^1A_g^-$]{} state, rather than primarily having $T_2 \otimes T_2$ character, has both $T_1 \otimes T_1$ and symmetry-allowed combinations of $T_1$ and $T_2$ components in its wavefunction. Indeed, the sum of the $T_1 \otimes T_1$ and $T_1 \otimes T_2$ components has larger amplitude than the $T_2 \otimes T_2$ character.
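The squared-overlap calculation amounts to projecting a full-chain eigenstate onto direct products of half-chain triplet states. A hypothetical sketch of that projection (the state vectors below are random placeholders standing in for actual DMRG wavefunctions):

```python
import numpy as np

rng = np.random.default_rng(0)

def squared_overlap(psi, t_l, t_r):
    """|<T_l (x) T_r | Psi>|^2: weight of a half-chain triplet-pair
    product state in the full-chain state Psi."""
    return abs(np.vdot(np.kron(t_l, t_r), psi)) ** 2

# Sanity check: a state built as a product has unit overlap with itself
d = 16                                     # half-chain dimension (illustrative)
t_l = rng.normal(size=d); t_l /= np.linalg.norm(t_l)
t_r = rng.normal(size=d); t_r /= np.linalg.norm(t_r)
psi = np.kron(t_l, t_r)
w = squared_overlap(psi, t_l, t_r)
```

Summing `squared_overlap` over all symmetry-allowed $(T_l, T_r)$ pairs, as in the table below, gives the total triplet-pair weight of each singlet state.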
Since the [$3^1A_g^-$]{} state has character from each of the triplet-triplet combinations, the sum of these contributions leads to the complicated staggered bond dimerization and spin-spin correlation discussed in Section \[sec:bond\_dime\] and Section \[sec:spin\].

| $T_l$ | $T_r$ | $2^1A_g^-$ | $3^1A_g^-$ | $1^1B_u^-$ |
|:---:|:---:|:---:|:---:|:---:|
| $T_1^{0}$ | $T_1^{0}$ | 0.134 | 0.020 | - |
| $T_1^{+1}$ | $T_1^{-1}$ | 0.134 | 0.020 | - |
| $T_1^{-1}$ | $T_1^{+1}$ | 0.134 | 0.020 | - |
| $T_1^{0}$ | $T_2^{0}$ | 0.010 | 0.012 | 0.022 |
| $T_2^{0}$ | $T_1^{0}$ | 0.010 | 0.012 | 0.022 |
| $T_1^{+1}$ | $T_2^{-1}$ | 0.010 | 0.012 | 0.022 |
| $T_2^{-1}$ | $T_1^{+1}$ | 0.010 | 0.012 | 0.022 |
| $T_2^{+1}$ | $T_1^{-1}$ | 0.010 | 0.012 | 0.022 |
| $T_1^{-1}$ | $T_2^{+1}$ | 0.010 | 0.012 | 0.022 |
| $T_2^{0}$ | $T_2^{0}$ | - | 0.015 | - |
| $T_2^{+1}$ | $T_2^{-1}$ | - | 0.015 | - |
| $T_2^{-1}$ | $T_2^{+1}$ | - | 0.015 | - |
| Total | | 0.462 | 0.177 | 0.132 |

Triplet-pair Wavefunctions {#sec:bimagnon} -------------------------- In the previous section we saw that higher-energy covalent states are composed of linear combinations of higher-energy triplet states. In this section we quantify how to construct bound triplet-pair states from free triplet-pair states. To do this it is convenient to assume translationally invariant systems. We also assume a dimerized antiferromagnetic groundstate, from which bound triplet-pair excitations are predicted [@Harris1973; @Uhrig1996]. Suppose that $a_{k_1}^{\dagger}$ and $a_{k_2}^{\dagger}$ create triplet excitations (or more precisely, bound spinon-antispinon pairs [@Uhrig1996]) with wavevectors $k_1$ and $k_2$. Then a free triplet-pair excitation is $$\ket{k_1,k_2} = \ket{K-k’/2,K+k’/2} = a_{k_1}^{\dagger} a_{k_2}^{\dagger} \ket{\mathrm{GS}},$$ where $K = (k_1+k_2)/2$ is the center-of-mass wavevector, $k’=(k_1-k_2)$ is the relative wavevector, and $\ket{\mathrm{GS}}$ represents the dimerized antiferromagnetic groundstate.
A bound triplet-pair excitation is a linear combination of the kets $\{\ket{k_1,k_2} \}$, namely $$\ket{\Phi_n(K)} = \sum_{k’,K’} \Phi_n(k’,K’) \ket{K-k’/2,K+k’/2},$$ where $\Phi_n(k’,K’)$ is the triplet-pair wavefunction in $k$-space and $n$ is the principal quantum number. $K$ is a good quantum number for the bound state (although $k'$ is not), hence $$\Phi_n(k’,K’) = \psi_n(k’) \delta(K’-K)$$ and thus $$\label{eq:54} \ket{\Phi_n(K)} = \sum_{k’} \psi_n(k’) \ket{K-k’/2,K+k’/2}.$$ Fourier transforming $\Phi_n(k’,K’)$ gives the real-space triplet-pair wavefunction: $$\tilde{\Phi}_{n,K}(r,R) = \tilde{\psi}_n(r)\tilde{\Psi}_K(R)$$ where the center-of-mass wavefunction is the Bloch state $$\tilde{\Psi}_K(R) = \frac{1}{\sqrt{N}}\exp(iKR).$$ $R$ is the center-of-mass coordinate, and $\tilde{\psi}_n(r)$ is the relative wavefunction with $r$ being the T-T separation. Equation (\[eq:54\]) indicates that the bound triplet-pair state is constructed from a linear combination of free triplet-pair states with different $k_1$ and $k_2$, subject to a definite center-of-mass wavevector (and momentum). These states form a band, whose bandwidth is determined by their center-of-mass kinetic energy. A bound triplet-pair is unstable to dissociation (or fission) if its kinetic energy is greater than the triplet-pair binding energy. Exciton Wavefunctions {#sec:excitons} --------------------- We now describe the exciton wavefunctions of the low-energy states of linear polyenes, using a real-space representation. The excitation of an electron from the valence-band to the conduction-band in semiconductors creates a positively charged hole in the valence-band. In conjugated polymers, the electrostatic interaction between the two creates a bound electron-hole pair, termed an exciton.
Assuming the ‘weak-coupling’ limit, excitons in conjugated polymers are described by an effective H-atom model [@Barford2002b; @Barford2013a; @Barford2013b] or by a mapping from a single-CI calculation [@Barford2008]. An excitation from the valence-band to the conduction-band can thus be characterized by an effective particle model [@Barford2013a]. In the real-space picture, an exciton is described by its center-of-mass coordinate, $R$, and the relative coordinate, $r$ [@Barford2013a]. $r$ is a measure of the size of the exciton. The electron-hole coordinate, $r$, is associated with the principal quantum number, $n$ [@Barford2002b; @Barford2013a; @Barford2013b], while the center-of-mass coordinate is associated with the center-of-mass quantum number, $j$. We denote an exciton basis state by $\ket{R+r/2,R-r/2}$. The exciton creation operator $S_{rR}^\dagger$ creates a hole in the valence-band orbitals at $(R-r/2)$ and an electron in the conduction-band orbitals at $(R+r/2)$, i.e., $$\begin{aligned} \ket{R+r/2,R-r/2} &= S_{rR}^\dagger \ket{\mathrm{GS}},\end{aligned}$$ where $\ket{\mathrm{GS}}$ is the ground state in this basis. To investigate the electron-hole nature of the excited states, we express an excited state $\ket{\Phi}$ as a linear combination of the real-space exciton basis $\{\ket{R+r/2,R-r/2}\}$: $$\begin{aligned} \label{eq:16} \ket{\Phi} &= \sum_{r, R} \Phi(r, R)\ket{R+r / 2, R-r / 2}. \end{aligned}$$ $\Phi(r, R)$ is the exciton wavefunction and is obtained by projecting the excited state onto the exciton basis: $$\begin{aligned} \label{eq:17} \Phi(r, R) &= \mel{\mathrm{GS}}{S_{rR}}{\Phi}. \end{aligned}$$ The calculated vertical exciton wavefunctions for the $1^1 B_u^+$, $1^1 A_g^+$, $2^1 A_g^-$, $1^1 B_u^-$ and $3^1 A_g^-$ states for a chain of $L=102$ sites are illustrated in Fig. \[exciton\]. [0.45]{} ![Exciton components obtained from Eq. (\[eq:17\]).
$n$ and $j$ are the exciton principal and center-of-mass quantum numbers, respectively.[]{data-label="exciton"}](figures/1Bu-25sitesU8.pdf "fig:"){width="\textwidth"} [0.45]{} ![Exciton components obtained from Eq. (\[eq:17\]). $n$ and $j$ are the exciton principal and center-of-mass quantum numbers, respectively.[]{data-label="exciton"}](figures/1Ag-25sitesU8.pdf "fig:"){width="\textwidth"} [0.45]{} ![Exciton components obtained from Eq. (\[eq:17\]). $n$ and $j$ are the exciton principal and center-of-mass quantum numbers, respectively.[]{data-label="exciton"}](figures/2Ag+25sitesU8.pdf "fig:"){width="\textwidth"} [0.45]{} ![Exciton components obtained from Eq. (\[eq:17\]). $n$ and $j$ are the exciton principal and center-of-mass quantum numbers, respectively.[]{data-label="exciton"}](figures/1Bu+25sitesU8.pdf "fig:"){width="\textwidth"} [0.45]{} ![Exciton components obtained from Eq. (\[eq:17\]). $n$ and $j$ are the exciton principal and center-of-mass quantum numbers, respectively.[]{data-label="exciton"}](figures/3Ag+25sitesU8.pdf "fig:"){width="\textwidth"} The nodal patterns of $\Phi (r,R)$ indicate that the $1^1B_u^+$ and $1^1A_g^+$ states have components belonging to the $n=1$ family of (even-parity) excitons with center-of-mass quantum numbers, $j$ = 1 and 2, respectively. Similarly, the $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states have components belonging to the $n=2$ family of (odd-parity) excitons with center-of-mass quantum numbers, $j$ = 1, 2 and 3, respectively. Thus, the single electron-hole components of the [$2^1A_g^-$]{}, [$3^1A_g^-$]{} and [$1^1B_u^-$]{} states belong to the same fundamental excitation. We note, however, that the electron-hole weights for these states are five times smaller than for the $1^1B_u^+$ and $1^1A_g^+$ states. The ‘$2A_g$ Family’ ------------------- As we have shown, the $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states have both triplet-pair and electron-hole components. 
For a translationally invariant system their vertical excitations can be expressed as $$\label{eq:65} \ket{\Phi(K)} = a^{\textrm{TT}}(K)\ket{\Phi_{m}^{\textrm{TT}}(K)} + a^{\textrm{e-h}}(K)\ket{\Phi_{n}^{\textrm{e-h}}(K)},$$ where $\ket{\Phi_{m}^{\textrm{TT}}(K)}$ is given by Eq. (\[eq:54\]) and $\ket{\Phi_{n}^{\textrm{e-h}}(K)}$ is given by the Fourier transform of Eq. (\[eq:16\]). Both components are labelled by the same center-of-mass quantum number, $K$, but the principal quantum numbers for the triplet-pair ($m$) and electron-hole ($n$) components are different, being $1$ and $2$, respectively. Equation (\[eq:65\]) conveys the concept that the $2^1A_g^-$, $1^1B_u^-$ and $3^1A_g^-$ states are the three lowest-energy members of the same set of fundamental excitations, which are distinguishable only by their center-of-mass momentum. Spectra {#sec:spec} ======= As the low-energy states have triplet-triplet components, we might expect their spectra to resemble those of the [$1^3B_u^-$]{} and $1^3A_g^-$ triplet states. We calculated the approximate spectra of the [$2^1A_g^-$]{}, [$3^1A_g^-$]{}, [$1^1B_u^-$]{}, [$1^5A_g^-$]{}, [$1^3B_u^-$]{}($T_1$) and $1^3A_g^-(T_2)$ states for a chain of $N=26$ sites using the expression, $$I(E) = \sum_i |\Braket{i|\hat{\mu}|\Psi}|^2 \delta(E_i -E_{\Psi} - E), \label{eqn:spectra}$$ where the sum is over states with opposite particle-hole and $C_2$ symmetry to the state $\Ket{\Psi}$. $(E_i - E_{\Psi})$ is the energy difference between state $i$ and state $\Psi$, $\hat{\mu}$ is the transition dipole operator and $ |\Braket{i|\hat{\mu}|\Psi}|^2$ is the square of the transition dipole moment between states $i$ and $\Psi$. As shown in Figs. \[fig:spectra\_26\] -\[fig:spectra\_54\], for all of the triplet-pair states, the maximum absorption occurs within $\sim 0.5$ eV of the [$1^3B_u^-$]{} maximum absorption.
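For plotting, the stick spectrum of Eq. (\[eqn:spectra\]) is typically broadened. A small sketch in which the delta function is replaced by a Lorentzian; the width $\gamma$ and the transition data below are illustrative assumptions, not values from the calculation:

```python
import numpy as np

def spectrum(E_grid, energies, dipoles2, E_psi, gamma=0.05):
    """I(E) = sum_i |<i|mu|Psi>|^2 delta(E_i - E_Psi - E), with the delta
    function broadened into a normalised Lorentzian of half-width gamma (eV)."""
    I = np.zeros_like(E_grid)
    for E_i, mu2 in zip(energies, dipoles2):
        I += mu2 * (gamma / np.pi) / ((E_grid - (E_i - E_psi)) ** 2 + gamma ** 2)
    return I

# Illustrative stick data: two absorptions from a state at E_psi = 1.0 eV
E = np.linspace(0.0, 5.0, 2001)
I = spectrum(E, energies=[3.0, 4.2], dipoles2=[2.0, 0.5], E_psi=1.0)
```

The strongest transition (at $E_i - E_\Psi = 2.0$ eV here) dominates the broadened lineshape, which is how the "maximum absorption" energies quoted in the text are read off.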
Given that the [$2^1A_g^-$]{} state is considered to be a bound triplet-pair, we might expect the maximum absorption energy to be higher than that of the triplet state, as a photoexcitation would need to overcome the binding energy of the triplet pair, in addition to having enough energy to excite a triplet state. However, the maximum absorption of the [$2^1A_g^-$]{} state is found to be lower than that of the [$1^3B_u^-$]{} state, in agreement with experimental observations in carotenoids [@Polak2019]. The [$2^1A_g^-$]{} state also has an absorption in the near infra-red part of the spectrum, which can be attributed to the [$2^1A_g^-$]{} $\rightarrow$ [$1^1B_u^+$]{} transition. The near infra-red absorption of a bound triplet-pair has also been predicted in acene materials [@Khan2020]. The [$1^5A_g^-$]{} quintet state exhibits a single absorption with energy closest to the triplet maximum absorption over all chain lengths and whose intensity most closely matches the [$1^3B_u^-$]{} absorption. Comparing the [$1^1B_u^-$]{} to the triplets, for each triplet absorption there is a corresponding red-shifted absorption in the [$1^1B_u^-$]{} spectra, again indicating that, despite being a bound state, its absorptions are lower in energy than those of individual triplets. Due to the mixed triplet-pair character of the [$3^1A_g^-$]{} state there are many different absorptions. As for the [$2^1A_g^-$]{} state, the [$3^1A_g^-$]{} state has a lower energy absorption in the infra-red to yellow portion of the spectrum. We note that although our calculated transition energy from the [$1^5A_g^-$]{} state coincides with our calculated $T-T^*$ transition energy (i.e., ca. 2.56 eV for the 26-site chain), this energy is over an eV higher than the observed $T-T^*$ transition energy for singlet fission in conjugated polyenes [@Musser2013; @Musser2019].
We attribute this discrepancy to the failure of the Mazumdar and Chandross parameterization of the PPP model to correctly estimate the solvation energy of weakly bound excitons and charges [@Chandross1997]. The $T^*$ state is expected to be the $n=2$ (charge-transfer) triplet exciton, whose solvation energy is over an eV larger than predicted by the parametrized PPP model [@Barford2011]. Discussion and Conclusions {#sec:con} ========================== By calculating the relaxed energies of the singlet states of conjugated polyenes, we find that the [$1^1B_u^-$]{} and [$3^1A_g^-$]{} states lie below the bright [$1^1B_u^+$]{} state at experimentally relevant chain lengths. This implies that these states could be involved in relaxation pathways, particularly if systems are excited with energy higher than the band edge. In addition, we find that the energy of the relaxed [$1^5A_g^-$]{} state on a chain of $N$ C-atoms is twice the energy of the relaxed triplet state on a chain of $N/2$ C-atoms, so if spin mixing were allowed this state could represent an intermediate unbound triplet-pair state for the singlet fission process. An analysis of the bond dimerization of the relaxed excitations indicates that the [$2^1A_g^-$]{} is a four-soliton state, as previously found [@Hayden1986]. The [$1^5A_g^-$]{} and [$1^1B_u^-$]{} states are also found to be four-soliton states. Both of these states seem to consist of repelling soliton pairs, with the bond dimerization of the [$1^5A_g^-$]{} resembling two [$1^3B_u^-$]{} triplets occupying either side of the chain. The [$3^1A_g^-$]{} state bond dimerization is more complicated due to the mixed triplet-pair combinations that contribute to this state. The spin-spin correlation function offers another way to visualise the soliton structure. This again indicates that the [$2^1A_g^-$]{} is a bound triplet-pair.
We also find that the [$1^5A_g^-$]{} and [$1^1B_u^-$]{} states show long-range spin correlations, which correspond to the staggered bond dimerisation. The calculated spectra indicate that the [$1^5A_g^-$]{} state most closely resembles the triplet absorption, although the [$1^1B_u^-$]{} and [$3^1A_g^-$]{} states also absorb at a similar energy. Recent pump-push-probe experiments by Pandya et al. re-excited (push) the [$2^1A_g^-$]{} state after it had been generated by relaxation of the initially photoexcited state [@Pandya2020]. As the [$2^1A_g^-$]{} state has $^1(T_1T_1)$ character, the excited push state is expected to be of $^1(T_1T^*)$ character. Relaxation from this state was found to involve a state with spatially separated, but correlated triplet pairs. We predict that this state is either the [$1^1B_u^-$]{} or [$3^1A_g^-$]{} state [@Pandya2020]. We further probed the triplet-triplet nature of the [$2^1A_g^-$]{}, [$3^1A_g^-$]{} and [$1^1B_u^-$]{} states by calculating the overlap of these states with half-chain triplet combinations. The $ T_1 \otimes T_1 $ nature of the [$2^1A_g^-$]{} state and the $ T_1 \otimes T_2 $ nature of the [$1^1B_u^-$]{} state were confirmed. The [$3^1A_g^-$]{} state has a mixture of $ T_1 \otimes T_1 $, and symmetry-allowed $T_1 \otimes T_2$ and $ T_2 \otimes T_2 $ contributions. We also showed that the electron-hole excitation components of the [$2^1A_g^-$]{}, [$1^1B_u^-$]{} and [$3^1A_g^-$]{} states belong to the same $n=2$ family of excitons with center-of-mass quantum numbers $j=1,2,$ and 3, respectively. One of the aims of this work has been to identify a singlet state in polyenes that is intermediate between the initially photoexcited singlet state, $S_2$, and the final non-geminate pair of triplet states. Such a state should satisfy the following conditions: 1. [It should have significant triplet-triplet character.]{} 2.
[Its vertical energy should lie above the vertical energy of $S_2$, but its relaxed energy should lie below the relaxed energy of $S_2$. Such conditions imply the possibility of an efficient interconversion from $S_2$ via a conical intersection.]{} 3. [Its relaxed energy should lie slightly higher than twice the relaxed energy of the triplet state, so that fission is fast and exothermic.]{} For our choice of model parameters we find that: the $2^1A_g^-$ state only satisfies condition (1.); the $1^1B_u^-$ state satisfies condition (1.) and condition (2.) for $10 < N < 26$, but not condition (3.); the $3^1A_g^-$ state satisfies condition (1.), condition (2.) for $20 < N < 46$, and condition (3.). Thus, the $3^1A_g^-$ state would appear to be a candidate intermediate state for longer polyenes, but such a state does not exist for shorter carotenoids. We should be cautious, however, about making a prediction about the precise intermediate state, since, owing to the use of semi-empirical parameters, our calculated excitation energies are only expected to be accurate to within a few tenths of an eV. Our key conclusion, therefore, is that there is a family of singlet excitations (the ‘[$2^1A_g^-$]{} family’), composed of both triplet-pair and electron-hole character, which are fundamentally the same excitation (i.e., have the same principal quantum numbers), but have different center-of-mass energies. The lowest energy member of this family, the $2^1A_g^-$ state, cannot undergo singlet fission. But higher energy members, owing to their increased kinetic energy and reduced electron-lattice relaxation, can undergo singlet fission for certain chain lengths. We are currently investigating the dynamics of interconversion to the ‘[$2^1A_g^-$]{} family’ from $S_2$ using time-dependent DMRG. It is tempting to assign the $3^1A_g^-$ state (or one of its relatives) as the geminate triplet-pair, often denoted as $^1(T \cdots T)$.
A possible mechanism to explain how this state undergoes spin decoherence to become a non-geminate pair is described in the recent paper by Marcus and Barford [@Marcus2020]. The authors thank Jenny Clark and Max Marcus for helpful discussions. D.V. and D.M. would like to thank the EPSRC Centre for Doctoral Training, Theory and Modelling in Chemical Sciences, under Grant No. EP/L015722/1, for financial support. D.V. would also like to thank Balliol College Oxford for a Foley-Béjar Scholarship. D.M. would also like to thank Linacre College for a Carolyn and Franco Gianturco Scholarship and the Department of Chemistry, University of Oxford.
--- abstract: 'In cavity optomechanics, nonlinear interactions between an optical field and a mechanical resonator mode enable a variety of unique effects in classical and quantum measurement and information processing. Here, we describe nonlinear optomechanical coupling in the membrane-in-the-middle (MIM) setup in a way that allows direct comparison to the intrinsic optomechanical nonlinearity in a standard, single-cavity optomechanical system. We find that the enhancement of nonlinear optomechanical coupling in the MIM system as predicted by Ludwig et al. [@Ludwig2012] is limited to the degree of sideband resolution of the system. Moreover, we show that the selectivity of the MIM system for nonlinear over linear transduction has the same limit as in a single cavity system. These findings put constraints on the experiments in which it is advantageous to use a MIM system. We discuss dynamical backaction effects in this system and find that these effects per cavity photon are exactly as strong as in a single cavity system, while allowing for reduction of the required input power. We propose using the nonlinear enhancement and reduced input power in realistic MIM systems towards parametric squeezing and heralding of phonon pairs, and evaluate the limits to the magnitude of both effects.' address: - '^1^ Department of Applied Physics and Institute of Photonic Integration, Eindhoven University of Technology, P.O.
Box 513, 5600 MB Eindhoven, The Netherlands' - '^2^ Center for Nanophotonics, AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands' author: - 'Roel Burgwal ^1,2^' - Javier del Pino ^2^ - 'Ewold Verhagen ^1,2^' bibliography: - 'njpbibopts.bib' - 'library.bib' title: 'Comparing nonlinear optomechanical coupling in membrane-in-the-middle and single-cavity optomechanical systems' --- Introduction ============ Cavity optomechanics enables a wide variety of control over either optical or mechanical degrees of freedom by exploiting radiation pressure interactions. Using an effectively linear optomechanical coupling, many celebrated effects have been demonstrated, such as optical sideband cooling through dynamical backaction [@Arcizet2006; @Chan2011]. On the other hand, *nonlinear* optomechanical interaction has been recognised as a potential resource to generate nonclassical optical and mechanical states [@Rabl2011; @Nunnenkamp2011]. In particular, quadratic optomechanical coupling, for which optical eigenmode frequencies scale with the square of mechanical displacement, offers several quantum applications such as phonon quantum non-demolition (QND) measurements [@Braginsky1980; @Thompson2008], squeezing of optical and mechanical modes [@Nunnenkamp2010], the observation of phonon shot noise [@Clerk2010], sub-Poissonian phonon lasing [@Lorch2015], controlled quantum-gate operations between flying optical or stationary phononic qubits [@Stannigel2012] and nonclassical state generation through measurement [@Brawley2016]. Additionally, there are also classical applications, such as a 2-phonon analogue of optomechanically-induced transparency [@Huang2011].
Moreover, systems that feature quadratic coupling offer new ways to let mechanical modes interact with quantum two-level systems [@Cotrufo2017; @Ma2020]. Even the simplest optomechanical systems, where a single cavity is parametrically coupled to a mechanical resonator, feature nonlinear interaction between the optical and mechanical degrees of freedom described by the Hamiltonian: $$\label{eq:single_hamiltonian} \hat{H} = \Omega_m \hat{b}^\dagger \hat{b} + \left[\omega_c - g_0 (\hat{b}^\dagger + \hat{b})\right]\hat{a}^\dagger \hat{a},$$ where $\Omega_m$ and $\omega_c$ are the mechanical and optical mode frequencies, respectively, $g_0$ is the single-photon optomechanical coupling rate, $\hat{a}$ and $\hat{b}$ are the optical and mechanical annihilation operators, respectively, and we set $\hbar=1$ [@Aspelmeyer2014]. For nonlinear effects to be appreciable for quantum-level motion, however, one requires the so-called single-photon strong coupling (SPSC) regime $g_0/\kappa > 1$, where $\kappa$ is the optical mode decay rate [@Rabl2011; @Nunnenkamp2011]. As this SPSC condition is inaccessible in solid-state optomechanical systems, most experiments use large coherent optical fields that effectively linearise the interaction. It was recognised that special forms of nonlinear optomechanics could be achieved in multimode systems [@Thompson2008; @Jayich2008; @Ludwig2012]. The so-called membrane-in-the-middle (MIM) system consists of two cavities coupled through optical tunnelling at rate $J$. If a mechanical mode, e.g. that of a highly reflective membrane that separates the two cavities, alters the cavity lengths with equal magnitude but opposite sign, the frequencies of the optical supermodes depend on the square of displacement to lowest order. Such quadratic coupling is described by terms $\propto (b^\dagger+b)^2 a^\dagger a$ in the Hamiltonian, whose magnitude scales inversely with $J$ [@Thompson2008; @Jayich2008].
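The single-cavity Hamiltonian of Eq. (\ref{eq:single_hamiltonian}) can be represented numerically in a truncated Fock basis. A minimal sketch (truncation sizes and parameter values are arbitrary illustrative choices): in the one-photon sector the Hamiltonian is a displaced oscillator, so its lowest eigenvalue there is the polaron-shifted energy $\omega_c - g_0^2/\Omega_m$, which provides a simple consistency check.

```python
import numpy as np

def annihilation(n_max):
    """Bosonic annihilation operator in a truncated Fock basis."""
    return np.diag(np.sqrt(np.arange(1.0, n_max)), k=1)

def single_cavity_H(omega_c, Omega_m, g0, n_opt=2, n_mech=30):
    """H = Omega_m b'b + [omega_c - g0 (b' + b)] a'a, with hbar = 1."""
    a, b = annihilation(n_opt), annihilation(n_mech)
    Ia, Ib = np.eye(n_opt), np.eye(n_mech)
    num_a = a.T @ a
    return (Omega_m * np.kron(Ia, b.T @ b)
            + omega_c * np.kron(num_a, Ib)
            - g0 * np.kron(num_a, b.T + b))

# Illustrative parameters (units of Omega_m); displacement g0/Omega_m is small,
# so the Fock-space truncation error is negligible here.
H = single_cavity_H(omega_c=5.0, Omega_m=1.0, g0=0.1)
evals = np.linalg.eigvalsh(H)
# Lowest eigenvalue: photon vacuum at 0; one-photon sector starts at
# omega_c - g0**2/Omega_m (the polaron shift).
```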
Here $a$ refers to one of the optical supermodes. MIM systems were realised in Fabry-Perot cavities [@Thompson2008; @Karuza2013], nanoscale platforms that include ring resonators [@Hill2013] and photonic crystals [@Paraiso2015], ultracold atom systems [@Purdy2010] and levitated nanosphere platforms [@Bullier2020]. The development of large quadratic optomechanical coupling has also inspired closely related designs  [@Doolin2014; @Kaviani2015; @Hauer2018]. ![(a) The optomechanical membrane-in-the-middle (MIM) system, consisting of two coupled optical modes which both couple to one mechanical resonator. Note the use of both ports for input and output. (b) Optical eigenfrequencies for varying oscillator position $x$.[]{data-label="fig:fig1"}](fig1){width="\linewidth"} Although optomechanical interaction in the MIM system is often described by only the quadratic interaction  [@Nunnenkamp2010; @Huang2011; @Jiang2016; @Xie2016; @Liao2013; @Xu2020], it is generally an insufficient description. In addition to quadratic coupling, the mechanical mode also creates linear cross-coupling between the two optical supermodes [@Biancofiore2011; @Cheung2011], allowing quantum vacuum fluctuations to excite the mechanical resonator and precluding phonon QND measurements, which become limited to the SPSC condition [@Miao2009]. Moreover, when the frequency splitting of the optical supermodes is comparable to the mechanical frequency, i.e. $2J-\Omega_m\ll2J$, quadratic optomechanical coupling is resonantly enhanced  [@Heinrich2011; @Ludwig2012; @Stannigel2012; @Liao2014a; @Liao2015; @Lorch2015], an effect which is also not captured in a model in which quadratic coupling is explained through the interaction of a mechanical mode with a single optical mode at an avoided crossing of optical supermodes (Fig. \[fig:fig1\](b)). This picture is only applicable in the regime where mechanical motion can be regarded as quasi-static, i.e. $\Omega_m \ll 2J$.
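In the quasi-static picture, the supermode frequencies follow from diagonalizing a 2x2 coupled-mode matrix for two degenerate cavities detuned by $\pm Gx$ and coupled at rate $J$. A short sketch (the symbols and parameter values are illustrative, with frequencies in arbitrary units):

```python
import numpy as np

def supermode_freqs(x, omega_c, J, G):
    """Eigenfrequencies of two degenerate cavities detuned by +/- G*x by the
    membrane and coupled at rate J (2x2 coupled-mode model)."""
    split = np.sqrt(J ** 2 + (G * x) ** 2)
    return omega_c + split, omega_c - split

x = np.linspace(-1e-3, 1e-3, 201)
w_up, w_lo = supermode_freqs(x, omega_c=1.0, J=0.01, G=1.0)
# Near x = 0 the upper branch is quadratic in displacement:
# w_up ~ omega_c + J + (G*x)**2 / (2*J), i.e. curvature growing as J shrinks.
```

This makes explicit both the avoided crossing (splitting $2J$ at $x=0$) and why the quadratic coupling strength scales inversely with $J$.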
A general description of MIM system dynamics that extends beyond these constraints is still missing. Moreover, it is an open question how strong the quadratic coupling in the MIM system can be made, and how it compares to the nonlinear interaction in a single cavity of similar size and optomechanical properties. Having such a description is useful in determining how quadratic optomechanical coupling can be achieved in general systems, for either quantum or classical applications, and to identify applications in the regime of weak optomechanical coupling $g_0<\kappa$ that is experimentally widely relevant. In this work, we aim to provide an intuitive description of optomechanical dynamics of the MIM system that is valid for arbitrarily small optical mode spacings and use it to describe its unique features and limitations. We quantify the strength of linear and nonlinear processes through the amplitude of the intracavity sidebands at $\pm\Omega_m$ and $\pm 2\Omega_m$, respectively, which give the strength of transduction of the mechanical mode onto the optical field, but also determine the dynamical backaction effect [@Jayich2008]. These amplitudes also provide useful information about the system in the quantum regime: as Stokes and anti-Stokes (inelastic) scattering are associated with phonon generation and annihilation, respectively, the $\omega_L-2\Omega_m$ sideband amplitude controls the rate of generation of pairs of phonons. Second-order sideband amplitudes determine the imprecision on a measurement of $\hat{x}^2$ [@Leijssen2017], and linear sideband amplitudes determine the linear backaction of such a measurement. In constraining the discussion to only these two pairs of sidebands, we assume small cavity frequency fluctuations due to motion, $\sqrt{\langle \hat{x}^2 \rangle}g_0 < \kappa$. Because current optomechanical devices are not in the SPSC regime, this holds for most devices, although exceptions with large $g_0$ and thermal excitation break this condition  [@Leijssen2017].
Indeed, various applications of quadratic coupling rely on this more practically reachable coupling regime [@Clerk2010; @Lorch2015; @Cotrufo2017; @Ma2020]. Next, we revisit the dynamical backaction that the mechanical resonator experiences. Our analysis underlines that the apparent quadratic coupling in the MIM system is due to the intrinsic optomechanical nonlinearity. In particular, we see that linear transduction (i.e. the $\pm\Omega_m$ sidebands) cannot be entirely suppressed and is related in size to quadratic ($\pm 2\Omega_m$ sidebands) transduction in the same way as in a single cavity system. Importantly, we show that the magnitude of the nonlinear enhancement, with respect to a single (uncoupled) optomechanical cavity, for the optimal condition of $2J=\Omega_m$ is limited to the sideband resolution $2\Omega_m/\kappa$. By describing dynamical backaction with the same approach, we put previous results on the optical spring shift and heating in a MIM system [@Paraiso2015; @Lee2015] in a new perspective, the most critical point being that the backaction per intracavity photon is equal in size in the two systems. However, the multimode nature of the MIM system can be exploited to reduce the input power significantly [@Dobrindt2010]. We discuss a two-tone parametric driving scheme in a MIM system that also has a reduced threshold power compared to a single cavity. Finally, we propose a scheme that exploits the enhanced nonlinearity in the MIM system to herald nonclassical two-phonon states and works with a lower cavity occupation. This paper is organised as follows. In , we introduce the model and analytical results for the linear and quadratic optical transduction sidebands. We analyse these results in and draw links with existing approaches in the quasi-static ($2J\gg\Omega_m$) regime.
We subsequently focus on the enhancement of nonlinear effects that is expected in the resonant ($2J\approx\Omega_m$) case and describe the upper bounds for this nonlinearity. Next, in , we estimate dynamical backaction by calculating the optically-induced changes in the mechanical response in the MIM system. We discuss these results and how, in the case of a two-tone parametric driving scheme, the MIM system can be exploited to reduce the required driving power. Finally, we discuss also how the description of the MIM system in this paper might shed light on quadratic coupling in general optomechanical systems. Model and method {#sec:model} ================ First and second order sidebands in a single cavity --------------------------------------------------- We begin by revisiting the linear and intrinsic nonlinear optomechanical coupling that occurs in a single-cavity optomechanical system. The optical mode couples to an external input/output field with rate $\kappa_\mathrm{ex}$. The mechanical dissipation rate is $\Gamma_m$. Starting from the Hamiltonian of , moving to a frame rotating at the laser drive frequency $\omega_L$ and introducing the laser detuning $\Delta=\omega_L-\omega_c$, the quantum Langevin equations can be derived. These govern the dynamics of the operators in the open quantum system [@Aspelmeyer2014] and read $$\begin{aligned} \dot{\hat{a}}=& -\frac{\kappa}{2}\hat{a}+i(\Delta+g_0\hat{x})\hat{a} + \sqrt{\kappa_\mathrm{ex}}\hat{a}_\mathrm{in}+\sqrt{\kappa_0}\hat{f}_\mathrm{in}\\ \dot{\hat{x}}=&\Omega_m\hat{p}\\ \dot{\hat{p}}=&-\Omega_m\hat{x}-\Gamma_m\hat{p}+g_0\hat{a}^{\dagger}\hat{a} + \frac{\hat{F}_\mathrm{in}}{m\Omega_m x_\mathrm{zpf}}+\sqrt{\Gamma_m}\hat{P}_\mathrm{in}, \end{aligned}$$ where we have used the unitless mechanical position and momentum operators, $\hat{x}=\frac{1}{\sqrt{2}}(\hat{b}^\dagger + \hat{b})$ and $\hat{p}=\frac{i}{\sqrt{2}}(\hat{b}^\dagger - \hat{b})$, respectively.
We have introduced input fields $\hat{a}_\mathrm{in},\hat{f}_\mathrm{in}$, for the optical input field through the external channel and quantum fluctuations that enter the system through intrinsic decay, with rates $\kappa_\mathrm{ex}$ and $\kappa_0$, respectively, fulfilling $\kappa_0+\kappa_\mathrm{ex}=\kappa$, where $\kappa$ is the total decay rate. The field $\hat{P}_\mathrm{in}$ introduces mechanical fluctuations associated with coupling to a thermal bath whereas $\hat{F}_\mathrm{in}$ accounts for coherent mechanical drive fields ($\hat{H}_\mathrm{d}=-\hat{x}\hat{F}_{\mathrm{in}}/(m \Omega_m x_\mathrm{zpf})$). Also, $x_\mathrm{zpf} \equiv \sqrt{1/(2m\Omega_m)}$ is the mechanical zero point motion for the mechanical oscillator with effective mass $m$. In our calculations, we reduce these equations to the semiclassical (nonlinear) equations of motion in the mean-field approximation ${\ensuremath{\left\langle{\hat{x}\hat{a}}\right\rangle}}\simeq x a$, denoting ${\ensuremath{\left\langle{\hat{a}}\right\rangle}}=a$ and ${\ensuremath{\left\langle{\hat{x}}\right\rangle}}=x$. Assuming no external mechanical forces (${\ensuremath{\left\langle{\hat{F}_\mathrm{in}}\right\rangle}}=0$) and incoherent (e.g. thermal) input fluctuations, ${\ensuremath{\left\langle{P_\mathrm{in}}\right\rangle}}=0,{\ensuremath{\left\langle{f_\mathrm{in}}\right\rangle}}=0$, we arrive at: $$\begin{aligned} \label{single_opticalEOMS} \ddot{x} &= -\Omega_m^2 x - \Gamma_m\dot{x} + \Omega_m g_0 |a|^2, \\ \dot{a} &= i(\tilde{\Delta} + g_0 x)a+ \sqrt{\kappa_\mathrm{ex}} a_{\mathrm{in}}. \end{aligned}$$ Here, for convenience, we absorbed the optical decay rate into the imaginary part of the complex detuning $\tilde{\Delta}$: $\kappa = 2 \mathrm{Im}(\tilde{\Delta})$. First, we find steady-state solutions: $$\begin{aligned} \bar{a} &= i\frac{\sqrt{\kappa_\mathrm{ex}}}{\bar{\Delta}} a_\mathrm{in}, \\ \bar{x} &= \frac{g_0}{\Omega_m}|\bar{a}|^2.
\end{aligned}$$ Here, $\bar{\Delta} = \tilde{\Delta} + g_0 \bar{x}$, which still contains $\bar{x}$. However, we will assume that the optical power is limited such that the static displacement of the resonator is much smaller than the linewidth, $g_0 \bar{x} \ll \kappa$, such that $\bar{\Delta} \approx \tilde{\Delta}$. This sets an upper limit of a few hundred intracavity photons in photonic crystal systems [@Safavi-Naeini2011a], while for other systems it is much less restrictive. We will evaluate the optical sidebands created by coherent mechanical motion of a specific amplitude $X_0$, described by $x = \bar{x} + X_0 \cos(\Omega_m t)$. For the optical field, we look for a perturbative solution of the form [@Heinrich2011]: $$a(t)=\bar{a}+\sum_{\zeta=\pm}A_{\zeta}^{(1)}e^{i\zeta\Omega_{m}t}+A_{\zeta}^{(2)}e^{i\zeta2\Omega_{m}t}.$$ By collecting terms in the mean-field EOM with the same time dependence, we can solve for the first-order coefficients: $$\label{eq:singlecav_firstsb} A_{\pm}^{(1)} = \frac{g_0\bar{a}}{\pm \Omega_m - \bar{\Delta}} \frac{X_0}{2}.$$ Using this result, we can also retrieve the second-order coefficients $$\label{eq:singlecav_secondsb} A_{\pm}^{(2)} = \frac{g_0 A_{\pm}^{(1)}}{\pm 2\Omega_m - \bar{\Delta}} \frac{X_0}{2} = \frac{g_0^2 \bar{a}}{(\pm 2\Omega_m-\bar{\Delta})(\pm \Omega_m-\bar{\Delta})} \big(\frac{X_0}{2}\big)^2.$$ In the approach we take above, the hierarchy of higher-order sidebands has been truncated by assuming that the cavity resonance frequency shift due to mechanical motion is small compared to the optical linewidth, i.e. $g_0|x|<\kappa$, in which case every higher-order sideband can be treated as a perturbation of the previous one. Interaction and sidebands in the MIM system ------------------------------------------- Having applied our approach to single cavities, we now move to the MIM system.
Our starting point is the standard Hamiltonian of the MIM system in the rotating frame of an input laser field detuned from the two optical modes by $\mathrm{Re}\tilde{\Delta}_i=\omega_L-\omega_{c,i}$, with loss rates $2\mathrm{Im}\tilde{\Delta}_i=\kappa_i$, that are coupled to a single mechanical membrane, displaced from the equilibrium position by $\hat{x}$. In the basis of the physical cavities with annihilation operators $\hat{a}_i$ ($i=\{1,2\}$) the system is governed by the Hamiltonian $$\hat{H}=\Omega_m \hat{b}^\dagger \hat{b}+\hat{H}_\mathrm{OM}+\hat{H}_{J} + \sum_i\hat{H}_{\kappa_i},\label{eq:system_H}$$ where the optomechanical coupling reads $$\label{baremodeeqs} \hat{H}_\mathrm{OM} = (\Delta_1 - g_{0,1}\hat{x})\hat{a}_1^\dagger \hat{a}_1 + (\Delta_2 +g_{0,2}\hat{x})\hat{a}_2^\dagger \hat{a}_2,$$ and the optical inter-cavity coupling is characterized by $$\hat{H}_{J} = - J(\hat{a}_1^\dagger\hat{a}_2 + \hat{a}_2^\dagger \hat{a}_1),$$ where $J$ is the rate of inter-cavity coupling. Coupling to input/output channels via the Hamiltonians $\hat{H}_{\kappa_i}$ is assumed to occur into separate environments (e.g. single-mode waveguides) with rates $\kappa_\mathrm{ex,i}$. Because the optical cavities are coupled, can be expressed in terms of the optical supermodes that arise. For equal cavity frequencies $\Delta_1=\Delta_2 \equiv \Delta$ and optomechanical couplings $g_{0,1}=g_{0,2}\equiv g_0$, these are given by $\hat{a}_{e,o}=(\hat{a}_1\pm\hat{a}_2)/\sqrt{2}$. These supermodes are also depicted in a. In this basis $\hat{H} = \sum_{\eta=e,o}\omega_\eta\hat{a}_\eta^\dagger \hat{a}_\eta + \hat{H}_\mathrm{OM} + \Omega_m\hat{b}^{\dagger}\hat{b}$ with $\omega_{e,o}=\Delta\mp J$, with an optomechanical interaction: $$\hat{H}_\mathrm{OM} = -g_0 \hat{x}(\hat{a}_e^\dagger \hat{a}_o + \hat{a}_o^\dagger \hat{a}_e).\label{supermodeeqsb}$$ Here, we want to emphasize the fact that the optomechanical coupling has now become cross-mode, i.e.
the Hamiltonian contains terms $\propto \hat{x}\hat{a}_e^\dagger \hat{a}_o$, whereas it previously contained self-mode terms, e.g. $\propto \hat{x}\hat{a}_1^\dagger \hat{a}_1$. The frequencies of these optical supermodes can be found by treating this mechanical position as a quasi-static parameter analogous to the Born-Oppenheimer approximation of molecular physics ($\hat{x}\mapsto x$). This is only valid for mechanical motion that is slow with respect to the optical coupling rate, or $J \gg \Omega_m$, which is not true for a number of experimental implementations  [@Thompson2008; @Grudinin2010; @Hill2013]. Using this approximation allows for diagonalization of the system Hamiltonian in  [@Jayich2008], yielding the $x$-dependent eigenfrequencies in b. Still assuming equal frequency of both optical cavities, this dependence is approximately quadratic and given by $ \omega^{\mathrm{ad.}}_{e,o}(x) \simeq \Delta \mp (J + g_0^{(2)}x^2)$, or, equivalently, the effective quadratic coupling Hamiltonian $$\label{eq:effective_quad} \hat{H}^{\mathrm{ad.}}=\Delta\left(\hat{a}_{e}^{\dagger}\hat{a}_{e}+\hat{a}_{o}^{\dagger}\hat{a}_{o}\right)-(J+g_0^{(2)}\hat{x}^{2})(\hat{a}_{e}^{\dagger}\hat{a}_{e}-\hat{a}_{o}^{\dagger}\hat{a}_{o}),$$ with effective quadratic coupling $g_0^{(2)}= g_0^2/2J$. It is this form of the Hamiltonian that drew attention to the MIM system as a platform for strong quadratic optomechanical coupling. This adiabatic limit, however, breaks down as optical Rabi oscillations occur at scales that compare with mechanical oscillations, i.e. where the supermode splitting approaches the mechanical frequency ($2J\approx \Omega_m$). In this limit, optical and mechanical degrees of freedom need to be treated on the same footing, via numerical methods or effective Hamiltonians that are perturbative in $g_0$ [@Yanay2017; @Ludwig2013]. 
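The quasi-static picture described above can be illustrated with a short numerical check: diagonalising the 2×2 bare-mode matrix for a frozen displacement $x$ and fitting the lower branch near $x=0$ recovers the effective quadratic coupling $g_0^{(2)}=g_0^2/2J$. This is only a sketch; the parameter values below are arbitrary illustrative numbers, not taken from any experiment.

```python
import numpy as np

def supermode_freqs(x, Delta=0.0, J=1.0, g0=0.05):
    """Quasi-static supermode frequencies for frozen displacement x.

    Bare-mode matrix (dissipation omitted): mode 1 shifts as -g0*x,
    mode 2 as +g0*x, and tunnelling J mixes the two modes.
    Eigenvalues are Delta -/+ sqrt(J**2 + (g0*x)**2).
    """
    H = np.array([[Delta - g0 * x, -J],
                  [-J, Delta + g0 * x]])
    return np.linalg.eigvalsh(H)  # ascending: [even (lower), odd (upper)]

# Fit the curvature of the lower branch around x = 0:
# omega_e(x) ~ Delta - J - g2 * x**2, so the x**2 fit coefficient is -g2.
J, g0 = 1.0, 0.05
xs = np.linspace(-1e-2, 1e-2, 101)
lower = np.array([supermode_freqs(x, J=J, g0=g0)[0] for x in xs])
c2 = np.polyfit(xs, lower, 2)[0]

g2_effective = -c2
g2_predicted = g0**2 / (2 * J)
```

The fitted curvature agrees with $g_0^2/2J$ to high accuracy because the quartic correction to $\sqrt{J^2+g_0^2x^2}$ is negligible over the fitted range.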
Moreover, as described in the introduction, it was quickly recognised that this effective Hamiltonian does not fully describe the system, because the linear cross-mode coupling is no longer included [@Miao2009; @Yanay2016]. In order to provide a more complete, while still intuitive picture of the MIM system dynamics that naturally covers adiabatic and resonant regimes, we apply the same perturbative approach as for the single cavity to the full model in . Our mean-field equations of motion are: $$\begin{aligned} \label{eq:opticalEOMS} \Ddot{x} &= -\Omega_m^2 x - \Gamma_m\dot{x} + \Omega_m(g_{0,1}|a_1|^2 - g_{0,2}|a_2|^2) +\frac{F_\mathrm{in}}{mx_\mathrm{zpf}},\\ \dot{a}_1 &= i(\tilde{\Delta}_1 + g_{0,1} x)a_1 - iJa_2 + \sqrt{\kappa_\mathrm{ex,1}} a_{\mathrm{in},1}, \\ \dot{a}_2 &= i(\tilde{\Delta}_2 - g_{0,2} x)a_2 - iJa_1 + \sqrt{ \kappa_\mathrm{ex,2}} a_{\mathrm{in},2}. \end{aligned}$$ Here $m$ stands for the effective oscillator mass and the optical decay rates $\kappa_i=2\mathrm{Im}{\tilde{\Delta}_i}$ are included in the complex detunings $\tilde{\Delta}_i$. We have added the term $\propto F_\mathrm{in} ={\ensuremath{\left\langle{\hat{F}_\mathrm{in}}\right\rangle}}$ to represent external classical forces acting on the resonator, which will be of use later on. We first find the steady state values for $a_1, a_2$ and $x$: \[eq:opticalSteady\] $$\begin{aligned} \bar{a}_{1,2}&=i\frac{\bar{\Delta}_{2,1}\xi_{1,2}+J\xi_{2,1}}{\bar{\Delta}_{1}\bar{\Delta}_{2}-J^{2}},\\ \bar{x} &= \frac{g_{0,1}|\bar{a}_1|^2 - g_{0,2}|\bar{a}_2|^2}{\Omega_m}. \end{aligned}$$ Here, $\bar{\Delta}_i = \tilde{\Delta}_i \pm g_{0,i}\bar{x}$ is the detuning from the cavity resonance that has been displaced by the mean mechanical position $\bar{x}$, and $\xi_i = \sqrt{\kappa_{\mathrm{ex},i}}\,a_{\mathrm{in},i}$ is the input field amplitude.
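The steady-state amplitudes can be cross-checked by solving the 2×2 linear system directly. The sketch below assumes the sign convention in which the inter-cavity coupling enters the mean-field equations as $-iJ$ (so that the even supermode is resonant at $\mathrm{Re}(\bar{\Delta})=+J$); the parameter values are arbitrary.

```python
import numpy as np

def steady_state(D1, D2, J, xi1, xi2):
    """Solve 0 = i*D1*a1 - i*J*a2 + xi1 (and 1 <-> 2) for the mean fields.

    D1, D2 are complex detunings (decay absorbed in the imaginary part),
    xi_i = sqrt(kappa_ex,i) * a_in,i are the input drive amplitudes.
    """
    M = 1j * np.array([[D1, -J], [-J, D2]])
    return np.linalg.solve(M, -np.array([xi1, xi2]))

# Single-cavity limit J -> 0 must reduce to a_i = i*xi_i/D_i.
D = 0.3 + 0.05j
a1, a2 = steady_state(D, D, 0.0, 1.0, 0.0)

# Even drive (xi1 = xi2): the intracavity response peaks at Re(D) = +J.
J = 1.0
detunings = np.linspace(-2, 2, 2001)
response = [abs(steady_state(d + 0.025j, d + 0.025j, J, 1.0, 1.0)[0])
            for d in detunings]
peak = detunings[int(np.argmax(response))]
```

The numerical peak position confirms the supermode resonance structure used in the following subsections.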
Similarly to the discussion of the single-cavity intrinsic nonlinearity, we propose an ansatz $$a_{i}=\bar{a}_{i}+\sum_{\zeta=\pm}A_{i,\zeta}^{(1)}e^{i\zeta\Omega_{m}t}+A_{i,\zeta}^{(2)}e^{i\zeta2\Omega_{m}t}.$$ We then derive explicit expressions for the first-order coefficients, \[firstsb\] $$\begin{aligned} A_{1,\pm}^{(1)} &= -\frac{X_{0}}{2}\frac{-J g_{0,2}\bar{a}_2 + (\bar{\Delta}_2\mp\Omega_m)g_{0,1} \bar{a}_1}{(\bar{\Delta}_1\mp \Omega_m)(\bar{\Delta}_2 \mp \Omega_m )-J^2}, \\ A_{2,\pm}^{(1)} &= \frac{X_{0}}{2}\frac{-J g_{0,1}\bar{a}_1 + (\bar{\Delta}_1\mp\Omega_m)g_{0,2} \bar{a}_2}{( \bar{\Delta}_1\mp\Omega_m )(\bar{\Delta}_2\mp\Omega_m)-J^2} , \end{aligned}$$ as well as the second order coefficients \[fullsb2\] $$\begin{aligned} A_{1,+}^{(2)} =&-\frac{X_{0}}{2}\frac{g_{0,1}A^{(1)}_{1,+}(\bar{\Delta}_{2}-2\Omega_{m})-g_{0,2}A^{(1)}_{2,+}J}{(\bar{\Delta}_{1}-2\Omega_{m})(\bar{\Delta}_{2}-2\Omega_{m})-J^{2}},\\ A_{1,-}^{(2)} =&-\frac{X_{0}}{2}\frac{g_{0,1}A^{(1)}_{1,-}(\bar{\Delta}_{2}+2\Omega_{m})-g_{0,2}A^{(1)}_{2,-}J}{(\bar{\Delta}_{1}+2\Omega_{m})(\bar{\Delta}_{2}+2\Omega_{m})-J^{2}},\\ A_{2,+}^{(2)} =&\frac{X_{0}}{2}\frac{-g_{0,1}A^{(1)}_{1,+}J+g_{0,2}A^{(1)}_{2,+}(\bar{\Delta}_{1}-2\Omega_{m})}{(\bar{\Delta}_{1}-2\Omega_{m})(\bar{\Delta}_{2}-2\Omega_{m})-J^{2}},\\ A_{2,-}^{(2)} =&\frac{X_{0}}{2}\frac{-g_{0,1}A^{(1)}_{1,-}J+g_{0,2}A^{(1)}_{2,-}(\bar{\Delta}_{1}+2\Omega_{m})}{(\bar{\Delta}_{1}+2\Omega_{m})(\bar{\Delta}_{2}+2\Omega_{m})-J^{2}}. \end{aligned}$$ Optomechanical transduction {#sec:transd} =========================== Having obtained the expressions for the sideband amplitudes for a given mechanical amplitude, we now discuss these results in the context of mechanical transduction. We begin by retrieving the results of the quasi-static model from our approach. Recovering the quasi-static limit {#sec:quasistatic} --------------------------------- ![Schematic depiction of the frequencies of input field, sidebands and optical modes. 
Blue and orange colour coding indicate the even and odd optical modes, respectively. (a) In the adiabatic limit ($2J\gg \Omega_m$) and for driving of the even mode, the first sideband (SB1) is far off resonance with the odd mode, while the second sideband (SB2) is on resonance with the even mode. (b) Conversely, for $2J\simeq\Omega_m$, a doubly resonant condition can be satisfied. Results are shown for $\Delta=J+2\Omega_m$.[]{data-label="fig:fig2"}](fig2){width="\linewidth"} Here, we impose the quasi-static limit ($2J \gg \Omega_m$) in the general solutions above and assume the mode splitting to be larger than the individual mode linewidths ($2J \gg \kappa_i$). Without loss of generality, we drive the input of cavity 1 close to the even optical supermode, resulting in $\bar{a}_1 \approx \bar{a}_2$ according to , but such that the $2\Omega_m$ sideband is on resonance: $\bar{\Delta} = 2\Omega_m+J$ (see ). We will assume a sideband resolved system with $\Omega_m>\kappa$, which is the more interesting regime for the MIM, as we will discuss later. The quasi-static diagonalization approach shows that the photonic eigenmodes acquire a dependence on $x$. For $\kappa_1\neq\kappa_2$, this in addition yields an effective $x$-dependent supermode decay rate (also known as *dissipative* coupling [@Wu2014; @Yanay2016]), leading to information about $\hat{x}$ leaking from the cavity. In a similar but distinct effect, the two optical supermodes also become coupled through their dissipation into the same optical channel for $\kappa_1\neq\kappa_2$ [@Dobrindt2010; @Yanay2016]. However, for clarity of our discussion, we will neglect both of these effects by assuming identical optical cavities ($g_{0,1}=g_{0,2}\equiv g_0$, $\Delta_1=\Delta_2\equiv\Delta$, and $\kappa_1=\kappa_2\equiv\kappa$).
Since the drive is close to supermode resonance we have $\bar{a}_2 \approx \bar{a}_1 = \bar{a}$ and the relevant first-order sideband amplitudes reduce to $$\label{firstsb_simp} A_{1,+}^{(1)} = \frac{g_0}{\Omega_m-J-\bar{\Delta}} \frac{\bar{a}X_0}{2}=-A^{(1)}_{2,+}.$$ Here we see that this first sideband amplitude has a resonance only at the *odd* optical mode, or for $\mathrm{Re}(\bar{\Delta}) =\Omega_m - J $. Because this resonance frequency is far from the (even mode) input frequency (see a), first sideband generation is suppressed. This is a signature of the inter-mode optomechanical coupling between supermodes in : if the even mode is populated, the mechanical mode scatters light from the carrier into the odd mode. In figure a, we illustrate this situation. In our perturbative picture, the second sidebands at $\pm 2\Omega_m$ are seen as being scattered from the first sidebands by the mechanical mode. Because of the cross-mode coupling the second sidebands are again in the even mode. For our choice of detuning, this means the positive frequency second sideband is on resonance with the *even* mode and has amplitude $$\begin{aligned} \label{eq:2sd_adiabatic} A_{1,+}^{(2)} &= A_{2,+}^{(2)} = \frac{g_0A_{1,+}^{(1)}}{2\Omega_m+J-\bar{\Delta}}\frac{X_0}{2} \approx \frac{g_0^2}{2J} \frac{\bar{a}}{i\kappa/2}\left(\frac{X_0}{2}\right)^2, \end{aligned}$$ which is depicted in a. Note that a quadratic optomechanical interaction, which in practice involves the adiabatic elimination of the supermode off-resonant with the input field ($\hat{a}_o$ in this case), yields the same result for the effective quadratic coupling as the adiabatic diagonalisation (see ), namely $g_0^{(2)} = g_0^2/2J$. We conclude that our approach gives the correct quadratic coupling found in the quasi-static approach, but now as a manifestation of the intrinsic optomechanical nonlinearities of cavities 1 and 2, as recognised by [@Stannigel2012].
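The cascade of the two linear scattering steps can be checked numerically. The sketch below uses scalar forms corresponding to the simplified expressions above (identical cavities, even-mode carrier, and the sign conventions of this section); all parameter values are arbitrary test numbers chosen deep in the adiabatic regime $2J\gg\Omega_m$.

```python
import numpy as np

def sidebands_even_drive(Delta, J, Om, kappa, g0, abar=1.0, X0=1.0):
    """First/second upper sidebands for an even-mode carrier, equal cavities.

    Dbar is the complex detuning; the first sideband (odd parity) resonates
    at Re(Dbar) = Om - J, the second (even parity) at Re(Dbar) = 2*Om + J.
    """
    Dbar = Delta + 1j * kappa / 2
    A1 = g0 * abar / (Om - J - Dbar) * X0 / 2
    A2 = g0 * A1 / (2 * Om + J - Dbar) * X0 / 2
    return A1, A2

# Adiabatic regime, drive chosen so the 2*Om sideband is resonant.
Om, kappa, g0, J = 1.0, 0.1, 1e-3, 50.0
Delta = 2 * Om + J
_, A2 = sidebands_even_drive(Delta, J, Om, kappa, g0)

# Expectation from the effective quadratic coupling g2 = g0**2/(2*J).
A2_expected = (g0**2 / (2 * J)) / (1j * kappa / 2) * (1.0 / 2) ** 2
```

The residual mismatch is of order $\Omega_m/2J$, i.e. exactly the correction neglected in the adiabatic limit.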
Enhanced linear and quadratic transduction ------------------------------------------ We now use our model to describe general transduction in the MIM system. In particular, we show how transduction of motion to the $\Omega_m$ and $2\Omega_m$ optical sidebands changes with tunnelling rate $J$ and input laser detuning $\Delta$. In doing so, we will first assume only one optical supermode is excited by the input field, even when this field is not on resonance with that mode. This assumption makes the following discussion clearer, and it can in fact be realised in experiment by exciting the MIM system through both input ports with a particular relative phase. For example, using $a_{1,\mathrm{in}} = a_{2,\mathrm{in}}$ allows excitation of only the even optical mode, regardless of optical detuning. When discussing the dynamics of the MIM system, two distinct situations can be distinguished, namely, *i)* a constant input power ($P_\mathrm{in}\equiv\sum_i\omega_{c,i}|a_{i,\mathrm{in}}|^2$) or *ii)* a constant cavity photon number ($\bar{n}_c\equiv\sum_i|\bar{a}_i|^2$). The latter scenario allows isolating optomechanical effects, including the strength of nonlinear transduction, from purely optical cavity input effects, i.e. the enhancement of cavity occupation for a resonant input field. Moreover, cavity occupation is often the limiting factor in experiment, due to nonlinear effects and heating [@Ren2019]. In certain scenarios, however, it can also be advantageous to minimise the input power required to achieve a given cavity photon number. Thus, we will discuss both situations in the following. The amplitudes of the $-\Omega_m$ and $+2\Omega_m$ sidebands of the supermodes, $A_{e,-}^{(1)}$ and $A_{o,+}^{(2)}$, for odd input ($a_{1,\mathrm{in}}=-a_{2,\mathrm{in}}$) are shown in for constant $P_\mathrm{in}$ (panels a,b) and constant $\bar{n}_c$ (panels c,d).
These amplitudes are defined as $A_{e,-}^{(1)} = \frac{1}{\sqrt{2}} \left( A_{1,-}^{(1)} + A_{2,-}^{(1)} \right) $ and $A_{o,+}^{(2)} = \frac{1}{\sqrt{2}} \left( A_{1,+}^{(2)} - A_{2,+}^{(2)} \right) $. The amplitudes are normalised to the optimum first or second sideband amplitude that would be obtained in a single cavity for the same $P_\mathrm{in}$ or $\bar{n}_c$ (which occur at $\bar{\Delta} = \pm\Omega_m$). From and , these read \[eq:refs\] $$\begin{aligned} A_{+}^{(1)} (\Delta=\Omega_m)\equiv& A_\mathrm{ref} = i\frac{g_{0}\bar{a}}{\kappa}X_{0}, \\ A_{+}^{(2)} (\Delta=\Omega_m)\equiv&A_\mathrm{ref}^{(2)} = \frac{i}{\kappa}\frac{g_{0}^{2}\bar{a}}{\Omega_{m}-i\kappa/2}\frac{X_{0}^{2}}{2}, \end{aligned}$$ with $\bar{a} = \sqrt{\bar{n}_c}$ or $\bar{a} = \sqrt{ \kappa_\mathrm{ex}}a_{in}/(\kappa/2 - i\Omega_m) $ for constant $\bar{n}_c$ or $P_\mathrm{in}$, respectively. We choose to display the $-\Omega_m$ first order and $+2\Omega_m$ second order sidebands, because these show special double resonance conditions for the odd-mode illumination condition, as discussed below. From a, we observe strong first-order sideband generation in the even mode either when the carrier is on resonance with the odd mode ($\mathrm{Re}(\bar{\Delta})=-J$), or when the first sideband is on resonance with the even mode ($\mathrm{Re}(\bar{\Delta})=J-\Omega_m$). Where these two resonant conditions are simultaneously met, we see a resulting enhancement of first sideband generation [@Dobrindt2010] and the sideband amplitude exceeds $A_\mathrm{ref}$, the largest amplitude possible in a single cavity. Moving to c, we now keep the cavity photon number $\bar{n}_c$ constant, instead of the input power. We see that the resonance of the carrier no longer results in a large sideband amplitude. The sideband amplitude no longer exceeds $A_\mathrm{ref}$, that of a single cavity, anywhere, and we cannot recognise an enhancement anymore.
We conclude that the enhancement observed in a does not result from enhanced processes inside the cavity, but from a better cavity acceptance of input light. Moving to the second sideband amplitude in b, we see resonance lines that correspond to either carrier resonance or $+\Omega_m$ sideband resonance. Wherever the first positive sideband amplitude is large (not shown separately), the second sideband amplitude rises accordingly. However, an additional resonance is observed for the second-order sideband in b, where the second sideband is on resonance with the odd mode ($\mathrm{Re}(\bar{\Delta})=-J+2\Omega_m$). b and d show identical dependencies, except for the line of carrier resonance ($\mathrm{Re}(\bar{\Delta})=-J$), which is not observed for constant $\bar{n}_c$. Of special interest is the crossing of two resonance lines in the plots for quadratic transduction in (b,d), corresponding to the doubly resonant case $\mathrm{Re}(\bar{\Delta}) = 3\Omega_m/2$ and $2J=\Omega_m$. For these conditions, both the first and the second sidebands are on resonance with their respective optical mode, as we have sketched in figure b. At these points we find the strongest generation of second-order (nonlinear) sidebands, the maxima for $A_{o,+}^{(2)}$, which are larger than possible in a single cavity ($A_\mathrm{ref}^{(2)}$). Unlike with the enhanced first sideband, this effect does not disappear when considering a fixed $\bar{n}_c$. This resonance effect has been described before by Ludwig *et al.* [@Ludwig2012] through a perturbative expansion of the threefold interaction between $\hat{a}_e$, $\hat{a}_o$ and $\hat{x}$ in . This leads to an effective nonlinear interaction Hamiltonian that is enhanced for $2J-\Omega_m\ll\kappa$, namely $\hat{H}_\mathrm{OM}^{\mathrm{eff}}\sim g_0^2(1/(2J-\Omega_m)+1/(2J+\Omega_m))(\hat{a}_e^\dagger\hat{a}_e-\hat{a}_o^\dagger\hat{a}_o)\hat{x}^2$.
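The double-resonance condition can also be located numerically by scanning the detuning at fixed $2J=\Omega_m$. The sketch below uses cascaded scalar sideband forms for an odd-mode carrier with identical cavities (an assumption made for simplicity, following the conventions of this section); parameter values are arbitrary.

```python
import numpy as np

def second_sideband_odd_drive(Delta, J, Om, kappa, g0, abar=1.0, X0=0.1):
    """|second sideband| for an odd-mode carrier, identical cavities.

    Cascaded scalar forms: the first sideband (even parity) resonates at
    Re(Dbar) = Om + J, the second (odd parity) at Re(Dbar) = 2*Om - J.
    """
    Dbar = Delta + 1j * kappa / 2
    A1 = -g0 * abar / (Dbar - Om - J) * X0 / 2
    A2 = -g0 * A1 / (Dbar - 2 * Om + J) * X0 / 2
    return abs(A2)

Om, kappa, g0 = 1.0, 0.05, 1e-3
J = Om / 2                      # resonant regime 2J = Om
detunings = np.linspace(0.0, 3.0, 3001)
amps = [second_sideband_odd_drive(d, J, Om, kappa, g0) for d in detunings]
Delta_opt = detunings[int(np.argmax(amps))]   # expected at 3*Om/2
```

At $2J=\Omega_m$ both cascaded denominators vanish at the same detuning, so the scan finds the maximum at $\mathrm{Re}(\bar{\Delta})=3\Omega_m/2$.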
However, the magnitude of this interaction and its dependence on parameters such as $\kappa$ was not discussed. This and related works [@Komar2013; @Liao2015] have investigated the implications of this enhancement for specific quantum applications at the single-photon strong coupling level ($g_0>\kappa$) and in the weak driving/low cavity occupation regime. In these works, it was demonstrated that the coupled cavity system has a significant advantage over a single cavity system [@Stannigel2012], but single-photon strong coupling was still needed to produce the sought-after nonclassical effects. ![Mechanical transduction amplitudes as a function of laser detuning $\Delta$ and mode splitting $J$ for constant input power $P_\mathrm{in}$ (panels (a) and (b)) or constant intracavity photon number $\bar{n}_c$ (panels (c),(d)). We depict first (left column) and second (right column) sidebands. Our colormap is chosen such that any sideband amplitude over the single cavity limits, i.e. an enhancement, is coloured red. For this plot, we used sideband resolution $\Omega_m/\kappa = 20$.[]{data-label="fig:fig3"}](fig3){width="0.8\linewidth"} ![image](fig4.png){width="\linewidth"} If we were to excite using even input light conditions ($a_{1,\mathrm{in}}=a_{2,\mathrm{in}}$), the roles of the odd and even modes would be interchanged (not shown) and the same resonance conditions would be found on the $+\Omega_m$ and $-2\Omega_m$ sidebands. From a more practical perspective, using only single-port excitation of our MIM system would result in a $\Delta$-dependence convolved with the detuning-dependent excitation of the supermodes. Upper bounds for second sideband enhancement -------------------------------------------- To understand exactly what the MIM system offers over a single cavity system in terms of optomechanical nonlinearity, it is important to calculate how large the enhanced second sideband amplitude is and how it depends on system parameters.
To do so, we compare the optimum second sideband amplitude from , i.e. at the double resonance condition described above, to the optimum second sideband amplitude of a single cavity, as described in . For this, we introduce a metric that combines both sidebands of the same order, namely \[metric\] $$\begin{aligned} \mathcal{A}^{(1)}_{s} &= |A_{s,-}^{(1)}| + |A_{s,+}^{(1)}|, \\ \mathcal{A}^{(2)}_{s} &= |A_{s,-}^{(2)}| + |A_{s,+}^{(2)}|, \end{aligned}$$ where $s=o,e$. As shown in \[appendixone\], this metric is proportional to the homodyne signal amplitude at $\Omega_m$ or $2\Omega_m$ in the optimum optical quadrature. This metric can also be applied to the single-cavity case using and , to obtain the reference values $\mathcal{A}^{(1,2)}_\mathrm{ref}$. In figure d, we plot the ratio of $\max_\Delta (\mathcal{A}^{(2)}_{e}(\Delta))$ and $\max_\Delta (\mathcal{A}^{(2)}_\mathrm{ref}(\Delta))$ for even input drive as a measure of the enhancement of nonlinearity for different values of the sideband resolution $\Omega_m/\kappa$. For sideband-unresolved systems ($\Omega_m < \kappa$), we see the nonlinearity is equally strong in the MIM and the single-cavity system. However, for sideband-resolved systems ($\Omega_m > \kappa$), the enhancement increases with the sideband resolution factor. d demonstrates that the MIM system can only feature larger quadratic transduction than a single cavity when it is sideband-resolved. The absence of enhancement for a sideband-unresolved system ($\Omega_m \ll \kappa$) can be attributed to the fact that, in a single cavity, the carrier and the first and second sidebands are already resonantly enhanced due to their large spectral overlap with the cavity resonance. We now derive an expression for the enhancement factor in the case of a sideband-resolved system. We consider the case of constant $\bar{n}_c$, driving of the even mode, and large sideband resolution $\Omega_m\gg \kappa$ to simplify the expression.
We find $$\label{eq:sideband_limit} \left| \frac{\max_\Delta (\mathcal{A}^{(2)}_{e}(\Delta))}{\max_\Delta (\mathcal{A}^{(2)}_{ref}(\Delta))}\right| \approx 2\frac{\Omega_m}{\kappa},$$ which demonstrates that, for a sideband-resolved system, the MIM enhancement of nonlinearity is given by the degree of sideband resolution. This result is plotted as a red dashed line in figure d. Similar results are obtained for an odd input condition (not shown). In , we highlight the differences between mechanical transduction in a single cavity and in a MIM system. In a, we see the characteristic MIM supermode frequency dependence on the static mechanical displacement $\bar{x}$. When the static displacement is large (b), the two cavities have frequencies that differ by more than $2J$ and we effectively recover the limit of two uncoupled cavities, whereas zero static displacement gives the coupled cavity MIM system (c). In b and c, we look at sidebands generated in a sideband-resolved MIM system corresponding to these crosscuts. For this, we assume a drive of the even mode and plot the quantities $\mathcal{A}^{(1)}_o$ and $\mathcal{A}^{(2)}_e$. The horizontal dashed lines are the single cavity limits $\mathcal{A}_\mathrm{ref}^{(1)} \approx A_\mathrm{ref}^{(1)}$ and $\mathcal{A}_\mathrm{ref}^{(2)}\approx A_\mathrm{ref}^{(2)}$ for first and second sidebands as calculated previously. All plotted values are now normalised by $A_\mathrm{ref}^{(1)}$, which is done to give an idea of the relative size of first and second sidebands for currently available system parameters. In b, we see that transduction for the uncoupled cavities adheres to the single cavity limits, as expected. Moving to the coupled cavity system in c, we see that the second sideband amplitude now surpasses the single cavity limit. In d, we plot the enhancement of the MIM system over a single cavity for the second sideband amplitude: $\mathcal{A}^{(2)}/A_\mathrm{ref}^{(2)}$.
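The sideband-resolution limit can be reproduced numerically by evaluating the cascaded sidebands at the double resonance and dividing by the single-cavity reference optimum. The sketch below assumes equal intracavity carrier amplitude in both systems and uses the odd-carrier scalar forms of this section; parameter values are arbitrary.

```python
import numpy as np

Om, kappa, g0, abar, X0 = 1.0, 0.05, 1e-3, 1.0, 0.1

# MIM, odd-mode carrier at the double resonance (2J = Om, Delta = 3*Om/2):
# both cascaded sideband denominators reduce to i*kappa/2.
J = Om / 2
Dbar = 3 * Om / 2 + 1j * kappa / 2
A1 = -g0 * abar / (Dbar - Om - J) * X0 / 2
A2_mim = -g0 * A1 / (Dbar - 2 * Om + J) * X0 / 2

# Single-cavity optimum second sideband (Delta = Om), cf. the reference
# values quoted earlier in this section.
A2_ref = (1j / kappa) * g0**2 * abar / (Om - 1j * kappa / 2) * X0**2 / 2

enhancement = abs(A2_mim) / abs(A2_ref)
expected = 2 * Om / kappa     # sideband-resolution limit
```

Up to a correction of order $(\kappa/\Omega_m)^2$, the computed enhancement equals $2\Omega_m/\kappa$.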
We see that, at the double resonance condition, the enhancement peaks at the value $2\Omega_m/\kappa$. We predict that this effect could be experimentally observed in currently available MIM systems, where sideband resolution reaches $\Omega_m/\kappa\approx 10$ [@Thompson2008; @Sankey2010; @Karuza2013]. Related coupled-microtoroid resonator platforms [@Grudinin2010] feature tunable inter-cavity coupling $J\leq\Omega_m/2$ and $\Omega_m/\kappa \approx 10$. An additional implementation of a coupled-cavity system was proposed for 2D optomechanical crystals [@Safavi-Naeini2011a], for which it was recently shown that individual cavities can reach $\Omega_m/\kappa \approx 28$ [@Ren2019]. Selectivity of quadratic over linear transduction ------------------------------------------------- For experiments in which readout of the mechanical energy $\sim\hat{x}^2$ is desired, maximising the ratio of second- to first-order sideband amplitude is crucial. This is because first sidebands carry information about $\hat{x}$ and their creation is thus inevitably associated with a linear quantum backaction that changes the mechanical state of the system [@Miao2009; @Yanay2016]. As a figure of merit, we calculate the optimal ratio of the different sidebands, $\zeta = |A_{e,+}^{(2)}|/|A^{(1)}_{o,+}|$. From the equations (), it can be derived that this ratio is highest at the double resonance condition, where we find $$\label{limit} \zeta \leq \frac{g_0 X_0}{\kappa}.$$ Using , we easily see that this is the same limit as can be found in a single cavity. In other words, indicates that the MIM system does not allow for more selective generation of the second over the first sideband as compared to a single cavity. In figure e, we have plotted this sideband ratio as a function of $\Delta$ for $2J=\Omega_m$ and ground-state motion $X_0$=1. We see it also peaks at the double resonance condition, where it is limited by $g_0/\kappa$.
We thus recover, in the ratio of sideband amplitudes, the condition found by Miao *et al.* [@Miao2009] for a QND measurement of mechanical energy in the MIM system, valid in both the classical and quantum domains and for general system parameters. As we will briefly discuss later, the calculation underlying is indeed closely related to an analysis of quantum measurement noise limits. Finally, we want to highlight another feature of the MIM system. Next to the (limited) enhancement of optomechanical nonlinearity, the MIM system offers a simple method for separating the different sidebands, as they occur in orthogonal modes. Separation can be attained by a beam splitter (cf. a), even if the different sidebands are too close in frequency for the use of other filtering techniques. The degree of filtering this offers, though, is reduced when the cavity is not perfectly balanced, e.g. $g_{0,1} \neq g_{0,2}$ or $\kappa_1 \neq \kappa_2$, because the different sidebands are no longer output into orthogonal modes. Back-action in the MIM system {#sec:apps} ============================= Having considered the effect of coherent mechanical motion on the cavity light field, we now move on to the effect of the light field on the resonator. In particular, we look at the well-known dynamical backaction (DBA) that occurs when the mechanically generated sidebands in the light field exert a force, whose sign and phase depend on laser detuning, back upon the resonator. Although these effects have been described for the MIM system previously [@Jayich2008; @Heinrich2011; @Lee2015; @Paraiso2015], we now revisit these works using our general sideband picture to reinterpret and unify previous results. Dynamical backaction and quadratic spring shift ----------------------------------------------- Our approach starts again from the semiclassical equations of motion  and is similar to that of Jayich *et al.* [@Jayich2008].
A related method is used to determine DBA effects in single cavities [@Aspelmeyer2014]. The aim is to find the susceptibility $\chi(\omega)$ of the mechanical resonator to an external force $F_\mathrm{in}(t) = F_0\cos(\omega t)$ with real amplitude $F_0$. We solve for a mechanical motion that is strictly real, but can have an arbitrary phase that we account for by letting $X_0 \in \mathbb{C}$, i.e. $x(t) = (X_0 e^{i\omega t} + X_0^\ast e^{-i\omega t})/2$. Note that this means information about both mechanical quadratures is now encoded in the complex amplitude $X_0$. We thus want to rewrite the mechanical EOM in the form $X_0(\omega) = \chi(\omega) F_0$. For mechanical coherent motion given by $X_0$, we can write down the generated first sidebands using our previous results and thus expand $|a_i|^2$ in terms of $X_0$. In the present case we observe that the sidebands $A_{i,-}^{(1)}$ at $-\Omega_m$ actually depend on $X_0^\ast$ instead of $X_0$. By collecting all terms with the same time dependence, we can derive: $$\begin{aligned} \label{eq:susceptibility} \chi(\omega)^{-1} =& x_\mathrm{zpf} m\big[ -\omega^2 + \Omega_m^2 + i\Gamma_m \omega + \Omega_m (g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+}) \big],\\ \beta_{i,+} =& \bar{a}_i \tilde{A}_{i,-}^\ast + \bar{a}_i^\ast \tilde{A}_{i,+}, \end{aligned}$$ where $\tilde{A}_{i,-} = 2{A_{i,-}^{(1)}}/{X_0^\ast}$ and $\tilde{A}_{i,+} = 2{A_{i,+}^{(1)}}/{X_0}$. One of the striking features of a Hamiltonian with quadratic optomechanical coupling, as in , is that the optical cavity occupation $\bar{n}_c={\ensuremath{\left\langle{\hat{a}_{c}^{\dagger}\hat{a}_{c}}\right\rangle}}$ directly changes the mechanical frequency by acting as an additional potential well for the resonator [@Lee2015; @Paraiso2015]. This can be seen from the Hamiltonian to be: $$\label{statspringold} \Omega_\mathrm{eff} = \Omega_m + 2g_0^{(2)} \bar{n}_c.$$ We shall refer to this effect as the static optical spring effect.
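As a quick numerical check of how a complex optical contribution enters this susceptibility, the sketch below uses our own illustrative, dimensionless parameters (not values from the text); `G` stands in for the combination $g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+}$. It locates the peak and width of $|\chi(\omega)|^2$ and compares them with the shifts $\mathrm{Re}(G)/2$ and $\mathrm{Im}(G)$ expected from expanding around $\Omega_m$.

```python
import numpy as np

# Illustrative, dimensionless parameters (not values from the text).
Omega_m, Gamma_m = 1.0, 1e-3
G = 2e-3 + 1e-3j   # stands in for g_{0,1}*beta_{1,+} - g_{0,2}*beta_{2,+}

def chi_inv(w):
    """Inverse susceptibility, up to the constant prefactor x_zpf*m."""
    return -w**2 + Omega_m**2 + 1j * Gamma_m * w + Omega_m * G

w = np.linspace(0.99, 1.01, 2_000_001)
resp = 1.0 / np.abs(chi_inv(w))**2

w_peak = w[np.argmax(resp)]          # effective resonance frequency
half = resp >= resp.max() / 2
fwhm = w[half][-1] - w[half][0]      # effective linewidth (FWHM)

print(w_peak - Omega_m, 0.5 * np.real(G))   # ~1e-3 each
print(fwhm, Gamma_m + np.imag(G))           # ~2e-3 each
```

The real part of `G` shifts the mechanical frequency by $\mathrm{Re}(G)/2$ while its imaginary part broadens the line by $\mathrm{Im}(G)$, anticipating the expressions for the DBA-induced shifts.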
Here, we show that this effect can be described as a consequence of DBA, in which form it is much easier to include other DBA effects that cannot be recovered from the quadratic coupling Hamiltonian, but are present in the MIM system. By inserting $\Omega_\mathrm{eff} = \Omega_m + \delta \Omega$ and $\Gamma_\mathrm{eff} = \Gamma_m + \delta \Gamma$ into the susceptibility for $\bar{n}_c = 0$, and comparing to , we find the expressions for these shifts to be \[eq:DBAshifts\] $$\begin{aligned} \delta \Omega &= \frac{1}{2}\mathrm{Re}(g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+}),\\ \delta \Gamma &= \mathrm{Im}(g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+}). \end{aligned}$$ Now, we assume that the drive is close to resonance of the even supermode and, as before, that the two cavities are identical. In the adiabatic limit $2J\gg\Omega_m$, $\tilde{A}_{i,\pm}$ simplifies to $g_0\bar{a}_i/(2J)$ and $\beta_{1,+} \approx -\beta_{2,+}$. Combining these findings, we recover the quadratic coupling approximation $\delta \Omega = g_0^2 \bar{n}_c/J$ by identifying $g_0^{(2)} = g_0^2/2J$. ![Dynamical back-action effects in the MIM system for an input power of 1 $\mu$W in cavity 1, $\frac{g_0}{2\pi} = 1$ MHz, $\frac{\Gamma_m}{2\pi} = 3$ MHz, $\frac{\kappa}{2\pi} = 1$ GHz and $\frac{\Omega_m}{2\pi} = 5$ GHz. (a) The optical spring effect normalised by the mechanical frequency. (b) The optical amplification and cooling normalised by the mechanical decay rate.[]{data-label="fig:backaction"}](fig5){width="0.8\linewidth"} We see that the static optical spring effect can be regarded as a consequence of DBA, which involves only first sidebands, and is thus not a consequence of nonlinear optomechanical coupling. To be precise, for $J\gg\Omega_m$, the static optical spring effect is almost the same as the optical spring effect in a single cavity with a laser detuned from optical resonance by $J$.
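The identification $g_0^{(2)} = g_0^2/2J$ can also be checked directly from the avoided crossing of the two supermodes: for a toy two-mode Hamiltonian whose bare frequencies shift by $\pm g_0 x$, half the curvature of the upper supermode branch at $x=0$ gives the quadratic coupling. A minimal sketch with arbitrary dimensionless numbers of our own choosing:

```python
import numpy as np

# Toy two-mode model: two degenerate cavities coupled at rate J whose
# bare frequencies are shifted by +/- g0*x by a mechanical displacement x.
g0, J = 0.01, 1.0   # dimensionless, chosen for illustration only

def omega_plus(x):
    """Upper supermode frequency, relative to the bare cavity frequency."""
    H = np.array([[g0 * x, J],
                  [J, -g0 * x]])
    return np.linalg.eigvalsh(H)[-1]   # equals sqrt(J**2 + (g0*x)**2)

# Quadratic coupling = (1/2) * d^2(omega_plus)/dx^2 at x = 0,
# estimated by a central finite difference.
h = 1e-2
g2_numeric = 0.5 * (omega_plus(h) - 2 * omega_plus(0.0) + omega_plus(-h)) / h**2

print(g2_numeric, g0**2 / (2 * J))   # both ~5e-5
```

The numerical curvature reproduces $g_0^2/(2J)$, consistent with the expansion $\omega_+(x) \approx J + g_0^2 x^2/(2J)$ of the avoided crossing.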
The only difference is that, due to the multimode nature of the MIM system, the carrier can be on resonance with one of the supermodes while the sidebands are far from resonance with the other supermode, allowing for an optical spring effect with less input power, an idea related to that presented by Grudinin *et al.* [@Grudinin2010]. The reduction in input power is given by $\Delta_0/\kappa$, where $\Delta_0$ is the desired detuning from resonance for the particular application. To suppress unwanted DBA heating or cooling, it is generally taken larger than $\Omega_m$ [@Sonar2018]. This is one of the applications in which the MIM system could outperform a single cavity: optically tuning the mechanical resonance through the optical spring effect using a detuned laser to suppress DBA heating or cooling in a sideband-resolved system. To further discuss the optically-induced mechanical frequency and linewidth in the MIM system, we depict the relative modifications $\delta\Omega_m/\Omega_m$ and $\delta\Gamma_m/\Gamma_m$ as a function of $J$ and $\bar{\Delta}$ for constant input power. In a, we can see that, in the adiabatic regime $2J>\Omega_m$, we find the static optical spring effect around the supermode resonances, which closely resembles the results of Lee *et al.* [@Lee2015]. Approaching the regime where $2J\approx\Omega_m$, the size of the optical spring increases because both sideband and carrier can be on resonance with one of the supermodes. A sharp transition is found at $2J=\Omega_m$, where one of the sidebands crosses the resonance and changes the sign of the spring effect. In b, we can see the optically-induced change in linewidth. The effect is again most substantial when both the carrier and one of the first sidebands are on resonance. Compared to the standard optical spring effect, the linewidth change falls off more quickly when the sidebands are not on resonance, which is already well known for single cavities [@Aspelmeyer2014].
In previous work [@Jayich2008], Jayich *et al.* extensively studied dynamical backaction as a function of the inter-cavity detuning, here given by $\delta = \omega_{c,1}-\omega_{c,2}$. They noted a lack of backaction for $\delta=0$ in the adiabatic regime, arguing that it vanishes completely because the first derivative of the supermode frequency vanishes at $\delta=0$, suppressing linear coupling (see b). Here, however, we have seen that the DBA does not vanish completely. For $\delta=0$ and in the adiabatic regime, the first sideband amplitude is suppressed, as discussed in , but it is not identically zero. This fact is important, because we have shown that the second sideband amplitude (and thus nonlinear transduction) is suppressed when the first sideband amplitude is suppressed. Conversely, the generation of nonlinear transduction is associated with the presence of DBA. This last statement can be seen as a classical analogue of previous results concerning the quantum non-demolition (QND) measurement of mechanical Fock states using quadratic optomechanical coupling [@Miao2009; @Yanay2016]. These authors showed that, as a result of the linear cross-mode coupling of the MIM system, the light field’s vacuum fluctuations would destroy a mechanical Fock state before it could be measured through the effective $x^2$-coupling, unless the single-photon strong coupling (SPSC) condition was fulfilled. An expression for the quantum backaction was found by calculating the susceptibility of the optical modes to the input quantum fluctuations, leading to a result similar to $A^{(1)}_{i,\pm}$ in , where we calculate the susceptibility of the optical modes to mechanically-induced fluctuations. It is therefore not surprising that we recover that the ratio of second to first sideband is limited by the same SPSC condition $g_0/\kappa$.
Indeed, the ratio of second to first sideband amplitude is closely related to the ratio between the amount of information on $\hat{x}^2$ leaving the cavity and the quantum backaction, as the quantum backaction is directly related to the amount of information on $\hat{x}$ (i.e. the linear transduction) that leaves the cavity [@Clerk2010a]. Parametric squeezing -------------------- In parametric squeezing, the spring constant of a resonator is modulated at twice the mechanical frequency, which results in a quadrature-dependent amplification or damping of the resonator [@Rugar1991]. Such a scheme has previously been used in electromechanical (e.g. [@Szorkovszky2013; @Poot2014]) and linearly coupled optomechanical [@Pontin2014; @Sonar2018] systems. In a quadratically coupled optomechanical system, it is possible to directly alter the mechanical spring constant using the optical field, which can be exploited to implement this scheme [@Nunnenkamp2010]. In fact, we find that the parametric squeezing effect lies at the heart of the two-phonon OMIT-like effect reported for the MIM system [@Huang2011]. This can be seen from the fact that this OMIT effect works by amplifying thermal fluctuations in only one particular mechanical quadrature, de-amplifying motion in the opposite quadrature. We now set out to compare the parametric driving effect in the MIM system to a single-cavity system. To include cavity modulation, we start from an intracavity field given by: $$\label{DBAinput} \bar{a}_i = a(1 + \epsilon e^{i2\Omega_mt}),$$ where the constant $\epsilon\in\mathbb{C}$, with $|\epsilon| \ll 1$, controls the modulation amplitude and phase. Our approach shares ingredients with that of Rugar and Grütter [@Rugar1991]. We assume a force with fixed phase $F_{ex}(t) = F_0 \cos(\Omega_m t)$ and allow $x(t) = (X_0 e^{i\Omega_m t} + X_0^\ast e^{-i\Omega_m t})/2$ as previously.
The modulation sideband controlled by $\epsilon$ gives an additional component to $|a_i|^2(\pm \Omega)$, which shows up in . After making the dependence on $X_0$ explicit, the EOM from implies: $$\begin{gathered} \label{eq:paramdriveEOM} \Omega_m\left[ i\Gamma_m - (g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+})\right]\frac{X_0}{2} = \left[ \Omega_m (g_{0,1}\beta_{1,-} - g_{0,2}\beta_{2,-}) \right]\frac{X_0^\ast}{2} + \frac{F_0}{2 x_\mathrm{zpf} m}, \end{gathered}$$ where now $\beta_{i,-} = \bar{a}_i \epsilon \tilde{A}_{i,+}^\ast + \bar{a}_i^\ast E_{i}$ and the amplitudes for the sidebands generated from the modulation tone $a\epsilon$ by mechanical motion read $$\begin{aligned} E_1 &= -\epsilon\frac{-Jg_{0,2}\bar{a}_2 + (\bar{\Delta}_2 - \Omega_m)g_{0,1}\bar{a}_1}{(\bar{\Delta}_1 - \Omega_m)(\bar{\Delta}_2 - \Omega_m)-J^2}, \\ E_2 &= \epsilon\frac{-Jg_{0,1}\bar{a}_1 + (\bar{\Delta}_1 - \Omega_m)g_{0,2}\bar{a}_2}{(\bar{\Delta}_1 - \Omega_m)(\bar{\Delta}_2 - \Omega_m)-J^2}. \end{aligned}$$ Because the modulation tone is displaced by $2\Omega_m$ from the carrier, its sidebands have a different dependence on $\bar{\Delta}$ than the $\tilde{A}_{i,\pm}$. In b, we sketch the sidebands that are created and the associated contribution to the radiation pressure force. Here the carrier ($a$) and modulation tone sideband ($a\epsilon$) develop sidebands through mechanical motion. Similarly, $X_0$ can be retrieved by combining with its complex conjugate, to give $$\label{eq:paramdrive_resp1} X_0 = \frac{c^\ast + d}{|c|^2 - |d|^2}\frac{F_0}{x_\mathrm{zpf} m}$$ with $$\label{eq:paramdrive_resp2} \begin{split} c &= i\Gamma_m\Omega_m - \Omega_m (g_{0,1}\beta_{1,+} - g_{0,2}\beta_{2,+}),\\ d &= \Omega_m (g_{0,1}\beta_{1,-} - g_{0,2}\beta_{2,-}). \end{split}$$ When the phase of $\epsilon$ is changed, $d$ changes its phase accordingly, altering $|X_0|$.
This is quadrature-dependent amplification of motion: depending on the relative phase of the modulation tone $\epsilon$ and the force $F_{ex}$, the response $|X_0|$ of the system can be larger or smaller than in a system with no optomechanical coupling. In figure a, we have plotted an example of the mechanical response changing with the phase of $\epsilon$. ![Parametric driving of the mechanical resonator using $2\Omega_m$-modulated light in the MIM system. (a) An example of thermal squeezing of the mechanical mode. By changing the phase between modulation tone $\epsilon$ and force $F_{ex}$, the response of the mechanical oscillator changes. For this plot, we assume a driven even mode with $\bar{n}_c = 1000$, $\frac{g_0}{2\pi} = 1$ MHz, $\frac{\Gamma_m}{2\pi} = 3$ MHz, $\frac{\kappa}{2\pi} = 0.25$ GHz, $\frac{\Omega_m}{2\pi} = 5$ GHz and large splitting $J=10\Omega_m$. (b) A schematic depiction of the parametric drive explained in terms of mechanically-generated sidebands of the carrier and modulation tone sidebands. The beating created by these sidebands acts back upon the mechanical resonator. (c) A proposed use of the MIM system in a parametric driving experiment. Exploiting the multimode character of the system, the required input power is reduced. In (b) and (c), the colours indicate whether light occupies the even (blue) or odd (orange) mode.[]{data-label="fig:paramdrivefig"}](fig6.png){width="\linewidth"} Now, from , we can make some observations on parametric driving in the MIM system. The amplitude of the enhanced mechanical quadrature depends on the first sideband amplitudes, which, as we have determined, are not enhanced in the MIM system with respect to a single cavity for constant intracavity photon number. In other words, although the MIM system promises enhanced nonlinear coupling, the parametric drive per cavity photon is not larger than in a single cavity.
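The quadrature dependence of the response can be reproduced directly from the expression $X_0 = (c^\ast + d)/(|c|^2 - |d|^2)\,F_0/(x_\mathrm{zpf}m)$ above. In the sketch below (toy dimensionless values of our own choosing, neglecting the ordinary DBA contribution to $c$), sweeping the phase of $d$, which follows the phase of the modulation tone $\epsilon$, makes $|X_0|$ oscillate between amplification and deamplification:

```python
import numpy as np

# Toy, dimensionless parameters of our own choosing; the ordinary
# DBA contribution to c is neglected so that c is purely imaginary.
Gamma_m, Omega_m = 1e-3, 1.0
c = 1j * Gamma_m * Omega_m
r = 0.5 * abs(c)                 # |d| < |c| keeps the response stable

phases = np.linspace(0.0, 2.0 * np.pi, 721)
d = r * np.exp(1j * phases)      # d inherits the phase of epsilon

# |X_0| up to the constant factor F_0/(x_zpf*m), from
# X_0 = (c* + d) / (|c|^2 - |d|^2) * F_0/(x_zpf*m).
X0 = np.abs(np.conj(c) + d) / (np.abs(c) ** 2 - np.abs(d) ** 2)

print(X0.max() / X0.min())       # (|c| + r)/(|c| - r) = 3 here
```

The maximum-to-minimum ratio $(|c|+|d|)/(|c|-|d|)$ grows without bound as $|d| \to |c|$, the threshold of parametric instability.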
A system with multiple optical modes, such as the MIM, could, however, help to reduce the required input power, as was shown previously in the context of linear position measurement [@Dobrindt2010] and phonon lasing [@Grudinin2010]. Here, we propose a similar use that is particularly useful in optical parametric driving. A schematic example of the idea is shown in  c. In an optomechanical parametric driving scheme, it is often desirable to have the carrier far detuned from the cavity to suppress DBA heating or cooling of the resonator [@Sonar2018]. This means a considerable input power is needed to reach an appreciable intracavity photon number. In the application we envision, the carrier is on resonance with one of the two supermodes. In that case, the sidebands can be far off-resonant given that $2J\gg \Omega_m$, while much less input power is required. Heralded phonon pair generation =============================== Previously, the optomechanical interaction has been used for the heralded generation of single phonons [@Galland2014; @Hong2017]. When the optomechanical interaction is linearised using a strong optical drive, Stokes scattering of a drive photon into the lower-frequency sideband is associated with the generation of a phonon. When using a mechanical system close to the ground state, the detection of a single Stokes photon within the mechanical decoherence time then heralds a 1-phonon mechanical Fock state. Analogously, the detection of photons in a Stokes sideband shifted by $-2\Omega_m$ from the drive laser, created through a nonlinear optomechanical interaction, would herald the pairwise generation of two phonons. Specifically, if a single mechanical mode is involved, the detection heralds a 2-phonon Fock state in the resonator. This scheme works outside the SPSC regime.
Here, we consider the feasibility of such a scheme in a MIM system, compare it to using a single cavity and discuss limitations due to the presence of first sideband photon generation. From the intracavity fields calculated previously, we can calculate the output field by using the input-output relations [@Aspelmeyer2014]. Assuming the input light field contains only carrier light, the output light field at the frequency of the first or second sideband is simply $\sqrt{\kappa_\mathrm{ex}} A^{(1,2)}_{i,\pm}$. Assuming optimal combination of the outputs of both cavities such that all photons in the proper cavity supermode are detected, the photon detection rate in any of the sidebands is $$\Gamma_\pm^{(1,2)}=\kappa_\mathrm{ex}(|A^{(1,2)}_{1,\pm}|^2+|A^{(1,2)}_{2,\pm}|^2).$$ We can evaluate the Stokes sidebands for a system initialised in the mechanical ground state by setting $X_0=2$ in equations and , accounting for sideband asymmetry [@Aspelmeyer2014]. We now consider a short measurement interval $\Delta t$ (which could be defined by the duration of an optical pulse) and a low enough first sideband amplitude such that the probability $p_1 = \Delta t\Gamma_+^{(1)}$ of detecting a single photon in the first sideband is much smaller than unity, to ensure that a heralded state is not spoiled by the probabilistic excitation of single phonons. This condition sets an upper limit to the number of carrier photons that can be employed in a single measurement. We denote the maximum allowed probability of single-phonon generation (determined by the desired level of purity) as $p_{1,max}$. With the associated maximum laser power, the probability $p_2=\Delta t\Gamma_+^{(2)}$ of detecting a photon in the second Stokes sideband to herald a pure two-phonon state is maximised at $$p_2=\frac{p_2}{p_1}p_{1,max}\leq \left(\frac{g_0}{\kappa}\right)^2 p_{1,max},$$ where we used our previous observation that $|A^{(2)}|/|A^{(1)}| \leq g_0/\kappa$.
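The scaling behind this bound can be illustrated with a toy rate model: if both sideband detection rates grow linearly with the carrier photon number $n$ and their ratio is $(g_0/\kappa)^2$, as the amplitudes above suggest, the purity constraint $p_1 \leq p_{1,max}$ fixes the maximum usable $n$ and hence the heralding probability. All constants below are hypothetical placeholders, not values from the text:

```python
# Toy rate model with hypothetical placeholder constants: both sideband
# count rates grow linearly with carrier photon number n, and their
# ratio is fixed at (g0/kappa)^2.
g0_over_kappa = 0.01
p1_max = 0.05            # purity-limited bound on first-sideband clicks
dt = 1e-6                # measurement interval (arbitrary units)
c1 = 1e3                 # first-sideband detection rate per carrier photon
c2 = c1 * g0_over_kappa**2   # second-sideband rate per carrier photon

# Largest carrier photon number compatible with p1 <= p1_max:
n_max = p1_max / (dt * c1)

# The resulting two-phonon heralding probability saturates the bound:
p2 = dt * c2 * n_max
print(p2, g0_over_kappa**2 * p1_max)   # equal by construction
```

Because both rates scale with $n$, their ratio, and hence the bound on $p_2$, is independent of the chosen placeholder values of `dt` and `c1`.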
As we found before, this limitation holds for both single-cavity and MIM systems. Nonetheless, the optical power (intracavity photon number) that is required to reach the maximal rate of heralding two-phonon states is reduced for MIM systems at the optimal condition for second sideband generation, by a factor equal to $2\Omega_m/\kappa$, as we found in . This leads to a practical advantage of the MIM system for this scheme, especially in cryogenic settings, where heating through laser absorption is often a significant limiting effect. Conclusion ========== In this work, we have presented a general framework to describe nonlinear transduction and backaction effects in a MIM optomechanical system. Using this framework, we discuss in what applications a MIM system offers an advantage over an optomechanical cavity with single optical and mechanical mode. We show that the MIM system gives an enhancement of the intrinsic nonlinearity of the optomechanical interaction for supermode splitting $2J=\Omega_m$ that is limited by the degree of sideband resolution $\Omega_m/\kappa$. Additionally, the ratio of nonlinear to linear transduction in the MIM system is limited by the same condition as it is in the single cavity, namely $g_0/\kappa$, imposing constraints on the applications of the MIM system, as was previously shown for a QND measurement of phonon number [@Miao2009]. In a discussion of backaction, we show that DBA in the MIM system is equal in strength per cavity photon to that in a single cavity, but is altered by the fact that the MIM system is multimode optically. Similarly, we discussed that a $2\Omega_m$-parametric driving scheme is also not enhanced in the MIM system, but that the multimode character of the system can be used to reduce the amount of input light required to reach a specific cavity photon number. 
Finally, we proposed a scheme to use the nonlinear interaction in the weak coupling regime to herald the generation of phonon pairs, for which we found that, in the MIM system, the required cavity photon number is reduced by $2\Omega_m/\kappa$ for a generation rate that is limited by the ratio of $g_0$ and $\kappa$. Although the above considerations all concern the MIM system, they can be applied to a larger class of multimode optomechanical systems. In several works that study quadratically-coupled optomechanical systems, second-order perturbation theory is used to derive the quadratic coupling coefficient from the unperturbed optical and mechanical mode fields [@Rodriguez2011; @Kaviani2015; @Kalaee2016; @Hauer2018]. The quadratic coupling coefficient $g_0^{(2)}$ is proportional to the second-order correction to the eigenmode frequency for a small perturbation of mechanical displacement: $$g_0^{(2)} \propto \frac{\delta \omega^{(2)}}{\omega} \frac{1}{4}\frac{|\langle \mathbf{E}_\omega | \Delta \epsilon | \mathbf{E}_\omega\rangle|^2}{|\langle \mathbf{E}_\omega |\epsilon | \mathbf{E}_\omega\rangle|^2} - \frac{1}{2}\sum_{\omega^\prime \neq \omega} \left( \frac{\omega^3}{\omega^{\prime 2} - \omega^2} \right) \frac{|\langle \mathbf{E}_{\omega^\prime} | \Delta \epsilon | \mathbf{E}_\omega\rangle|^2}{\langle \mathbf{E}_\omega |\epsilon | \mathbf{E}_\omega\rangle \langle \mathbf{E}_{\omega^\prime} |\epsilon | \mathbf{E}_{\omega^\prime}\rangle}.$$ Here, $|\mathbf{E}_{\omega}\rangle$ indicates the electric field of a cavity eigenmode at frequency $\omega$, the bra-ket products indicate overlap integrals and $\Delta \epsilon,\delta\omega^{(2)}$ denote the change in system permittivity distribution $\epsilon$ and eigenfrequency, due to a small mechanical displacement $\Delta x$. In this equation, the first term is fully determined by, and much smaller than, $g_0$.
The second term contains perturbation-induced overlaps between different eigenmodes, which are weighted by their frequency difference such that the contribution from closely spaced eigenmodes is enhanced. When applying this equation to the MIM system, it is the close spacing of $2J$ between the two supermodes that enhances quadratic coupling. However, it is this same mechanically-induced overlap between the two optical supermodes that gives the cross-mode optomechanical coupling, which, as we have seen, limits the selectivity of quadratic over linear optomechanical coupling in the system. At this point a question arises: given the generality of the second-order perturbation theory calculation, is it at all possible to design an optomechanical system such that it has an $x^2$-coupling without the linear cross-mode coupling? As already described by Miao *et al.* [@Miao2009], any system that does have cross-coupling would always be restricted by the single-photon strong coupling requirement for QND measurements, and would also be limited in that there will be residual linear DBA, as discussed in this paper. Currently, several proposals claim to circumvent this restriction [@Kaviani2015; @Hauer2018; @Dellantonio2018a]. Although it is beyond the scope of this paper to discuss these works individually, we would like to stress that cross-coupling between any two modes may allow information about the position $x$ to escape the cavity and impose quantum backaction on the resonator. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Pierre Busi, Andrea Fiore, Simon Gröblacher, Kevin Cognée and Femius Koenderink for valuable discussions. We thank Ilan Shlesinger for critical reading of the manuscript. This work is part of the research programme of the Netherlands Organisation for Scientific Research (NWO). E.V. acknowledges support from an NWO-Vidi grant and the European Research Council (ERC starting grant no. 759644-TOPP).
Homodyne signal in optimal quadrature in terms of sideband amplitudes {#appendixone} ===================================================================== Consider homodyne detection on one of the two beamsplitter outputs from for even driving. Depending on the output, these contain either first- or second-order sidebands. We will assume first-order sidebands, although the exact same argument holds for second-order sidebands. The output of the beam splitter combined with a local oscillator field of amplitude $\bar{a}_\mathrm{L.O.}$ is given by $$a_\mathrm{h.d.} = \bar{a}_\mathrm{L.O.}e^{i\theta} + \bar{a}_\mathrm{const} + \sqrt{\kappa_\mathrm{ex}} A_{o,+}^{(1)} e^{i\Omega_m t} + \sqrt{\kappa_\mathrm{ex}} A_{o,-}^{(1)} e^{-i\Omega_m t},$$ which we derived via the input-output relation $a_\mathrm{out} = a_\mathrm{in}-\sqrt{\kappa_\mathrm{ex}}a$ [@Aspelmeyer2014], under the assumption of large local oscillator power, $|\bar{a}_\mathrm{L.O.}|\gg|\bar{a}_\mathrm{out}|$. Here $\theta=\arg\bar{a}_\mathrm{L.O.}$ denotes the tunable local oscillator phase, $\bar{a}_\mathrm{const}$ contains all time-independent contributions to the output field and $A_{i,\pm}^{(1)}$ denote the sideband amplitudes from a,b. The homodyne signal amplitude $S(\omega) \propto |a_\mathrm{h.d.}|^2(\omega)$ at frequency $\Omega_m$ is found to be $$\begin{aligned} S(\Omega_m)\propto& \sqrt{\kappa_\mathrm{ex}}\bar{a}_\mathrm{L.O.}\big[e^{i\theta}(A_{o,+}^{(1)\ast}e^{-i\Omega_m t} + A_{o,-}^{(1)\ast} e^{i\Omega_m t}) + e^{-i\theta}(A_{o,+}^{(1)} e^{i\Omega_m t} + A_{o,-}^{(1)} e^{-i\Omega_m t})\big] \nonumber\\ &\simeq 2\sqrt{\kappa_\mathrm{ex}}\bar{a}_\mathrm{L.O.} \mathrm{Re}[e^{i\theta}B(t)], \end{aligned}$$ where $B(t) = A_{o,+}^{(1)\ast}e^{-i\Omega_m t} + A_{o,-}^{(1)\ast} e^{i\Omega_m t}$ and we have kept only the slowly-oscillating terms.
To optimise the homodyne signal, we set $\theta$ such that, for $|B_\mathrm{max}| = \max_t (|B(t)|)$ and $B_\mathrm{max}$ the corresponding complex value, $e^{i\theta}B_\mathrm{max}$ is real. We then find that $S(\Omega_m) \propto \sqrt{\kappa_\mathrm{ex}} \bar{a}_\mathrm{L.O.} |B_\mathrm{max}|$. Given that $B(t)$ is the sum of two counterrotating complex amplitudes, its norm is largest when these have the same phase, so that $|B_\mathrm{max}| = |A_{o,-}^{(1)}| + |A_{o,+}^{(1)}|$ and $$S(\Omega_m) \propto \sqrt{\kappa_\mathrm{ex}} \bar{a}_\mathrm{L.O.} \left(|A_{o,-}^{(1)}| + |A_{o,+}^{(1)}| \right).$$ This derivation demonstrates that the metric we use is a measure of the signal amplitude in the optimal homodyne measurement.
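The step $|B_\mathrm{max}| = |A_{o,-}^{(1)}| + |A_{o,+}^{(1)}|$ can be verified numerically for arbitrary complex sideband amplitudes; the random values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two arbitrary complex sideband amplitudes A_{o,+}^{(1)} and A_{o,-}^{(1)}.
A_plus, A_minus = rng.normal(size=2) + 1j * rng.normal(size=2)

Omega_m = 1.0
t = np.linspace(0.0, 2.0 * np.pi / Omega_m, 100_001)  # one full period
B = np.conj(A_plus) * np.exp(-1j * Omega_m * t) \
    + np.conj(A_minus) * np.exp(1j * Omega_m * t)

B_max = np.abs(B).max()
print(B_max, abs(A_plus) + abs(A_minus))   # agree to high precision
```

Since the two counterrotating phasors sweep through all relative phases within one period, the maximum of $|B(t)|$ on a dense time grid matches the sum of the moduli.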
--- author: - 'P. Ajith' - 'M. Hewitson' - 'J. R. Smith' - 'H. Grote' - 'S. Hild' - 'K. A. Strain' title: 'Physical instrumental vetoes for gravitational-wave burst triggers' --- Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful for support from PPARC and the University of Glasgow in the UK, and the BMBF and the state of Lower Saxony in Germany. The authors also thank Peter Shawhan and Peter Saulson for their detailed comments on the manuscript, and the members of the  group of the Albert Einstein Institute for useful discussions. This document has been assigned LIGO Laboratory document number LIGO-P070032-00-Z.
--- abstract: 'The optical response of spiral magnets is studied, with special attention to its electromagnon features. We argue that electromagnons in spiral magnets can produce, in addition to the observed peaks in the optical absorption of multiferroics, a (dynamically enhanced) optical rotation and a negative refractive index behavior.' author: - 'A. Cano' title: Electromagnon Resonances in the Optical Response of Spiral Magnets --- The strong interplay between magnetism and ferroelectricity observed in a new generation of ferroelectromagnets (or multiferroics) has prompted a renewed interest in magnetoelectric (ME) phenomena. In TbMnO$_3$, for example, the electric polarization can be flopped by applying a magnetic field [@Kimura03] and, conversely, the chirality of its magnetic structure can be changed by applying an electric field [@Yamasaki07]. The dynamic counterpart of these cross-coupling effects is the existence of hybrid magnon-polar modes, i.e., the so-called electromagnons, which have also been observed in the form of absorption peaks in optical experiments [@Pimenov06]. The relatively large magnitude of these ME effects makes this type of material very attractive for novel memory elements, optical switches, etc. The ME response is known to be an important ingredient in the electrodynamics of conventional magnetoelectrics [@O'Dell; @Landau-Lifshitz_ECM; @Arima08; @Krichevtsov93]. In Cr$_2$O$_3$, for example, this response alone suffices to produce a nonreciprocal optical rotation [@Krichevtsov93]. In this paper we provide a continuum-medium description of the dynamical ME effect in spiral magnets. We also briefly discuss the possibility of having a dynamically enhanced nonreciprocal optical rotation and a negative refractive index behavior due to such a genuine ME response.
Let us begin by recalling that, at the static level, the most general linear response of a (homogeneous) medium to the electric and magnetic fields, $\mathbf E$ and $\mathbf H$ respectively, can be expressed by the constitutive relations [@O'Dell; @Landau-Lifshitz_ECM]: $$\begin{aligned} \mathbf P &=\hat \chi^{e}\mathbf E + \hat \alpha \mathbf H,\label{em_coupling} \\ \mathbf M &=\hat \alpha ^\text{T}\mathbf E + \hat \chi^{m}\mathbf H.\label{me_coupling}\end{aligned}$$ \[constitutive\] Here $\mathbf P$ and $\mathbf M$ represent the electric and magnetic polarization, $\hat \chi^{e}$ and $\hat \chi^{m}$ are the electric and magnetic susceptibilities, and $\hat \alpha$ is the ME tensor of the medium. The same tensor $\hat \alpha$ enters these two equations ($\alpha_{ij}^\text{T} =\alpha_{ji}$) because it traces back to the same coupling $-\alpha_{ij}E_iH_j$ in the free energy of the system. Only a restricted number of magnetic symmetry classes allow for this linear coupling, in which case the system is termed magnetoelectric. In the case of spiral magnets the inhomogeneous ME effect [@Baryakhtar83; @Cano08; @Mills08] is always at work. This effect describes the (universal) coupling between the electric polarization and nonuniform distributions of magnetization. For our purposes it can be taken as [@note] $$\begin{aligned} -f\mathbf P \cdot [\mathbf M ( \nabla \cdot \mathbf M) - (\mathbf M \cdot \nabla )\mathbf M]. \label{F_me}\end{aligned}$$ To extend Eqs. to the frequency domain we have to deal with the dynamics of the system. In this dynamics the coupling gives rise to the hybridization of magnons with polar modes (see below). Let us compute these dynamical tensors. In the presence of electromagnetic radiation, the electric and magnetic polarizations will deviate from the corresponding background distributions: $\mathbf P = \mathbf P ^{(0)} + \mathbf p$ and $\mathbf M = \mathbf M ^{(0)} + \mathbf m$.
These deviations $\mathbf p$ and $\mathbf m$ are assumed to be small (proportional to the external fields), so the equations of motion for $\mathbf P$ and $\mathbf M$ can be linearized with respect to these quantities. For the sake of simplicity, we restrict ourselves to background magnetizations containing only one periodicity (i.e., with wavevectors $\pm \mathbf Q$). Thus, if we take the equation of motion for the electric polarization, in Fourier space we get $$\begin{aligned} &\hat A(\mathbf q,\omega) \mathbf p (\mathbf q,\omega) \approx \mathbf E(\mathbf q,\omega) + 2i f \sum _{\mathbf q' =\pm \mathbf Q } \big[\big(\mathbf q' \cdot \mathbf M^{(0)}(\mathbf q')\big) \mathbf m(\mathbf q -\mathbf q',\omega) - \mathbf M^{(0)}(\mathbf q')\big(\mathbf q' \cdot \mathbf m(\mathbf q - \mathbf q',\omega)\big) \big] \label{motion_p}\end{aligned}$$ in the limit $q \ll Q$. Here $\hat A$ represents the inverse electric susceptibility (in the absence of ME coupling $\hat A^{-1} \equiv \hat \chi^{e} $). In this equation we can see that, in fact, polar modes are linearly coupled with the deviations $\mathbf m (\mathbf q \pm \mathbf Q, \omega)$ of the magnetic structure by virtue of the modulation of the latter, i.e., we have electromagnons. Close to the magnon resonances, the dynamics of the magnetization is expected to be described by the Landau-Lifshitz equation. Then, the nonlinear character of this equation, together with the non-uniform magnetic background $\mathbf M ^{(0)}$, makes possible a linear coupling between these excitations and long-wavelength external fields (see e.g. [@Cano08; @Belitz06]). $$\begin{aligned} \mathbf m (\mathbf q \pm \mathbf Q,\omega ) &\underset{q\ll Q}{=} \hat \chi^{m,\pm \mathbf Q}(\mathbf q,\omega) \mathbf H(\mathbf q,\omega), %\qquad (q \ll Q), \label{chi_m}\end{aligned}$$ where the poles of $\chi^{m,\pm \mathbf Q}$ are associated with the characteristic excitations of the modulated structure [@Cano08; @Belitz06].
Substituting this expression into the equation of motion for the polarization we obtain $$\begin{aligned} \mathbf p (\mathbf q,\omega) &= \hat \chi^{e} (\mathbf q,\omega) \mathbf E (\mathbf q,\omega) + \hat \alpha (\mathbf q,\omega) \mathbf H(\mathbf q,\omega), \label{d-em_coupling}\end{aligned}$$ where $$\begin{aligned} \alpha_{ij} (\mathbf q,\omega) = 2i f \sum_{\mathbf q' = \pm \mathbf Q} q_k'M_{k'}^{(0)}(\mathbf q') ( \delta_{i'j'} \delta_{kk'} - \delta_{i'k'} \delta_{kj'} ) \chi_{ii'}^{e}(\mathbf q,\omega)\chi^{m,-\mathbf q'}_{j'j}(\mathbf q,\omega). \label{d-em-tensor}\end{aligned}$$ This constitutive equation replaces its static counterpart for dynamical processes. As mentioned before, the fact that polarization and magnetization dynamics are different produces a certain asymmetry in the dynamical ME response. Carrying out similar manipulations, the equation of motion for the magnetization can be reduced to an analogous expression: $$\begin{aligned} \mathbf m(\mathbf q,\omega) &=\hat \beta(\mathbf q,\omega) \mathbf E(\mathbf q,\omega) + \hat \chi^{m} (\mathbf q,\omega) \mathbf H(\mathbf q,\omega) , \label{d-me_coupling}\end{aligned}$$ where $$\begin{aligned} \beta_{ij}(\mathbf q,\omega) = i \gamma f \epsilon_{ii'i''} \sum_{\mathbf q' = \pm \mathbf Q} q'_k M_{k'}^{(0)}(\mathbf q')M_{i''}^{(0)}(-\mathbf q') ( \delta_{j'i'} \delta_{kk'} - \delta_{j'k'} \delta_{ki'} ) \chi_{j'j}^{e}(\mathbf q,\omega), \label{d-me-tensor}\end{aligned}$$ with $\gamma$ the gyromagnetic factor. As we see, this tensor $\hat \beta$ is not the mere transpose of the tensor $\hat \alpha$ given above and, in contrast to $\hat \alpha$, does not contain information about magnon excitations ($\hat \chi^{{m},\pm \mathbf Q}$ does not enter here). Let us now consider specific examples of magnetic structures.
The first structure discovered with a long-period modulation was the helical one [@Izyumov84]: $$\begin{aligned} \mathbf M^{(0)} (\mathbf r)&=M_1 \cos ({\mathbf Q}\cdot {\mathbf r })\, \hat {\mathbf x} + M_3 \sin ({\mathbf Q}\cdot {\mathbf r })\,\hat {\mathbf z}, \label{helix}\end{aligned}$$ where $\mathbf Q = Q \, \hat {\mathbf y}$. This type of magnetic ordering is observed, for example, in CaFeO$_3$ [@Kawasaki98]. In this case, the inhomogeneous ME coupling is ineffective in producing an electric polarization since this structure does not break inversion symmetry. Nevertheless, it gives rise to a dynamical ME effect. To the lowest order (i.e., considering the external field as the effective field acting on the magnetization in the Landau-Lifshitz equation), the non-zero components of the susceptibility $\hat \chi^{{m},\pm \mathbf Q}$ are $$\begin{aligned} \chi^{{m},\pm \mathbf Q}_{xy}= -\chi^{{m},\pm \mathbf Q}_{yx} \propto M_z^{(0)}(\pm\mathbf Q),\\ \chi^{{m},\pm \mathbf Q}_{yz}= -\chi^{{m},\pm \mathbf Q}_{zy} \propto M_x^{(0)}(\pm\mathbf Q). \label{}\end{aligned}$$ If the electric susceptibility $\hat \chi^{e}$ is diagonal, this means that the non-zero components of the dynamical ME tensors are $\alpha_{xx}$, $\alpha_{zz}$, $\beta_{xx}$ and $\beta_{zz}$. The ME response generated dynamically in this case turns out to be analogous to the static one of the prototypical magnetoelectric Cr$_2$O$_3$ (see, e.g., [@O'Dell; @Krichevtsov93]). Another important class of magnetic distributions is the cycloidal one: $$\begin{aligned} \mathbf M^{(0)} (\mathbf r)&=M_2 \cos ({\mathbf Q}\cdot {\mathbf r })\, \hat {\mathbf y} + M_3 \sin ({\mathbf Q}\cdot {\mathbf r })\,\hat {\mathbf z}, \label{cycloidal}\end{aligned}$$ with $\mathbf Q = Q \, \hat {\mathbf y} $. The magnetization in $R$MnO$_3$ compounds, for example, develops this type of modulation, and its appearance is accompanied by ferroelectricity as we have explained above.
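As a quick numerical check on these two structures (with illustrative units and amplitudes, not tied to any material), one can evaluate the inhomogeneous ME source term $\mathbf M (\nabla \cdot \mathbf M) - (\mathbf M \cdot \nabla)\mathbf M$ on a grid: the helix yields no net contribution, while the cycloid yields a uniform contribution along $\hat{\mathbf z}$, i.e., perpendicular to $\mathbf Q$ and within the plane of rotation.

```python
import numpy as np

# Evaluate M (div M) - (M . grad) M for the helical and cycloidal structures.
# Both are modulated along y with wavevector Q, so only d/dy survives.
# Units and amplitudes are illustrative.
Q, M1, M2, M3 = 2 * np.pi, 1.0, 1.0, 0.8
y = np.linspace(0.0, 1.0, 2001)[:-1]      # one modulation period
dy = y[1] - y[0]

def me_source(M):
    """M has shape (3, N); returns M (div M) - (M . grad) M on the grid."""
    dM = np.gradient(M, dy, axis=1)       # d/dy of each Cartesian component
    divM = dM[1]                          # div M = d(My)/dy for y-modulation
    return M * divM - M[1] * dM           # (M . grad) reduces to My d/dy

helix = np.array([M1 * np.cos(Q * y), np.zeros_like(y), M3 * np.sin(Q * y)])
cycloid = np.array([np.zeros_like(y), M2 * np.cos(Q * y), M3 * np.sin(Q * y)])

P_helix = me_source(helix).mean(axis=1)      # spatial average over one period
P_cycloid = me_source(cycloid).mean(axis=1)

print(P_helix)    # ~ (0, 0, 0): no net polarization from the helix
print(P_cycloid)  # ~ (0, 0, -M2*M3*Q): uniform P normal to Q, in the plane
```

The overall sign and the factor of $f$ are fixed by the coupling above; the check only confirms which components survive the spatial average.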
The dynamic ME response in this case has the following features. The susceptibility $\hat \chi^{{m},\pm \mathbf Q}$ for the cycloidal structure has the non-zero components: $$\begin{aligned} \chi^{m,\pm \mathbf Q}_{xy}= -\chi^{m,\pm \mathbf Q}_{yx} \propto M_z^{(0)}(\pm\mathbf Q),\\ \chi^{m,\pm \mathbf Q}_{xz}= -\chi^{m,\pm \mathbf Q}_{zx} \propto M_y^{(0)}(\pm\mathbf Q), \label{}\end{aligned}$$ to the lowest order. In consequence, for a diagonal electric susceptibility, the only nonzero component of the ME tensor $\hat \alpha$ turns out to be $$\begin{aligned} \alpha_{xy} (\mathbf q, \omega) = 4i f Q M_2 \chi_{xx}^{e} (\mathbf q, \omega)\chi^{{m},-\mathbf Q}_{xy}(\mathbf q, \omega). \label{alpha}\end{aligned}$$ The fact that $\alpha_{xy} \not = \alpha _{yx}(=0)$ is a consequence of the inequivalence between the $x$ and $y$ directions in this magnetic structure. This is also reflected in $\hat \beta$, whose non-zero component reduces to $\beta_{yx}$, as can be seen from the general expression above. The optical response of $R$MnO$_3$ compounds with this type of cycloidal magnetization shows absorption peaks at frequencies too small to be connected with pure phonon modes ($\sim \text{THz}$) [@Pimenov06; @Sushkov07; @Takahashi08; @Kida08]. In TbMnO$_3$, in particular, these peaks have been correlated with the low-lying excitations of the cycloidal structure observed by inelastic neutron scattering experiments [@Senff07]. They are therefore interpreted as due to the electromagnon excitations naturally expected from the coupling (see, e.g., [@Katsura07]). The electromagnon response to an external ac electric field is computed in [@Katsura07] as the (fluctuation) contribution to the electric permittivity due to the cycloidal excitations, and these results are further used to derive certain selection rules for the above optical experiments (see e.g. [@Takahashi08]). One has to realize, however, that this is not the whole story.
Electromagnons actually react to both electric and magnetic components of the external radiation due to their hybrid character, so their final response can be more complex \[Eqs. and , in general, do not reduce to an effective permittivity\]. Let us illustrate this point by computing the reflection coefficient for a vacuum-cycloidal magnet interface. For the sake of concreteness we restrict ourselves to the case of normal incidence and linear polarization along the principal axes of the magnet \[which are assumed to be the axes of the cycloidal structure in the following\]. The result still depends on the orientation of the incident field with respect to the cycloidal structure. If the wavevector of the cycloidal structure $\mathbf Q$ is parallel to the interface the process is insensitive to the dynamical ME effect \[$\hat \alpha$ and $\hat \beta$ do not enter the reflection coefficient, which is given by the standard Fresnel formula (see e.g. [@Landau-Lifshitz_ECM])\]. The same happens if $\mathbf Q$ is perpendicular to the interface and the electric field is along the polar axis of the cycloidal structure. However, if the electric field is perpendicular to the polar axis (i.e., the incident fields are $\mathbf E^i \parallel \hat {\mathbf x}$ and $\mathbf H^i \parallel \hat {\mathbf z}$), the ME effect comes into play. In this case, the dispersion law for the light propagating through the magnet is $$\begin{aligned} ck = \pm {\sqrt{\Big(\varepsilon - {\alpha \beta \over \mu}\Big)\mu} }\, \omega, \label{}\end{aligned}$$ and the reflection coefficient is found to be $$\begin{aligned} r = {1 - \sqrt{\Big(\varepsilon - {\alpha \beta \over \mu}\Big){1 \over \mu}} \over 1 + \sqrt{\Big(\varepsilon - {\alpha \beta \over \mu}\Big){1 \over \mu}}} \label{}\end{aligned}$$ (hereafter we drop subindices since we are dealing with the only non-zero components of the ME tensors).
In these expressions we see that the dynamic ME effect results in a new (effective) permittivity $\varepsilon_\text{eff} = \varepsilon - {\alpha \beta \over \mu}$ that now has poles at the magnon frequencies because of $\alpha \sim \chi ^{m,\mathbf Q}$ \[see Eq. \], i.e., this effective permittivity has electromagnon features. This is in tune with [@Katsura07] and the general interpretation of the experimental data (see e.g. [@Kida08]). For other orientations, however, the actual situation turns out to be a bit more subtle. If, for example, the plane of the cycloidal is parallel to the interface and the electric field is directed along the polar axis ($\mathbf E^i \parallel \hat {\mathbf z}$ and $\mathbf H^i \parallel \hat {\mathbf y}$), the ME coupling effectively results in a modification of the magnetic permeability (not the electric permittivity as before). This is not captured in [@Katsura07] because only the influence of the electric field is taken into account. Experimentally, however, no electromagnon feature seems to be observed for this orientation [@Takahashi08]. Furthermore, if the electric field is perpendicular to the plane of the cycloidal and this plane is perpendicular to the interface ($\mathbf E^i \parallel \hat {\mathbf x}$ and $\mathbf H^i \parallel \hat {\mathbf y}$), the dispersion law is obtained from the equation: $$\begin{aligned} \Big({ck\over \omega} - \alpha\Big)\Big({ck\over \omega} - \beta\Big) = \varepsilon\mu. \label{me-dispersion}\end{aligned}$$ In this case the ME effect plays a genuine role, not reducible to a mere modification of the electric permittivity (or magnetic permeability) as before. The waves associated with the two solutions of , for example, have different phase velocities. This possibility of removing the degeneracy between forward and backward waves has long been known in genuine magnetoelectrics [@O'Dell; @Arima08].
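The lifting of the forward/backward degeneracy implied by this dispersion relation can be made concrete by solving it for $n = ck/\omega$ with toy Lorentzian models of $\alpha(\omega)$ and $\beta(\omega)$ resonating at a common electromagnon frequency. All parameter values below are illustrative choices, not material constants.

```python
import numpy as np

# Solve (n - alpha)(n - beta) = eps * mu for n = ck/omega, with alpha and
# beta modeled as Lorentzian resonances at the electromagnon frequency w0.
eps, mu = 10.0, 1.0
w0, damping = 1.0, 0.05               # resonance frequency and damping
A_alpha, A_beta = 0.3, 0.1            # strengths (alpha != beta in general)

def lorentzian(w, A):
    return A * w0**2 / (w0**2 - w**2 - 1j * damping * w)

w = 0.8                               # probe frequency below the resonance
a, b = lorentzian(w, A_alpha), lorentzian(w, A_beta)

# Quadratic in n: n**2 - (a + b) n + (a b - eps mu) = 0
disc = np.sqrt((a - b)**2 / 4 + eps * mu)
n_fwd = (a + b) / 2 + disc            # forward-propagating solution
n_bwd = (a + b) / 2 - disc            # backward-propagating solution

print(n_fwd, n_bwd)
# |n_fwd| != |n_bwd|: the two counterpropagating waves acquire different
# phase velocities, the nonreciprocity being controlled by alpha + beta.
```

Sweeping $w$ through $w_0$ shows the nonreciprocal splitting resonantly enhanced near the electromagnon frequency, which is the dynamical amplification discussed in the text.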
It is worth mentioning that the field transmitted into the magnet can acquire a longitudinal component due to the ME effect. In the case $\mathbf E^i \parallel \hat {\mathbf x}$ and $\mathbf H^i \parallel \hat {\mathbf z}$, for example, the transmitted field is such that $$\begin{aligned} {H_y^{t} \over H_z^{t}} = -{\beta \over \big(\varepsilon - {\alpha \beta\over \mu}\big){1\over \mu}}.\end{aligned}$$ This possibility is also known for ordinary magnetoelectrics. Probing this longitudinal component experimentally can be somewhat difficult, but there is a related aspect of the ME effect whose experimental verification is, at least conceptually, much easier. It is the possibility of having a ME rotation of the reflected light. This possibility is quite obvious for the helical structure, taking into account that its ME response is analogous to that of Cr$_2$O$_3$ as we have seen. A similar rotation (see, e.g., [@Krichevtsov93]) is therefore expected, with the particularity that in helical magnets like CaFeO$_3$ it can be significantly enhanced due to the resonant behavior of the ME response. Let us now explore yet another phenomenon that might benefit from these features. The ME effect has been pointed out as an interesting route to achieve a negative refractive index behavior [@Pendry04; @Tretyakov05], recently demonstrated experimentally [@Zhang09]. The key point in these experiments is the fabrication of metamaterials with chiral constituents. As in ordinary negative index metamaterials, the achievement of a negative index regime relies on the resonant response of the resulting system. In these cases, one basically deals with the resonances of the constituent particles [@Tretyakov05]. This imposes severe limits on the range of frequencies at which the corresponding negative index behavior can be achieved. In the case of spiral magnets, on the contrary, it is the collective behavior of the system that gives rise to the ME effect.
But the resulting resonant behavior \[see Eqs. and \] is basically the same as in chiral metamaterials [@Tretyakov05]. Consequently this type of magnetic structure may also result in a negative index behavior, now at the frequencies of the corresponding electromagnons ($\sim$ THz for natural compounds). It is worth mentioning that spatial dispersion effects [@Landau-Lifshitz_ECM; @Agranovich] can also be dynamically amplified in spiral magnets. Generally spatial dispersion produces minute effects in optical experiments and, in practice, the response of the system is well described by the limiting $q\to 0$ behavior of the electric and magnetic susceptibilities $\hat \chi^{e(m)}$ [@Landau-Lifshitz_ECM; @Agranovich]. Accordingly, in the computation of the ME response tensors $\hat \alpha $ and $\hat \beta$ we have neglected terms $\mathcal O (q)$ coming from the inhomogeneous ME coupling \[see Eq. \]. Close to the electromagnon resonances, however, this neglect might be unjustified since these terms are dynamically enhanced by the same resonant mechanism that operates for $\hat \alpha (q =0)$ and $\hat \beta (q =0)$. The inhomogeneous ME coupling then has to be considered to its full extent [@note], and spatial dispersion effects may compete with the dynamic $q=0$ ME response in a similar way as they do, for example, with the static ME effect in Cr$_2$O$_3$ [@Krichevtsov93]. In summary, we have shown that the optical response of spiral magnets exhibits electromagnon features encoded in the form of a resonant magnetoelectric response. Spiral ordering does not have to be accompanied by multiferroicity (and/or a static magnetoelectric effect) to have these features. We have discussed the role of this dynamical response in optical experiments on multiferroics, showing that the observed electromagnon features cannot always be reduced to an effective electric permittivity.
We have also argued that electromagnon resonances in spiral magnets can amplify spatial dispersion and nonreciprocal effects. These resonances may also make it possible to achieve a negative refractive index behavior. I acknowledge P. Bruno, A. Levanyuk and especially E. Kats for very fruitful discussions. T. Kimura [*et al.*]{}, Nature (London) [**426**]{}, 55 (2003). Y. Yamasaki [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 147204 (2007). A. Pimenov [*et al.*]{}, Nature Phys. [**2**]{}, 97 (2006). T.H. O’Dell, [*The Electrodynamics of Magnetoelectric Media*]{} (North-Holland, Amsterdam, 1970). L.D. Landau and E.M. Lifshitz, [*Electrodynamics of Continuous Media*]{} (Pergamon Press, Oxford, 1984). T. Arima, J. Phys.: Condens. Matter [**20**]{}, 434211 (2008). B.B. Krichevtsov [*et al.*]{}, J. Phys.: Condens. Matter [**5**]{}, 8233 (1993); I. Dzyaloshinskii and E.V. Papamichail, Phys. Rev. Lett. [**75**]{}, 3004 (1995). A. Pimenov [*et al.*]{}, Phys. Rev. B [**74**]{}, 100403(R) (2006); A.B. Sushkov [*et al.*]{}, , 027202 (2007); R. Valdes-Aguilar [*et al.*]{}, Phys. Rev. B [**76**]{}, 060404(R) (2007); A. Pimenov [*et al.*]{}, Phys. Rev. B [**77**]{}, 014438 (2008). A.B. Sushkov [*et al.*]{}, J. Phys.: Condens. Matter [**20**]{}, 434210 (2008); R. Valdes-Aguilar [*et al.*]{}, , 047203 (2009); A. Pimenov [*et al.*]{}, Phys. Rev. Lett. [**102**]{}, 107203 (2009). Y. Takahashi [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 187201 (2008). N. Kida [*et al.*]{}, Phys. Rev. B [**78**]{}, 104414 (2008); A. Pimenov [*et al.*]{}, J. Phys.: Condens. Matter [**20**]{}, 434209 (2008). V.G. Bar’yakhtar, V.A. L’vov and D.A. Yablonskii, Pis’ma Zh. Eksp. Teor. Fiz. [**37**]{}, 565 (1983) \[JETP Lett. [**37**]{}, 673 (1983)\]. A. Cano and E.I. Kats, Phys. Rev. B [**78**]{}, 012104 (2008). D.L. Mills and I.E. Dzyaloshinskii, Phys. Rev. B [**78**]{}, 184422 (2008). 
The most general form of the (isotropic) inhomogeneous ME coupling is $- f_1 {\mathbf P} \cdot {\mathbf M}(\nabla \cdot {\mathbf M}) - f_2 {\mathbf P} \cdot [{\mathbf M}\times (\nabla \times {\mathbf M})]$. For the dynamical effect we consider, only its antisymmetric part is relevant in the long-wavelength limit \[$f= f_1+f_2$ in Eq. \]. This is not the case, however, close to wavevectors of the magnetic modulation and/or if we are interested in spatial dispersion effects, to which the two terms above contribute separately [@Cano08]. H. Katsura, N. Nagaosa and A.V. Balatsky, Phys. Rev. Lett. [**95**]{}, 057205 (2005); I.A. Sergienko and E. Dagotto, Phys. Rev. B [**73**]{}, 094434 (2006); M. Mostovoy, Phys. Rev. Lett. [**96**]{}, 067601 (2006). M. Kenzelmann and A.B. Harris, Phys. Rev. Lett. [**100**]{}, 089701 (2008); M. Mostovoy, Phys. Rev. Lett. [**100**]{}, 089702 (2008). D. Belitz, T.R. Kirkpatrick and A. Rosch, Phys. Rev. B [**73**]{}, 054431 (2006); S. Tewari [*et al.*]{}, Phys. Rev. B [**78**]{}, 144427 (2008). Yu.I. Izyumov, Usp. Fiz. Nauk [**144**]{}, 439 (1984) \[Sov. Phys. Usp. [**27**]{}, 845 (1984)\]. S. Kawasaki [*et al.*]{}, J. Phys. Soc. Jpn. [**67**]{}, 1529 (1998). P.M. Woodward [*et al.*]{}, Phys. Rev. B [**62**]{}, 844 (2000). D. Senff [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 137206 (2007). H. Katsura, A.V. Balatsky and N. Nagaosa, Phys. Rev. Lett. [**98**]{}, 027203 (2007). R.M. Hornreich and S. Shtrikman, Phys. Rev. [**171**]{}, 1065 (1968). The frequency window in which the rotation is expected to vary with the frequency is determined by the electromagnon damping. The variation can be $\pi /4$ for very sharp electromagnon peaks. For a ME response like that observed in spiral multiferroics rotations of $\sim \pi/40$ can be expected within a frequency window of $\sim 10 \, \text{cm}^{-1} $ about the corresponding electromagnon peaks. J.B. Pendry, Science [**306**]{}, 1353 (2004); C. Monzon and D.W. Forester, Phys. Rev. Lett. 
[**95**]{}, 123904 (2005). S. Tretyakov, A. Sihvola and A. Jylha, Photonics Nanostruct. Fundam. Appl. [**3**]{}, 107 (2005). S. Zhang [*et al.*]{}, Phys. Rev. Lett. [**102**]{}, 023901 (2009); E. Plum [*et al.*]{}, Phys. Rev. B [**79**]{}, 035407 (2009). V.M. Agranovich and V.L. Ginzburg, [*Crystal Optics with Spatial Dispersion, and Excitons*]{} (Springer-Verlag, Berlin, 1984).
--- abstract: 'Cell polarization plays a central role in the development of complex organisms. It has recently been shown that cell polarization may follow from the proximity to a phase separation instability in a bistable network of chemical reactions. An example which has been thoroughly studied is the formation of signaling domains during eukaryotic chemotaxis. In this case, the process of domain growth may be described by the use of a constrained time-dependent Landau-Ginzburg equation, admitting scale-invariant solutions [[[*à la*]{}]{}]{} Lifshitz and Slyozov. The constraint results here from a mechanism of fast cycling of molecules between a cytosolic, inactive state and a membrane-bound, active state, which dynamically tunes the chemical potential for membrane binding to a value corresponding to the coexistence of different phases on the cell membrane. We provide here a universal description of this process both in the presence and absence of a gradient in the external activation field. Universal power laws are derived for the time needed for the cell to polarize in a chemotactic gradient, and for the value of the smallest detectable gradient. We also describe a concrete realization of our scheme based on the analysis of available biochemical and biophysical data.' address: - '$^1$ Politecnico di Torino and CNISM, Corso Duca degli Abruzzi 24, Torino, Italy' - '$^2$ INFN, via Pietro Giuria 1, 10125 Torino, Italy' - '$^3$ Kavli Institute for Theoretical Physics, Santa Barbara, CA 93106-4030, USA ' - '$^4$ Landau Institute for Theoretical Physics, Kosygina 2, 119334 Moscow, Russia' author: - 'A. Gamba$^{1,2,3}$, I. Kolokolov$^4$, V. Lebedev$^4$, and G. Ortenzi$^1$' title: Universal features of cell polarization processes --- Introduction ============ Biophysical processes of cell polarization have attracted considerable interest in recent times. 
It has been observed that intriguing similarities exist in the polarization of such diverse biological systems as cells of the immune system, social amoebas, budding yeast, and amphibian eggs [[@WL03]]{}. This suggests that cell polarization may be a highly universal phenomenon. One of the best studied examples of the role of biochemical cell membrane polarization in eukaryotic cells is chemotaxis. Chemotaxis is the ability of cells to sense spatial gradients of attractant factors, governing the development of all higher organisms. Eukaryotic cells are endowed with an extremely sensitive chemical compass allowing them to orient toward sources of soluble chemical signals. This mechanism is the result of billions of years of evolution, and multicellular organisms would not exist without it. Slight gradients in the external signals produced by the environment induce the formation of oriented domains of signaling molecules on the cell membrane surface. Afterwards, these signaling domains induce differentiated polymerization of the cell cytoskeleton in their proximity, leading to the formation of a growing head and a retracting tail, and eventually to directed motion towards the attractant source. It has been suggested in the biological literature that domains of signaling molecules are self-organized structures [[@PRG+04]]{}. In this paper we confirm that this expectation may be substantiated by the use of statistical mechanical methods, leading to the prediction that universal features typical of coarsening processes in phase-ordering systems should be observable in polarizing cells. We also describe here a concrete realization of our scheme in the process of eukaryotic chemotaxis, based on the analysis of available biochemical and biophysical data. Part of the results presented here have been briefly reported in a previous letter [[@GKL+07]]{}. 
Cell polarity ============= Stochastic reaction-diffusion systems are a natural paradigm for describing in physical terms the biochemical processes taking place in the living cell, since the cytosol and cell membrane are inherently diffusive environments[[^1]]{}. Although active transport processes also take place in the cell, they mainly involve vesicles, organelles and large multiprotein complexes, while smaller cell constituents move diffusively. Thermal agitation and the intrinsic stochasticity in the advancement of chemical reactions provide natural sources of noise. Most reactions in the cellular environment would be very slow if they were not favored by the action of catalysts. Small numbers of enzymatic molecules ($10^3$–$10^5$ per cell) control the speed of chemical reactions involving much larger numbers of substrate molecules ($10^5$–$10^6$ per cell). Often, the substrate concentration in turn controls the catalyst activity, so that the response of the system becomes nonlinear. Most biochemically relevant reactions involve enzyme-substrate couples and are part of networks of interconnected autocatalytic reactions. Nonlinearities in principle allow the system to realize several stable biochemical phases, characterized by different concentrations of chemical factors [[@Kam07]]{}. Transitions between different phases in reaction-diffusion systems have been observed in purely physical settings, such as the adsorption and reaction of gases on catalytic surfaces [[@SHV+01; @WHS+05]]{}. Recently, it has been shown that a similar process of nonequilibrium phase separation may be at the heart of directional sensing in higher eukaryotes [[@GCT+05; @GKL+07]]{}. 
In eukaryotic directional sensing cells exposed to shallow gradients of external attractant factors polarize by accumulating the phospholipidic signaling molecule [[[*phosphatidylinositol trisphosphate*]{}]{}]{} (PIP3) and the PIP3-producing enzyme [[[*phosphatidylinositol 3-kinase*]{}]{}]{} (PI3K) on the cell membrane side exposed to the highest attractant concentrations, while [[[*phosphatidylinositol bisphosphate*]{}]{}]{} (PIP2) and the PIP2-producing enzyme [[[*phosphatase and tensin homolog*]{}]{}]{} (PTEN) accumulate on the complementary side [[@PD99]]{} (see \[app:lattice\] for a more abstract description of the relative roles of these signaling molecules.) Accurate quantitative experiments [[@SNB+06; @SMO06]]{} performed by exposing [[[*Dictyostelium*]{}]{}]{} cells to controlled attractant gradients showed that uniform concentrations of external attractant factor induce a predominant, uniform concentration of PIP3 and PI3K on the cell membrane, and do not immediately result in cell polarization and motion. However, slight gradients in the distribution of the attractant factor induce the formation of two complementary domains, one rich in PIP3 and PI3K, and one rich in PIP2 and PTEN, in times of the order of a few minutes. This early breaking of the spherical symmetry of the cell membrane induces cell polarization and motion [[@PD99]]{}. Uniformly stimulated cells observed over longer timescales (of the order of 1 hour) are seen to polarize stochastically and move in random directions. Numerical simulations of a stochastic reaction-diffusion model of the process suggest that both the early, large amplification of slight attractant gradients and the separate phenomenon of late, random polarization under uniform stimulation are explained by the proximity of the system to a spontaneous phase separation driven by non-linear autocatalytic interactions [[@GCT+05]]{}. 
In this framework, cell polarization is the final result of a nucleation process by which domains rich in PIP3 and PI3K are created in a sea rich in PIP2 and PTEN, or vice versa, depending on initial conditions and activation patterns. The polarization process is accomplished when pure PIP2 and PIP3 rich domains grow to sizes comparable to the size of the cell. Gradient activation patterns strongly influence the kinetics of domain growth and coalescence, taking advantage of the underlying phase-separation instability. In this way, the peculiar reaction-diffusion dynamics taking place on the surface of the cell membrane works as a powerful amplifier of slight anisotropies in the distribution of the external chemical signal. From this statistical mechanical point of view, random and gradient-driven polarization appear as two faces of the same coin, in good agreement with some of the existing biological intuition [[@WL03]]{}. To better understand the process of spontaneous and gradient-driven cell polarization from a physical point of view it is convenient to describe the corresponding signaling network in abstract terms, [[[*i.e.*]{}]{}]{} forgetting about the particular nature of the molecules involved and considering only the general structure of the network. This approach has the potential to provide a unified description of polarization phenomena in distant biological systems. In our abstract signaling network (Fig. \[fig:one\]) a system of receptors transduces an external distribution of chemical attractant into an internal distribution of activated enzymes $h$, which catalyze the switch of a signaling molecule between two states, that we denote here as $\varphi^-$ and $\varphi^+$. A counteracting enzyme $u$ transforms the $\varphi^+$ state back into $\varphi^-$. The molecule $\varphi^-$ in turn activates $u$, thus realizing a positive feedback loop. 
The signaling molecules $\varphi^+$, $\varphi^-$ are permanently bound to the cell surface $S$ and perform diffusive motions on it, while the $u$ enzymes are free to shuttle between the cytosolic reservoir and the membrane. In a more complete description we should consider that also the $h$ enzymes are shuttling from the cytosol to the membrane [[@GCT+05; @GKL+07]]{}. Here, however, for simplicity we represent with $h$ only the receptor-bound fraction, which we identify with the external activation field. The diffusivity of $u$ enzymes in the cytosol is much higher than the diffusivity of $\varphi^+, \varphi^-$ molecules on the cell membrane, therefore membrane-bound $u$ enzymes may be assumed to be in approximate equilibrium with the $\varphi^+, \varphi^-$ concentration field. This fact leaves only the $\varphi^+, \varphi^-$ surface molecule concentrations as the relevant dynamic variables. Moreover, since the $\varphi^+, \varphi^-$ molecules may only be converted into each other, we are left with only one relevant degree of freedom, their difference $\varphi \equiv \varphi^+ - \varphi^-$. The model of Fig. \[fig:one\] was initially introduced to describe chemotactic polarization in higher eukaryotes [[@GCT+05; @GKL+07]]{}. In that case, we identify $\varphi^-$ and $\varphi^+$ with PIP2 and PIP3, $u$ with activated PTEN, and $h$ with activated PI3K. Recently, it has been proposed that polarization of budding yeast (a lower eukaryote) may be the result of an amplifying feedback loop similar to the one described in Fig. \[fig:one\] [[@AAW+08]]{}. In our language, $\varphi^-$ and $\varphi^+$ there represent the activated and unactivated states of the Cdc42 [[[*small GTPase*]{}]{}]{} (see \[app:lattice\]), while $u$ would be identified with the activating factor Cdc24. The model of Ref. [[@AAW+08]]{} lacks a counteracting enzyme playing the role of $h$ in the scheme of Fig. \[fig:one\], and is therefore not bistable. 
For this reason, it can reproduce only stochastic, intermittent polarization, as is observed at the border of the bistability region in the case of chemotactic polarization [[@GCT+05; @PRG+04]]{}. However, in a recent work [[@TGH+07]]{} a counteracting Cdc42 deactivating factor that could play the role of $h$ has been described. This suggests that polarization of budding yeast cells may be driven by a bistable potential allowing the realization of stable polarization, similarly to the case of chemotactic polarization of higher eukaryotes. \[sec:macroscopic\]Macroscopic description of cell polarization =============================================================== Cell polarization is a macroscopic effect, emerging from the stochastic dynamics of a network of chemical reactions taking place during the random encounters of specific signaling molecules, which perform diffusive motions and shuttle between the cell cytosol and membrane [[@RSB+03; @LH96; @WL03]]{}. A large amount of information has been collected in recent years about the biochemical aspects of cell polarization in higher [[@RSB+03; @LH96; @WL03; @PD99; @Par04; @HKK07; @PRG+04]]{} and lower [[@WAW+03; @WWS+04; @MWL+07]]{} eukaryotes. However, the available data cannot yet be considered complete or satisfactorily quantitative. This kind of situation is typical of present efforts to derive macroscopic aspects of cell behavior from noisy and yet poorly quantitative data about the relevant microscopic interactions. It is therefore extremely important that a sensible macroscopic description of cell polarization can be given, starting only from the knowledge of a few robust properties of the biophysical system. In this Section we develop such a description. In the next Section we show how known examples of cell polarization fit in our general scheme. 
Let us start here by assuming that we have knowledge only about the following robust properties of the cell polarization process: [[[*Single component order parameter:*]{}]{}]{} The state of the system may be effectively described in terms of the configurations of a single-component concentration field $\varphi$ describing the distribution of a set of signaling molecules on the cell membrane $S$. [[[*Bistability:*]{}]{}]{} The underlying chemical reaction network allows the realization of distinct, locally stable chemical phases. [[[*Self-tuning:*]{}]{}]{} A global feedback mechanism controls the metastability degree $\psi$ of the system and drives it towards a state of phase coexistence. [[[*Non-conserved field:*]{}]{}]{} There are no local constraints on the values assumed by the field $\varphi$. The present set of properties stems from the abstraction of known properties of eukaryotic polarization (see also the next Section). In particular, Property 4. is the consequence of the fast diffusion of $u$ enzymes across the cytosolic volume [[@GKL+07]]{}. (It is worth mentioning here that our framework would still hold, although with a few differences, if Property 4. were substituted by a local conservation condition.) [[[*Property 1.*]{}]{}]{} implies that the evolution of the state of the system can be described by a single stochastic reaction-diffusion equation. Studies of non-equilibrium statistical mechanics have shown that a few classes of nonlinear stochastic equations may emerge from the coarse-graining of microscopic dissipative dynamical systems, depending on general properties such as the number of field components and the presence, or absence, of local conservation laws [[@HH77; @Bra94]]{}.
[[[*Property 4.*]{}]{}]{} leads us to select the [[[*time-dependent Landau-Ginzburg model*]{}]{}]{} $$\begin{aligned} \partial_t \varphi (\mathbf{r}, t) & = & - \frac{\delta \mathcal{F}_{\psi, h} [\varphi]}{\delta \varphi (\mathbf{r}, t)} + \Xi (\mathbf{r}, t) \label{langinz}\end{aligned}$$ (or [[[*model A*]{}]{}]{} in the classification of Ref. [[@HH77]]{}) where $$\begin{aligned} \mathcal{F}_{\psi, h} [\varphi] & = & \int_S \left[ \frac{D}{2} \left| \nabla \varphi \right|^2 + V_{\psi, h} (\varphi) \right] {}{\mathrm{d}}\mathbf{r} \label{eq:freeen}\end{aligned}$$ is an effective free energy functional, $h$ is an external activation field, $D$ is a diffusion constant, $V_{\psi, h}$ is an effective potential, and $\Xi$ is a noise term taking into account the effect of thermal agitation and chemical reaction noise. [[[*Property 2.*]{}]{}]{} implies that the effective potential $V_{\psi, h}$ has two potential wells, corresponding to a couple of distinct, stable chemical phases $\varphi_+$ and $\varphi_-$[[^2]]{}. The kinetic advantage of transforming a region of $\varphi_+$ phase into a region of $\varphi_-$ phase is measured by the metastability degree $$\begin{aligned} \psi & = & V_{\psi, h} (\varphi_+) - V_{\psi, h} (\varphi_-)\end{aligned}$$ The polarized state corresponds to the stable coexistence of the $\varphi_+$ and $\varphi_-$ phases in complementary regions of the cell membrane. [[[*Property 3.*]{}]{}]{} implies that $\psi$ is an integral functional of the field configuration, going to zero for large times under stationary conditions. 
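The dynamics defined by (\[langinz\]) can be illustrated with a minimal numerical sketch: a deterministic (noise-free) model A gradient flow in one dimension, with the generic double-well potential $V(\varphi) = (\varphi^2 - 1)^2/4$ standing in for $V_{\psi, h}$. The quartic potential, the grid, and all parameter values below are illustrative assumptions, not quantities from the text.

```python
import math

# Deterministic model A dynamics in one dimension (dx = 1, periodic boundary):
#   dphi/dt = D * laplacian(phi) - V'(phi),   V(phi) = (phi^2 - 1)^2 / 4.
# The quartic double well and all numbers are illustrative stand-ins for the
# effective potential V_{psi,h} of the text.
N, D, dt, steps = 128, 1.0, 0.05, 2000

def free_energy(phi):
    """Discretized F[phi] = sum_i [ D/2 (phi_{i+1}-phi_i)^2 + V(phi_i) ]."""
    F = 0.0
    for i in range(N):
        grad = phi[(i + 1) % N] - phi[i]
        F += 0.5 * D * grad * grad + 0.25 * (phi[i] ** 2 - 1.0) ** 2
    return F

phi = [0.1 * math.sin(2.0 * math.pi * i / N) for i in range(N)]
F0 = free_energy(phi)
for _ in range(steps):
    lap = [phi[(i + 1) % N] - 2.0 * phi[i] + phi[(i - 1) % N] for i in range(N)]
    phi = [phi[i] + dt * (D * lap[i] - (phi[i] ** 3 - phi[i])) for i in range(N)]
F1 = free_energy(phi)
# The field settles into the two wells phi = +1 and phi = -1 in complementary
# domains, and the effective free energy decreases along the flow.
```

The run ends with the two stable phases coexisting in complementary domains separated by thin walls, which is the qualitative picture invoked throughout the text.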
A reasonable analyticity assumption then leads to the following system of equations, describing the dynamics of cell polarization in the presence of a stationary external activation field: $$\begin{aligned} \partial_t \varphi (\mathbf{r}, t) & = & D \nabla^2 \varphi (\mathbf{r}, t) - \frac{\partial V_{\psi, h}}{\partial \varphi} \left[ \varphi (\mathbf{r}, t) \right] + \Xi (\mathbf{r}, t) \label{eq:evo}\\ \psi (t) & \propto & \int_S \varphi (\mathbf{r}, t) {\mathrm{d}}\mathbf{r}- \int_S \varphi (\mathbf{r}, \infty) {\mathrm{d}}\mathbf{r}, \hspace{2em} t \rightarrow \infty \label{eq:constr}\end{aligned}$$

Model free energy
=================

It is possible to derive a concrete realization of the scheme described in the previous Section in the case of the signaling network of Fig. \[fig:one\] by using the law of mass action, the quasistationary approximation for enzymatic kinetics, and the limit of fast cytosolic diffusion (see \[app:one\]). In this case, the state of the system can be described by the single concentration field $\varphi = \varphi^+ - \varphi^-$, thus giving [[[*Property 1*]{}]{}]{} of the previous Section. The $\varphi$ field is not constrained by a local conservation law because $\varphi^+$ molecules can be freely converted into $\varphi^-$ molecules and back at any point of the cell surface.
This fact corresponds to [[[*Property 4.*]{}]{}]{} The evolution of the $\varphi$ field is described by the equation $$\begin{aligned} \partial_t \varphi & = & D \nabla^2 \varphi - k_{\mathrm{cat}} K_\mathrm{ass} f \frac{c^2 - \varphi^2}{2 K + c + \varphi} + 2 k_\mathrm{cat} h \frac{c - \varphi}{2 K + c - \varphi} + \Xi\end{aligned}$$ where $f = u_{\mathrm{free}}$ is the volume concentration of free cytosolic $u$ enzymes (which is approximately uniform as a consequence of fast cytosolic diffusion), $h$ is a surface activation field, $K_{\mathrm{ass}}$ is the association constant of $u$ enzymes to $\varphi^-$ signaling molecules, $k_{\mathrm{cat}}$ is a catalytic rate, and $K$ is a saturation (Michaelis-Menten) constant. The corresponding effective potential has the form $V_{f, h} (\varphi) = fV_1 (\varphi) + hV_2 (\varphi)$ (see \[app:one\]). The metastability degree $\psi$ is therefore a function of $h$ and $f$. If $h = h (\mathbf{r}, t)$ is not uniform ([[[*e.g.*]{}]{}]{} if the cell is exposed to a chemical activation gradient) $\psi$ takes on different values at different points of the membrane surface. We consider however for the moment the simplest case, where the activation field is uniform in space and constant in time. A simple analysis shows (\[app:one\]) that there are regions of parameter values such that $V_{\psi, h}$ is bistable, with two potential wells $\varphi_+$ and $\varphi_-$ corresponding to stable phases respectively rich in the $\varphi^+$ and $\varphi^-$ signaling molecules. Thus [[[*Property 2*]{}]{}]{} is verified. In the present problem, the volume concentration $f$ of free enzymes varies in time (but not in space). More information about its values can be obtained in the limit (realized for small membrane diffusivities and large times) when the interface between the $\varphi_+$ and the $\varphi_-$ phase is much smaller than the typical domain size, allowing us to use the so-called [[[*thin wall approximation*]{}]{}]{}.
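The bistability can be checked directly by dropping the diffusion and noise terms and relaxing the resulting uniform equation from different initial conditions. The parameter values below are illustrative assumptions chosen inside the bistable region, not rates taken from the text.

```python
# Uniform (well-mixed, noise-free) limit of the evolution equation for
# phi = phi+ - phi-; parameter values are illustrative assumptions chosen
# inside the bistable region, not measured rates.
kcat, Kass_f, h, c, K = 1.0, 1.0, 0.4, 1.0, 0.5   # Kass_f stands for K_ass * f

def reaction(phi):
    return (-kcat * Kass_f * (c * c - phi * phi) / (2.0 * K + c + phi)
            + 2.0 * kcat * h * (c - phi) / (2.0 * K + c - phi))

def relax(phi0, t_end=200.0, dt=0.01):
    """Forward-Euler relaxation to the nearest stable chemical phase."""
    phi = phi0
    for _ in range(int(t_end / dt)):
        phi += dt * reaction(phi)
    return phi

low = relax(0.0)    # falls into the phi- rich well
high = relax(0.9)   # falls into the phi+ rich well
# Two distinct stable fixed points separated by a barrier: bistability.
```

With these numbers the two locally stable phases sit at $\varphi \approx -0.54$ and $\varphi = c = 1$, with an unstable barrier in between, in line with the two potential wells $\varphi_-$ and $\varphi_+$ of the text.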
Then, the value of $f$ is simply linked (see \[app:one\]) to the area covered by the $\varphi_-$ phase: $$\begin{aligned} f (t) - f (\infty) & \propto & \int_S \varphi (\mathbf{r}, t) {\mathrm{d}}\mathbf{r}- {}\int_S \varphi (\mathbf{r}, \infty) {\mathrm{d}}\mathbf{r} \propto \psi (t)\end{aligned}$$ showing that [[[*the metastability degree is proportional to the excess fraction of free cytosolic $u$ enzymes*]{}]{}]{} with respect to their value at equilibrium. The presence of this global feedback mechanism corresponds to [[[*Property 3.*]{}]{}]{} The present situation is closely reminiscent of the decay of the uniform, metastable state of a [[[*supersaturated solution*]{}]{}]{} with the formation of precipitate grains [[@LP81]]{}. In that case, the metastability degree is proportional to the excess solute concentration with respect to its equilibrium value. The main difference from the present case is that in precipitation, the density field $\varphi$ is locally constrained by the law of particle conservation [[@Bra94]]{}, and its evolution is described by Model B of Ref. [[@HH77]]{} rather than by Model A.

\[sec:kin\]Phase separation kinetics
====================================

In polarization experiments cells are exposed to uniform or gradient distributions of attractant factors and polarize either spontaneously, or in the direction of the attractant gradients [[@WL03]]{}. The properties of the model free energy described in the previous Section and numerical simulations of a model system [[@GCT+05]]{} suggest that the introduction of an external attractant distribution moves the system into a region of bistability, where the uniform phase realized at initial time becomes metastable and germs of a new phase are nucleated. Depending on the way the system is prepared at initial time, the metastable phase can be either a $\varphi^+$ rich or a $\varphi^-$ rich phase.
The process of decay of a metastable state in physical systems described by systems of equations similar to (\[eq:evo\], \[eq:constr\]) has been extensively studied in the framework of the theory of first-order phase transitions [[@LP81; @Bra94]]{}. The process passes through successive stages of nucleation, coarsening, and coalescence (Fig. \[fig:three\]). In the first stage, approximately circular germs of the new, more stable phase are produced in the sea of the metastable phase by random fluctuations, or by the presence of nucleation centers. In the second stage, a process of coarsening is observed, where larger domains of the new phase grow at the expense of smaller ones, the average size of domains grows, and the average number of domains decreases. In a finite system, the process is concluded when a state of phase coexistence is reached. In this final state, the two phases are in equilibrium and are polarized in two large complementary domains. For our purposes, a detailed knowledge of the initial, nucleation stage[[^3]]{} is not necessary, as long as its characteristic time $t_0$ is so fast that a large number of germs of the new phase is nucleated all over the cell surface, well before the coarsening stage starts[[^4]]{}. To understand the subsequent coarsening stage we have to focus on the laws by which the domains of the new phase either grow or shrink. We consider here the case when the new phase is a minority phase, so that we can restrict our consideration to approximately circular domains, which dominate because they minimize the linear tension between the two phases. For simplicity, we shall also restrict ourselves to domains which are small enough that membrane curvature may be neglected. An approximate equation for the growth of a circular domain of size $r$ may be derived from (\[eq:evo\]) in the thin wall approximation. Inserting the approximate propagating solution $\varphi (\mathbf{R}, t) = \phi (R - r (t))$ (Fig.
\[fig:quattro\]a) for the radial domain profile in (\[eq:evo\]) and integrating over $S$ we get $$\begin{aligned} \frac{\partial \mathcal{F}_{\psi, h} [\phi]}{\partial r} & = & - \frac{\partial Q}{\partial \dot{r}} + \xi' \label{eq:diss}\end{aligned}$$ where $$\begin{aligned} Q & = & \frac{\dot{r}^2}{2} \int_S (\phi')^2 {\mathrm{d}}\mathbf{R}\end{aligned}$$ is a dissipation function [[@LL80]]{} and $\xi'$ is a noise term. For a circular domain of radius $r$, $Q \simeq \gamma \pi r \dot{r}^2 $, where $$\begin{aligned} \gamma & = & \int_0^{\infty} (\phi')^2 {\mathrm{d}}R = \int_{\varphi_-}^{\varphi_+} \sqrt{2 V_{\psi, h} (\phi) / D} {\mathrm{d}}\phi {}\end{aligned}$$ is a kinetic coefficient [[@Bra94]]{}. On the other hand, the effective free energy for a circular domain of radius $r$ is [[@Bra94]]{}: $$\begin{aligned} \mathcal{F}_{\psi, h} & = & 2 \pi \sigma r - \pi r^2 \psi \label{eq:freeencirc}\end{aligned}$$ where $\sigma = D \gamma$ is a linear tension. From (\[eq:diss\], \[eq:freeencirc\]) we get the following approximate equation for the growth of a circular domain of size $r$: $$\begin{aligned} \gamma \dot{r} & = & \psi - \frac{\sigma}{r} + \xi \label{eq:domain}\end{aligned}$$ where $\xi$ is a noise term. Eq. (\[eq:domain\]) shows that domains smaller than the critical radius $$\begin{aligned} r_c & = & \frac{\sigma}{\psi}\end{aligned}$$ are mainly dissolved by diffusion, while germs with $r > r_c$ mainly survive and grow because of the overall gain in free energy (Fig. \[fig:quattro\]b). During the nucleation stage the noise term produces a population of germs of the new phase of size close to $$\begin{aligned} r_0 & \sim & r_c \sim \delta\end{aligned}$$ in a characteristic time $t_0$. For domains with $r > r_c$ the noise term in (\[eq:domain\]) may be neglected and domain growth is an almost deterministic process.
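The competition in (\[eq:domain\]) between the drive $\psi$ and the line-tension term $\sigma / r$ is easy to check by integrating the deterministic part of the equation for one sub-critical and one super-critical initial radius; $\gamma$, $\sigma$ and $\psi$ below are illustrative values, not quantities from the text.

```python
# Deterministic part of the domain growth equation (eq:domain):
#   gamma * dr/dt = psi - sigma / r.
# gamma, sigma, psi are illustrative values; the critical radius is
# r_c = sigma / psi = 2 here.
gamma, sigma, psi = 1.0, 1.0, 0.5
r_c = sigma / psi

def evolve(r0, t_end=10.0, dt=1e-3):
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * (psi - sigma / r) / gamma
        if r < 0.05:          # the domain has evaporated
            return 0.0
    return r

shrunk = evolve(1.0)   # r0 < r_c: dissolved by the line-tension term
grown = evolve(3.0)    # r0 > r_c: survives and keeps growing
```

The sub-critical domain evaporates in a time of order $\gamma r_0^2 / \sigma$, while the super-critical one grows at an asymptotic rate $\psi / \gamma$, which is the behavior sketched in Fig. \[fig:quattro\]b.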
It is interesting to estimate $r_0 \sim \delta$ in terms of observable parameters. The thickness $\delta$ can be estimated as $$\begin{aligned} \delta & \sim & \sqrt{D / b}\end{aligned}$$ where $b$ is the potential barrier separating the two phases [[@Bra94]]{}. The height of the potential barrier may in its turn be estimated dimensionally from (\[ilpotenziale\]) as $b \sim k_{\mathrm{cat}} hc$, giving $$\begin{aligned} r_0 & \sim & \delta \sim \sqrt{\frac{D}{k_{\mathrm{cat}}} \frac{c}{h}} \label{eq:stima}\end{aligned}$$ Using realistic parameter values ($D \sim 1 \mu m^2 / s$, $k_{\mathrm{cat}} \sim 1 s^{- 1}$, $c / h \sim 10$) we get $r_0 \sim 1 \mu m$.

\[sec:coarsening\]The coarsening stage
======================================

When domains of the new phase occupy an appreciable fraction of the membrane surface $S$ a coarsening stage sets in. Domain growth makes the degree of metastability $\psi$ decrease and renders further growth of the new phase more and more difficult. The critical radius $r_c$ grows with time, so that domains that earlier had size larger than $r_c$ become undercritical and shrink, and larger domains grow at the expense of smaller ones. In a large system $r_c$ soon becomes the main length scale in the problem, leading to the appearance of a scaling distribution of domains of size $r$. The population of coarsening domains of size $r$ can be described in terms of the size distribution function [ $n (r, t)$]{}, such that $n (r, t) \Delta r$ is the average number of domains with size between $r$ and $r + \Delta r$, and the total number of domains at time $t$ is given by $$\begin{aligned} N (t) & = & \int_0^{\infty} n (r, t) {\mathrm{d}}r\end{aligned}$$ The time evolution of $n (r, t)$ implied by (\[eq:domain\]) is described by a standard Fokker-Planck equation [[@Kam07]]{}.
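The order-of-magnitude estimate (\[eq:stima\]) is a one-line computation with the parameter values quoted in the text:

```python
import math

# Order-of-magnitude estimate (eq:stima) for the initial germ size,
#   r_0 ~ sqrt( (D / k_cat) * (c / h) ),
# using the parameter values quoted in the text.
D_memb = 1.0       # membrane diffusivity, um^2/s
k_cat = 1.0        # catalytic rate, 1/s
c_over_h = 10.0    # concentration ratio c / h
r0 = math.sqrt(D_memb / k_cat * c_over_h)   # in micrometers
# r0 = sqrt(10) ~ 3 um, i.e. of the order of 1 um, as stated in the text.
```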
If we restrict our consideration to supercritical domains we can neglect the diffusive part of the Fokker-Planck equation since for them the noise term $\xi$ is negligible. This means that the stochastic nature of the problem enters mainly in the formation of the initial distribution of germ sizes $n (r, t_0)$, while for $r > r_c$ the time evolution of $n (r, t)$ is dictated by the deterministic part of (\[eq:domain\]). Thus, we are left with the following kinetic equation: [ $$\begin{aligned} \gamma \frac{\partial n (r, t)}{\partial t} + \frac{\partial}{\partial r} \left[ \left( \psi (t) - \frac{\sigma}{r} \right) n (r, t) \right] & = & 0 \label{eq:kin}\end{aligned}$$]{} Eq. (\[eq:kin\]) contains the unknown function $\psi (t)$, and is therefore not closed. We obtain a closed system by complementing (\[eq:kin\]) with the asymptotic law [ $$\begin{aligned} \psi (t) & \propto & A_{\infty} - \int_0^{\infty} \pi r^2 n (r, t) {\mathrm{d}}r \label{eq:area}\end{aligned}$$]{} obtained from (\[eq:constr\]) in the thin wall approximation. Here $$\begin{aligned} A_{\infty} & = & \int_0^{\infty} \pi r^2 n (r, \infty) {\mathrm{d}}r\end{aligned}$$ is the area occupied by the new phase at equilibrium. For large times a scaling distribution of domain sizes can be found explicitly (\[app:scaling\] and Fig. \[fig:pdf\]): $$\begin{aligned} n (r, t) {\mathrm{d}}r & = & \frac{CA_{\infty}}{r_c^2} p (r / r_c) {\mathrm{d}}(r / r_c), \hspace{2em} \psi (t) = \frac{\sigma}{r_c} \nonumber\\ r_c & \equiv & r_c (t) = r_0 (t / t_0)^{1 / 2} \label{eq:loscaling}\end{aligned}$$ where $$\begin{aligned} p (\rho) & = & \frac{8 {\mathrm{e}}^2 \rho}{(2 - \rho)^4} \exp \left( - \frac{4}{2 - \rho} \right), \hspace{2em} t_0 = \frac{2 \gamma r_0^2}{\sigma}, \label{eq:laprob}\end{aligned}$$ where $r_0$ is the characteristic domain size at the beginning of the coarsening stage and $C \simeq 0.11$.
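Two properties of the scaling function (\[eq:laprob\]) can be verified numerically: $p$ is a normalized probability density on $0 \le \rho < 2$, and its mean is $\langle \rho \rangle = 1$, i.e. the average domain size coincides with the critical radius. The midpoint-rule integration below is a simple check, not part of the derivation.

```python
import math

# Scaling distribution of domain sizes (eq:laprob), with rho = r / r_c:
#   p(rho) = 8 e^2 rho / (2 - rho)^4 * exp(-4 / (2 - rho)),  0 <= rho < 2.
# Midpoint-rule check that p is normalized and that <rho> = 1.

def p(rho):
    if rho <= 0.0 or rho >= 2.0:
        return 0.0
    return (8.0 * math.e ** 2 * rho / (2.0 - rho) ** 4
            * math.exp(-4.0 / (2.0 - rho)))

M = 200000
drho = 2.0 / M
grid = [(i + 0.5) * drho for i in range(M)]
norm = sum(p(x) for x in grid) * drho          # integral of p(rho)
mean = sum(x * p(x) for x in grid) * drho      # integral of rho * p(rho)
# Both integrals equal 1: p is normalized and the mean size equals r_c.
```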
The total number of domains decreases in time due to the evaporation of small domains. Using the explicit solution (\[eq:loscaling\], \[eq:laprob\]), we easily find: $$\begin{aligned} N (t) = \int_0^{\infty} n (r, t) {\mathrm{d}}r & = & \frac{CA_{\infty}}{r_c^2} = \frac{CA_{\infty} t_0 / r_0^2}{t}\end{aligned}$$ Similarly, it is possible to compute explicitly the value of the average domain size, which is found to coincide exactly with the critical radius: $$\begin{aligned} \langle r \rangle & = & r_c\end{aligned}$$

\[sec:spont\]Spontaneous and gradient-induced polarization
==========================================================

The coarsening theory exposed in the previous Section allows us to deduce a simple scaling law for the time needed for spontaneous cell polarization. If the cell has size $R$, the growth of domains according to (\[eq:loscaling\]) comes to a stop at the time $t_{\ast}$ when the average patch size $\langle r \rangle$ becomes of the order of the cell size $R$. From (\[eq:loscaling\]) we get $$\begin{aligned} t_{\ast} & \sim & t_0 \left( R / r_0 \right)^2\end{aligned}$$ At the end of the process the cell is polarized in a random direction. The actual direction of polarization is the result of the initial random unbalance in the germ distribution. The typical time for random polarization is of the order of $10^3\,\mathrm{s}$ [[@GKL+07]]{}. Together with the estimate (\[eq:stima\]) this gives $t_0 \sim 10\,\mathrm{s}$. Let us now consider the case where a source of external attractant is present at some distance from the cell, in such a way that a gradient of external attractant is created by diffusion close to the cell surface (Fig. \[fig:source\]). The inhomogeneity in the distribution of attractant induces a similarly inhomogeneous distribution of activated enzymes $h$. This way, the degree of metastability $\psi$ takes on different values at different points of the cell surface.
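The qualitative behavior of the coarsening stage can be reproduced with a toy ensemble of circular domains obeying the deterministic part of (\[eq:domain\]), with $\psi$ slaved to the area still to be converted in the spirit of (\[eq:area\]). All parameter values, including the feedback coefficient `lam` and the initial radii, are illustrative assumptions, not quantities from the text.

```python
import math
import random

# Toy coarsening ensemble: circular domains evolve by the deterministic part
# of (eq:domain), gamma * dr_i/dt = psi(t) - sigma / r_i, with the
# metastability degree slaved to the uncovered area,
#   psi(t) = lam * (A_inf - sum_i pi r_i^2),
# in the spirit of (eq:area). All numbers are illustrative assumptions.
random.seed(0)
gamma, sigma, lam = 1.0, 1.0, 0.01

def area(rs):
    return sum(math.pi * r * r for r in rs)

radii = [random.uniform(0.5, 1.5) for _ in range(100)]
A_inf = 2.0 * area(radii)            # equilibrium area exceeds the initial one
N0, mean0 = len(radii), sum(radii) / len(radii)

dt = 2e-3
for _ in range(int(30.0 / dt)):
    psi = lam * (A_inf - area(radii))
    radii = [r + dt * (psi - sigma / r) / gamma for r in radii]
    radii = [r for r in radii if r > 0.05]   # evaporated domains are removed

N1, mean1 = len(radii), sum(radii) / len(radii)
# Small domains evaporate while large ones grow: N decreases, <r> increases.
```

After an initial growth transient the feedback pins the total area near $A_{\infty}$, the critical radius tracks the mean size, and the ensemble coarsens exactly as described above: the domain count drops while the average radius grows.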
If the cell membrane has a nearly spherical form and a radius $R$ much smaller than the characteristic scale of the attractant distribution, and if the gradient component of the activation field is small with respect to the background component on the scale $R$, the metastability degree $\psi$ at the beginning of the coarsening process may be written as the sum of a uniform component $\psi$ and a small space-dependent perturbation: $$\begin{aligned} & & \psi + \delta \psi \hspace{2em} \mathrm{with} \hspace{2em} \delta \psi = - \epsilon \psi_0 \cos \theta\end{aligned}$$ where $\psi_0$ is the value of the uniform component at the beginning of the coarsening process and $\epsilon$ is the relative gradient on the scale $R$. The perturbation modifies the equation of domain growth (\[eq:domain\]) as follows: $$\begin{aligned} \gamma \dot{r} & = & \psi - \frac{\sigma}{r} - \epsilon \psi_0 \cos \theta + \xi \label{cinque}\end{aligned}$$ where $\theta$ is an azimuthal angle defined in Fig. \[fig:source\]. The uniform component $\psi$ varies in time together with the (approximately) uniform concentration of $u$ molecules in the cell volume. On the other hand, the perturbation $\delta \psi$ is constant in time, but not uniform in space, being proportional to the external attractant distribution. As long as $\epsilon \psi_0 \ll \psi$, the effect of the perturbation is negligible, so domain growth proceeds according to the law (\[eq:loscaling\]) and the uniform component $\psi$ decays as $t^{- 1 / 2}$. In a large cell there is a crossover time $t_{\epsilon}$ when the perturbation becomes of the same order as the uniform component: $$\begin{aligned} \psi (t_{\epsilon}) & = & \epsilon \psi_0\end{aligned}$$ Using the scaling law (\[eq:loscaling\]) we get $$\begin{aligned} t_{\epsilon} & = & \frac{t_0}{\epsilon^2}\end{aligned}$$ After $t_{\epsilon}$ domain growth enters a new stage, where the growth becomes anisotropic.
Domains in the front and back of the cell get different average sizes (Fig. \[fig:aniso\]). Indeed, for $t > t_{\epsilon}$ the leading term in (\[cinque\]) is the perturbation $\epsilon \psi_0 \cos \theta$, implying that in the region closer to the source of the perturbation ($\cos \theta > 0$) the $\varphi_-$ phase evaporates, and in the region away from the source ($\cos \theta < 0$) it condenses. At the end of the process, complete polarization is realized (Fig. \[fig:aniso\]). In this final stage domains grow approximately linearly in time, thus the total time $t_{\epsilon}'$ to reach polarization is still a quantity of order $t_{\epsilon}$ (using definition (\[smallestgr\]) from the next Section it can be estimated as $\frac{1}{2} \left(1+ \epsilon / \epsilon_{\mathrm{th}} \right) t_{\epsilon}$). The above scheme is valid as soon as the initial nucleation time $t_0$ is significantly smaller than $t_{\epsilon}$, an assumption which is compatible with the observation of real [[@PRG+04]]{} and numerical [[@GCT+05]]{} experiments.

Gradient sensitivity
====================

The second stage of domain evolution described in the previous Section occurs only if $t_{\ast} > t_{\epsilon}$. Otherwise, the presence of a gradient of attractant becomes irrelevant and only the stage of isotropic domain growth actually occurs. This condition implies that a [[[*smallest detectable gradient*]{}]{}]{} exists, such that directional sensing is impossible below it. The threshold value [ $\epsilon_{\mathrm{th}}$]{} for $\epsilon$ is found from the condition $t_{\epsilon} = t_{\ast}$.
Since the product $\psi r_c$ is a time-independent constant, we can simply compare its value at initial and final time when $\epsilon = \epsilon_{\mathrm{th}}$, obtaining that the [[[*threshold detectable gradient*]{}]{}]{} is $$\begin{aligned} \epsilon_{\mathrm{th}} & = & \frac{r_0}{R} \label{smallestgr}\end{aligned}$$ Using the estimates from Sections \[sec:kin\] and \[sec:spont\], and the typical value $R \sim 10\, \mu \mathrm{m}$, we get $\epsilon_{\mathrm{th}} \sim 10\%$, a value which is compatible with the observations [[@SNB+06]]{}. An interesting speculation is that the bound (\[smallestgr\]) may explain why spatial directional sensing was developed only in large eukaryotic cells and not in smaller prokaryotes, whose directional sensing mechanisms rely instead on the measurement of temporal variations in concentration gradients [[@ASB+99]]{}. By solving (\[smallestgr\]) in terms of the size $R$ we get the following bound for the size of a cell which may be able to sense a relative gradient $\epsilon$: $$\begin{aligned} R & > & \frac{r_0}{\epsilon}\end{aligned}$$ Our bound goes in the same direction as the size criterion formulated in [[@BP77]]{}, but is independent of it, since the criterion of Ref. [[@BP77]]{} is based on estimates of signal-to-noise ratios, while our bound stems from the intrinsic properties of polarization dynamics.

External fluctuations
=====================

One may wonder whether a cell may become polarized by transient gradients produced by a spontaneous fluctuation in the external distribution of attractant molecules, or by fluctuations in receptor-ligand binding, as has been suggested in the literature [[@LH96]]{}. Since eukaryotic cells typically carry 10$^4$–10$^5$ receptors for attractant factors, one expects spontaneous relative fluctuations in the fraction of activated receptors to be of the order of 10$^{-2}$, a value which is comparable to observed anisotropy thresholds.
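The threshold (\[smallestgr\]), the size bound, and the crossover time $t_{\epsilon} = t_0 / \epsilon^2$ reduce to simple arithmetic with the order-of-magnitude values quoted in the text:

```python
# Consistency check of the gradient-sensitivity estimates:
#   eps_th = r_0 / R  (smallestgr),   R > r_0 / eps,   t_eps = t_0 / eps^2,
# using the order-of-magnitude values quoted in the text.
r0 = 1.0     # initial germ size, um
R = 10.0     # cell radius, um
t0 = 10.0    # onset time of the coarsening stage, s
eps_th = r0 / R               # threshold gradient ~ 10%
R_min = r0 / eps_th           # smallest cell able to sense eps = eps_th
t_eps = t0 / eps_th ** 2      # crossover time to anisotropic growth, s
# eps_th = 0.1, R_min = 10 um, t_eps = 1000 s, of the order of the
# random polarization time quoted in the text.
```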
However, to actually produce directed polarization the fluctuation should sustain itself for several minutes, i.e. for a time comparable to the characteristic polarization time (such as $t_{\epsilon}$). Such an event has a very low probability of being observed, since the correlation time of the fluctuations determined by attractant diffusion at the cell scale and the characteristic times of receptor-ligand kinetics are much shorter than the polarization time. Indeed, the diffusion time is $\sim 1\,$s at the typical cell size $10\,\mu$m, and the characteristic times of receptor-ligand kinetics are also $\sim 1\,$s (see online supporting information to Ref. [[@SNB+06]]{}). Therefore, the direction of cell polarization in the case of a homogeneous distribution of attractant can only be determined by the inhomogeneity in the initial distribution of the positions of PIP2-rich germs produced by thermal fluctuations.

Conclusions
===========

By using standard statistical mechanical methods we have shown that the dynamics of signaling domains in cell polarization is independent of the nature of the signaling molecules and of the values of kinetic rate constants, as long as some very general conditions are met: \[prop:a\]Timescale separation allows one to describe the polarization process in terms of a single concentration field of signaling molecules on the cell membrane[[^5]]{}. \[prob:b\]The underlying chemical reaction network is bistable. \[prop:c\]A global feedback mechanism drives the system towards phase coexistence. \[prop:d\]The cell is sufficiently larger than the size of nucleating germs of the new phase. These conditions allow the cell to work as a detector of slight gradients of external stimulation. The importance of the [[[*universality*]{}]{}]{} arising from our analysis cannot be overestimated. Presently, several efforts are made to understand the dynamical behavior of living beings starting from microscopic information provided by molecular biology.
However, this information is mostly incomplete and poorly quantitative, and theories that depend on it in a sensitive way are likely to be of little utility. But if some behavior happens to be [[[*universal*]{}]{}]{}, a consistent physical theory of it may be built, which can be compared to experiments. The universal properties of cell polarization emerge from properties of domain growth which have been extensively studied in first-order phase transitions [[@Bra94]]{}. The similarity of the two problems follows from the fact that fast degrees of freedom of chemical kinetics are in approximate equilibrium with slower degrees of freedom, which can be described by means of an effective free energy functional. It is worth observing that in the biological system studied here, there is no direct interaction between signaling molecules similar to the one observed in solid-state systems such as binary alloys, but only an effective interaction mediated by enzyme activity, binding, unbinding and diffusion processes. Our theoretical scheme allows us to shed light on some nontrivial questions, such as the mechanism of directional sensing and the effect of random fluctuations of the medium on the polarization process. Random polarization appears as the result of the intrinsic stochasticity of the process of domain nucleation and not of random fluctuations of the medium. Random and gradient-induced polarization appear as the two sides of the same coin. Our scheme provides an explanation of why spatial directional sensing is not observed in small prokaryotic cells, and provides asymptotic estimates for polarization times and threshold detectable gradients. An important component of our picture is the existence of a global coupling of the degree of metastability to the state of the system [[@GKL+07; @FCG+08]]{}.
[ The constrained phase-ordering dynamics tunes the system towards phase coexistence]{}, similarly to what happens in the case of a precipitating supersaturated solution. The global control allowing self-tuning to phase coexistence is realized by the shuttling of enzymes from the cytosol to the cell membrane and back. Some of the features that we have observed in cell polarization have been considered in previous works, such as the fact that equations of the form (\[langinz\]) are relevant for the description of systems of bistable chemical reactions [[@Sch72; @Kam07]]{}, and that global couplings in activator-inhibitor reaction-diffusion systems may lead to the formation of stable spatiotemporal patterns [[@GM72; @Sch00]]{}. The peculiar properties of this kind of system have led to the use of the terms [[[*excitable*]{}]{}]{} or [[[*active*]{}]{}]{} media. Using this same language, we can say that the cell membrane acts as an active medium, responding to stimulation with the formation of domains of a new phase. Our work proposes that directional sensing results from the peculiar, universal features of the phase-ordering dynamics of these domains. From a biological point of view, the universality of the polarization process allows the cell to behave in a robust, predictable way, independent of microscopic peculiarities such as the precise values of reaction rates and diffusion constants. We first proposed that chemotactic cell polarization may result from the simple ingredients of bistability, induced by a positive local feedback loop in a signaling network, and global control, induced by shuttling of enzymes between the cytosol and the membrane, in our previous works [[@GCT+05; @GKL+07; @CGC+07]]{}. Other authors have proposed similar models, either independently [[@SLN05]]{} or subsequently [[@MXA+06]]{} (a review of models of chemotactic polarization can be found in Ref. [[@ID07]]{}).
Some of these models try to take into account computationally the interactions of a large number of chemical factors, while retaining the essential role of a feedback loop as the generator of a phase-separation instability. However, most of the reaction rates that should be provided to perform such computations are known with very poor accuracy. Our framework suggests that such a detailed description may not be necessary, as long as properties \[prop:a\]),...,\[prop:d\]) are met. Aspects of the bistable mechanism of eukaryotic polarization first introduced in Ref. [[@GCT+05]]{} (supporting material) have been considered in recent papers [[@BAB08; @MJE08]]{} as relevant to polarization phenomena. A similar mechanism, out of the bistability region, has been proposed to explain intermittent polarization in budding yeast [[@AAW+08]]{}. These works suggest that the combination of bistability and global control [[@GCT+05; @GKL+07]]{} provides a useful paradigm for the understanding of cell polarization phenomena.

#### Acknowledgments

We thank Guido Serini for many inspiring discussions. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164.

\[app:lattice\]Lattice gas description of cell polarization
===========================================================

The signaling molecules PIP2 and PIP3 are different phosphorylation states of the [[[*phosphatidylinositol*]{}]{}]{} molecule, [[[*i.e.*]{}]{}]{}, they carry a different number of attached phosphate groups (2 and 3, respectively). Enzymes which catalyze phosphorylation of their substrate, [[[*i.e.*]{}]{}]{} the addition of a phosphate group, are called [[[*kinases*]{}]{}]{}, while dephosphorylating enzymes are called [[[*phosphatases*]{}]{}]{}. It is natural to visualize the state of a chemical system such as the one described in Fig.
\[fig:one\] in terms of two families of classical spins on a two-dimensional lattice, taking on values -1 (PIP2, PTEN), 0 (an empty site), +1 (PIP3, PI3K) [[@FCG+08]]{}. Taking into account fast cytosolic diffusion, the enzyme family becomes slaved to the substrate family [[@FCG+08]]{}. In this lattice-gas description the existence of a cytosolic enzymatic reservoir exchanging enzymes with the cell membrane is represented by a chemical potential for enzyme creation and destruction (actually, adsorption and desorption to/from the cell membrane), globally coupled to the lattice configuration [[@FCG+08]]{}. The PIP2 and PIP3 molecules constitute approximately 1% of the total number of membrane phospholipids, and the number of PI3K and PTEN enzymes is at least one order of magnitude smaller; thus, both the substrate and the enzyme population should be thought of as dilute gases. Two-state (or multistate) molecules such as PIP2 and PIP3 are by no means an exception in cell biology. Another example is given by [[[*small GTPases*]{}]{}]{}, such as the Cdc42 molecule involved in the polarization of budding yeast, which can be found either in the activated GTP state or in the deactivated GDP state. The switch between the two states is catalyzed by a couple of activating (GEF) and deactivating (GAP) enzymes [[@AJL+07]]{}.

\[app:one\]Mean-field equations for eukaryotic polarization
===========================================================

We derive here mean-field equations for eukaryotic polarization using standard methods of chemical kinetics, including Michaelis-Menten saturation terms for the enzymatic components[[^6]]{}. We make use of the fact that the diffusivity $D_{\mathrm{vol}}$ of $u$ enzymes in the cytosol is much higher than the diffusivity $D$ of $\varphi$ molecules on the cell membrane: this fact allows us to considerably reduce the number of dynamical degrees of freedom.
We describe the macroscopic state of the cell using surface concentration fields of membrane-bound molecules (Fig. \[fig:one\]) and the volume concentration field $f \equiv u_{\mathrm{free}}$ of free $u$ enzymes. The chemical kinetic equations for the signaling network of eukaryotic polarization are: $$\begin{aligned} \partial_t \varphi^+ & = & D \nabla^2 \varphi^+ - k_{\mathrm{cat}} \frac{u \varphi^+}{K + \varphi^+} + k_{\mathrm{cat}} \frac{h \varphi^-}{K + \varphi^-} \label{phipiu}\\ \partial_t \varphi^- & = & D \nabla^2 \varphi^- + k_{\mathrm{cat}} \frac{u \varphi^+}{K + \varphi^+} - k_{\mathrm{cat}} \frac{h \varphi^-}{K + \varphi^-} - \partial_t u \label{phimeno}\\ \partial_t u & = & k_{\mathrm{ass}} f \varphi^- - k_{\mathrm{diss}} u \label{psidot}\\ \partial_t f & = & \nabla \cdot (D_{\mathrm{vol}} \nabla f) \label{psivol}\end{aligned}$$ They must be complemented with the boundary condition $$\begin{aligned} J & \equiv & D_{\mathrm{vol}} \frac{\partial f}{\partial \mathbf{n}} = \partial_t u \label{outflux}\end{aligned}$$ where $\partial / \partial \mathbf{n}$ is the derivative along the outward normal to the membrane surface $S$. Condition (\[outflux\]) expresses the fact that the flux of $u$ enzymes leaving the cytosolic volume equals the flux of enzymes being bound to the cell membrane. For simplicity, we consider here identical catalytic, association and dissociation rates ($k_{\mathrm{cat}}$, $k_{\mathrm{ass}}$, $k_{\mathrm{diss}}$) and Michaelis-Menten constants $K$ for the $\varphi^+ \rightarrow \varphi^-$ and $\varphi^- \rightarrow \varphi^+$ processes. This is compatible with existing information about these processes, suggesting that reaction rates differ by factors of order 1 [[@GCT+05]]{}, and allows us to study the equations analytically. Typical values for surface and cytosolic diffusivity are $D \sim 1 \mu m^2 / s$, $D_{\mathrm{vol}} \sim 10 \mu m^2 / s$[[@GCT+05]]{}. 
Typical values for rate constants are: $k_{\mathrm{cat}} \sim k_{\mathrm{diss}} \sim 1 s^{- 1}$, $k_{\mathrm{ass}} \sim 0.05 s^{- 1} \mathrm{nM}^{- 1}$; for the total number of $\varphi^+$ and $\varphi^-$ molecules, and the total number of $u$ and $h$ enzymes: $N_{\varphi} \sim 10^6$, $N_u \sim N_h \sim 10^4$–$10^5$. Observe that $N_u / N_{\varphi} \ll 1$. The usual definition of macroscopic fields such as $u$ is as follows. For each point $\mathbf{r}$ in space we choose a volume $v$ centered in $\mathbf{r}$, containing $n (v)$ molecules, and we compute concentrations as $u (\mathbf{r}) = \lim_{v \rightarrow 0} n (v) / v$. This implies that the number of molecules of the relevant chemical factors is so large that $v$ can be chosen much smaller than the size of the system, but large enough that the resulting field $u (\mathbf{r})$ is approximately continuous. This hypothesis is not always acceptable, since enzymatic molecules are present in the cell in very small numbers. We shall therefore assume that real concentrations are described as the sum of an average part $u$, described by mean-field equations of the kind (\[phipiu\]–\[outflux\]), and a fluctuating part $\delta u$ taking into account both the discrete character of the concentration field and thermal disorder. The fluctuations $\delta u$ due to random adsorption and desorption processes are at the origin of the noise term $\Xi$ in (\[eq:evo\]) (see \[app:thermal\]). 
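As a sanity check on the structure of equations (\[phipiu\])–(\[psidot\]), one can integrate a well-mixed (diffusion-free) version of them numerically. The sketch below uses the rate constants quoted above; the Michaelis-Menten constant, the activation level $h$, the total enzyme amount and the initial concentrations are illustrative assumptions, not values from the text.

```python
# Minimal well-mixed (diffusion-free) sketch of (phipiu)-(psidot), with
# the typical rate values quoted in the text. K, h, f0 and the initial
# concentrations are illustrative assumptions.
k_cat, k_diss, k_ass = 1.0, 1.0, 0.05    # s^-1, s^-1, s^-1 nM^-1
K = 1.0        # Michaelis-Menten constant (assumed)
h = 0.1        # uniform activating enzyme concentration (assumed)
f0 = 1.0       # total u-enzyme concentration, with V = S = 1 (assumed)

phi_p, phi_m, u = 1.0, 1.0, 0.0          # initial state (assumed)
total0 = phi_p + phi_m + u
dt = 1e-3
for _ in range(200_000):                 # forward-Euler integration of 200 s
    f = f0 - u                           # enzyme conservation replaces (psivol)
    du = k_ass * f * phi_m - k_diss * u
    flux = (k_cat * u * phi_p / (K + phi_p)
            - k_cat * h * phi_m / (K + phi_m))
    phi_p -= dt * flux
    phi_m += dt * (flux - du)
    u += dt * du

# phi+ + phi- + u is conserved, since the phi- equation carries -d(u)/dt
assert abs(phi_p + phi_m + u - total0) < 1e-9
# u relaxes to the local-equilibrium value (equcond): u = K_ass f phi-
assert abs(u - (k_ass / k_diss) * (f0 - u) * phi_m) < 1e-6
```

The conservation check mirrors the observation used below that summing the $\varphi^+$ and $\varphi^-$ equations leaves only the $-\partial_t u$ exchange term.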
Since enzyme diffusion in the cytosol is faster than phospholipidic diffusion on the membrane, during the characteristic times of the dynamics of membrane-bound factors, $f (\mathbf{r}, t)$ relaxes to the approximately uniform value $$\begin{aligned} f (t) & = & f_0 - \frac{1}{V} \int_S u (\mathbf{r}, t) {\mathrm{d}}\mathbf{r} \label{meanpsi}\end{aligned}$$ where $f_0 = N_u / V$, while $u$ relaxes to the local equilibrium value $$\begin{aligned} u & = & K_{\mathrm{ass}} f \varphi^- \label{equcond}\end{aligned}$$ where $K_\mathrm{ass} = k_\mathrm{ass} / k_{\mathrm{diss}}$. On the other hand, by summing (\[phipiu\]) and (\[phimeno\]) we get $$\begin{aligned} \partial_t \left( \varphi^+ + \varphi^- \right) & = & D \nabla^2 (\varphi^+ + \varphi^-) - \partial_t u \label{diffusionesemplice}\end{aligned}$$ Since $N_u / N_{\varphi} \ll 1$ we neglect the term $\partial_t u$. Then, (\[diffusionesemplice\]) shows that the sum $c = \varphi^+ + \varphi^-$ tends to be approximately uniform and constant in time. By subtracting (\[phipiu\]) and (\[phimeno\]) and introducing the difference concentration field $\varphi = \varphi^+ - \varphi^-$ we get $$\begin{aligned} \partial_t \varphi & = & D \nabla^2 \varphi - k_{\mathrm{cat}} \frac{2 u (c + \varphi)}{2 K + c + \varphi} + k_{\mathrm{cat}} \frac{2 h (c - \varphi)}{2 K + c - \varphi} \label{sempluno}\end{aligned}$$ and using the local equilibrium condition (\[equcond\]) we end up with $$\begin{aligned} \partial_t \varphi & = & D \nabla^2 \varphi - k_{\mathrm{cat}} K_{\mathrm{ass}} f \frac{c^2 - \varphi^2}{2 K + c + \varphi} + 2 k_{\mathrm{cat}} h \frac{c - \varphi}{2 K + c - \varphi} \label{endup}\end{aligned}$$ Only values $- c \leqslant \varphi \leqslant c$ correspond to positive concentrations and are therefore physical. 
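The algebra in this reduction can be verified symbolically. The following sketch (using the sympy library) checks that writing $\varphi^{\pm} = (c \pm \varphi)/2$, dropping the diffusion and the small $\partial_t u$ terms, and substituting the local-equilibrium value (\[equcond\]) reproduces the reaction terms of (\[sempluno\]) and (\[endup\]):

```python
# Symbolic check of the step from (phipiu)-(phimeno) to (sempluno)/(endup).
import sympy as sp

phi, c, K, k_cat, K_ass, f, h, u = sp.symbols(
    'phi c K k_cat K_ass f h u', positive=True)

phi_p = (c + phi) / 2   # phi+ in terms of c = phi+ + phi-, phi = phi+ - phi-
phi_m = (c - phi) / 2   # phi-

# reaction part of d(phi)/dt = d(phi+)/dt - d(phi-)/dt
rhs = (-2 * k_cat * u * phi_p / (K + phi_p)
       + 2 * k_cat * h * phi_m / (K + phi_m))

sempluno = (-k_cat * 2 * u * (c + phi) / (2 * K + c + phi)
            + k_cat * 2 * h * (c - phi) / (2 * K + c - phi))
assert sp.simplify(rhs - sempluno) == 0

# substituting u = K_ass f phi- gives the reaction terms of (endup)
endup = (-k_cat * K_ass * f * (c**2 - phi**2) / (2 * K + c + phi)
         + 2 * k_cat * h * (c - phi) / (2 * K + c - phi))
assert sp.simplify(rhs.subs(u, K_ass * f * phi_m) - endup) == 0
```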
From (\[endup\]), (\[psidot\]) and (\[meanpsi\]) we get the following system: $$\begin{aligned} \partial_t \varphi (\mathbf{r}, t) & = & - \frac{\delta \mathcal{F}_{f, h} [\varphi]}{\delta \varphi (\mathbf{r}, t)} \label{reactdiff}\\ \dot{f} (t) & = & - V^{- 1} k_{\mathrm{ass}} f (t) \int_S \varphi^- {}{\mathrm{d}}\mathbf{r}+ k_{\mathrm{diss}} \left( f_0 - f (t) \right) \label{betadot}\end{aligned}$$ where $$\begin{aligned} \mathcal{F}_{f, h} [\varphi] & = & \int_S {}\left[ \frac{1}{2} D \left| \nabla \varphi \right|^2 + V_{f, h} (\varphi) \right] {\mathrm{d}}\mathbf{r} \label{free}\\ V_{f, h} (\varphi) & = & \hspace{1em} 2 k_{\mathrm{cat}} hc \left[ - \phi - 2 \kappa \ln \left( 2 \kappa + 1 - \phi \right) \right] \label{ilpotenziale}\\ & & + \frac{1}{2} k_{\mathrm{cat}} K_{\mathrm{ass}} fc^2 [- \phi^2 / 2 + \left( 2 \kappa + 1 \right) \phi \nonumber\\ & & \left. - 4 \kappa (\kappa + 1) \ln \left( 2 \kappa + 1 + \phi \right) \right] \nonumber\end{aligned}$$ and we make use of the nondimensional variables $\phi = \varphi / c$, $\kappa = K / c$. The quantity [ $\mathcal{F}_{f, h}$ plays the role of a generalized free energy for the system]{}, and can be used to study its approximate equilibria as long as the characteristic times of variation of $f$ are longer than the characteristic times of variation of the $\varphi$ field. We are interested in parameter values such that (\[ilpotenziale\]) is bistable. In what follows we consider the case of constant and uniform activation field $h$, and constant $f$. 
The critical points of the effective potential $V_{f, h}$ are $$\begin{aligned} \phi_- & = & \kappa - \lambda / 2 - \sqrt{\left( \kappa - \lambda / 2 \right)^2 - (\lambda - 1) (2 \kappa + 1)}\\ \phi_u & = & \kappa - \lambda / 2 + \sqrt{\left( \kappa - \lambda / 2 \right)^2 - (\lambda - 1) (2 \kappa + 1)}\\ \phi_+ & = & 1\end{aligned}$$ where $$\begin{aligned} \lambda & = & \frac{4 h}{K_{\mathrm{ass}} f}\end{aligned}$$ The potential $V_{f, h}$ is bistable when the three critical points are all real and physical. In that case, (\[endup\]) describes a dynamical system that may locally favor either a $\varphi^-$-rich or a $\varphi^+$-rich stable phase (Fig. \[fig:dyn\]). The two roots $\phi_- < \phi_u$ are real if $$\begin{aligned} \lambda & < & 2 (3 \kappa + 1) - 4 \sqrt{\kappa (2 \kappa + 1)}, \hspace{1em} \lambda > 2 (3 \kappa + 1) + 4 \sqrt{\kappa (2 \kappa + 1)} \label{radicireali}\end{aligned}$$ The first condition defines the right boundary of the bistability region of parameter space (Region III of Fig. \[fig:two\]). The two roots are physical ($- 1 \leqslant \phi_- < \phi_u \leqslant 1$) when $$\begin{aligned} \kappa & \leqslant & \frac{\lambda}{2 - \lambda} \hspace{1em} \mathrm{and} \hspace{1em} \kappa \leqslant 1 + \frac{\lambda}{2} \label{radicifisiche}\end{aligned}$$ The first condition defines the left boundary of the bistability region (Region III in Fig. \[fig:two\]). The inequality $\phi_- \geqslant - 1$ on the other hand is always verified if $\lambda < 2$. The left and right boundaries of Region III meet in the triple point $$\begin{aligned} \lambda & = & \sqrt{5} - 1, \hspace{2em} \kappa = (1 + \sqrt{5}) / 2\end{aligned}$$ So, the $\lambda$–$\kappa$ plane can be divided into three regions (Fig. \[fig:two\] and supplementary text of Ref. [[@GCT+05]]{}). In Region III, the system has two stable minima $\varphi_+$ and $\varphi_-$, separated by the unstable equilibrium $\varphi_u$. 
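The location of the triple point can be checked numerically: the sketch below verifies that $\lambda = \sqrt{5} - 1$, $\kappa = (1 + \sqrt{5})/2$ lies on both boundary curves of Region III, and that the square root in $\phi_-$, $\phi_u$ vanishes there, so the two roots merge.

```python
# Numerical check of the triple point of the bistability region.
import math

lam = math.sqrt(5) - 1
kap = (1 + math.sqrt(5)) / 2

# left boundary: kappa = lambda / (2 - lambda)
assert math.isclose(kap, lam / (2 - lam))
# right boundary: lambda = 2(3 kappa + 1) - 4 sqrt(kappa (2 kappa + 1))
assert math.isclose(lam, 2 * (3 * kap + 1) - 4 * math.sqrt(kap * (2 * kap + 1)))
# the discriminant of phi_-, phi_u vanishes: the two roots coalesce
disc = (kap - lam / 2) ** 2 - (lam - 1) * (2 * kap + 1)
assert abs(disc) < 1e-12
```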
Outside Region III the potential has a single minimum, either rich in $\varphi^-$ (Region I) or rich in $\varphi^+$ (Region II). Region III may be divided into two parts, depending on which phase is more stable. In Region III$_a$ (Fig. \[fig:two\]) the more stable phase is $\varphi_-$, while in Region III$_b$ it is $\varphi_+$. The two subregions are separated by the phase-coexistence curve $\psi \equiv V_{f, h} (\varphi_+) - V_{f, h} (\varphi_-) = 0$, where the two stable equilibria $\varphi_+$ and $\varphi_-$ have the same energy. Close to the phase-coexistence curve, $\psi$ is much smaller than the potential barrier separating the two minima. In this region $$\begin{aligned} \psi & \simeq & 2 k_{\mathrm{cat}} hc \left[ \phi_+ - \phi_- + 2 \kappa \ln \left( 1 + \frac{\phi_+ - \phi_-}{2 \kappa} \right) \right] \left( \frac{f}{f_{\infty}} - 1 \right) \label{deltavapprox}\end{aligned}$$ where the factor $f / f_{\infty} - 1$ represents the excess fraction of free $u$ enzymes at a given time, with respect to the equilibrium value. Observe that an actual excess of free enzymes renders the $\varphi_-$ phase more stable, while a negative excess (a deficit) stabilizes the $\varphi_+$ phase. If the $\varphi_-$ phase is the more stable one, it tends to occupy larger and larger regions of the cell surface, thus decreasing $f$ ([[[*cf.*]{}]{}]{} the quasi-equilibrium conditions (\[meanpsi\]) and (\[equcond\])) and its own stability relative to the $\varphi_+$ phase. A symmetric situation is encountered if $\varphi_+$ is the more stable phase at the initial time. Thus, the process of growth of either of the two phases decreases the metastability degree $\psi$ and drives the system towards a condition of phase coexistence ([[[*i.e.*]{}]{}]{} towards a polarized state). We may wonder whether uniform equilibrium states also exist that may compete with polarized states. 
Looking for stable uniform equilibria $\varphi = \varphi_-$ in Region III$_a$ gives the algebraic conditions $$\begin{aligned} \lambda & = & \frac{- \phi^2 + 2 \kappa \phi + (2 \kappa + 1)}{\phi + (2 \kappa + 1)} = 2 \frac{N_h}{N_u} \left[ \left( 1 + \frac{2 K_{\mathrm{ass}}}{VN_{\varphi}} \right) - \phi \right] \label{cond1}\\ \phi & \leqslant & 2 \sqrt{\kappa (2 \kappa + 1)} - (2 \kappa + 1) \label{cond3}\end{aligned}$$ which may be studied graphically, showing that uniform equilibria are impossible in a large part of Region III, and in particular if $$\begin{aligned} \kappa & < & \frac{1}{2} \hspace{1em} \mathrm{and} \hspace{1em} 2 \frac{N_h}{N_u} \left( 1 + \frac{2 K_{\mathrm{ass}}}{VN_{\varphi}} \right) > 1 \label{ruleout}\end{aligned}$$ Uniform equilibria do not exist in this region because the total number of $u$ enzymes is not large enough to stabilize a uniform $\varphi_-$ phase extended along the whole membrane surface. Instead, uniform equilibria with $\varphi = \varphi_+$ exist, and correspond to configurations where all $u$ enzymes are free. Thermal and chemical noise {#app:thermal} ========================== Up to this point we have neglected fluctuations in the number of membrane-bound enzymes, so that every local minimum of $V_{f, h}$ corresponds to a stable phase having an infinite lifetime. However, since the number of bound enzyme molecules in the real system fluctuates locally, the field $\varphi (\mathbf{r}, t)$ should be seen as a stochastic field. The fluctuations $\delta f$ around the equilibrium enzyme concentration $f_{\infty}$ in the volume $V$ due to membrane adsorption and desorption processes induce fluctuations $\delta u$ around the local equilibrium value (\[equcond\]) in the concentration of membrane-bound enzymes. To derive quantitative relations we have to compute the encounter rates of a free $u$ particle fluctuating in the volume $V$ and a $\varphi^-$ binding site on the surface $S$. 
The adsorption-desorption process can be described by a simple master equation[[@Gar83]]{}. Let us consider that a reservoir of volume $V$ contains a number $N^{\mathrm{free}} \leqslant N^{\mathrm{tot}}$ of molecules, which can be adsorbed and desorbed by a small surface element $\Sigma$ containing $N^\mathrm{b.s.}$ binding sites. One has the mean-field kinetic equation $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t} N^{\mathrm{bound}} & = & k_{\mathrm{ass}} V^{- 1} N^\mathrm{b.s.} N^{\mathrm{free}} - k_\mathrm{diss} N^{\mathrm{bound}}\end{aligned}$$ which at equilibrium gives $$\begin{aligned} N^{\mathrm{bound}} & = & \alpha N^{\mathrm{free}} = \frac{\alpha}{1 + \alpha} N^{\mathrm{tot}} \hspace{2em} \left( \alpha = K_{\mathrm{ass}} V^{- 1} N^\mathrm{b.s.} \right)\end{aligned}$$ Let $P_N$ be the probability to observe $N^{\mathrm{bound}} = N$, and $r^{\pm}_N$ the time rates of the processes $N \rightarrow N \pm 1$. Then the process is described by the master equation $$\begin{aligned} \dot{P_N} & = & r^+_{N - 1} P_{N - 1} - (r^+_N + r_N^-) P_N + r^-_{N + 1} P_{N + 1}\end{aligned}$$ which has the stationary solution $$\begin{aligned} P_N & = & \prod_{j = 0}^{N - 1} \frac{r^+_j}{r^-_{j + 1}} P_0\end{aligned}$$ where $P_0$ is a normalizing factor. 
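The product formula can be evaluated directly. The sketch below uses adsorption and desorption rates of the form $r^+_N \propto (N^{\mathrm{tot}} - N)$, $r^-_N = k_{\mathrm{diss}} N$ (a single constant lumping together $k_{\mathrm{ass}}$, the volume and the binding-site density), with illustrative parameter values, and checks that the stationary $P_N$ is binomial, with mean $\alpha N^{\mathrm{tot}}/(1+\alpha)$ and variance $\alpha N^{\mathrm{tot}}/(1+\alpha)^2$.

```python
# Direct evaluation of the product formula for the stationary P_N;
# the rate values are illustrative assumptions.
from math import comb

k_on, k_diss, N_tot = 0.3, 1.0, 50
alpha = k_on / k_diss

w = [1.0]    # unnormalized weights w_N = prod_j r+_j / r-_{j+1}
for N in range(N_tot):
    w.append(w[-1] * (k_on * (N_tot - N)) / (k_diss * (N + 1)))
Z = sum(w)
P = [x / Z for x in w]

# P_N is binomial with success probability p = alpha / (1 + alpha)
p = alpha / (1 + alpha)
for N in range(N_tot + 1):
    assert abs(P[N] - comb(N_tot, N) * p**N * (1 - p)**(N_tot - N)) < 1e-12

# moments: <N> = alpha N_tot/(1+alpha), Var(N) = alpha N_tot/(1+alpha)^2
mean = sum(N * P[N] for N in range(N_tot + 1))
var = sum(N * N * P[N] for N in range(N_tot + 1)) - mean**2
assert abs(mean - alpha * N_tot / (1 + alpha)) < 1e-9
assert abs(var - alpha * N_tot / (1 + alpha) ** 2) < 1e-9
```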
Letting $$\begin{aligned} r^+_N & = & c\,k_\mathrm{ass} (N^{\mathrm{tot}} - N), \hspace{2em} r^-_N = k_\mathrm{diss} N\end{aligned}$$ one finds a binomial distribution with $$\begin{aligned} \langle N^{\mathrm{bound}} \rangle & = & {\color{black} \frac{\alpha N^{\mathrm{tot}}}{1 + \alpha} } = \alpha N^{\mathrm{free}}\\ \langle \left( N^{\mathrm{bound}} \right)^2 \rangle - \langle N^{\mathrm{bound}} \rangle^2 & = & {\color{black} \frac{\alpha N^{\mathrm{tot}}}{\left( 1 + \alpha \right)^2} } = \frac{N^{\mathrm{bound}} N^{\mathrm{free}}}{N^{\mathrm{tot}}}\end{aligned}$$ By identifying $f = N^{\mathrm{free}} / V$ in (\[equcond\]) we can model the adsorption-desorption noise with a Gaussian noise term $\Xi$ with zero mean and the correct variance: $$\begin{aligned} \langle \Xi (\mathbf{r}, t) \Xi (\mathbf{r}', t') \rangle & = & 2 \Gamma \delta (\mathbf{r}-\mathbf{r}') \delta (t - t')\end{aligned}$$ where $$\begin{aligned} \Gamma & = & \frac{k_{\mathrm{diss}}}{k_{\mathrm{cat}}} \frac{\varphi^+}{K + \varphi^+} \frac{f}{f_0} (K_{\mathrm{ass}} f \varphi^-)\end{aligned}$$ \[app:scaling\]Scale invariant size distribution ================================================ In the domain coarsening stage described in Section \[sec:coarsening\], the characteristic size $r_c (t)$ of domains grows with time, and soon becomes the largest scale, so that a scaling distribution of domain sizes arises. 
In the asymptotic regime (for large times) it is possible to derive a self-similar solution of the system of equations (\[eq:kin\], \[eq:area\]): [ $$\begin{aligned} \gamma \frac{\partial n (r, t)}{\partial t} + \frac{\partial}{\partial r} \left[ \left( \psi (t) - \frac{\sigma}{r} \right) n (r, t) \right] & = & 0 \label{eq:kin1}\\ \psi (t) \propto A_{\infty} - \int_0^{\infty} \pi r^2 n (r, t) {\mathrm{d}} r & \rightarrow & 0 \hspace{2em} \mathrm{for} \hspace{2em} t \rightarrow \infty \label{eq:kin2}\end{aligned}$$]{} We start by looking for a solution in the form $$\begin{aligned} n (r, t) & = & \left[ r_c (t) \right]^k g (r / r_c (t)) \label{scalsol}\end{aligned}$$ It is easy to verify that $k$ must be given the value $- 3$ in order that (\[eq:kin2\]) may attain its asymptotic limit. Substituting (\[scalsol\]) in (\[eq:kin1\]), reexpressing the result in terms of the nondimensional variable $$\begin{aligned} \rho & = & r / r_c\end{aligned}$$ and balancing terms in the resulting equation, we find that an asymptotic solution for large times may exist only if $$\begin{aligned} \psi (t) & = & \frac{\sigma}{r_c (t)}, \hspace{2em} r_c (t) = r_0 \left( t / t_0 \right)^{1 / 2}\end{aligned}$$ and $$\begin{aligned} \left[ - \sigma \rho + \sigma \rho^2 - \frac{1}{2} \frac{\gamma r_0^2}{t_0} \rho^3 \right] g' (\rho) + \left[ \sigma - \frac{3}{2} \frac{\gamma r_0^2}{t_0} \rho^2 \right] g (\rho) & = & 0 \label{eq:grho}\end{aligned}$$ A smooth, positive, normalizable solution of (\[eq:grho\]) may be found only when two of the poles of $g' (\rho) / g (\rho)$ coalesce, which gives $$\begin{aligned} t_0 & = & \frac{2 \gamma r_0^2}{\sigma} \label{eq:separatrice}\end{aligned}$$ and finally[[^7]]{} $$\begin{aligned} g (\rho) & = & \left\{ \begin{array}{ll} CA_{\infty} \frac{8 {\mathrm{e}}^2 \rho}{(2 - \rho)^4} \exp \left( - \frac{4}{2 - \rho} \right) & \mathrm{for} \hspace{1em} 0 \leqslant \rho \leqslant 2\\ 0 & \mathrm{elsewhere} 
\end{array} \right.\end{aligned}$$ with $$\begin{aligned} C & = & \frac{1}{4 \pi [1 + 2 {\mathrm{e}}^2 \mathrm{Ei} (- 2)]} \simeq 0.11\end{aligned}$$ a normalization factor and Ei the exponential integral function [[@AS65]]{} . The resulting size distribution function is peaked around $r_c \sim t^{1 / 2}$ and there are no domains with sizes larger than $2 r_c$ (Fig. \[fig:pdf\]). The physical meaning of (\[eq:separatrice\]) can be understood by rewriting the deterministic part of the equation of domain growth (\[eq:domain\]) using $\rho$: $$\begin{aligned} \frac{\gamma r_c^2}{\sigma} \dot{\rho} & = & - \frac{\frac{\gamma r_0^2}{2 \sigma t_0} \rho^2 - \rho + 1}{\rho} \label{eq:dinamico}\end{aligned}$$ The analysis of the fixed points of (\[eq:dinamico\]) shows that when condition (\[eq:separatrice\]) is not satisfied, either the total domain area grows to infinity, or shrinks to zero[[^8]]{}. In both cases, the asymptotic condition (\[eq:kin2\]) cannot be satisfied. Therefore, condition (\[eq:separatrice\]) provides the correct asymptotic distribution of domain sizes by selecting the separatrix which divides those two extreme cases. [10]{} M. Abramowitz and I. Stegun. [[*[Handbook of mathematical functions : with formulas, graph, and mathematical tables]{}*]{}]{}. Dover, 1965. B. Alberts, A. Johnson, J. Lewis, M. Raff, K. Roberts, and P. Walter. [[*[Molecular Biology of the Cell]{}*]{}]{}. Garland Science, 2007. U. Alon, M.G. Surette, N. Barkai, and S. Leibler. Robustness in bacterial chemotaxis. [[*[Nature]{}*]{}]{}, 397:168–171, 1999. S.J. Altschuler, S.B. Angenent, Y. Wang, and L.F. Wu. On the spontaneous emergence of cell polarity. [[*[Nature]{}*]{}]{}, 454:886–890, 2008. H.C. Berg and E.M. Purcell. Physics of chemoreception. [[*[Biophys. J.]{}*]{}]{}, 20:193–219, 1977. C. Beta, G. Amselem, and E. Bodenschatz. A bistable mechanism for directional sensing. [[*[New J. of Phys.]{}*]{}]{}, 2008. A.J. Bray. Theory of phase ordering kinetics. [[*[Adv. 
Phys.]{}*]{}]{}, 43:357–459, 1994. A. Ciliberto, F. Capuani, and J. J. Tyson. Modeling networks of coupled enzymatic reactions using the total quasi-steady state approximation. [[*[PLoS Comput Biol]{}*]{}]{}, 3:e45, 2007. A. de Candia, A. Gamba, F. Cavalli, A. Coniglio, S. Di Talia, F. Bussolino, and G. Serini. A simulation environment for directional sensing as a phase separation process. [[*[Sci. STKE]{}*]{}]{}, 378:pl1, 2007. T. Ferraro, A. de Candia, A. Gamba, and A. Coniglio. Spatial signal amplification in cell biology: a lattice-gas model for self-tuned phase ordering. [[*[Europh. Lett.]{}*]{}]{}, 83:50009–1–5, 2008. A. Gamba, A. de Candia, S. Di Talia, A. Coniglio, F. Bussolino, and G. Serini. Diffusion limited phase separation in eukaryotic chemotaxis. [[*[Proc. Nat. Acad. Sci. U.S.A.]{}*]{}]{}, 102:16927–16932, 2005. A. Gamba, I. Kolokolov, V. Lebedev, and G. Ortenzi. Patch coalescence as a mechanism for eukaryotic directional sensing. [[*[Phys. Rev. Lett.]{}*]{}]{}, 99:158101–1–4, 2007. C.W. Gardiner. [[*[Handbook of stochastic methods for physics, chemistry and the natural sciences]{}*]{}]{}. Springer, New York, 1983. A. Gierer and H. Meinhardt. A theory of biological pattern formation. [[*[Kybernetik]{}*]{}]{}, 12:30–39, 1972. P.C. Hohenberg and B.I. Halperin. Theory of dynamic critical phenomena. [[*[Rev. Mod. Phys.]{}*]{}]{}, 49:436–479, 1977. P.A. Iglesias and P.N. Devreotes. Navigating through models of chemotaxis. [[*[Curr. Op. Cell Biol.]{}*]{}]{}, 20:1–6, 2007. L.D. Landau and E.M. Lifshitz. [[*[Statistical Physics (Part I)]{}*]{}]{}, volume 5 of [[*[Course of Theoretical Physics]{}*]{}]{}. Pergamon Press, third edition, 1980. D. A. Lauffenburger and A. F. Horwitz. Cell migration: a physically integrated molecular process. [[*[Cell]{}*]{}]{}, 84:359–369, 1996. E.M. Lifshitz and L.P. Pitaevskii. [[*[Physical Kinetics]{}*]{}]{}, volume 10 of [[*[Course of Theoretical Physics]{}*]{}]{}. Pergamon Press, first edition, 1981. E. Marco, R. 
Wedlich-Soldner, R. Li, S. J. Altschuler, and L. F. Wu. Endocytosis optimizes the dynamic localization of membrane proteins that regulate cortical polarity. [[*[Cell]{}*]{}]{}, 129:411–22, 2007. M. Meier-Schellersheim, X. Xu, B. Angermann, E.J. Kunkel, T. Jin, and R.N. Germain. Key role of local regulation in chemosensing revealed by a new molecular interaction-based modeling method. [[*[Plos Comp. Biol.]{}*]{}]{}, 2:710–24, 2006. Y. Mori, A. Jilkine, and L. Edelstein-Keshet. Wave-pinning and cell polarity from a bistable reaction-diffusion system. [[*[Biophys J]{}*]{}]{}, 2008. C. Parent. Making all the right moves: chemotaxis in neutrophils and Dictyostelium. [[*[Curr. Opin. Cell Biol.]{}*]{}]{}, 16:4–13, 2004. C.A. Parent and P.N. Devreotes. A cell’s sense of direction. [[*[Science]{}*]{}]{}, 284:765–769, 1999. M. Postma, J. Roelofs, J. Goedhart, H.M. Loovers, A.J. Visser, and P.J. Van Haastert. Sensitization of Dictyostelium chemotaxis by phosphoinositide-3-kinase-mediated self-organizing signalling patches. [[*[J. Cell Sci.]{}*]{}]{}, 117:2925–35, 2004. A.J. Ridley, M.A. Schwartz, K. Burridge, R.A. Firtel, M.H. Ginsberg, G. Borisy, J.T. Parsons, and A.R. Horwitz. Cell migration: integrating signals from front to back. [[*[Science]{}*]{}]{}, 302:1704–9, 2003. C. Sachs, M. Hildebrand, S. Völkening, J. Wintterlin, and G. Ertl. Self-organization in a surface reaction: from the atomic to the mesoscopic scale. [[*[Science]{}*]{}]{}, 293:1635–1638, 2001. A. Samadani, J. Mettetal, and A. van Oudenaarden. Cellular asymmetry and individuality in directional sensing. [[*[Proc. Nat. Acad. U.S.A.]{}*]{}]{}, 103:11549–11554, 2006. F. Schlögl. [[*[Z. Phys.]{}*]{}]{}, 253:147, 1972. E. Schöll. [[*[Stochastic Processes in Physics, Chemistry, and Biology]{}*]{}]{}, volume 557 of [[*[Lecture Notes in Physics]{}*]{}]{}, pages 437–451. Springer, 2000. C. Sire and S.N. Majumdar. Coarsening in the q-state Potts model and the Ising model with globally conserved magnetization. 
[[*[Phys. Rev. E]{}*]{}]{}, 52:244, 1995. R. Skupsky, W. Losert, and R. J. Nossal. Distinguishing modes of eukaryotic gradient sensing. [[*[Biophys. J.]{}*]{}]{}, 89(4):2806–23, 2005. L. Song, S. M. Nadkarni, H. U. Bodeker, C. Beta, A. Bae, C. Franck, W. J. Rappel, W. F. Loomis, and E. Bodenschatz. Dictyostelium discoideum chemotaxis: threshold for directed motion. [[*[Eur. J. Cell Biol.]{}*]{}]{}, 85:981–9, 2006. Z. Tong, X. D. Gao, A. S. Howell, I. Bose, D. J. Lew, and E. Bi. Adjacent positioning of cellular structures enabled by a Cdc42 GTPase-activating protein-mediated zone of inhibition. [[*[J. Cell Biol.]{}*]{}]{}, 179:1375–84, 2007. P. J. van Haastert, I. Keizer-Gunnink, and A. Kortholt. Essential role of PI3-kinase and phospholipase A2 in Dictyostelium discoideum chemotaxis. [[*[J. Cell Biol.]{}*]{}]{}, 177:809–16, 2007. N.G. van Kampen. [[*[Stochastic Processes in Physics and Chemistry]{}*]{}]{}. North-Holland, third edition, 2007. R. Wedlich-Soldner, S. Altschuler, L. Wu, and R. Li. Spontaneous cell polarization through actomyosin-based delivery of the Cdc42 GTPase. [[*[Science]{}*]{}]{}, 299:1231–5, 2003. R. Wedlich-Soldner and R. Li. Spontaneous cell polarization: undermining determinism. [[*[Nat. Cell Biol.]{}*]{}]{}, 5:267–70, 2003. R. Wedlich-Soldner, S.C. Wai, T. Schmidt, and R. Li. Robust cell polarity is a dynamic state established by coupling transport and GTPase signaling. [[*[J. Cell Biol.]{}*]{}]{}, 166:889–900, 2004. S. Wehner, P. Hoffmann, D. Schmei[ß]{}er, H.R. Brand, and J. Kuppers. Spatiotemporal patterns of external noise-induced transitions in a bistable reaction-diffusion system: photoelectron emission microscopy experiments and modeling. [[*[Phys. Rev. Lett.]{}*]{}]{}, 95:038301–1–4, 2005. [^1]: For general facts regarding cell biology we refer to Ref. [[@AJL+07]]{}. 
[^2]: We are using a slightly different notation to distinguish the values $\varphi_+$, $\varphi_-$ assumed by the $\varphi$ field from the names of the concentration fields $\varphi^+$, $\varphi^-$ of signaling molecules. [^3]: And therefore of the precise characteristics of the noise term $\Xi$ which is its driving force. [^4]: The converse case, where $t_0$ is the largest timescale of the problem and polarization is the result of the rare nucleation of a solitary domain, cannot provide a mechanism of gradient sensing which is at the same time insensitive to the uniform component of the attractant field, and highly sensitive to its gradient component. Indeed, the nucleation of a single domain could provide a mechanism of gradient sensing only if the gradient would induce significantly different domain nucleation rates in different points of the cell membrane. But in that case, also variations in the uniform component of the attractant field would produce large variations in the typical polarization times, while the converse has been reported. [^5]: We should consider adding here the condition that the concentration field is not locally constrained by a conservation law. However, also the converse case of a locally conserved field can be treated in a similar way without substantially changing the present scheme. [^6]: Michaelis-Menten saturation terms arise from timescale separation in enzymatic kinetics, which allows to make use of a quasi-stationary approximation[[@CCT07]]{}. [^7]: We thank Alan Bray for pointing out to us that this problem has been discussed in a different context in Ref. [[@SM95]]{}. [^8]: See Refs. [[@LP81; @Bra94]]{} for the analogous discussion in the case of a locally conserved field.
--- abstract: 'Econophysics is a new area developed recently through the cooperation of economists, mathematicians and physicists. It’s not a tool to predict future prices of stocks and exchange rates. It applies ideas, methods and models from Statistical Physics and Complexity to analyze data from economic phenomena. In this paper, three examples from three active main topics in Econophysics are presented first. Then, using these examples, we analyze the role of Physics in Econophysics. Some comments and emphasis on the Physics of Econophysics are included. A new idea of network analysis for economic systems is proposed, while the actual analysis is still in progress.' author: - | Yougui Wang$^1$, Jinshan Wu$^2$, Zengru Di$^1$[[^1]]{}\ 1. Department of System Science at School of Management,\ Beijing Normal University, Beijing, 100875, P.R.China\ 2. Department of Physics, Simon Fraser University, Burnaby, B.C. Canada, V5A 1S6 title: Physics of Econophysics --- Introduction ============ Econophysics is a field that has been developing in recent years. It is a subject that applies and proposes ideas, methods and models from Statistical Physics and Complexity to analyze data coming from economic phenomena. Economics is a subject about human behavior related to the management of resources, finances, income, and the production and consumption of goods and services. So Economics is usually regarded as a social science. But in some ways, the laws in Economics are similar to those in natural science. Although it has to deal with incentives and human decisions, sometimes the collective behavior can be described by a deterministic process, at least in a statistical way. So the aim of Econophysics is to apply the ideas of natural science as far as possible into economics. Maybe this will disentangle natural laws and human behaviors in economic phenomena, and end with a new Economics. 
Also, because of the plentiful data records of the different systems in our economic behavior, economics is a treasure to physicists, especially those interested in Complex Systems, in which many subsystems and many variables interact together. And the development of Economics also provides many open questions, such as stock prices, exchange rates and risk management, which may require techniques for dealing with massive data and complex systems. Physics tries to construct a picture of the movement of the whole of nature. Mechanism is the first topic physicists care about. So trying to describe and understand the phenomena is the first step for econophysicists facing the massive data of economic phenomena. Till now, we have to say, most works in Econophysics are empirical studies of different phenomena aiming to discover some universal or special laws, together with some initial efforts on models and mechanisms. Therefore, in this talk, we will begin with three examples of empirical works in Econophysics, and discuss very briefly the corresponding models and mechanisms. The focus will be on the Physics of Econophysics: to present the power of Physics for Econophysics and some benefits which Physics will get from Econophysics. Three main topics of Econophysics ================================= Recent works in Econophysics concern mainly three topics. The first is the time series of stock prices, exchange rates and prices of goods. Sizes of firms, GDP, individual wealth and income are the second topic, which can be regarded as the wealth of different communities. The third is network analysis of economic phenomena. Fluctuation of stock prices and exchange rates ---------------------------------------------- The prices of stocks are recorded every minute or every few seconds every day in stock markets all over the world. 
The price of a stock is driven by many factors, such as the whole economic environment, the achievements of the enterprise, the prices of other stocks, and the buying or selling activity of stockholders. At the same time, the behavior of stockholders is affected by the price; furthermore, every stockholder makes his/her own decisions, which differ from and affect one another. So such phenomena appear complex. While every enterprise has its own character, every stockholder decides his/her behavior based on his/her own knowledge, information and beliefs, and every stock market has its own environment, empirical study shows some common stylized facts valid for almost all stocks. A typical time series of stock price, the S$\&$P500 index, denoted as $S\left(t\right)$, is shown in figure \[figdata\]. Actually, the S$\&$P500 is a stock index, a weighted mean value of stocks in a market, which can be used as an indicator of stock prices. Some papers use the data of indexes, some use individual stocks, and some papers investigate all stocks in a market as an ensemble of stocks. In this talk, we just use analyses of individual stocks as examples. ![Typical time series of stock price and return](data.eps){width="4in"} [extracted from [@correlation], time series of stock price. The first two figures at the bottom are time series of return while the last one is Gaussian noise. ]{} \[figdata\] Because the economy grows, the time series of stock prices has a long-term increasing trend. This means it is nonstationary. So rather than the original price, other quantities such as the difference and the return may be better objects of analysis. The difference is defined as $$D_{\Delta t}\left(t\right)=S\left(t+\Delta t \right)-S\left(t\right),$$ in which $\Delta t$ is the time step used to sample the time series. It can be the time step of the record, or a larger time scale. 
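As a toy illustration, the difference series can be computed as follows on a synthetic random-walk "price" (not real market data; the drift and noise amplitude are arbitrary assumptions):

```python
# Difference series D_dt(t) = S(t + dt) - S(t) on a synthetic price path.
import random

random.seed(0)
S = [100.0]
for _ in range(999):
    S.append(S[-1] * (1 + random.gauss(0.0005, 0.01)))  # drifting walk

def difference(S, dt):
    """D_dt(t) = S(t + dt) - S(t) for every valid t."""
    return [S[t + dt] - S[t] for t in range(len(S) - dt)]

D1 = difference(S, 1)   # one-step sampling
D5 = difference(S, 5)   # coarser time scale, dt = 5
assert len(D1) == 999 and len(D5) == 995
# a dt-step difference telescopes into a sum of one-step differences
assert abs(D5[0] - sum(D1[:5])) < 1e-9
```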
Return is defined as $$G_{\Delta t}\left(t\right)=\ln\left(S\left(t+\Delta t \right)\right)-\ln\left(S\left(t\right)\right).$$ It is equivalent to $\frac{D_{\Delta t}\left(t\right)}{S\left(t\right)}$ when $\Delta t$ is small enough. Most works use the return as the time series under analysis. The figures in the lower part of figure \[figdata\] show examples of $G\left(t\right)$, with a Gaussian noise signal shown last for comparison. A statistical analysis of a single time series can be divided into two parts: the distribution properties, which discard the time information, and the autocorrelation analysis, which takes time explicitly into account.

### Distribution properties

A frequency count over the data set formed by collecting all return values gives the distribution, shown in figure \[figreturn\]. Detailed fitting shows that the central part is a log-normal distribution ($p(x)\sim e^{-\ln^{2}x}$) while the tail is a power law ($p(x)\sim x^{-\alpha}$). The more important point here is universality: the shape of the distribution is independent of the time scale ($\Delta t$), and is common to different stocks in different markets, even in different countries. When an empirical statistical result is universal, we have to ask for the common nature behind it.

![distribution of return](return_1.eps "fig:"){width="4in"} ![distribution of return](return_2.eps "fig:"){width="4in"}

[extracted from [@correlation], Log-normal for central part and power law heavy tail. ]{} \[figreturn\]

Another distributional property concerns the volatility of a stock, which is related to risk, so its characteristics are important for risk management.
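As a quick aside, the difference and return defined above are straightforward to compute; the following sketch uses a hypothetical geometric random walk in place of real market data:

```python
import numpy as np

def log_returns(prices, dt=1):
    """G_dt(t) = ln S(t+dt) - ln S(t), the return series defined in the text."""
    s = np.asarray(prices, dtype=float)
    return np.log(s[dt:]) - np.log(s[:-dt])

# Hypothetical price path: a geometric random walk stands in for real data.
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))
g = log_returns(prices)
```

For small $\Delta t$ the return is close to the relative difference $D_{\Delta t}/S$, as a Taylor expansion of the logarithm shows.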
Usually it is defined via the local variation, $$V_{T}\left(t\right)=\sum^{t+T}_{\tau=t}\left(G\left(\tau\right)-\bar{G}_{T}\right)^2,$$ in which $T=n\Delta t$ is a time window moving along with the time, and $\bar{G}_{T}$ is the mean value of $G\left(t\right)$ in the window, $$\bar{G}_{T}=\frac{1}{n}\sum^{t+T}_{\tau=t}G\left(\tau\right).$$ In some papers, volatility is defined instead as $$V_{T}\left(t\right)=\sum^{t+T}_{\tau=t}\left|G\left(\tau\right)\right|,$$ in which the absolute value plays the same role as the square; the mean value of $V_{T}$ is not important here, and can be set to zero when we analyse the distribution function or the autocorrelation. It was likewise found that the distribution function is universal for different stocks, in different markets, during different periods. As before, the central part is log-normal while the tail is a power law, as shown in figure \[figvola\].

![distribution of volatility](volatility_1.eps "fig:"){width="4in"} ![distribution of volatility](volatility_2.eps "fig:"){width="4in"}

[extracted from [@volatility], Log-normal for central part and power law heavy tail. ]{} \[figvola\]

### Autocorrelation

Beyond the distribution properties, most of the information in a time series lies in its temporal structure. We now present the autocorrelation analysis of return and volatility[@correlation]. The autocorrelation of a stationary time series is defined as $$C\left(\tau\right) = \frac{\langle G\left(t+\tau\right)G\left(t\right)\rangle-\langle G\left(t+\tau\right)\rangle\langle G\left(t\right)\rangle}{\langle G^{2}\left(t\right)\rangle-\langle G\left(t\right)\rangle^{2}}.$$ It can be investigated by spectral analysis, but for a nonstationary series the recently developed detrended fluctuation analysis (DFA)[@dfa] is commonly used. In figure \[figauto\], the autocorrelation functions of return and volatility are plotted together.
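As an aside, the DFA method just mentioned can be sketched in a few lines. This is a minimal textbook version, not the authors' implementation: integrate the centred series, detrend it linearly in windows of size $n$, and measure the RMS fluctuation $F(n)$; the slope of $\log F$ versus $\log n$ then estimates the correlation exponent.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(n) for window sizes n."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        n_boxes = len(y) // n
        sq = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)
```

For uncorrelated noise the exponent is close to $1/2$; a long-range correlated series, such as the volatility, gives a larger value.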
We find an exponential drop-off for the return, with a time scale of minutes, but a power-law decrease for the volatility, with no finite time scale. Consider what this means: a time series with almost no autocorrelation, yet with extremely strong autocorrelation in its absolute value, or local variation. It is remarkable. The fast decay guarantees the validity of the Efficient Market Hypothesis, while the long-time autocorrelation of volatility makes it possible to construct a theory of risk management. Such work may therefore boost the development of risk management, perhaps even a reformation of it.

![Autocorrelation of return and volatility](auto_cor.eps){width="4in"}

[extracted from [@volatility], Exponential decay for return while power law decay for volatility]{} \[figauto\]

### Price and volume

All the analysis above is like kinematics: it answers how to describe the motion and what the motion is. The next question, in the tradition of Physics, is how such motion can arise. So let us next consider which factors affect the stock price, keeping our eyes on empirical study as far as we can. That demand and supply determine the price is a central law of Economics. Although it is stated for the price of goods, it may still be valid for stocks. This leads us to empirical studies of the order book, which examine the relation between price changes and transaction volume[@mastercurve; @demand]. For an individual stock, the authors recorded the transaction volume $\omega$ as the total volume of all transactions before the price changed, and defined the price shift as the difference of the logarithms of the prices after and before such a change, $$\Delta p\left(t_{i+1}\right)=\ln S\left(t_{i+1}\right) - \ln S\left(t_{i}\right).$$ Plots of price shift ($\Delta p\left(t\right)$) vs transaction volume ($\omega\left(t\right)$) are presented in the upper part of figure \[figdemand\].
In the lower part, the authors show that all curves can be collapsed onto a common line by rescaling, so this is again a universal law for all stocks.

![Master curve for the impact of volume on price](demand.eps){width="5in"}

[extracted from [@mastercurve], data collapse onto a master curve]{} \[figdemand\]

From the above results, it seems that the stock price is determined solely by the transaction volume. Sadly, the transaction volume is in turn determined by the price; there is no direct way to predict it. The two must be determined together by other predictable or known variables. So suppose we have only one stock, its whole history is already known, the performance and activity of the enterprise are predictable by other means, and so is the external economic environment, at least in a statistical sense, meaning that if they are random variables we know their distributions and correlations. Under such conditions, is the future of this stock deterministic, and is it predictable or chaotic? Or can we at least reproduce data with the same statistical characteristics as the empirical data? If that is possible, what are the central variables, and how can the description be generalized from a single stock to an ensemble of stocks?

### Toy models

The questions above call for a mechanistic model of stocks. It may not be possible to reproduce the exact time series, but if the stylized statistical facts are reproduced, the model is satisfactory in a physical sense, though not in the sense of making money. Let us check which central variable is left after we have fixed so many things: the initial condition (even the history), the boundary condition (only one stock), and the external variables (enterprise and environment). The only thing left is how stockholders buy and sell the stock at different prices, and how the price is affected by the transactions. The first idea here is that the activities of all stockholders affect one another.
Such interaction may be indirect, through the price and the market, or external, for instance through personal relationships. As is traditional in Physics, a first approximation is to treat every stockholder as independent, so that they affect one another only through the market. As in a spin model, every stockholder has a unit volume to buy or sell at each time step. Buying raises the price, while selling lowers it. Everyone tries to make more money in this game. With that, a toy model for the mechanism of stock prices has been constructed. Once the details of each player's payoff evaluation and the price impact of one unit volume are set, the toy model evolves on its own, of course once some specific behaviour of all the external variables is also settled. Although it is only a toy model, we can use it to test some fundamental notions, such as payoff and the rational agent, and to try different forms of the external variables. For example, we can take the external variables to be random signals with a fixed distribution and no autocorrelation of any order. Our task then becomes: how must the model be constructed so as to reproduce the empirically observed autocorrelation behaviour from input data with no autocorrelation? And if the output data is totally incomparable with the empirical data, we may have to add something we have neglected, such as the relations between stocks. After all, a phone call from a close friend may change your decision, so it is quite possible that we have to take such interactions into account. The model in [@spin] is representative of such toy models. Even after trying many interaction forms, or even coevolving the interaction strengths together with the stock price, the output may still be incomparable with the empirical data. Then we would have to include the interaction between different stocks, and perhaps go further to a coevolving system that includes the behaviour of the enterprises.
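A minimal numerical sketch of such a toy model might look as follows. Everything here is an illustrative assumption of ours, not the model of [@spin]: the contrarian decision rule, the impact constant and the parameter values are all invented for the example.

```python
import numpy as np

def toy_market(n_agents=100, n_steps=500, impact=0.02, temp=0.1, seed=0):
    """Toy spin-like market: each step every agent buys (+1) or sells (-1) one
    unit volume, and the log-price moves in proportion to the net demand."""
    rng = np.random.default_rng(seed)
    log_p = np.zeros(n_steps + 1)
    for t in range(n_steps):
        # Contrarian rule (our assumption): buying becomes less likely
        # the further the price sits above its running mean.
        drive = log_p[t] - log_p[:t + 1].mean()
        p_buy = 1.0 / (1.0 + np.exp(drive / temp))
        spins = np.where(rng.random(n_agents) < p_buy, 1, -1)
        log_p[t + 1] = log_p[t] + impact * spins.sum() / n_agents
    return np.exp(log_p)
```

Whether such a minimal rule reproduces the stylized facts discussed above is exactly the kind of question the text poses.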
Oh no, wait a minute: this is no longer the way of Physics. More and more variables, more and more subsystems, an uglier and uglier picture. It should not be so. The Physics of Complex Systems tells us that perhaps only a few variables rule the system. So the toy model may imply something valuable. We therefore come back to empirical study and toy models, but in another way: the way that keeps Physics in mind.

### Goods, options and others

Not only stocks, but also exchange rates, goods and options are under analysis nowadays. However, the universal results found for stocks do not seem to hold for other goods and options. An empirical study of land prices[@land] finds a highly skewed, heavy-tailed distribution of the price ($S\left(t\right)$) and a power-law distribution of the relative price ($r\left(t\right)=\frac{S\left(t+1\right)}{S\left(t\right)}$). An empirical study of the returns of options shows an asymmetric power-law distribution[@option]. And not only prices: waiting times can also be taken into consideration. In a real stock market, transactions do not occur every minute or every half minute; the waiting time is itself a random variable, and the price changes only when transactions happen. This is the so-called continuous-time stochastic process. Empirical and modelling work on it has just started[@time1; @time2; @time3].

Distribution of firm sizes, GDP, personal income and wealth
-----------------------------------------------------------

Interactions between different communities, such as trade, cooperation and competition, play important roles in the economy. As a result of such activities, the wealth distribution carries valuable information for researchers investigating the properties of these interactions. So the second active main topic in Econophysics concerns size distributions. The size of a firm can be measured by employees, sales or capital; the corresponding measure is GDP for a country, and income or wealth for a person.
### Distribution of size

In [@firmaxtell], the author collected data covering more firms, especially small firms, over a longer history than the database used in [@zipf], so the power-law distribution found there seems more convincing than the log-normal distribution reported in the latter. The important characteristic of this distribution is its universality. Different measures of size, such as total employees, sales, assets and capital, give the same distribution, and it does not depend on time, even during years of significant change in the workforce. Further investigation[@firmsincountries] shows it is also a common law across different countries. A typical distribution is shown in figure \[figfirmsize\].

![Zipf plot of sizes of firms](firm_size.eps "fig:"){width="3in"} ![Zipf plot of sizes of firms](firm_size_2.eps "fig:"){width="3in"}

[extracted from [@firmaxtell; @firmsincountries], universal power law distribution.]{} \[figfirmsize\]

Similar results have been obtained for the GDP of countries all over the world: a power-law distribution of GDP per capita across countries has been revealed in [@GDP1; @GDP2]. For individuals, such distributions can be analysed through personal income or wealth. A typical result[@wealth1; @wealth2] is shown in figure \[figwealth\]. The lower-income part looks like an exponential distribution, while the higher part is a power law. From our experience with the ideal gas, we know that the equilibrium energy distribution of a random-exchange system is exponential. So perhaps, in the lower-income community, cooperation and competition between individuals work in a way similar to random exchange, while in the higher-income part a different kind of interaction, such as preferential attachment, plays the more important role.
![Distribution of individual income and wealth](income.eps "fig:"){width="3.5in"} ![Distribution of individual income and wealth](wealth.eps "fig:"){width="3.5in"}

[extracted from [@wealth1], Exponential distribution for lower part while power law distribution for high tail.]{} \[figwealth\]

### Growth rate

The growth rate of firm size is defined similarly to the return, as $$r\left(t\right)=\ln\left(S\left(t+1\right)/S\left(t\right)\right).$$ In an ensemble of firms, every firm has its own track, and at every time we have a cross-sectional data set covering all firms. In the tradition of Statistical Physics, the analysis can proceed along two routes: following individual time series, or working with the cross-sectional data. In an ensemble of identical systems, the two routes give the same result. Here, although we can assume that all firms act according to a common rule, which is the rule we want to find, our ensemble does not consist of identical systems. The compromise is to treat firms of the same size as identical systems, to discard the time information, and to mix them together. We then obtain a conditional distribution function for each size, $p\left(r|s_{0}\right)$, where $s_{0}$ is the initial firm size. Strictly, such an analysis follows the first route, tracking a fixed firm to obtain $p\left(r|s_{i0}\right)$, where $i$ labels the firm; we then go a little further and combine the tracks that start at the same size.
The distribution of growth rates, shown in figure \[figrate\], is a Laplace distribution, $$p\left(r|s_{0}\right)=\frac{1}{\sqrt{2}\sigma\left(s_{0}\right)}\exp{\left(-\frac{\sqrt{2}\left|r-\bar{r}\right|}{\sigma\left(s_{0}\right)}\right)}.$$

![Growth rate distribution of firm sizes and GDPs ](firm_growth.eps "fig:"){width="3in"} ![Growth rate distribution of firm sizes and GDPs ](GDP_growth.eps "fig:"){width="3in"}

[extracted from [@firmgrowth; @GDP2], universal Laplace distribution]{} \[figrate\]

A similar growth-rate analysis has been done for GDP. The gross growth rate of GDP is defined as $$p_{i}\left(t\right)=\ln\left(\frac{G_{i}\left(t+1\right)}{G_{i}\left(t\right)}\right).$$ But because of the long-term growth trend of the economy, this endogenous, unknown trend has to be excluded before the fluctuations can be analysed. In [@GDP2], the authors suggested the decomposition $$p_{i}\left(t\right) = \delta_{i} + \phi\left(t\right) + r_{i}\left(t\right),$$ where $\delta_{i}$ is the long-term expected endogenous growth rate, $\phi\left(t\right)$ is a fluctuation common to all countries, and $r_{i}\left(t\right)$ is the residual, the fluctuation we want to investigate. It shows the same Laplace distribution, as in figure \[figrate\].

### Relation between fluctuation and size

From experience in Statistical Physics, the relation between fluctuation and size usually carries important information about the underlying processes[@firmgrowth]. In an ideal gas of independent particles, for instance, the magnitude of the fluctuation scales as the inverse square root of the system size, $$\sigma\left(N\right)\sim N^{-\frac{1}{2}}.$$ The corresponding analysis can be carried out for the growth rates of firm size and GDP. A power-law relation, but with an exponent different from $-\frac{1}{2}$, has been revealed by researchers[@firmgrowth; @GDP2].
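As a small worked example of the Laplace form above: its maximum-likelihood estimates are the sample median for the location and, up to the $\sqrt{2}$ convention used in the text, the mean absolute deviation for the scale. The sketch below checks this on synthetic growth rates with invented parameters, not real firm or GDP data.

```python
import numpy as np

def fit_laplace(r):
    """ML fit of p(r) = exp(-sqrt(2)|r - m| / sigma) / (sqrt(2) sigma)."""
    r = np.asarray(r, dtype=float)
    m = np.median(r)                               # ML location estimate
    sigma = np.sqrt(2.0) * np.mean(np.abs(r - m))  # matches sigma in the text
    return m, sigma

# Synthetic growth rates drawn from a Laplace law (hypothetical parameters).
# numpy's scale b relates to the text's sigma by sigma = sqrt(2) * b.
rng = np.random.default_rng(2)
r = rng.laplace(loc=0.05, scale=0.5, size=20000)
m_hat, sigma_hat = fit_laplace(r)
```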
Moreover, this exponent is universal: it is the same for different measures of firm size, independent of time and location, and the values for firm size and for GDP are very close. Results are shown in figure \[figfluct\]. So perhaps this implies a common mechanism for firms and GDPs.

![Relation between variation and size](firm_fluct.eps "fig:"){width="3.5in"} ![Relation between variation and size](GDP_fluct.eps "fig:"){width="3.5in"}

[extracted from [@firmgrowth; @GDP2], Power law with similar exponent near $0.15$.]{} \[figfluct\]

Complex Networks of economy systems
-----------------------------------

The economy is a many-body system comprising agents (individuals, firms, countries), goods (products and services), and subsystems (the financial system, manufacturing, agriculture, the service industry), all of which interact with each other. A general framework developed recently for describing such systems is that of Complex Networks. In a complex network, every agent is represented by a vertex, and the interaction between any two agents is described by a link between the corresponding vertices. Furthermore, link weights can represent the strength of the interaction, and directed links can be used when the interaction is not symmetric. A recent development of this kind is the web of trade[@tradeweb1; @tradeweb2], in which the vertices are countries and the links are import/export relations. Its basic structure and efficiency have been analysed, revealing, for example, a high clustering coefficient and a scale-free degree distribution. Another widely used network of the economic system describes the interactions between stock agents. Every stockholder is a vertex, and the effect of one agent's decision on another is a directed link from the former vertex to the latter. The network then acts as a whole to drive the stock price, and its geometrical character has important effects on the dynamical behaviour of the price.
Therefore, such an investigation may reveal the interaction pattern between stock agents. A third proposed line of work on networks of economic systems is the network analysis of the product input/output table. As in the predator-prey relationships of a food web, every product is made from other products or raw materials, and in turn becomes an input of other products, so the input/output relationships between products form a network. The input/output table analysis of Economics has the same spirit, but at a highly aggregated level, and it asks different questions. So, although a database of product-level relations is what we ultimately need, an aggregated product-relation data set can already be used for a first analysis of the basic structural characteristics; further work will require detailed data on the input/output relations of products. The construction and analysis of such an aggregated product network is in progress[@klaus]. Characteristics of the degree distribution, clustering coefficient, weights and weight distribution, and average shortest distance have been obtained, but the universality of these properties needs to be tested on more networks. The links between products can be regarded as technologies, so a score analysis such as link betweenness will show the relative importance of different techniques, and may therefore suggest new directions for the development of technology. Further questions about the robustness of such networks can be asked: how much total product would be lost if one or several intermediate products were in shortage, or if the resource distribution were changed; how much would be lost if one or several links (techniques) were removed; or, inversely, how much total product would grow if a new link were invented. Such investigations connect traditional questions in Economics, such as resource allocation, social welfare and the effect of new technology, with the network analysis of products.
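As a toy illustration of such a product web (the products and links below are invented for the example, not real input/output data), basic structural scores such as in- and out-degrees can be computed directly:

```python
from collections import defaultdict

# Hypothetical make-from web: an edge (u, v) means product u is an input of v.
edges = [("ore", "steel"), ("coal", "steel"),
         ("steel", "machine"), ("steel", "car"),
         ("machine", "car"), ("rubber", "car")]

in_degree, out_degree = defaultdict(int), defaultdict(int)
for u, v in edges:
    out_degree[u] += 1
    in_degree[v] += 1

# A crude importance score: how many other products a given product feeds.
most_used_input = max(out_degree, key=out_degree.get)
```

On real input/output data one would go on to link weights, shortest distances and link betweenness, as discussed above.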
It can have a far-reaching effect both on Economics and on network analysis.

Why Econophysics?
====================

We have reviewed Econophysics above, covering empirical studies and models on three topics. Now let us discuss the question: why Econophysics? Since the dynamics of stock prices is also a topic of Mathematical Finance, what is the difference between that field and Econophysics? And if Physics provides insightful tools for this new field, can Physics also benefit from it?

Physics as tools
----------------

First, let us discuss the role of Physics as a toolbox in Econophysics: the application of concepts, models and methods developed in Physics.

### Physics as analysis method

Physics has many years of experience in dealing with many-body systems and complex systems. Concepts and techniques such as ensemble statistics and correlation and autocorrelation analysis have been widely used to reveal the properties of economic phenomena. More importantly, experience in Physics helps us understand what those properties imply. For instance, a power law is usually related to critical phenomena, including the critical points of equilibrium and non-equilibrium phase transitions, and so are long-range order and strong autocorrelation. Also, as pointed out in [@firmgrowth], the relation between variation and size hints at the form of the interaction. Another central analysis method transplanted from Physics is data collapse and universality. If relation curves from different systems can be collapsed onto a master curve by scaling, it is quite possible that those systems share a common mechanism. And if an empirical or theoretical relation is independent of the time period and of various details of the objects, it is called universal. When a universal law is found across different systems, those systems must be equivalent in some way. This implies a common mechanism, and the others can be understood once we understand one of them well.
Therefore, it opens a new way to investigate such systems, especially when models with similar properties from Physics and other fields can serve as reference models for economic phenomena.

### Physics as reference model

Spin models are widely used to describe human decisions in the stock market[@spin] and in other economic activities[@division]. Usually, the status of an agent is one of $\left\{1,0,-1\right\}$, interpreted as buying, waiting and selling, or one of $\left\{1,-1\right\}$. The status space of the whole system of $N$ agents is then $\left(S_{1},S_{2},S_{3},\cdots,S_{N}\right)$. The benefit of every agent is determined by a payoff function $E\left(\vec{S},\vec{J},IEs\right)$, in which $\vec{J}$ are the interaction constants of all orders and $IEs$ are internal variables such as the stock price, or external information such as the environment and the behaviour of the enterprise. Everyone intends to maximize his own benefit in a statistical way[@spin; @division], as $$\omega_{i}\left(S_{i}\left(t\right)\rightarrow S_{i}\left(t+1\right)\right)\sim e^{\frac{\Delta E_{i}}{T}},$$ in which $T$ is an average evaluation coefficient, measuring the effect of one unit of benefit on an agent's decision. This form of human decision actually comes from the ensemble distribution of Statistical Mechanics: in a Metropolis simulation of a spin system, the probability for a spin to change its status is governed by a similar form. An ensemble distribution here means that, in a many-body system, although everyone tries to stay at his maximum, statistically the final state is distributed much like an ensemble. Such applications give some reasonable results, although the approach is not fully equivalent to the assumption in Economics that every agent must sit exactly at his maximum rather than following a distribution.
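The statistical decision rule above can be sketched directly. This is a minimal version with an invented payoff difference, following the usual Metropolis convention: a beneficial status change is always accepted, a harmful one is accepted with Boltzmann-like probability.

```python
import math
import random

def flip_probability(delta_e, temp):
    """Probability to change status for a benefit change delta_e,
    following the Metropolis-like rule w ~ exp(delta_e / T)."""
    return min(1.0, math.exp(delta_e / temp))

def decide(state, delta_e, temp, rng=random):
    """Flip the agent's status (+1 <-> -1) with the probability above."""
    return -state if rng.random() < flip_probability(delta_e, temp) else state
```

In the limit $T \to 0$ the rule reduces to strict benefit maximization, the standard assumption of a rational agent in Economics.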
In Mechanics, the state of a physical object is determined by Newton's equations or the principle of least action, but for a many-body system in Statistical Mechanics, the ensemble distribution is used instead. Although it is not deduced from first principles, it works widely; perhaps a similar approach can be developed in Economics. The ideal gas is another reference model widely used in Econophysics[@money]. In a first-order approximation of competition and cooperation between firms, or between individuals, every agent can be regarded as randomly exchanging wealth with the others, just as particles randomly exchange energy in an ideal gas, so the equilibrium distribution takes an exponential form. It is striking that the central part of the personal wealth distribution is indeed exponential. Possible generalizations include random interaction models with not only random exchange but also random increase or decrease processes, or biased exchange models such as preferential exchange, in which the rich have a higher probability of getting richer.

Economics as Physics
--------------------

In the section above, we discussed the role of Physics in Econophysics. In this section we ask the inverse question: what can Physics gain from Econophysics, beyond serving as an insightful tool?

### Economy systems as physical objects

Frankly, physicists are somewhat aggressive, and so is Physics: whenever a question asks for the reason behind a phenomenon in a common, fundamental way, physicists consider it a physical question. Econophysics is such an example. It picks out special phenomena from Economics and asks for their reason, or mechanism, in physical language. Most phenomena of concern in Econophysics exhibit universality, independent of the time period, the details of the systems, and even the economic structures of different countries. Such questions very much resemble asking for the behaviour of a system with known interactions, or for the interaction form of a system with known behaviour.
That is a typical physical question. Just as the DFA method was proposed by researchers in Statistical Physics working on DNA sequences and physiological signals, new techniques can also be invented within Econophysics; hopefully not only techniques, but also concepts and fundamental approaches. For instance, the effect of geometrical properties such as dimension and curvature on dynamical behaviour is an important question in Physics, widely studied in Relativity, Quantum Physics, and especially Phase Transitions and Critical Phenomena. So if geometrical quantities can be defined for complex systems, and their effect on dynamical processes is known, the behaviour of such systems becomes partially predictable just by grasping their geometrical properties. For example, in principle the made-from relationships between all products are tractable, so the network can be explicitly constructed, and even part of its history is known, such as what changed when a technological reform happened. So Economics provides some nearly perfect treasure for Physics. Moreover, the special character of such networks will certainly require new quantities and techniques to describe their effects, which may in turn boost the development of Physics.

### Natural parts of human behavior

Not all human behaviours are rational or deterministic; think of impulse and inspiration. But some of them are determined by the environment, at least in a statistical sense. Personal character affects human decisions, but if all the other factors could be determined in a physical way, for instance by a dynamical equation, and the statistical properties of the personal characters in the system were known, it would be easy to predict the behaviour of the system. So the most valuable question left is whether we can describe the economic system and human behaviour in a physical way as far as possible, leaving aside only what is unknowable in the physical sense; and if so, how. I think Econophysics is a good attempt in this direction.
Economics is a science of human behaviour, but fortunately it is not entirely a science of human creativity and inspiration like fine art. This means that some part of Economics, even most of it according to the mathematicians working in the field, can be modelled in an abstract or mathematical form. It is interesting to point out that Physics is the most famous masterpiece of applying Mathematics to nature, more so than any other field of Applied Mathematics. So it is natural to incorporate Physics into Economics, as one imitates masterpieces. Through such exploration, Physics may come to be widely used in social science. This would greatly extend the scope of Physics, and might help it deal with hard topics such as turbulence, or complex systems more generally.

Conclusion – Is Econophysics a subject of Physics?
==================================================

At the least, Econophysics provides, invents and develops tools for the analysis of economic phenomena, and the investigation of economic systems extends the scope of Physics. But will Econophysics affect concepts and modes of thought in Physics? That depends on the future. However, we are sure that both Economics and Physics can benefit from this exploration. Therefore, as researchers in Physics in the new century, or potential economists, should we learn from each other?

Acknowledgement
===============

Thanks are given to Fukang Fang and Zhanru Yang for their stimulating discussions, and to the 2002 graduate students of the System Science Department for their warm discussion and good questions. The author Wu wants to thank Qian Feng for her encouragement and understanding. This work is partially supported by the National Natural Science Foundation of China under Grants No. 70371072 and No. 70371073.

[99]{} R. N. Mantegna and H. E. Stanley, [*An Introduction to Econophysics: Correlations and Complexity in Finance*]{} (Cambridge University Press, Cambridge, England, 1999). J.-P. Bouchaud and M.
Potters, [*Theory of Financial Risk*]{} (Cambridge University Press, Cambridge, England, 1999). R. Cont, Empirical properties of asset returns: stylized facts and statistical issues, Quantitative Finance [**1**]{}, 223-236 (2001). H.E. Stanley, P. Gopikrishnan, V. Plerou, L.A.N. Amaral, Quantifying fluctuations in economic systems by adapting methods of statistical physics, Physica A [**287**]{}, 339-361 (2000). P. Gopikrishnan, V. Plerou, Y. Liu, L.A.N. Amaral, X. Gabaix and H.E. Stanley, Scaling and correlation in financial time series, Physica A [**287**]{}, 362-373 (2000). V. Plerou, P. Gopikrishnan, L.A.N. Amaral, M. Meyer and H.E. Stanley, Scaling of the distribution of financial market indices, Phys. Rev. E [**60**]{}, 5305-5316 (1999). V. Plerou, P. Gopikrishnan, L.A.N. Amaral, M. Meyer and H.E. Stanley, Scaling of the price fluctuations of individual companies, Phys. Rev. E [**60**]{}, 6519-6529 (1999). Rogério L. Costa, G.L. Vasconcelos, Long-range correlations and nonstationarity in the Brazilian stock market, Physica A [**329**]{}, 231-248 (2003). Y. Liu, P. Gopikrishnan, P. Cizeau, M. Meyer, C. Peng and H.E. Stanley, Statistical properties of the volatility of price fluctuations, Phys. Rev. E [**60**]{}, 1390-1400 (1999). A. Krawiecki, J.A. Holyst and D. Helbing, Volatility clustering and scaling for financial time series due to attractor bubbling, Phys. Rev. Lett. [**89**]{}, 158701 (2002). M. Raberto, R. Gorenflo, F. Mainardi, E. Scalas, Scaling of the waiting-time distribution in tick-by-tick financial data (poster presented at the international workshop Economic Dynamics from the Physics Point of View, Bad Honnef, Germany, March 2000). L. Sabatelli, S. Keating, J. Dudley, and P. Richmond, Waiting time distributions in financial markets, Eur. Phys. J. B [**27**]{}, 273-275 (2002). E. Scalas, R. Gorenflo, F. Mainardi, Fractional calculus and continuous-time finance, Physica A [**284**]{}, 376-384 (2000). M. A. Serrano and M.
Boguna, Topology of the world trade web, Phys. Rev. E [**68**]{}, 015101 (2003). X. Li, Y.Y. Jin, and G. Chen, Complexity and synchronization of the World Trade Web, Physica A [**328**]{}, 287-296 (2003). F. Lillo, J.D. Farmer and R.N. Mantegna, Master curve for price-impact function, Nature [**421**]{}, 129-130 (2003). V. Plerou, P. Gopikrishnan and H.E. Stanley, Quantifying stock-price response to demand fluctuations, Phys. Rev. E [**66**]{}, 027104 (2002). F. Lillo and R. N. Mantegna, Ensemble properties of securities traded in the NASDAQ market, Physica A [**299**]{}, 161-167 (2001). F. Lillo and R.N. Mantegna, Variety and volatility in financial markets, Phys. Rev. E [**62**]{}, 6126-6134 (2000). J.R. Iglesias, S. Goncalves, S. Pianegonda, J.L. Vega and G. Abramson, Wealth redistribution in our small world, Physica A [**327**]{}, 12-17 (2003). M. Stanley, S. Buldyrev, S. Havlin, R. Mantegna, M. Salinger, H.E. Stanley, Zipf plots and the size distribution of firms, Economics Letters [**49**]{}, 453-457 (1995). R.L. Axtell, Zipf distribution of U.S. firm sizes, Science [**293**]{}, 1818-1820 (2001). L.A.N. Amaral, S.V. Buldyrev, H. Leschhorn, P. Maass, M. A. Salinger, H.E. Stanley and M.H.R. Stanley, Scaling behavior in Economics: I. Empirical results for company growth, J. Phys. I France [**7**]{}, 621-633 (1997). J.J. Ramsden and Gy. Kiss-Haypál, Company size distribution in different countries, Physica A [**277**]{}, 220-227 (2000). L.A.N. Amaral, S.V. Buldyrev, S. Havlin, M.A. Salinger and H.E. Stanley, Power law scaling for a system of interacting units with complex internal structure, Phys. Rev. Lett. [**80**]{}, 1385-1388 (1998). Y. Lee, L.A.N. Amaral, D. Canning, M. Meyer, and H.E. Stanley, Universal features in the growth dynamics of complex organizations, Phys. Rev. Lett. [**81**]{}, 3275 (1998). Di Guilmi, Corrado, Edoardo Gaffeo, and Mauro Gallegati, Power law scaling in the world income distribution, Economics Bulletin [**15**]{}, No. 6, pp.
1-7(2003). D. Canning , L.A.N. Amaral , Y. Lee , M. Meyer , H.E. Stanley, Scaling the volatility of GDP growth rates, Economics Letters [**[60]{}**]{}, 335-341(1998). A.A. Drǎgulescu and V.M. Yahovenko, Exponential and power-law probability distributions of wealth and imcome in the United Kingdom and The Unite States, Physica A [**299**]{}, 213-221(2001). A.A. Drǎgulescu and V.M. Yahovenko, Evidence for the exponential distribution of income in the USA, Eur. Phys. J. B [**20**]{}, 585-589(2001). T. Kaizoji, Scaling behavior in land markets, Physica A [**[326]{}**]{}, 256-264(2003). A.A. Drǎgulescu and V.M. Yahovenko, Statistical mechanics of money, Eur. Phys. J. B [**17**]{}, 723-729(2000). J.L. McCauley and G.H. Gunaratne, An empirical model of volatility of returns and option pricing, Physica A [**[329]{}**]{}, 178-198(2003). G. Bonanno, G. Caldarelli, F. Lillo, and R.N. Mantegna,Topology of correlation based minimal spanning trees in real and model markets, arXiv:cond-mat/0211546 (2002). C.-K. Peng, S.V. Buldyrev, S. Havlin, M. Simons, H.E. Stanley, A.L. Goldberger, Mosaic organization of DNA nucleotides. Phys Rev E [**[49]{}**]{}, 1685-1689(1994). J. Wu, Z. Di, and Z.R. Yang, Division of labor as the result of phase transition, Physica A [**[323]{}**]{}, 663-676(2003). H. Klaus, Jinshan Wu, Zengru Di, and Jiawei Chen, Structure of production networks, in preparation. [^1]: Email: zdi@bnu.edu.cn
--- abstract: 'We use the swap Monte Carlo algorithm to analyse the glassy behaviour of sticky spheres in equilibrium conditions at densities where conventional simulations and experiments fail to reach equilibrium, beyond predicted phase transitions and dynamic singularities. We demonstrate the existence of a unique ergodic region comprising all the distinct phases previously reported, except for a phase-separated region at strong adhesion. All structural and dynamic observables evolve gradually within this ergodic region, the physics evolving smoothly from well-known hard sphere glassy behaviour at small adhesions and large densities, to a more complex glassy regime characterised by unusually-broad distributions of relaxation timescales and lengthscales at large adhesions.' author: - 'Christopher J. Fullerton' - Ludovic Berthier bibliography: - 'sticky\_spheres.bib' title: 'Glassy behaviour of sticky spheres: What lies beyond experimental timescales?' --- Steeply repulsive particles with very short-range attractive forces (‘sticky spheres’) are experimentally realised with colloids [@hunter2012physics; @gonzalez2016colloidal]. When the attraction range is small compared to the particle size, the physics of sticky spheres differs qualitatively from that of atomic liquids [@baxter1968percus; @noro2000extended; @sciortino2002one]. Sticky spheres thus represent a unique paradigm for the statistical mechanics of soft materials and simple fluids, motivating a large number of theoretical studies and experiments. The phase diagram of sticky spheres is explored by changing the volume fraction and the adhesion strength, showing interesting behaviour at low (clustering and phase separation [@post1986cluster]) and large (crystallisation [@bolhuis1997isostructural; @lee2008effect], glassy dynamics [@pham2004glasses]) volume fractions. Over the last two decades, the glass transition of sticky spheres received considerable attention. 
This effort gathered momentum when the mode-coupling theory (MCT) of the glass transition [@gotze2008complex] was applied to the square-well potential to predict the phase behaviour and glassy dynamics of sticky spheres [@bergenholtz1999nonergodicity; @dawson2000higher; @dawson_2001; @gotze2003higher; @sperl_2004]. The predicted existence of two types of glass transition, of reentrant glassy dynamics, and of a glass-glass phase transition line ending at a singular critical point giving rise to non-trivial relaxation patterns triggered massive theoretical [@geissler2005short; @sellitto2013thermodynamic; @ghosh2019microscopic; @ghosh2020microscopic], numerical [@zaccarelli2001mechanical; @puertas_2002; @zaccarelli_2002; @foffi2002evidence; @zaccarelli_2003; @sciortino_2003; @zaccarelli_2004; @saika_voivod_2004; @reichman2005comparison; @moreno_2006] and experimental [@mallamace2000kinetic; @pham2002multiple; @eckert2002re; @pham2004glasses; @kaufman2006direct; @buzzaccaro_2007; @lu_2008; @zhang2011cooperative] efforts, which continue to this day. Published work is often torn between successes and failures of these MCT predictions. Two recent computational studies [@zaccarelli_2009; @royall_2018] offer contradicting conclusions even on basic features of glassy sticky spheres and important physical questions are left unanswered. There is a broad agreement on the existence of reentrant dynamics along isochores, non-trivial dynamic correlation functions at intermediate adhesion, and increasingly localised particle motion at large adhesion. On the other hand the existence and nature of the MCT liquid-glass and glass-glass lines, of various phases (equilibrium gel, attractive, repulsive, bonded and non-bonded glasses), and the interplay between gelation, glassiness and phase separation remain debated. 
Resolving these questions is technically difficult as large relaxation timescales plague both computer simulations and experiments, and prohibit the exploration of the equilibrium phase diagram. Informative non-equilibrium aging studies at large densities have been performed instead [@foffi2004aging; @zaccarelli_2004; @zaccarelli_2009]. Here we show that the swap Monte Carlo algorithm, which has recently provided an equilibration speedup larger than $10^{11}$ in several three-dimensional model glass-formers [@ninarello_2017] (including hard spheres [@berthier_2016; @coslovich2018local; @berthier2019bypassing]), performs equally well for dense sticky spheres. This decisive computational advance allows us to perform a complete exploration of the equilibrium phase diagram of sticky spheres, including regions at large densities where distinct phases were predicted or numerically reported. Our simulations instead reveal the existence of a broad ergodic fluid phase limited at large adhesions by a phase-separated region where non-equilibrium gelation may occur. Within the ergodic fluid, the dynamics is reentrant along isochores, and evolves smoothly from the well-known hard sphere limit to a more complex sticky glassy dynamics characterised by a broad hierarchy of relaxation timescales and lengthscales, but this appears distinct from the predicted MCT phases and singularities, which we do not observe. We describe sticky spheres using the well-studied system of hard spheres decorated with a short-range attractive square well. Particles separated by $r_{ij}$ have interaction energy $V(r_{ij} \leq \sigma_{ij}) = \infty$, $V(\sigma_{ij} < r_{ij} < \lambda\sigma_{ij}) = -u$, and $V(r_{ij} \geq \lambda\sigma_{ij}) = 0$, where $(\lambda-1)\sigma_{ij}$ defines the width of the attractive well, and $\sigma_{ij} = (\sigma_i+\sigma_j)/2$.
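As a quick illustration of the interaction just defined, here is a minimal Python sketch (not the authors' code); the value of $\lambda$ is derived from the relative well width $\epsilon=(\lambda-1)/\lambda=0.03$ used below, while the well depth $u$ defaults to an arbitrary illustrative value.

```python
import math

# Relative well width epsilon = (lambda - 1)/lambda = 0.03  =>  lambda = 1/(1 - 0.03)
LAMBDA = 1.0 / (1.0 - 0.03)

def square_well(r_ij, sigma_ij, lam=LAMBDA, u=3.0):
    """Square-well pair potential for sticky spheres.

    Hard core for r_ij <= sigma_ij, attractive well of depth u for
    sigma_ij < r_ij < lam*sigma_ij, and zero beyond the well.
    """
    if r_ij <= sigma_ij:
        return math.inf   # hard-sphere overlap
    if r_ij < lam * sigma_ij:
        return -u         # inside the attractive well
    return 0.0
```

In a Monte Carlo move, only the finite number of pairs whose separation crosses a well boundary changes the energy, so this piecewise form makes the acceptance test cheap.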
To compare with other studies, we use the relative width $\epsilon=(\lambda-1)/\lambda = 0.03$ of the square well [@zaccarelli_2002; @zaccarelli_2003; @zaccarelli_2004]. This value is often used because for it MCT predicts the existence of an $A_3$ singularity within the glass phase, close enough to affect the system’s dynamics at state points where it can be equilibrated on accessible timescales. We use a continuous distribution of particle diameters, $P(\sigma_{\mathrm{min}} \leq \sigma \leq \sigma_{\mathrm{max}}) = A/\sigma^3$, where $A$ is a normalisation constant. We choose $\sigma_{\mathrm{min}}$ and $\sigma_{\mathrm{max}}$ to provide a polydispersity of $\Delta = \sqrt{\langle\sigma^2\rangle - \langle \sigma \rangle^2}/ \langle \sigma \rangle = 23\%$. This choice simultaneously prevents crystallisation and makes the swap Monte Carlo algorithm efficient without adhesion [@berthier_2016]. We use Monte Carlo dynamics to explore the structure and dynamics of the system. Equilibration is achieved using swap Monte Carlo, with details as in [@berthier_2016; @ninarello_2017; @fullerton_2017]. To analyse the dynamics, we perform conventional Monte Carlo simulations, which describe glassy dynamics equivalently to Brownian and Molecular Dynamics [@berthier2007monte]. We simulate $N=1000$ particles in a periodic cubic box of volume $V$. The packing fraction is $\phi = \pi N \langle \sigma^3 \rangle/ (6V)$. We fix the temperature $k_B T = 1$ and vary the well depth and packing fraction to explore the $(u,\phi)$ phase diagram. Additional simulations with $N=8000$ are performed to investigate the phase separation boundary at large $u$. Times are measured in units of Monte Carlo steps, where a step represents $N$ attempted Monte Carlo moves (swap or translational), and distances in units of the average particle diameter $\langle \sigma \rangle$.
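The diameter distribution $P(\sigma)=A/\sigma^3$ has an elementary CDF, so it can be sampled by inverse-transform sampling. The sketch below is illustrative and not taken from the paper: the bounds $\sigma_{\mathrm{min}}=1$, $\sigma_{\mathrm{max}}=2.25$ are an assumption, chosen so that the resulting polydispersity comes out close to the quoted $\Delta=23\%$.

```python
import math
import random

def sample_diameter(sigma_min, sigma_max, rng):
    """Inverse-CDF sample from P(sigma) = A / sigma^3 on [sigma_min, sigma_max].

    CDF(sigma) = (sigma_min^-2 - sigma^-2) / (sigma_min^-2 - sigma_max^-2).
    """
    a, b = sigma_min ** -2, sigma_max ** -2
    u = rng.random()
    return (a - u * (a - b)) ** -0.5

def polydispersity(sigmas):
    """Delta = sqrt(<sigma^2> - <sigma>^2) / <sigma>."""
    n = len(sigmas)
    m1 = sum(sigmas) / n
    m2 = sum(s * s for s in sigmas) / n
    return math.sqrt(m2 - m1 * m1) / m1

rng = random.Random(0)
sigmas = [sample_diameter(1.0, 2.25, rng) for _ in range(200_000)]
delta = polydispersity(sigmas)  # close to 0.23 for these (assumed) bounds
```

The analytic moments $\langle\sigma^k\rangle = A\int\sigma^{k-3}\,d\sigma$ can be used to check the sampled value of $\Delta$.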
To quantify dynamics, we calculate the mean-squared displacement (MSD) defined as $\langle r^2(t) \rangle = (1/N) \sum_i |{\bf r}_i(t) - {\bf r}_i(0)|^2$, where ${\bf r}_i(t)$ is the position of particle $i$ at time $t$. The MSD is the second moment of the van Hove distribution of single particle displacements: $G_s(x,t) = \langle \delta (x -|x_i(t)-x_i(0) | ) \rangle$, for displacements along the $x$-direction (later averaged over all directions). We define the self part of the intermediate scattering function: $f({q}, t) = (1/N) \sum_j e^{i {\bf q}. ( {\bf r}_j(t) - {\bf r}_j(0))}$. We perform a spherical average at $|{\bf q}| = 7.8$, close to the first peak of the static structure factor, and define the structural relaxation time $\tau_{\alpha}$ as $f(|{\bf q}|=7.8, \tau_{\alpha}) = e^{-1}$. When the system is nearly arrested, we fit these functions using $f(q,t) = f_q + h_q [B_q^{(1)}\ln(t/\tau) +B_q^{(2)}\ln^2(t/\tau)]$ [@puertas_2002; @sciortino_2003], mainly to extract the non-ergodicity parameter $f_q$. To ensure efficient equilibration at large $\phi$, we use swap Monte Carlo simulations. At each state point, we define $\tau^{\rm swap}_\alpha$ via $f(q,t)$ measured in the presence of swap moves. Note that all particles (small and large) need to relax for this function to decay, which ensures full ergodicity. We consider our system as adequately equilibrated if it has been simulated longer than $4 \tau_{\alpha}^{\rm swap}$ [@ninarello_2017]. We collect independent equilibrium configurations at many state points $(u,\phi)$ to study static behaviour, and from these we launch many independent, conventional Monte Carlo simulations lasting up to $t_s = 5 \times 10^8$ MC steps to analyse the equilibrium dynamics over a broad time window, including at conditions where the physical relaxation time $\tau_\alpha$ is much larger than $t_s$. This is only possible thanks to the combined use of swap and conventional Monte Carlo.
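The dynamical observables defined above are straightforward to compute from stored configurations. A minimal sketch (not the authors' analysis code), assuming unwrapped particle coordinates (no periodic-image wrapping) and displacements measured from $t=0$:

```python
import cmath

def msd(traj):
    """Mean-squared displacement <r^2(t)>; traj[t][i] = (x, y, z) of particle i."""
    r0 = traj[0]
    n = len(r0)
    return [sum((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2
                for (x, y, z), (x0, y0, z0) in zip(frame, r0)) / n
            for frame in traj]

def f_self(traj, q):
    """Self intermediate scattering function at wavevector q = (qx, qy, qz)."""
    r0 = traj[0]
    n = len(r0)
    qx, qy, qz = q
    out = []
    for frame in traj:
        s = sum(cmath.exp(1j * (qx * (x - x0) + qy * (y - y0) + qz * (z - z0)))
                for (x, y, z), (x0, y0, z0) in zip(frame, r0))
        out.append((s / n).real)
    return out
```

In practice one would also average over several wavevectors with $|{\bf q}|=7.8$ and over independent time origins, as described in the text.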
![a) Equilibrium phase diagram $(u,\phi)$ of sticky spheres, with a large ergodic fluid (blue) and a phase separated region (green). The isochrone $\tau^{\mathrm{swap}}_{\alpha} = 10^7$ MC steps (full cyan line) limits the ergodic region at large $\phi$, whereas the isochrone $\tau_{\alpha} = 10^7$ MC steps (black dashed line) marks the limit of conventional simulations. The avoided MCT singularities are mapped in orange ending at the $A_3$ singularity (symbol). Grey areas are unexplored and the black cross corresponds to Fig. \[fig:logdecay\](a). b) Potential energy as a function of the time after a quench along the isochore $\phi=0.5$. c) Relaxation times with (blue) and without (red) swap at $u = 3.0$.[]{data-label="fig:PD"}](figure1.pdf){width="8.5cm"} The decisive progress provided by the swap algorithm can be appreciated in Fig. \[fig:PD\](a), which shows the equilibrium $(u,\phi)$ phase diagram. We distinguish two regions. The large blue area comprises state points where we achieved thermal equilibrium. This region extends to arbitrarily low $\phi$, and is limited at large $u \gtrsim 3.5$ by a phase-separated region. The ergodic region is limited at large $\phi$ by our ability to reach equilibrium. We empirically define the right-most boundary as the isochrone where $\tau_\alpha^{\rm swap} =10^7$ MC steps, but densities even larger than this $\phi \approx 0.65-0.66$ empirical boundary could presumably be explored by performing longer simulations. There is a single phase transition in Fig. \[fig:PD\](a), which is not described within the MCT approach. Holding $\phi$ constant and increasing $u$, the system phase separates into two phases with distinct densities. The potential energy after quenching from $u=0$ to different points along the isochore $\phi=0.5$ is shown in Fig. \[fig:PD\](b). The large jump in energy at long times between $u = 3$ and $u = 5$ indicates phase separation. 
The slow decrease at long times at high $u$ shows that the system coarsens [@testard2011influence]. The phase separation line in Fig. \[fig:PD\](a) is positioned by also considering the heterogeneous structure of the system at long times, which can be done up to large volume fractions near $\phi=0.6$ [@royall_2018]. At larger $\phi$, the amount of low-density phase becomes too small and the coarsening too slow to identify the phase separation clearly. Everywhere in the blue area of Fig. \[fig:PD\](a), the system is an ergodic fluid. The significance of this conclusion comes when considering the physical dynamics of the system. Increasing $\phi$ at constant $u$, the relaxation time $\tau_{\alpha}$ increases very fast and the system becomes arrested on the observational timescales, as shown for $u=3.0$ in Fig. \[fig:PD\](c). This figure also illustrates the giant speedup afforded by swap Monte Carlo at high $\phi$ for sticky particles. We report in Fig. \[fig:PD\](a) the isochrone $\tau_\alpha =10^7$ MC steps, which marks the limit where conventional simulations equilibrate. The isochrones with and without swap are parallel, but separated by a large gap $\Delta \phi \simeq 0.05$, nearly independent of $u$. This gap is the wide new territory being explored in equilibrium for the first time here. Crucially, all distinct phases reported previously for this system belong to the same ergodic fluid phase. We conclude that none of these phases actually exists as such, and the phase diagram is much simpler than anticipated [@dawson2000higher; @pham2002multiple; @zaccarelli_2009] with only two phases separated by the well-known discontinuous liquid-gas thermodynamic instability. Deep inside the phase separating region, coarsening towards a fully demixed state may become slow, but is never arrested [@testard2011influence]. 
Given the time window accessible to colloidal experiments, the phase separation is never complete and the system behaves as a colloidal gel [@foffi2005arrested; @manley2005glasslike; @lu_2008; @zaccarelli2008gelation], with physical properties that are slowly aging. This represents the non-equilibrium route to colloidal gelation [@zaccarelli2007colloidal]. The ergodic region contains all sharp features theoretically predicted by MCT which we map in Fig. \[fig:PD\](a) by following earlier work fitting our measured relaxation times to MCT power law predictions. The liquid-glass and glass-glass transition lines ending at the $A_3$ singularity all belong to the ergodic fluid. Therefore, they represent, at best, smooth physical crossovers [@ghosh2019microscopic]. Our demonstration that all ideal MCT singularities disappear in physical systems of sticky spheres echoes equivalent findings for molecular glasses [@berthier2011theoretical] and colloidal hard spheres [@brambilla2009probing]. ![$f(q,t)$ at various $q$ measured at a) the black cross $(\phi=0.630,u=2.5)$ or b) the yellow star $(\phi = 0.654, u=2.5)$ in Fig. \[fig:PD\]a. The near-logarithmic decay highlighted in a) for $q=26.3$ is no longer present closer to the putative $A_3$ singularity in b).[]{data-label="fig:logdecay"}](figure2.pdf){width="8.5cm"} Is the concept of an avoided $A_3$ singularity useful? Our model displays the physical behaviour expected for a system with competing attractive and repulsive interactions. The banana-shaped iso-$\tau_\alpha$ line in Fig. \[fig:PD\](a) implies reentrant glassy dynamics as $u$ varies along isochores. Reentrance is mathematically described by MCT via the existence of two distinct glass transition lines, but these are not required to explain it [@ghosh2019microscopic]. 
Much less trivial is the observation of a transient ‘logarithmic’ decay of $f(q,t)$ at well-chosen state points approaching the $A_3$ point [@zaccarelli_2002; @sciortino_2003] where the relaxation should become purely logarithmic [@gotze2003higher]. In Fig. \[fig:logdecay\](a), we show $f(q,t)$ at $(\phi =0.630,u=2.5)$ (black cross in Fig. \[fig:PD\](a)) for a range of wavevectors $q$. The decay time increases with decreasing $q$, showing that the system remains mobile on short length scales but is frozen on long length scales. At intermediate $q$-values, a nearly logarithmic time dependence holds over about 5 decades, a behaviour clearly distinct from the conventional two-step decay observed in most glassy materials [@berthier2011theoretical]. Previous work attributed this unusual dynamics to proximity to the $A_3$ singularity [@foffi2002evidence; @zaccarelli_2002; @sciortino_2003]. We can test this hypothesis directly by measuring the equilibrium dynamics much closer to the $A_3$ singularity, as in Fig. \[fig:logdecay\](b). We find that all hints of logarithmic behaviour are gone, the dynamics now being consistent with a simpler two-step decay. (At timescales much larger than those shown here, structural relaxation will eventually take place.) These data suggest that the existence of an $A_3$ singularity may not be the best physical way to interpret the unconventional dynamics in Fig. \[fig:logdecay\](a). It was shown, for instance, that by numerically tuning the strength of competing attractive and repulsive interactions [@zaccarelli_2003; @chaudhuri2010gel; @chaudhuri2015relaxation], a near-logarithmic decay may appear or disappear, or be replaced by a simpler multi-step decay. Our results dispel the possibility that several distinct phases characterize dense sticky spheres [@pham2002multiple; @zaccarelli_2009]. No sharp distinction exists between attractive, repulsive, bonded and non-bonded glasses.
Instead, we now show that increasing adhesion smoothly changes the physics between two qualitatively distinct types of glassy dynamics. To see this, we explore the large $\phi$ region using several paths in the phase diagram changing either $u$ or $\phi$. ![Evolution of the MSD with packing fraction for a) $u=0$ and b) $u =3.0$. The equilibrium glassy physics at short timescales and lengthscales for the adhesive system is different from that of hard spheres.[]{data-label="fig:MSD"}](figure3.pdf){width="8.5cm"} Glassy dynamics is encountered for any $u \lesssim 3.5$ as $\phi$ is increased, see Fig. \[fig:MSD\]. In all cases, the diffusion constant drops by several orders of magnitude as $\phi$ increases, until diffusion becomes too slow to be observed. However, interesting differences can be seen between repulsive and sticky particles. When $u=0$, the MSD displays a well-defined plateau, whose amplitude decreases smoothly with $\phi$. For $u=3.0$ no well-defined plateau can be seen, even for packing fractions as large as $\phi=0.66$ (remember that all data are taken in equilibrium). The plateau is replaced by a slow subdiffusive regime that extends over 7 decades in time. At large $u$, the physics at short timescales and lengthscales is different from, and much more complex than, that of hard sphere glasses. This differs from ideas of an attractive [@dawson2000higher; @zaccarelli_2003] or a bonded [@zaccarelli_2009] glass, and is not to be confused with non-equilibrium gelation either [@royall_2018]. The sharp distinction between attractive and repulsive glasses is nonexistent, but in the regime $u \approx 2.5-3.5$ between phase separation and hard spheres the system exhibits unusual glassy dynamics. We characterize this regime further in Fig. \[fig:nonerg\_and\_MSD\] by changing $u$ along the $\phi=0.65$ isochore, which crosses the (putative) glass-glass line very close to the $A_3$ singularity.
This isochore lies in the region where equilibration can only be achieved using swap. In Fig. \[fig:nonerg\_and\_MSD\](a), we show the non-ergodicity parameter. At all wavevectors $f_q$ is higher at $u = 3.5$ than it is at $u = 0$, showing that stronger adhesion means less mobility at all lengthscales. The change in $f_q$ is greatest for large $q$ (short lengthscales). When $u$ is small, particles are free to move within the hard sphere cages but are immobilised on long lengthscales. As $u$ increases, the attractive well can trap (or ‘bond’ [@zaccarelli_2009]) particles at much shorter distances. Attractive interactions also destabilise the hard sphere glass, which results in a slight non-monotonic behaviour of $f_q$ at small $q$ near $u=1.5$. Again, $f_q$ varies smoothly with $u$ (this is true across a range of $\phi$) in contrast to the sharp jump predicted across the MCT glass-glass line. ![Evolution of a) the non-ergodicity parameter, b) the mean-squared displacement, c) the intermediate scattering function, d) the van Hove function along the isochore $\phi=0.65$ at equilibrium.[]{data-label="fig:nonerg_and_MSD"}](figure4.pdf){width="8.5cm"} The marked (but gradual) evolution along the $\phi=0.65$ isochore is further illustrated in Figs. \[fig:nonerg\_and\_MSD\](b,c) showing the time dependence of $\langle r^2(t) \rangle$ and $f(q,t)$. These functions change dramatically in the range $u \in [0, 3.5]$. At small $u$ a well-developed plateau exists: the particles are caged by repulsive interactions with their neighbours. The approach to this long-lived (6 decades in time) plateau is fast. As $u$ increases clear signs of a structural relaxation speedup appear at long times, together with a weakening of the plateau. Increasing $u$ further the fast approach to a plateau gets replaced by a slow sub-diffusion (in $\langle r^2(t) \rangle$), or a slow decay (in $f(q,t)$). 
This shows that at large $u$ particles are neither caged nor bonded, but instead get arrested over multiple lengthscales, ranging from very short scales, set by the attractive well width, to scales larger than the hard sphere cage size, which is no longer relevant. This differs from the picture of a bonded glass [@zaccarelli_2009], but leaves room for a glass transition where adhesion is relevant, at odds with [@royall_2018]. Rather, our data demonstrate that the structure and short-time dynamics of sticky spheres at large $u$ are highly heterogeneous [@reichman2005comparison; @kaufman2006direct; @zhang2011cooperative], and involve a very broad hierarchy of timescales and lengthscales long before structural relaxation. The increasing heterogeneity of the glassy structure of sticky spheres is finally confirmed by the evolution of the van Hove distribution in Fig. \[fig:nonerg\_and\_MSD\](d). A near-Gaussian distribution is observed at small $u$, confirming the pertinence of a description of the hard sphere glass with a typical cage size [@charbonneau2012dimensional]. By contrast the van Hove distribution is much broader and strongly non-Gaussian at large $u$, with both a large peak at very small displacements and a fat non-Gaussian tail at large displacements, suggesting enhanced dynamic heterogeneity [@reichman2005comparison]. Using swap Monte Carlo, we have explored the complete equilibrium phase diagram of dense sticky spheres. A simple physical picture emerges with three distinct regimes of slow dynamics. At large adhesions, $u \geq 3.5$, the system phase separates at least up to $\phi=0.60$ and discontinuously enters a slowly coarsening aging regime leading to non-equilibrium gelation. At small $u \leq 1.5$ and large $\phi$ the system displays well-known hard sphere glassy dynamics, characterised by a two-step decay of correlation functions and a well-defined cage size at intermediate times.
Finally, in the regime $u=1.5-3.5$ and large $\phi$ unusual glassy dynamics are observed, characterised by a broad distribution of relaxation timescales and length scales and a short-time dynamics quite different from hard spheres. We are aware of no atomic or molecular experimental analog of this unusual glassy behaviour, which involves multiple (time and length) scales and extended sub-diffusion long before the structural relaxation. The sharp distinction predicted by MCT between two types of glassy dynamics is invalidated by the data, which also do not support the physical relevance of an avoided $A_3$ singularity to interpret the dynamics. The transient logarithmic time decay has a simpler interpretation and is not seen on approaching the $A_3$ location. The very unusual time correlation functions we report are instead observed at a much larger adhesion strength, away from the avoided $A_3$ singularity. The proposed clarification of the phase behaviour and dynamics of dense sticky systems should help reinterpret past experiments and suggest new ones. Future numerical work could also help understand better the rheological behaviour [@zaccarelli2001mechanical; @pham2006yielding; @pham2008yielding; @altieri2018microscopic] in adhesive colloidal glasses. We thank M. Cates, P. Royall and E. Zaccarelli for useful exchanges. This work was supported by a grant from the Simons Foundation (Grant No. 454933, L. B.).
--- abstract: 'By work of Deligne and Langlands, we can attach a local constant to every finite dimensional continuous complex representation of a local Galois group of a non-archimedean local field $F/{\mathbb{Q}}_p$. Tate [@JT1] gives an explicit formula for computing local constants of linear characters of $F^\times$, but there is no explicit formula for the local constant of an arbitrary representation of a local Galois group. In this article we study Heisenberg representations of the absolute Galois group $G_F$ of $F$ and give invariant formulas of local constants for Heisenberg representations of dimension prime to $p$.' address: | School of Mathematics and Statistics\ University of Hyderabad\ Hyderabad, 500046\ India author: - '**Sazzad Ali Biswas**' title: Local constants for Heisenberg representations --- **Introduction** ================ Let $F$ be a non-archimedean local field (i.e., a finite extension of the $p$-adic field $\mathbb{Q}_p$, for some prime $p$). Let $\overline{F}$ be an algebraic closure of $F$, and $G_F:=\rm{Gal}(\overline{F}/F)$ be the absolute Galois group of $F$. Let $\rho:G_F\to \mathrm{Aut}_{\mathbb{C}}(V)$ be a finite dimensional continuous complex representation of the Galois group $G_F$. To this $\rho$, by Langlands (cf. [@RL]) and Deligne (cf. [@D1]), we can associate a constant $W(\rho)$ with absolute value $1$. This constant is called the **local constant** (also known as the local epsilon factor) of the representation $\rho$. Langlands also proves that these local constants are weakly extendible functions (cf. [@JT1], p. 105, Theorem 1). The existence of this local constant was proved by Tate for one-dimensional representations in [@JT3], and in general by Langlands (see [@RL]). In 1972 Deligne also gave a proof using global methods in [@D1].
But in Deligne’s terminology this local constant $W(\rho)$ is $\epsilon_{D}(\rho,\psi_F,\mathrm{dx},1/2)$, where $\mathrm{dx}$ is the Haar measure on $F^{+}$ (locally compact abelian group) which is self-dual with respect to the [**canonical**]{} (i.e., coming through trace map from $\psi_{{\mathbb{Q}}_p}(x):=e^{2\pi ix}$ for all $x\in{\mathbb{Q}}_p$, see [@JT1], p. 92) additive character $\psi_F$ of $F$. Tate in his article [@JT2] denotes this Langlands convention of local constants as $\epsilon_{L}(\rho,\psi)$. According to Tate (cf. [@JT2], p. 17), the Langlands factor $\epsilon_{L}(\rho,\psi)$ is $\epsilon_{L}(\rho,\psi)=\epsilon_{D}(\rho\omega_{\frac{1}{2}},\psi,\mathrm{dx_{\psi}})$, where $\omega$ denotes the normalized absolute value of $F$, i.e., $\omega_{\frac{1}{2}}(x)=|x|_{F}^{\frac{1}{2}}=q_{F}^{-\frac{1}{2}\nu_{F}(x)}$ which we may consider as a character of $F^\times$, and where $\mathrm{dx_{\psi}}$ is the self-dual Haar measure corresponding to the additive character $\psi$ and $q_F$ is the cardinality of the residue field of $F$. According to Tate (cf. [@JT1], p. 105) the relation among three conventions of the local constants is: $$W(\rho)=\epsilon_{L}(\rho,\psi_F)=\epsilon_{D}(\rho\omega_{\frac{1}{2}},\psi_F,\mathrm{dx_{\psi_F}}).$$ In Section 2, we discuss all the necessary notations and known results for this article. In Section 3 we study the arithmetic description of Heisenberg representations and their determinants (cf. Proposition \[Proposition arithmetic form of determinant\]) of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$. In particular, the Heisenberg representations of dimension prime to $p$ are important for this article. In Subsection 3.2, we study the various properties (e.g., Artin conductors, Swan conductors, dimension) of Heisenberg representations of dimension prime to $p$. 
In Section 4, we first give an invariant formula for the local constant of a Heisenberg representation $\rho$ of the absolute Galois group $G_F$ of a non-archimedean local field $F/{\mathbb{Q}}_p$ (cf. Theorem \[Theorem invariant odd\]). In Theorem \[invariant formula for minimal conductor representation\], we give an invariant formula for the local constant of a minimal conductor Heisenberg representation $\rho$ of dimension prime to $p$. When $\rho$ does not have minimal conductor but its dimension is prime to $p$, we have Theorems \[Theorem invariant for non minimal representation\], \[Theorem using Deligne-Henniart\]. In Section 5, we also discuss Tate’s root-of-unity criterion, and by applying this criterion we give some information about the dimension and Artin conductor of a Heisenberg representation (cf. Proposition \[Proposition 4.12\]). **Notations and Preliminaries** =============================== Abelian Local Constants ----------------------- We have an explicit formula for abelian local constants due to Tate (cf. [@JT1], pp. 93-94). Let $F$ be a non-archimedean local field. Let $O_F$ be the ring of integers of the local field $F$ and $P_F=\pi_F O_F$ be the maximal ideal in $O_F$, where $\pi_F$ is a uniformizer, i.e., an element in $P_F$ whose valuation is one, i.e., $v_F(\pi_F)=1$. The order of the residue field of $F$ is $q_F$. Let $U_F=O_F-P_F$ be the group of units in $O_F$. Let $P_{F}^{i}=\{x\in F:v_F(x){\geqslant}i\}$ and for $i{\geqslant}0$ define $U_{F}^{i}=1+P_{F}^{i}$ (with proviso $U_{F}^{0}=U_F=O_{F}^{\times}$). Let $\chi$ be a character of $F^\times$ with conductor $a(\chi)$, i.e., the smallest integer such that $\chi$ is trivial on $U_{F}^{a(\chi)}$. Let $\psi$ be an additive character of $F$ with conductor $n(\psi)$, i.e., $\psi$ is trivial on $P_{F}^{-n(\psi)}$, nontrivial on $P_{F}^{-n(\psi)-1}$. Then the local constant of $\chi$ is (cf. [@JT1], p.
94): $$\label{eqn 2.9} W_F(\chi,\psi)=\chi(c)q_{F}^{-\frac{a(\chi)}{2}}\sum_{x\in \frac{U_F}{U_{F}^{a(\chi)}}}\chi^{-1}(x)\psi(\frac{x}{c}),$$ where $c=\pi_{F}^{a(\chi)+n(\psi)}$. Let $K/F$ be a finite separable extension of the non-archimedean local field $F$. We define the **inverse different (or codifferent)** $\mathcal{D}_{K/F}^{-1}$ of $K$ over $F$ to be $\pi_{K}^{-d_{K/F}}O_K$, where $d_{K/F}$ is the largest integer (this is the exponent of the different $\mathcal{D}_{K/F}$) such that $\mathrm{Tr}_{K/F}(\pi_{K}^{-d_{K/F}}O_K)\subseteq O_F$, where $\rm{Tr}_{K/F}$ is the trace map from $K$ to $F$. Then the **different** is defined by: $\mathcal{D}_{K/F}=\pi_{K}^{d_{K/F}}O_K$ and the **discriminant** $D_{K/F}$ is $D_{K/F}=N_{K/F}(\pi_{K}^{d_{K/F}})O_F$. If $K/F$ is tamely ramified, then $$\label{eqn 2.2} \nu_K(\mathcal{D}_{K/F})=d_{K/F}=e_{K/F} - 1.$$ Extendible functions -------------------- Let $G$ be any finite group. We denote by $R(G)$ the set of all pairs $(H,\rho)$, where $H$ is a subgroup of $G$ and $\rho$ is a virtual representation of $H$. The group $G$ acts on $R(G)$ by means of $(H,\rho)^g=(H^g,\rho^g)$, $g\in G$,\ $\rho^g(x)=\rho(gxg^{-1})$, $x\in H^g:=g^{-1}Hg$. Furthermore we denote by $\widehat{H}$ the set of all one dimensional representations of $H$ and by $R_1(G)$ the subset of $R(G)$ of pairs $(H,\chi)$ with $\chi\in \widehat{H}$. Here by a character $\chi$ of $H$ we always mean a **linear** character, i.e., $\chi:H\to \mathbb{C}^\times$. Now define a function $\mathcal{F}:R_1(G) \rightarrow \mathcal{A}$, where $\mathcal{A}$ is a multiplicative abelian group with $$\mathcal{F}(H,1_H)=1\label{eqn 2.1}$$ and $$\mathcal{F}(H^g,\chi^g)=\mathcal{F}(H,\chi)\label{eqn 2.2}$$ for all $(H,\chi)$, where $1_H$ denotes the trivial representation of $H$.\ Here a function $\mathcal{F}$ on $R_1(G)$ means a function which satisfies equations (\[eqn 2.1\]) and (\[eqn 2.2\]).
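Formula (\[eqn 2.9\]) is a normalized Gauss sum, and the fact that $|W_F(\chi,\psi)|=1$ mirrors the classical identity $|\sum_x\chi(x)\psi(x)|=\sqrt{p}$ over a prime field. The following toy Python check (an illustration, not part of the paper) verifies this finite-field analogue: the normalized sum $p^{-1/2}\sum_{x\in(\mathbb{Z}/p\mathbb{Z})^\times}\chi^{-1}(x)\psi(x)$ has absolute value $1$ for every nontrivial multiplicative character $\chi$.

```python
import cmath
import math

def primitive_root(p):
    """Smallest primitive root modulo a prime p (brute force)."""
    for g in range(2, p):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * g % p
            seen.add(x)
        if len(seen) == p - 1:
            return g
    raise ValueError("no primitive root found (is p prime?)")

def normalized_gauss_sum(p, k):
    """p^{-1/2} * sum over x in (Z/pZ)^x of chi^{-1}(x) * psi(x).

    chi(g^j) = exp(2 pi i k j / (p-1)) for a primitive root g, and
    psi(x) = exp(2 pi i x / p); chi is nontrivial when (p-1) does not divide k.
    """
    g = primitive_root(p)
    chi, x = {}, 1
    for j in range(p - 1):
        chi[x] = cmath.exp(2j * cmath.pi * k * j / (p - 1))
        x = x * g % p
    s = sum(chi[x].conjugate() * cmath.exp(2j * cmath.pi * x / p)
            for x in range(1, p))
    return s / math.sqrt(p)
```

Here $\chi^{-1}(x)$ is computed as the complex conjugate of $\chi(x)$, since $|\chi(x)|=1$, exactly as $\chi^{-1}$ appears in (\[eqn 2.9\]).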
A function $\mathcal{F}$ is said to be extendible if $\mathcal{F}$ can be extended to an $\mathcal{A}$-valued function on $R(G)$ satisfying: $$\label{eqn 2.3} \mathcal{F}(H,\rho_1+\rho_2)=\mathcal{F}(H,\rho_1)\mathcal{F}(H,\rho_2)$$ for all $(H,\rho_i)\in R(G),i=1,2$, and if $(H,\rho)\in R(G)$ with $\mathrm{dim}\,\rho=0$, and $\Delta$ is a subgroup of $G$ containing $H$, then $$\label{eqn 2.4} \mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}\rho)=\mathcal{F}(H,\rho),$$ where $\mathrm{Ind}_{H}^{\Delta}\rho$ is the virtual representation of $\Delta$ induced from $\rho$. In general, let $\rho$ be a representation of $H$ with $\mathrm{dim}\,\rho\neq0$. From $\rho$ we can define the zero-dimensional virtual representation $\rho_0:=\rho-\mathrm{dim}\,\rho\cdot 1_H$. Since $\mathrm{dim}\,\rho_0=0$, we may apply equation (\[eqn 2.4\]) to $\rho_0$ and obtain $$\label{eqn 2.5} \mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}\rho_0)=\mathcal{F}(H,\rho_0).$$ Substituting $\rho_0=\rho-\mathrm{dim}\rho\cdot 1_H$ in equation (\[eqn 2.5\]), we have $$\begin{aligned} \mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}(\rho-\mathrm{dim}\rho \cdot 1_H)) &=\mathcal{F}(H,\rho-\mathrm{dim}\rho\cdot1_H)\\\implies \frac{\mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}\rho)} {\mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}1_H)^{\mathrm{dim}\rho}} &=\frac{\mathcal{F}(H,\rho)} {\mathcal{F}(H,1_H)^{\mathrm{dim}\rho}}.
\end{aligned}$$ Therefore, $$\begin{aligned} \mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}\rho)\nonumber &=\left\{\frac{\mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}1_H)}{\mathcal{F}(H,1_H)}\right\}^{\mathrm{dim}\rho}\cdot\mathcal{F}(H,\rho)\\ &=\lambda_{H}^{\Delta}(\mathcal{F})^{\mathrm{dim}\rho}\mathcal{F}(H,\rho), \label{eqn 2.6}\end{aligned}$$ where $$\label{eqn 2.7} \lambda_{H}^{\Delta}(\mathcal{F}):=\frac{\mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}1_H)}{\mathcal{F}(H,1_H)}.$$ But by the definition of $\mathcal{F}$ we have $\mathcal{F}(H,1_H)=1$, so we can write $$\label{eqn 2.8} \lambda_{H}^{\Delta}(\mathcal{F})={\mathcal{F}(\Delta,\mathrm{Ind}_{H}^{\Delta}1_H}).$$ This $\lambda_{H}^{\Delta}(\mathcal{F})$ is called the **Langlands $\lambda$-function** (or simply $\lambda$-function); it is independent of $\rho$. An extendible function $\mathcal{F}$ is called **strongly** extendible if it satisfies equation (\[eqn 2.3\]) and fulfills equation (\[eqn 2.4\]) for all $(H,\rho)\in R(G)$; if equation (\[eqn 2.4\]) is fulfilled only when $\mathrm{dim}\,\rho=0$, then $\mathcal{F}$ is called a **weakly** extendible function. Extendible functions are **unique**, if they exist (cf. [@JT1], p. 103). Langlands proved that the local constants are weakly extendible functions (cf. [@JT1], p. 105, Theorem 1). The Artin root numbers (also known as global constants) are strongly extendible functions (for more examples and details about extendible functions, see [@JT1] and [@HK]). Now we take a tower of local Galois extensions $K/L/F$, and denote $G=\rm{Gal}(K/F)$, $H=\rm{Gal}(K/L)$. Then the $\lambda$-function for the extension $L/F$ is: $$\lambda_{\rm{Gal}(K/L)}^{\rm{Gal}(K/F)}(W):=\lambda_{L/F}(\psi)=W(\rm{Ind}_{L/F}(1_L),\psi),$$ where $1_L$ is the trivial character of $L^\times$ which corresponds to the trivial character of $H$ by class field theory, and $\psi$ is a nontrivial additive character of $F$.
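In the simplest case $[L:F]=2$, the $\lambda$-function reduces to a single abelian local constant; this is consistent with the notation $\lambda_{K/F}=W(\alpha)$, $\alpha=\Delta_{K/F}$, used in the theorems below. A short verification from the definitions:

```latex
% For a quadratic extension L/F one has the decomposition
%   Ind_{L/F}(1_L) = 1_F \oplus \Delta_{L/F},
% where \Delta_{L/F} is the quadratic character of F^\times attached
% to L/F by class field theory.  Since W is additive in the first
% argument and W(1_F,\psi)=1 (for a(1_F)=0 the sum in (2.9) reduces
% to the single term x=1), we get
\lambda_{L/F}(\psi)
  = W(\mathrm{Ind}_{L/F}(1_L),\psi)
  = W(1_F,\psi)\cdot W(\Delta_{L/F},\psi)
  = W(\Delta_{L/F},\psi).
```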
When we take $\psi=\psi_F$ to be the canonical additive character, we simply write $\lambda_{L/F}$ instead of $\lambda_{L/F}(\psi_F)$. Since the Heisenberg representations of a finite local Galois group are monomial (i.e., induced from a linear character of a finite-index subgroup), we need explicit formulas for the lambda functions of finite Galois extensions. For this article we need the following computations of lambda functions. \[General Theorem for odd case\] Let $F$ be a non-archimedean local field and $\mathrm{Gal}(E/F)$ be a local Galois group of odd order. If $L\supset K\supset F$ is any tower of finite extensions inside $E$, then $\lambda_{L/K}=1$. \[Theorem 3.21\] Let $K$ be a tamely ramified quadratic extension of $F/{\mathbb{Q}}_p$ with $q_F=p^s$. Let $\psi_F$ be the canonical additive character of $F$. Let $c\in F^\times$ with $-1=\nu_F(c)+d_{F/{\mathbb{Q}}_p}$, and $c'=\frac{c}{\rm{Tr}_{F/F_0}(pc)}$, where $F_0/{\mathbb{Q}}_p$ is the maximal unramified extension in $F/{\mathbb{Q}}_p$. Let $\psi_{-1}$ be an additive character of $F$ with conductor $-1$, of the form $\psi_{-1}=c'\cdot\psi_F$. Then $$\lambda_{K/F}(\psi_F)=\Delta_{K/F}(c')\cdot\lambda_{K/F}(\psi_{-1}),$$ where $$\lambda_{K/F}(\psi_{-1})=\begin{cases} (-1)^{s-1} & \text{if $p\equiv 1\pmod{4}$}\\ (-1)^{s-1}i^{s} & \text{if $p\equiv 3\pmod{4}$}. \end{cases}$$ If we take $c=\pi_{F}^{-1-d_{F/{\mathbb{Q}}_p}}$, where $\pi_F$ is a norm for $K/F$, then $$\Delta_{K/F}(c')=\begin{cases} 1 & \text{if $\overline{\rm{Tr}_{F/F_0}(pc)}\in k_{F_0}^{\times}=k_{F}^{\times}$ is a square},\\ -1 & \text{if $\overline{\rm{Tr}_{F/F_0}(pc)}\in k_{F_0}^{\times}=k_{F}^{\times}$ is not a square}. \end{cases}$$ Here the “overline” stands for reduction modulo $P_{F_0}$. \[Theorem 2.5\] Let $G=\rm{Gal}(E/F)$ be a finite local Galois group of a non-archimedean local field $F/{\mathbb{Q}}_p$ with $p\ne 2$. Let $S\cong G/H$ be a nontrivial Sylow 2-subgroup of $G$, where $H$ is a uniquely determined Hall subgroup of odd order.
Suppose that we have a tower $E/K/F$ of fields such that $S\cong \rm{Gal}(K/F)$, $H=\rm{Gal}(E/K)$ and $G=\rm{Gal}(E/F)$. If $S\subset G$ is cyclic, then 1. $$\lambda_{1}^{G}=\lambda_{K/F}^{\pm 1}=\begin{cases} \lambda_{K/F}=W(\alpha) & \text{if $[E:K]\equiv 1\pmod{4}$}\\ \lambda_{K/F}^{-1}=W(\alpha)^{-1} & \text{if $[E:K]\equiv -1\pmod{4}$}, \end{cases}$$ (here $\alpha=\Delta_{K/F}$ corresponds to the unique quadratic subextension in $K/F$) if $[K:F]=2$, hence $\alpha=\Delta_{K/F}$. 2. $$\lambda_{1}^{G}=\beta(-1)W(\alpha)^{\pm 1}=\beta(-1)\times\begin{cases} W(\alpha) & \text{if $[E:K]\equiv 1\pmod{4}$}\\ W(\alpha)^{-1} & \text{if $[E:K]\equiv -1\pmod{4}$} \end{cases}$$ if $K/F$ is cyclic of order $4$ with generating character $\beta$ such that $\beta^2=\alpha=\Delta_{K/F}$. 3. $$\lambda_{1}^{G}=\lambda_{K/F}^{\pm 1}=\begin{cases} \lambda_{K/F}=W(\alpha) & \text{if $[E:K]\equiv 1\pmod{4}$}\\ \lambda_{K/F}^{-1}=W(\alpha)^{-1} & \text{if $[E:K]\equiv -1\pmod{4}$} \end{cases}$$ if $K/F$ is cyclic of order $2^n{\geqslant}8$. And if the $4$th roots of unity are in the $F$, we have the same formulas as above but with $1$ instead of $\pm 1$. Moreover, when $p\ne 2$, a precise formula for $W(\alpha)$ will be obtained in Theorem \[Theorem 3.21\]. Classical Gauss sums -------------------- Let $k_q$ be a finite field of order $q$. Let $\chi, \psi$ be a multiplicative and an additive character respectively of $k_q$. Then the Gauss sum $G(\chi,\psi)$ is defined by $$G(\chi,\psi)=\sum_{x\in k_{q}^{\times}}\chi(x)\psi(x).$$ For this article we need the following theorem. In general, we cannot give explicit formula of $G(\chi,\psi)$ for arbitrary character $\chi$. But if $q=p^r$($r{\geqslant}2$), where $p$ is an odd prime, then by R. Odoni (cf. [@BRK], p. 33, Theorem 1.6.2) we can show that $G(\chi,\psi)/\sqrt{q}$ is a certain root of unity. If $q$ is an odd prime and order of $\chi$ is ${\geqslant}3$, then $G(\chi,\psi)/\sqrt{q}$ is [**not**]{} a root of unity. 
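For orientation, the quadratic case is the classical exception: Gauss's evaluation (a standard fact, recorded here for contrast with the theorem of Chowla below) gives, for $q=p$ an odd prime, $\chi$ the Legendre symbol, and $\psi(x)=e^{2\pi i x/p}$:

```latex
G(\chi,\psi)=\sum_{x\in k_p^{\times}}\Big(\frac{x}{p}\Big)\,e^{2\pi i x/p}
  =\begin{cases}
     \sqrt{p}    & \text{if } p\equiv 1 \pmod 4,\\
     i\,\sqrt{p} & \text{if } p\equiv 3 \pmod 4.
   \end{cases}
% In particular G(\chi,\psi)/\sqrt{p} is a 4th root of unity when
% \chi has order 2; by the theorem below this fails as soon as the
% order of \chi exceeds 2.
```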
\[Theorem Chowla\] Let $q$ be an odd prime, and let $\chi$ be a character of $k_{q}^{\times}$ of order $>2$. Let $\psi(x)=e^{\frac{2\pi i x}{q}}$ for $x\in k_q$. Then the Gauss sum $G(\chi,\psi)$ is not equal to $\sqrt{q}$ times a root of unity. **Heisenberg representation** ----------------------------- Let $\rho$ be an irreducible representation of a (pro-)finite group $G$. Then $\rho$ is called a **Heisenberg representation** if it represents commutators by scalar matrices. Consequently, higher commutators are represented by $1$. The linear characters of $G$ are Heisenberg representations as the degenerate special case. To classify Heisenberg representations we need two invariants of an irreducible representation $\rho\in\rm{Irr}(G)$: 1. Let $Z_\rho$ be the **scalar** group of $\rho$, i.e., $Z_\rho\subseteq G$ and $\rho(z)=\text{scalar matrix}$ for every $z\in Z_\rho$. If $V/{\mathbb{C}}$ is a representation space of $\rho$, we obtain $Z_\rho$ as the kernel of the composite map $$\label{eqn 2.6.1} G\xrightarrow{\rho}GL_{{\mathbb{C}}}(V)\xrightarrow{\pi} PGL_{{\mathbb{C}}}(V)=GL_{{\mathbb{C}}}(V)/{\mathbb{C}}^\times E,$$ where $E$ is the unit matrix; denote $\overline{\rho}:=\pi\circ\rho$. Therefore $Z_\rho$ is a normal subgroup of $G$. 2. Let $\chi_\rho$ be the character of $Z_\rho$ which is given by $\rho(g)=\chi_\rho(g)\cdot E$ for all $g\in Z_\rho$. Evidently $\chi_\rho$ is a $G$-invariant character of $Z_\rho$, which we call the central character of $\rho$. Let $A$ be a profinite abelian group. Then we know (cf. [@Z5], p. 124, Theorem 1 and Theorem 2) that the set of isomorphism classes $\rm{PI}(A)$ of projective irreducible representations (for projective representations, see [@CR], §51) of $A$ is in bijective correspondence with the set of continuous alternating characters $\rm{Alt}(A)$.
If $\rho\in\rm{PI}(A)$ corresponds to $X\in\rm{Alt}(A)$ then $\rm{Ker}(\rho)=\rm{Rad}(X)$ and $[A:\rm{Rad}(X)]=\rm{dim}(\rho)^2$, where $\rm{Rad}(X):=\{a\in A|\, X(a,b)=1,\,\text{for all}\, b\in A\}$ is the [**radical of $X$**]{}. Let $A:=G/[G,G]$, so $A$ is abelian. We also know from the composite map (\[eqn 2.6.1\]) that $\overline{\rho}$ is a projective irreducible representation of $G$ and $Z_\rho$ is the kernel of $\overline{\rho}$. Therefore, **modulo the commutator group $[G,G]$**, we may regard $\overline{\rho}$ as an element of $\rm{PI}(A)$, which corresponds to an alternating character $X$ of $A$ with kernel $Z_\rho/[G,G]=\rm{Rad}(X)$. We also know that $$[A:\rm{Rad}(X)]=[G/[G,G]:Z_\rho/[G,G]]=[G:Z_\rho].$$ Then we observe that $$\rm{dim}(\overline{\rho})=\rm{dim}(\rho)=\sqrt{[G:Z_\rho]}.$$ Let $H$ be a subgroup of $A$; we define the orthogonal complement of $H$ in $A$ with respect to $X$ as $$H^\perp:=\{a\in A:\quad X(a, H)\equiv1\}.$$ An [**isotropic**]{} subgroup $H\subset A$ is a subgroup such that $H\subseteq H^\perp$ (cf. [@EWZ], p. 270, Lemma 1(v)). When an isotropic subgroup $H$ is maximal, we call $H$ a **maximal isotropic** subgroup for $X$. Thus when $H$ is maximal isotropic we have $H=H^\perp$. One can also show that the Heisenberg representations $\rho$ are fully characterized by the corresponding pairs $(Z_{\rho},\chi_{\rho})$. \[Proposition 3.1\] The map $\rho\mapsto(Z_\rho,\chi_\rho)$ is a bijection between equivalence classes of Heisenberg representations of $G$ and the pairs $(Z_\rho,\chi_\rho)$ such that 1. $Z_\rho\subseteq G$ is a coabelian normal subgroup, 2. $\chi_\rho$ is a $G$-invariant character of $Z_\rho$, 3. $X(\hat{g_1},\hat{g_2}):=\chi_\rho(g_1g_2g_1^{-1}g_2^{-1})$ is a nondegenerate **alternating character** on $G/Z_\rho$, where $\hat{g_1},\hat{g_2}\in G/Z_{\rho}$ with corresponding lifts $g_1,g_2\in G$.
For pairs $(Z_\rho,\chi_\rho)$ with the properties $(a)-(c)$, the corresponding Heisenberg representation $\rho$ is determined by the identity (cf. [@SAB2], p. 30): $$\label{eqn 322} \sqrt{[G:Z_\rho]}\cdot\rho=\mathrm{Ind}_{Z_\rho}^{G}\chi_\rho.$$ Let $C^1G=G$, $C^{i+1}G=[C^iG,G]$ denote the descending central series of $G$. Now assume that every projective representation of $A$ lifts to an ordinary representation of $G$. Then by I. Schur’s results (cf. [@CR], p. 361, Theorem 53.7) we have (cf. [@Z5], p. 124, Theorem 2): 1. Let $A\wedge_{\mathbb{Z}}A$ denote the alternating square of the ${\mathbb{Z}}$-module $A$. The commutator map $$\label{eqn 2.6.3} A\wedge_{\mathbb{Z}}A\cong C^2G/C^3G, \hspace{.3cm} a\wedge b\mapsto [\hat{a},\hat{b}]$$ is an isomorphism. 2. The map $\rho\to X_\rho\in\rm{Alt}(A)$ from Heisenberg representations to alternating characters on $A$ is surjective. \[Remark 3.2\] Let $\chi_\rho$ be a character of $Z_\rho$. All extensions $\chi_H\supset\chi_\rho$ are conjugate with respect to $G/H$. This can be easily seen, since we know $\chi_H\supset\chi_\rho$ and $\chi_{H}^{g}(h)=\chi_{H}(ghg^{-1})$. If we take $z\in Z_\rho$, then we obtain $\chi_{H}^{g}(z)=\chi_{H}(gzg^{-1})=\chi_{\rho}(gzg^{-1})=\chi_{\rho}(gzg^{-1}z^{-1}z)$\ $=\chi_\rho([g,z]z)=X(g,z)\cdot\chi_\rho(z)=\chi_\rho(z)$, since $Z_\rho$ is a normal subgroup of $G$ and the radical of $X$ (i.e., $X(g,z)=\chi_\rho([g,z])=1$ for all $z\in Z_\rho$ and $g\in G$). Therefore, $\chi_{H}^{g}$ are extensions of $\chi_\rho$ for all $g\in G/H$. It can also be seen that the conjugates $\chi_{H}^{g}$ are all different, because $\chi_{H}^{g_1}=\chi_{H}^{g_2}$ is the same as $\chi_{H}^{g_1g_{2}^{-1}}=\chi_H$. So it is enough to see that $\chi_{H}^{g-1}\not\equiv 1$ if $g\neq1\in G/H$. But $\chi_{H}^{g-1}(h)=\chi_\rho(ghg^{-1}h^{-1})=X(g,h)$, and therefore $\chi_{H}^{g-1}\equiv 1$ on $H$ implies $g\in H^{\bot}=H$, where $``\bot"$ denotes the orthogonal complement with respect to $X$. 
Thus, given one extension $\chi_H$ of $\chi_\rho$, all other extensions are of the form $\chi_{H}^{g}$ for $g\in G/H$. \[Remark 2.10\] Let $\rho=(Z,\chi_\rho)$ be a Heisenberg representation of $G$. Then from the definition of Heisenberg representations we have $$[[G,G], G]\subseteq \rm{Ker}(\rho).$$ Now let $\overline{G}:=G/\rm{Ker}(\rho)$. Then we obtain $$[\overline{G},\overline{G}]=[G/\rm{Ker}(\rho),G/\rm{Ker}(\rho)]=[G,G]\cdot\rm{Ker}(\rho)/\rm{Ker}(\rho)=[G,G]/[G,G]\cap\rm{Ker}(\rho).$$ Since $[[G,G],G]\subseteq\rm{Ker}(\rho)$, we have $[x,g]\in\rm{Ker}(\rho)$ for all $x\in [G,G]$ and $g\in G$. Hence we obtain $[[\overline{G},\overline{G}],\overline{G}]=[[G,G]/[G,G]\cap \rm{Ker}(\rho), G/\rm{Ker}(\rho)]=\{1\}$. This shows that $\overline{G}$ is a two-step nilpotent group. **Arithmetic description of Heisenberg representations** ======================================================== In Section 2.5 we introduced the notion of Heisenberg representations of a (pro-)finite group. These Heisenberg representations have an arithmetic structure due to E.-W. Zink (cf. [@Z2], [@Z4], [@Z5]). For this article we need to describe this arithmetic structure. Let $F/{\mathbb{Q}}_p$ be a local field, and $\overline{F}$ be an algebraic closure of $F$. Denote by $G_F=\rm{Gal}(\overline{F}/F)$ the absolute Galois group of $\overline{F}/F$. We know (cf. [@HK2], p. 197) that each representation $\rho:G_F\to GL(n,{\mathbb{C}})$ corresponds to a projective representation $\overline{\rho}:G_F\to GL(n,{\mathbb{C}})\to PGL(n,{\mathbb{C}})$. Conversely, each projective representation $\overline{\rho}:G_F\to PGL(n,{\mathbb{C}})$ can be lifted to a representation $\rho:G_F\to GL(n,{\mathbb{C}})$. Let $A_F=G_{F}^{ab}$ be the factor commutator group of $G_F$. Define $FF^\times:=\varprojlim(F^\times/N\wedge F^\times/N)$, where $N$ runs over all open subgroups of finite index in $F^\times$.
Denote by $\rm{Alt}(F^\times)$ the set of all alternating characters $X:F^\times\times F^\times\to{\mathbb{C}}^\times$ such that $[F^\times:\rm{Rad}(X)]<\infty$. Then the local reciprocity map gives an isomorphism between $A_F$ and the profinite completion of $F^\times$, and induces a natural bijection $$\rm{PI}(A_F)\xrightarrow{\sim}\rm{Alt}(F^\times),$$ where $\rm{PI}(A_F)$ is the set of isomorphism classes of projective irreducible representations of $A_F$. By using class field theory, from the commutator map (\[eqn 2.6.3\]) (cf. p. 125 of [@Z5]) we obtain $$\label{eqn 5.1.2} c:FF^\times\cong [G_F,G_F]/[[G_F,G_F], G_F].$$ Let $K/F$ be an abelian extension corresponding to the norm subgroup $N\subset F^\times$, and let $W_{K/F}$ denote the relative Weil group; the commutator map for $W_{K/F}$ induces an isomorphism (cf. p. 128 of [@Z5]): $$\label{eqn 5.1.3} c: F^\times/N\wedge F^\times/N\to K_{F}^{\times}/I_{F}K^\times,$$ where $K_{F}^{\times}:=\{x\in K^\times|\quad N_{K/F}(x)=1\}$ is the norm-1-subgroup of $K^\times$, and\ $I_FK^\times:=\{x^{1-\sigma}|\quad x\in K^{\times}, \sigma\in \rm{Gal}(K/F)\}<K_{F}^{\times}$ is the augmentation subgroup with respect to $K/F$. Taking the projective limit over all abelian extensions $K/F$, the isomorphisms (\[eqn 5.1.3\]) induce: $$\label{eqn 5.1.4} c:FF^\times\cong \varprojlim K_{F}^{\times}/I_FK^\times,$$ where the limit on the right side refers to norm maps. This gives an arithmetic description of Heisenberg representations of the group $G_F$. \[Theorem 5.1.1\] The set of Heisenberg representations $\rho$ of $G_F$ is in bijective correspondence with the set of all pairs $(X_\rho,\chi_\rho)$ such that: 1. $X_\rho$ is a character of $FF^\times$, 2. $\chi_\rho$ is a character of $K^{\times}/I_FK^\times$, where the abelian extension $K/F$ corresponds to the radical $N\subset F^\times$ of $X_\rho$, and 3. via (\[eqn 5.1.3\]) the alternating character $X_\rho$ corresponds to the restriction of $\chi_\rho$ to $K_{F}^{\times}$.
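To fix ideas, here is a minimal example of the correspondence (a sketch, assuming $p\ne 2$; the unit $u$ below denotes any non-square unit of $F$):

```latex
% Take N = (F^\times)^2.  For p odd, F^\times/(F^\times)^2 \cong
% \mathbb{Z}_2 \times \mathbb{Z}_2, so N has index 4, and the
% corresponding abelian extension is the biquadratic field
%   K = F(\sqrt{u}, \sqrt{\pi_F}).
% On F^\times/N there is exactly one nondegenerate alternating
% character X (with values \pm 1), and Rad(X) = N.  Any Heisenberg
% representation \rho = \rho(X,\chi_\rho) attached to such a pair has
\mathrm{dim}(\rho)=\sqrt{[F^\times:N]}=\sqrt{4}=2.
```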
Given a pair $(X,\chi)$, we can construct the Heisenberg representation $\rho$ by induction from $G_K:=\rm{Gal}(\overline{F}/K)$ to $G_F$: $$\label{eqn 5.1.5} \sqrt{[F^\times:N]}\cdot\rho=\rm{Ind}_{K/F}(\chi),$$ where $N$ and $K$ are as in (2) of the above Theorem \[Theorem 5.1.1\] and where the induction of $\chi$ (to be considered as a character of $G_K$ by class field theory) produces a multiple of $\rho$. From $[F^\times:N]=[K:F]$ we obtain the [**dimension formula:**]{} $$\label{eqn dimension formula} \rm{dim}(\rho)=\sqrt{[F^\times:N]},$$ where $N$ is the radical of $X$. Let $K/E$ be an extension of $E$, and $\chi_K:K^\times\to{\mathbb{C}}^\times$ be a character of $K^\times$. In the following lemma, we give the conditions of the existence of characters $\chi_E\in\widehat{E^\times}$ such that $\chi_E\circ N_{K/E}=\chi_K$, and the solutions set of this $\chi_E$. \[Lemma 5.1.4\] Let $K/E$ be a finite extension of a field $E$, and $\chi_K: K^\times\to{\mathbb{C}}^\times$. 1. The existence of characters $\chi_E: E^\times\to{\mathbb{C}}^\times$ such that $\chi_E\circ N_{K/E}=\chi_K$ is equivalent to $K_{E}^{\times}\subset\rm{Ker}(\chi_K)$. 2. In case (i) is fulfilled, we have a well defined character $$\chi_{K/E}:=\chi_K\circ N_{K/E}^{-1}:\mathcal{N}_{K/E}\to {\mathbb{C}}^\times,$$ on the subgroup of norms $\mathcal{N}_{K/E}:=N_{K/E}(K^\times)\subset E^\times$, and the solutions $\chi_E$ such that $\chi_E\circ N_{K/E}=\chi_K$ are precisely the extensions of $\chi_{K/E}$ from $\mathcal{N}_{K/E}$ to a character of $E^\times$. [**(i)**]{} Suppose that an equation $\chi_K=\chi_E\circ N_{K/E}$ holds. Let $x\in K_{E}^{\times}$, hence $N_{K/E}(x)=1$. Then $$\chi_K(x)=\chi_E\circ N_{K/E}(x)=\chi_E(1)=1.$$ So $x\in\rm{Ker}(\chi_K)$, and hence $K_{E}^{\times}\subset \rm{Ker}(\chi_K)$. Conversely assume that $K_{E}^{\times}\subset\rm{Ker}(\chi_K)$. Then $\chi_K$ is actually a character of $K^\times/K_{E}^{\times}$. 
Again we have $K^\times/K_{E}^{\times}\cong \mathcal{N}_{K/E}\subset E^\times$, hence $\widehat{K^\times/K_{E}^{\times}}\cong \widehat{\mathcal{N}_{K/E}}$. Now suppose that $\chi_K$ corresponds to the character $\chi_{K/E}$ of $\mathcal{N}_{K/E}$. Hence we can write $\chi_K\circ N_{K/F}^{-1}=\chi_{K/E}$. Thus the character $\chi_{K/E}:\mathcal{N}_{K/E}\to{\mathbb{C}}^\times$ is well defined. Since $E^\times$ is an abelian group and $\mathcal{N}_{K/E}\subset E^\times$ is a subgroup of finite index (by class field theory) $[K:E]$, we can extend $\chi_{K/E}$ to $E^\times$, and $\chi_K$ is of the form $\chi_K=\chi_E\circ N_{K/E}$ with $\chi_E|_{{\mathcal{N}}_{K/E}}=\chi_{K/E}$.\ [**(ii)**]{} If condition (i) is satisfied, then this part is obvious. If $\chi_E$ is a solution of $\chi_K=\chi_E\circ N_{K/E}$, with $\chi_{K/E}:=\chi_K\circ N_{K/E}^{-1}:\mathcal{N}_{K/E}\to{\mathbb{C}}^\times$, then certainly $\chi_E$ is an extension of the character $\chi_{K/E}$. Conversely, if $\chi_E$ extends $\chi_{K/E}$, then it is a solution of $\chi_K=\chi_E\circ N_{K/E}$ with $\chi_K\circ N_{K/E}^{-1}=\chi_{K/E}:\mathcal{N}_{K/E}\to{\mathbb{C}}^\times$. Now take Heisenberg representation $\rho=\rho(X,\chi_K)$ of $G_F$. Let $E/F$ be any extension corresponding to a maximal isotropic for $X$. In this Heisenberg setting, from Theorem \[Theorem 5.1.1\](2), we know $\chi_K$ is a character of $K^\times/I_FK^\times$, and from the first commutative diagram on p. 302 of [@Z2] we have $N_{K/E}:K_F^\times/I_FK^\times\to E_F^\times/I_F{\mathcal{N}}_{K/E}$. 
Thus in the Heisenberg setting we have more information than in Lemma \[Lemma 5.1.4\](i), namely that $\chi_K$ is a character of $$K^\times/K_{E}^{\times}I_FK^\times\xrightarrow{N_{K/E}}\mathcal{N}_{K/E}/I_F\mathcal{N}_{K/E}\subset E^\times/I_F\mathcal{N}_{K/E},$$ and therefore $\chi_{K/F}$ is actually a character of $\mathcal{N}_{K/E}/I_F\mathcal{N}_{K/E}$, or in other words, it is a $\rm{Gal}(E/F)$-invariant character of the $\rm{Gal}(E/F)$-module $\mathcal{N}_{K/E}\subset E^\times$. And if $\chi_E$ is one of the solutions of Lemma \[Lemma 5.1.4\](ii), then the complete solution set is $\{\chi_E^\sigma\,|\,\sigma\in \rm{Gal}(E/F)\}$. [ **We know that $W(\chi_E,\psi\circ\rm{Tr}_{K/E})$ has the same value for all solutions $\chi_E$ of $\chi_E\circ N_{K/E}=\chi_K$, i.e., for all $\chi_E$ which extend the character $\chi_{K/E}$**]{}. Moreover, from the above Lemma \[Lemma 5.1.4\] we also see that $\chi_E|_{\mathcal{N}_{K/E}}=\chi_{K}\circ N_{K/E}^{-1}$. Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of $G_F$. Let $E/F$ be any extension corresponding to a maximal isotropic for $X$. Then by using the above Lemma \[Lemma 5.1.4\] we have the following lemma. Let $\rho=\rho(Z,\chi_\rho)=\rho(\rm{Gal}(L/K),\chi_K)$ be a Heisenberg representation of a finite local Galois group $G=\rm{Gal}(L/F)$, where $F$ is a non-archimedean local field. Let $H=\rm{Gal}(L/E)$ be a maximal isotropic for $\rho$. Then we obtain $$\rho=\rm{Ind}_{E/F}(\chi_{E}^{\sigma})\quad\text{for all $\sigma\in\rm{Gal}(E/F)$},$$ where $\chi_E:E^\times/I_F{\mathcal{N}}_{K/E}\to{\mathbb{C}}^\times$ with $\chi_K=\chi_E\circ N_{K/E}$.\ Moreover, for a fixed base field $E$ of a maximal isotropic for $\rho$, this construction of $\rho$ is independent of the choice of the character $\chi_E$. From the group theoretical construction of Heisenberg representations (cf.
Section 2.6), we can write $$\rho=\rm{Ind}_{H}^{G}(\chi_{H}^{g}), \quad\text{for all $g\in G/H$},$$ where $\chi_H:H\to{\mathbb{C}}^\times$ is an extension of $\chi_\rho$. From Remark \[Remark 3.2\] we know that all extensions of the character $\chi_\rho$ are conjugate with respect to $G/H$, and that they are all distinct. If we fix $H$, then $\rho$ is independent of the choice of the character $\chi_H$: every extension of $\chi_\rho$ yields the same $\rho$. The assertion of the lemma is the arithmetic expression of these group theoretical facts, which we prove in the following. By the given conditions, $L/F$ is a finite Galois extension of the local field $F$ with $G=\rm{Gal}(L/F)$, $H=\mathrm{Gal}(L/E)$, $Z=\mathrm{Gal}(L/K)$ and $\{1\}=\mathrm{Gal}(L/L)$. Then by class field theory, equation (\[eqn 5.1.3\]), and the condition $X:=\chi_K\circ [-,-]$, the character $\chi_\rho$ identifies with a character $\chi_K: K^\times/I_FK^\times\to\mathbb{C}^\times$. Moreover, for the Heisenberg representations we also have the following commutative diagram $$\begin{CD} K^\times_E/I_EK^\times @>inclusion>> K^\times_F/I_FK^\times\\ @AAcA @AAcA \\ E^\times/{\mathcal{N}}_{K/E} \wedge E^\times/{\mathcal{N}}_{K/E} @>{N_{E/F}\wedge N_{E/F}}>> F^\times/{\mathcal{N}}_{K/F}\wedge F^\times/{\mathcal{N}}_{K/F} \end{CD}$$ where $N_{E/F}\wedge N_{E/F}(a\wedge b)=N_{E/F}(a)\wedge N_{E/F}(b)$ for all $a,b\in E^\times$, and the vertical isomorphisms in the upward direction are given by the commutator maps (cf. equation (\[eqn 5.1.3\])) in the Weil groups $W_{K/E}/I_EK^\times$ and $W_{K/F}/I_FK^\times$ respectively. Under the right vertical map, $\chi_K$ corresponds (cf. Theorem \[Theorem 5.1.1\](3)) to the alternating character $X$, which is trivial on $N_{E/F}\wedge N_{E/F}$, because $H$, corresponding to $E^\times$, is isotropic.
The commutative diagram now shows that $\chi_K$ must be trivial on the image of the upper horizontal, i.e., $\chi_K$ is trivial on the subgroups $K_{E}^{\times}$ for all maximal isotropic $E$. Hence $\chi_K$ is actually a character of $K^\times/K_{E}^{\times}$. Then from Lemma \[Lemma 5.1.4\] we can say that there exists a character $\chi_E:E^\times/I_F{\mathcal{N}}_{K/E}\to {\mathbb{C}}^\times$ such that $\chi_K=\chi_E\circ N_{K/E}$. And this $\chi_E$ is determined by the character $\chi_H$. For $\sigma\in G/H=\mathrm{Gal}(E/F)$ we have $\chi_{E}^{\sigma}\circ N_{K/E}=\chi_E\circ N_{K/E}=\chi_K$ because $\chi_{E}^{\sigma-1}\circ N_{K/E}\equiv 1$, because $\chi_E$ is trivial on $I_F\mathcal{N}_{K/E}$. Therefore instead of $\rho=\mathrm{Ind}_{H}^{G}(\chi_{H}^{g})$ for all $g\in G/H$, we obtain $\rho=\rm{Ind}_{E/F}(\chi_{E}^{\sigma})$, for all $\sigma\in\rm{Gal}(E/F)$, independently of the choice of $\chi_E$. Moreover we have the exact sequence $$\begin{aligned} \label{sequence 5.1.2} K^\times/I_FK^\times\xrightarrow{N_{K/E}} E^\times/I_F\mathcal{N}_{K/E}\xrightarrow{N_{E/F}} F^\times/\mathcal{N}_{K/F},\end{aligned}$$ which is only exact in the middle term. For the dual groups this gives $$\begin{aligned} \label{sequence 5.1.3} \widehat{K^\times/I_FK^\times}\xleftarrow{N_{K/E}^{*}} \widehat{E^\times/I_F\mathcal{N}_{K/E}} \xleftarrow{N_{E/F}^{*}} \widehat{F^\times/\mathcal{N}_{K/F}}.\end{aligned}$$ But $N_{K/E}^{*}(\chi_{E}^{\sigma-1})=\chi_{E}^{\sigma-1}\circ N_{K/E}\equiv 1$, and therefore the exactness of sequence (\[sequence 5.1.3\]) yields $$\chi_{E}^{\sigma-1}=\chi_F\circ N_{E/F}, \quad\text{ for some $\chi_F\in\widehat{F^\times/\mathcal{N}_{K/F}}$},$$ For our (arithmetic) determinant computation of Heisenberg representation $\rho$ of $G_F$, we need the following lemma regarding transfer map. \[Lemma transfer Heisenberg\] Let $\rho=\rho(Z,\chi_\rho)$ be a Heisenberg representation of a group $G$ and assume that $H/Z\subset G/Z$ is a maximal isotropic for $\rho$. 
Then transfer map $T_{(G/Z)/(H/Z)}\equiv1$ is the trivial map. In general, if $H$ is a central subgroup[^1] of finite index $n=[G:H]$ of a group $G$, then by Theorem 5.6 on p. 154 of [@MI] we have $T_{G/H}(g)=g^n$. If $G$ is abelian, then center $Z(G)=G$. Hence every subgroup of $G$ is central subgroup. Now if we take $G$ as an abelian group and $H$ is a subgroup of finite index, then we can write $T_{G/H}(g)=g^{[G:H]}$. Now we come to the Heisenberg setting. We know that $G/Z$ is abelian, hence $H/Z\subset G/Z$ is a central subgroup. Then we have $T_{(G/Z)/(H/Z)}(g)=g^{[G/Z:H/Z]}=g^d$, where $d$ is the dimension of $\rho$. For the Heisenberg setting, we also know (cf. Lemma 3.3 on p. 8 of [@SAB3]) that $G^d\subseteq Z$, hence $g^d\in Z$. This implies $$T_{(G/Z)/(H/Z)}(g)=g^d=1,\quad\text{the identity in $H/Z$},$$ for all $g\in G$, hence $T_{(G/Z)/(H/Z)}\equiv1$ is a trivial map. By using the above Lemma \[Lemma 5.1.4\] and Lemma \[Lemma transfer Heisenberg\], in the following, we give the arithmetic description of the determinant of Heisenberg representations. \[Proposition arithmetic form of determinant\] Let $\rho=\rho(Z,\chi_\rho)=\rho(G_K,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$. Let $E$ be a base field of a maximal isotropic for $\rho$. Then $F^\times\subseteq {\mathcal{N}}_{K/E}$, and $$\label{eqn 5.1.12} \det(\rho)(x)=\Delta_{E/F}(x)\cdot\chi_K\circ N_{K/E}^{-1}(x)\quad \text{for all $x\in F^\times$},$$ where, for all $x\in F^\times$, $$\label{eqn 5.1.13} \Delta_{E/F}(x)=\begin{cases} 1 & \text{when $\rm{rk}_2(\rm{Gal}(E/F))\ne 1$}\\ \omega_{E'/F}(x) & \text{when $\rm{rk}_2(\rm{Gal}(E/F))= 1$}, \end{cases}$$ where $E'/F$ is a uniquely determined quadratic subextension in $E/F$, and $\omega_{E'/F}$ is the character of $F^\times$ which corresponds to $E'/F$ by class field theory. From the given condition, we can write $G/Z=\rm{Gal}(K/F)\supset H/Z=\rm{Gal}(K/E)$. 
Here both $G/Z$ and $H/Z$ are abelian, so from class field theory we have the following commutative diagram $$\label{diagram 5.1.13} \begin{CD} F^\times/{\mathcal{N}}_{K/F} @>inclusion>> E^\times/{\mathcal{N}}_{K/E}\\ @VV\theta_{K/F}V @VV\theta_{K/E}V\\ \rm{Gal}(K/F) @>T_{(G/Z)/(H/Z)}>> \rm{Gal}(K/E) \end{CD}$$ Here $\theta_{K/F}$ and $\theta_{K/E}$ are the isomorphism (Artin reciprocity) maps and $T_{(G/Z)/(H/Z)}$ is the transfer map. From Lemma \[Lemma transfer Heisenberg\], we have $T_{(G/Z)/(H/Z)}\equiv1$. Therefore from the above diagram (\[diagram 5.1.13\]) we can say $F^\times\subseteq{\mathcal{N}}_{K/E}$, i.e., all elements [^2] of the base field $F$ are norms with respect to the extension $K/E$. Now identify $\chi_\rho=\chi_K:K^\times/I_FK^\times\to{\mathbb{C}}^\times$. Then the map $$x\in F^\times\mapsto\chi_K\circ N_{K/E}^{-1}(x)$$ is a well-defined character of $F^\times$. Now by Gallagher’s Theorem (cf. [@GK], Theorem $30.1.6$) (arithmetic side) we can write, for all $x\in F^\times$, $$\det(\rho)(x)=\Delta_{E/F}(x)\cdot\chi_E(x)=\Delta_{E/F}(x)\cdot\chi_K(N_{K/E}^{-1}(x)),$$ since $F^\times\subseteq{\mathcal{N}}_{K/E}$ and $\chi_E|_{{\mathcal{N}}_{K/E}}=\chi_K\circ N_{K/E}^{-1}$. Furthermore, since $E/F$ is an abelian extension, $\rm{Gal}(E/F)\cong\widehat{\rm{Gal}(E/F)}$, and from Miller’s Theorem (cf. [@PC], Theorem 6) we can write $$\begin{aligned} \Delta_{E/F} &=\det(\rm{Ind}_{E/F}(1))\\ &=\det(\sum_{\chi\in\widehat{\rm{Gal}(E/F)}}\chi)\\ &=\prod_{\chi\in\widehat{\rm{Gal}(E/F)}}\chi\\ &=\begin{cases} 1 & \text{when the $2$-rank $\rm{rk}_2(\rm{Gal}(E/F))\ne 1$}\\ \omega_{E'/F} & \text{when the $2$-rank $\rm{rk}_2(\rm{Gal}(E/F))= 1$}, \end{cases}\end{aligned}$$ where $E'/F$ is the uniquely determined quadratic subextension in $E/F$, and $\omega_{E'/F}$ is the character of $F^\times$ which corresponds to $E'/F$ by class field theory.
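The case distinction for $\Delta_{E/F}$ is the elementary fact that the product of all characters of a finite abelian group is the unique quadratic character when there is exactly one, and trivial otherwise. For instance:

```latex
% If Gal(E/F) \cong \mathbb{Z}_2 \times \mathbb{Z}_2 (so rk_2 = 2),
% the characters are 1, \alpha, \beta, \alpha\beta, and
\prod_{\chi\in\widehat{\mathrm{Gal}(E/F)}}\chi
  = \alpha\cdot\beta\cdot\alpha\beta = \alpha^{2}\beta^{2} = 1.
% If instead Gal(E/F) is cyclic of even order (rk_2 = 1), pairing
% each character with its inverse leaves exactly the unique
% character of order 2, which corresponds to E'/F; hence
% \Delta_{E/F} = \omega_{E'/F}.
```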
[**Heisenberg representations of $G_F$ of dimensions prime to $p$**]{} ---------------------------------------------------------------------- Let $F/{\mathbb{Q}}_p$ be a non-archimedean local field, and $G_F$ be the absolute Galois group of $F$. In this subsection we construct all Heisenberg representations of $G_F$ of dimension prime to $p$. The construction of Heisenberg representations of this type (i.e., of dimension prime to $p$) is important for the next section. \[Definition U-isotropic\] Let $F$ be a non-archimedean local field. Let $X:FF^\times\to {\mathbb{C}}^\times$ be an alternating character with the property $$X(\varepsilon_1,\varepsilon_2)=1,\qquad \text{for all $\varepsilon_1,\varepsilon_2\in U_F$}.$$ In other words, $X$ is a character of $FF^\times/U_F\wedge U_F$. Then $X$ is said to be U-isotropic. These $X$ are easy to classify: \[Lemma U-isotropic\] Fix a uniformizer $\pi_F$ and write $U:=U_F$. Then we obtain an isomorphism $$\widehat{U}\cong \widehat{FF^\times/U\wedge U}, \quad \eta\mapsto X_\eta,\quad \eta_X\leftarrow X$$ between characters of $U$ and U-isotropic alternating characters as follows: $$\label{eqn 5.1.25} X_\eta(\pi_F^a\varepsilon_1,\pi_F^b\varepsilon_2):=\eta(\varepsilon_1)^b\cdot\eta(\varepsilon_2)^{-a},\quad \eta_X(\varepsilon):=X(\varepsilon,\pi_F),$$ where $a,b\in{\mathbb{Z}}$, $\varepsilon,\varepsilon_1,\varepsilon_2\in U$, and $\eta:U\to{\mathbb{C}}^\times$. Then $$\rm{Rad}(X_\eta)=<\pi_F^{\#\eta}>\times\rm{Ker}(\eta)=<(\pi_F\varepsilon)^{\#\eta}>\times\rm{Ker}(\eta)$$ does not depend on the choice of $\pi_F$, where $\#\eta$ is the order of the character $\eta$, hence $$F^\times/\rm{Rad}(X_\eta)\cong <\pi_F>/<\pi_F^{\#\eta}>\times U/\rm{Ker}(\eta)\cong {\mathbb{Z}}_{\#\eta}\times{\mathbb{Z}}_{\#\eta}.$$ Therefore all Heisenberg representations of type $\rho=\rho(X_\eta,\chi)$ have dimension $\rm{dim}(\rho)=\#\eta$.
To prove $\widehat{U}\cong \widehat{FF^\times/U\wedge U}$, we have to show that $\eta_{X_\eta}=\eta$ and $X_{\eta_X}=X$, and that the inverse map $X\mapsto \eta_X$ does not depend on the choice of $\pi_F$. From the above definition of $\eta_X$, we can write: $$\begin{aligned} \eta_{X_\eta}(\varepsilon) &=X_\eta(\varepsilon,\pi_F)=\eta(\varepsilon)^{1}\cdot \eta(1)^0=\eta(\varepsilon), \end{aligned}$$ for all $\varepsilon\in U$, hence $\eta_{X_\eta}=\eta$. Similarly, from the above definition of $X_\eta$, we have: $$\begin{aligned} X_{\eta_X}(\pi_F^a\varepsilon_1,\pi_F^b\varepsilon_2) &=\eta_X(\varepsilon_1)^b\cdot\eta_X(\varepsilon_2)^{-a}=X(\varepsilon_1,\pi_F)^b\cdot X(\varepsilon_2,\pi_F)^{-a}\\ &=X(\varepsilon_1,\pi_F)^b\cdot X(\pi_F,\varepsilon_2)^{a}=X(\varepsilon_1,\pi_F^b)\cdot X(\pi_F^a,\varepsilon_2)\\ &=X(\pi_F^a\varepsilon_1,\pi_F^b\varepsilon_2).\end{aligned}$$ This shows that $X_{\eta_X}=X$. Now choose the uniformizer $\pi_F\varepsilon$, where $\varepsilon\in U$, instead of $\pi_F$. Then we can write $$\begin{aligned} X_\eta((\pi_F\varepsilon)^a\varepsilon_1,(\pi_F\varepsilon)^b\varepsilon_2) &=X_\eta(\pi_F^a(\varepsilon^a\varepsilon_1),\pi_F^b(\varepsilon^b\varepsilon_2))\\ &=\eta(\varepsilon^a\varepsilon_1)^b\cdot \eta(\varepsilon^b\varepsilon_2)^{-a}\\ &=\eta(\varepsilon_1)^b\cdot \eta(\varepsilon_2)^{-a}\cdot \eta(\varepsilon^{ab-ab})\\ &=\eta(\varepsilon_1)^b\cdot\eta(\varepsilon_2)^{-a}=X_\eta(\pi_F^a\varepsilon_1,\pi_F^b\varepsilon_2).\end{aligned}$$ This shows that $X_\eta$ does not depend on the choice of the uniformizer $\pi_F$. Similarly, since $X$ is trivial on $U\wedge U$, we have $\eta_X(\varepsilon)=X(\varepsilon,\pi_F\varepsilon')=X(\varepsilon,\pi_F)$ for every $\varepsilon'\in U$, so $\eta_X$ also does not depend on the choice of the uniformizer $\pi_F$.
By the definition of the radical of $X_\eta$, we have: $$\rm{Rad}(X_\eta)= \{\pi_F^a\varepsilon\in F^\times\,|\; X_\eta(\pi_F^{a}\varepsilon,\pi_F^{b}\varepsilon')= \eta(\varepsilon)^{b}\cdot \eta(\varepsilon')^{-a}=1\;\text{for all $b\in{\mathbb{Z}}$, $\varepsilon'\in U$}\}.$$ Now if we fix a uniformizer $\pi_F\varepsilon''$, where $\varepsilon''\in U$, instead of $\pi_F$, we can write: $$\rm{Rad}(X_\eta)= \{(\pi_F\varepsilon'')^a\varepsilon\in F^\times\,|\; X_\eta((\pi_F\varepsilon'')^{a}\varepsilon,(\pi_F\varepsilon'')^{b}\varepsilon')= \eta(\varepsilon''^a\varepsilon)^{b}\cdot \eta(\varepsilon''^b\varepsilon')^{-a}=\eta(\varepsilon)^b\cdot\eta(\varepsilon')^{-a}=1\}.$$ This gives $\rm{Rad}(X_\eta)=<\pi_F^{\#\eta}>\times\rm{Ker}(\eta)=<(\pi_F\varepsilon)^{\#\eta}>\times\rm{Ker}(\eta)$, hence $$F^\times/\rm{Rad}(X_\eta)\cong <\pi_F>/<\pi_F^{\#\eta}>\times U/\rm{Ker}(\eta)\cong {\mathbb{Z}}_{\#\eta}\times{\mathbb{Z}}_{\#\eta}.$$ Then all Heisenberg representations of type $\rho=\rho(X_\eta,\chi)$ have dimension $$\rm{dim}(\rho)=\sqrt{[F^\times:\rm{Rad}(X_\eta)]}=\#\eta.$$ From the above Lemma \[Lemma U-isotropic\] we know that the dimension of a U-isotropic Heisenberg representation $\rho=\rho(X_\eta,\chi)$ of $G_F$ is $\rm{dim}(\rho)=\#\eta$, and $F^\times/\rm{Rad}(X_\eta)\cong {\mathbb{Z}}_{\#\eta}\times{\mathbb{Z}}_{\#\eta}$, a direct product of two cyclic groups of the same order $\#\eta$ (a bicyclic group). In general, if $A={\mathbb{Z}}_m\times{\mathbb{Z}}_m$ is a bicyclic group of order $m^2$, then by the following lemma we can compute the total number of elements of order $m$ in $A$, and the number of cyclic complementary subgroups of a fixed cyclic subgroup of order $m$. \[Lemma on bicyclic abelian groups\] Let $A\cong {\mathbb{Z}}_m\times{\mathbb{Z}}_m$ be a bicyclic abelian group of order $m^2$. Then: 1. The number $\psi(m)$ of cyclic subgroups $B\subset A$ of order $m$ is a multiplicative arithmetic function (i.e., $\psi(mn)=\psi(m)\psi(n)$ if $gcd(m,n)=1$). 2.
Explicitly we have $$\psi(m)=m\cdot\prod_{p|m}(1+\frac{1}{p}).$$ And the number of elements of order $m$ in $A$ is: $$\varphi(m)\cdot\psi(m)=m^2\cdot\prod_{p|m}(1-\frac{1}{p^2}).$$ Here $p$ runs over the prime divisors of $m$ and $\varphi(n)$ is Euler's totient function of $n$. 3. Let $B\subset A$ be cyclic of order $m$. Then $B$ always has a complementary subgroup $B'\subset A$ such that $A=B\times B'$, and $B'$ is again cyclic of order $m$. And for $B$ fixed, the number of all different complementary subgroups $B'$ is $m$. To prove these assertions we need to recall the fact: if $G$ is a finite cyclic group of order $m$, then the number of generators of $G$ is $\varphi(m)=m\prod_{p|m}(1-\frac{1}{p})$.\ [**(1).**]{} By the given condition $A\cong {\mathbb{Z}}_m\times{\mathbb{Z}}_m$, and $\psi(m)$ is the number of cyclic subgroups of $A$ of order $m$. It is clear that $\psi$ is an arithmetic function with $\psi(1)=1\ne 0$, hence $\psi$ is not [**additive**]{}. Now take $m{\geqslant}2$ with prime factorization $m=\prod_{i=1}^{k}p_{i}^{a_i}$. To prove multiplicativity, we first consider $m=p^n$, hence $A\cong {\mathbb{Z}}_{p^n}\times{\mathbb{Z}}_{p^n}$. Then the number of cyclic subgroups of $A$ of order $p^n$ is: $$\psi(p^n)=\frac{2\varphi(p^n)p^n-\varphi(p^n)^2}{\varphi(p^n)}=2p^n-\varphi(p^n)=p^n(2-1+\frac{1}{p})=p^n(1+\frac{1}{p}),$$ where the numerator counts the elements of order $p^n$ in $A$ and each cyclic subgroup of order $p^n$ contains $\varphi(p^n)$ of them. Now take $m=p^nq^r$, where $p,q$ are distinct primes. We also know that ${\mathbb{Z}}_{p^nq^r}\times{\mathbb{Z}}_{p^nq^r}\cong{\mathbb{Z}}_{p^n}\times{\mathbb{Z}}_{p^n}\times{\mathbb{Z}}_{q^r}\times{\mathbb{Z}}_{q^r}$. This gives $\psi(p^nq^r)=\psi(p^n)\cdot\psi(q^r)$. By the similar method we can show that $\psi(m)=\prod_{i=1}^{k}\psi(p_{i}^{a_i})$, where $m=\prod_{i=1}^{k}p_{i}^{a_i}$.
This implies that $\psi$ is a multiplicative arithmetic function.\ [**(2).**]{} Since $\psi$ is a multiplicative arithmetic function, we have $$\begin{aligned} \psi(m) &=\prod_{i=1}^{k}\psi(p_{i}^{a_i})=\prod_{i=1}^{k}p_{i}^{a_i}(1+\frac{1}{p_i})\quad\text{since $\psi(p^n)=p^n(1+\frac{1}{p})$},\\ &=p_{1}^{a_1}\cdots p_{k}^{a_k}\prod_{i=1}^{k}(1+\frac{1}{p_i})=m\cdot\prod_{p|m}(1+\frac{1}{p}). \end{aligned}$$ We also know that the number of generators of a finite cyclic group of order $m$ is $\varphi(m)$, hence each cyclic subgroup of order $m$ contains $\varphi(m)$ elements of order $m$. Then the number of elements of order $m$ in $A$ is: $$\varphi(m)\cdot\psi(m)=m\cdot\prod_{p|m}(1-\frac{1}{p})\cdot m\prod_{p|m}(1+\frac{1}{p})=m^2\cdot\prod_{p|m}(1-\frac{1}{p^2}).$$ [**(3).**]{} Let $B\subset A$ be a cyclic subgroup of order $m$. Since $A$ is abelian and bicyclic of order $m^2$, $B$ always has a complementary subgroup $B'\subset A$ such that $A=B\times B'$, and $B'$ is again cyclic of order $m$ (because $B'\cong A/B$ and $A/B$ is cyclic of order $m$). To prove the last part of (3), we start with $m=p^n$. Here $B$ is a cyclic subgroup of $A$ of order $p^n$; writing $A=B\times B'$ for one fixed complement, we have $B=<(a,e)>$, where $\#a=p^n$ and $e$ is the identity of $B'$. Any complement can be written as $B''=<(b,c)>$ with $B\cap B''=\{(e,e)\}$. This forces $c$ to be a generator of $B'$, while $b$ can be any element of the first factor. Thus the total number $\psi_{B'}(p^n)$ of all different complementary subgroups is: $$\psi_{B'}(p^n)=\frac{p^n\varphi(p^n)}{\varphi(p^n)}=p^n=m.$$ Now take $m=p^nq^r$, where $q$ is a prime different from $p$. Then by the same method we can see that $\psi_{B'}(p^nq^r)=\psi_{B'}(p^n)\cdot\psi_{B'}(q^r)=p^nq^r=m$. Thus for arbitrary $m$ we can conclude that $\psi_{B'}(m)=m$. In the following lemma, we give equivalent conditions for a Heisenberg representation to be U-isotropic.
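The counting formulas of the lemma above are easy to check by brute force on small groups. The following sketch (purely illustrative, not part of the argument; all function names are ours) enumerates $A={\mathbb{Z}}_m\times{\mathbb{Z}}_m$ directly and compares the counts with the closed formulas:

```python
# Brute-force sanity check (illustrative only) of the counting formulas on
# the bicyclic group A = Z_m x Z_m:
#   psi(m)          = m * prod_{p|m} (1 + 1/p)     (cyclic subgroups of order m)
#   phi(m) * psi(m) = m^2 * prod_{p|m} (1 - 1/p^2) (elements of order m)
#   a fixed cyclic B of order m has exactly m cyclic complements.
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def phi(n):                      # Euler's totient function
    r = n
    for p in prime_divisors(n):
        r = r // p * (p - 1)
    return r

def psi_formula(m):              # m * prod_{p|m} (1 + 1/p), in integer arithmetic
    r = m
    for p in prime_divisors(m):
        r = r // p * (p + 1)
    return r

def check(m):
    elts = list(product(range(m), repeat=2))
    def order(x):                # order of (a, b) in Z_m x Z_m
        a, b = x
        return lcm(m // gcd(a, m), m // gcd(b, m))
    gens = [x for x in elts if order(x) == m]           # elements of order m
    subs = {frozenset(((k * a) % m, (k * b) % m) for k in range(m))
            for a, b in gens}                           # cyclic subgroups of order m
    B = frozenset((k % m, 0) for k in range(m))         # one fixed cyclic subgroup
    complements = [H for H in subs if len(B & H) == 1]  # trivial intersection with B
    assert len(gens) == phi(m) * psi_formula(m)
    assert len(subs) == psi_formula(m)
    assert len(complements) == m
    return len(subs), len(gens), len(complements)

for m in (2, 3, 4, 6, 9, 12):
    print(m, check(m))
```

For instance, for $m=6$ the enumeration finds $\psi(6)=12$ cyclic subgroups of order $6$, $\varphi(6)\psi(6)=24$ elements of order $6$, and exactly $6$ complements of the fixed subgroup, matching the formulas.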
\[Lemma U-equivalent\] Let $G_F$ be the absolute Galois group of a non-archimedean local field $F$. For a Heisenberg representation $\rho=\rho(Z,\chi_\rho)=\rho(X,\chi_K)$ the following are equivalent: 1. The alternating character $X$ is U-isotropic. 2. Let $E/F$ be the maximal unramified subextension in $K/F$. Then $\rm{Gal}(K/E)$ is maximal isotropic for $X$. 3. $\rho=\rm{Ind}_{E/F}(\chi_E)$ can be induced from a character $\chi_E$ of $E^\times$ (where $E$ is as in (2)). The proof follows from the above Lemma \[Lemma U-isotropic\].\ First, assume that $X$ is U-isotropic, i.e., $X\in\widehat{FF^\times/U\wedge U}$. We also know that $\widehat{U}\cong\widehat{FF^\times/U\wedge U}$. Then $X$ corresponds to a character of $U$, namely $X\mapsto \eta_X$. Then from Lemma \[Lemma U-isotropic\] we have $F^\times /\rm{Rad}(X)\cong {\mathbb{Z}}_{\#\eta_X}\times{\mathbb{Z}}_{\#\eta_X}$, i.e., a product of two cyclic groups of the same order. Since $K/F$ is the abelian bicyclic extension which corresponds to $\rm{Rad}(X)$, we can write: $${\mathcal{N}}_{K/F}=\rm{Rad}(X),\qquad\rm{Gal}(K/F)\cong F^\times/\rm{Rad}(X).$$ Let $E/F$ be the maximal unramified subextension in $K/F$. Then $[E:F]=\#\eta_X$ because the order of a maximal cyclic subgroup of $\rm{Gal}(K/F)$ is $\#\eta_X$. Then $f_{E/F}=\#\eta_X$, hence $f_{K/F}=e_{K/F}=\#\eta_X$ because $f_{K/F}\cdot e_{K/F}=[K:F]=\#\eta_X^2$ and $\rm{Gal}(K/F)$ is not a cyclic group. Now we have to prove that the extension $E/F$ corresponds to a maximal isotropic subgroup for $X$. Let $H/Z$ be a maximal isotropic for $X$, hence $[G_F/Z:H/Z]=\#\eta_X$, hence $H/Z=\rm{Gal}(K/E)$, i.e., the maximal unramified subextension $E/F$ in $K/F$ corresponds to a maximal isotropic subgroup, hence $\rho(X,\chi_K)=\rm{Ind}_{E/F}(\chi_E)$, for $\chi_E\circ N_{K/E}=\chi_K$.
Finally, since $E/F$ is unramified and the extension $E$ corresponds to a maximal isotropic subgroup for $X$, we have $U_F\subset{\mathcal{N}}_{E/F}$, hence $U_F\subset{\mathcal{N}}_{K/F}$ and $X|_{U\times U}=1$ because $U_F\subset F^\times\subset{\mathcal{N}}_{K/E}$. This shows that $X$ is U-isotropic. \[Corollary U-isotropic\] The U-isotropic Heisenberg representation $\rho=\rho(X_\eta,\chi)$ can never be wild because it is induced from an unramified extension $E/F$, but the dimension $\rm{dim}(\rho(X_\eta,\chi))=\#\eta$ can be a power of $p$.\ The representations $\rho$ of dimension prime to $p$ are precisely given as $\rho=\rho(X_\eta,\chi)$ for characters $\eta$ of $U/U^1$. This is clear from the above Lemma \[Lemma U-isotropic\] and the fact $|U/U^1|=q_F-1$. We let $K_\eta|F$ be the abelian bicyclic extension which corresponds to $\rm{Rad}(X_\eta):$ $${\mathcal{N}}_{K_\eta/F}= \rm{Rad}(X_\eta),\qquad \rm{Gal}(K_\eta/F)\cong F^\times/\rm{Rad}(X_\eta).$$ Then we have $f_{K_\eta|F}= e_{K_\eta|F}=\#\eta$ and the maximal unramified subextension $E/F\subset K_\eta/F$ corresponds to a maximal isotropic subgroup, hence $$\rho(X_\eta,\chi) = \rm{Ind}_{E/F}(\chi_E),\quad\textrm{for}\; \chi_E\circ N_{K_\eta/E} =\chi.$$ We recall here that $\chi:K_\eta^\times/I_FK_\eta^\times\rightarrow{\mathbb{C}}^\times$ is a character such that (cf. Theorem \[Theorem 5.1.1\](3)) $$\chi|_{(K_\eta^\times)_F} \leftrightarrow X_\eta,\quad\textrm{with respect to}\; (K_\eta^\times)_F/I_FK_\eta^\times\cong F^\times/\rm{Rad}(X_\eta)\wedge F^\times/\rm{Rad}(X_\eta).$$ In particular, we see that $(K_\eta^\times)_F/I_FK_\eta^\times$ is cyclic of order $\#\eta$ and $\chi|_{(K_\eta^\times)_F}$ must be a faithful character of that cyclic group. In the following lemma we give an explicit description of the representation $\rho=\rho(X_\eta,\chi)$.
\[Explicit Lemma\] Let $\rho=\rho(X_\eta,\chi_K)$ be a U-isotropic Heisenberg representation of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$. Let $K=K_\eta$ and let $E/F$ be the maximal unramified subextension in $K/F$. Then: 1. The norm map induces an isomorphism: $$N_{K/E}:K_F^\times/I_FK^\times\stackrel{\sim}{\to}I_FE^\times/I_F{\mathcal{N}}_{K/E}.$$ 2. Let $c_{K/F}:F^\times/\rm{Rad}(X_\eta)\wedge F^\times/\rm{Rad}(X_\eta)\cong K_F^\times/I_FK^\times$ be the isomorphism which is induced by the commutator in the relative Weil-group $W_{K/F}$. Then for units $\varepsilon\in U_F$ we explicitly have: $$c_{K/F}(\varepsilon\wedge\pi_F)=N_{K/E}^{-1}(N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}}),$$ where $\varphi_{E/F}$ is the Frobenius automorphism for $E/F$ and where $N^{-1}$ means to take a preimage of the norm map. 3. The restriction $\chi_K|_{K_F^\times}$ is characterized by: $$\chi_K\circ c_{K/F}(\varepsilon\wedge\pi_F)=X_\eta(\varepsilon,\pi_F)=\eta(\varepsilon),$$ for all $\varepsilon\in U_F$, where $c_{K/F}(\varepsilon\wedge\pi_F)$ is explicitly given via (2). [**(1).**]{} By the given conditions we have: $K=K_\eta,$ and $K/F$ is the bicyclic extension with $\rm{Rad}(X_\eta)={\mathcal{N}}_{K/F}$, and $E/F$ is the maximal unramified subextension in $K/F$. So $K/E$ and $E/F$ are both cyclic, hence $$E_F^\times=I_FE^\times,\qquad K_E^\times=I_EK^\times.$$ From the diagram (3.6.1) on p. 41 of [@Z4], we have $$N_{K/E}: K_F^\times/I_FK^\times\stackrel{\sim}{\to} E_F^\times/I_F{\mathcal{N}}_{K/E}.$$ We also know that $E_F^\times=I_FE^\times$. Thus the norm map $N_{K/E}$ induces an isomorphism: $$N_{K/E}:K_F^\times/I_FK^\times\cong I_FE^\times/I_F{\mathcal{N}}_{K/E}.$$ [**(2).**]{} By the given conditions, $c_{K/F}$ is the isomorphism which is induced by the commutator in the relative Weil-group $W_{K/F}$ (cf. the map (\[eqn 5.1.3\])). Here $\rm{Rad}(X_\eta)={\mathcal{N}}_{K/F}=:N$. Then from Proposition 1(iii) of [@Z5] on p.
128, we have $$c_{K/F}: N\wedge F^\times/N\wedge N\stackrel{\sim}{\to} I_FK^\times/I_FK_F^\times$$ as an isomorphism by the map: $$c_{K/F}(x\wedge y)=N_{K/F}^{-1}(x)^{1-\phi_F(y)},$$ where $\phi_F(y)\in \rm{Gal}(K/F)$ for $y\in F^\times$ by class field theory. If $y=\pi_F$, then by class field theory (cf. [@JM], p. 20, Theorem 1.1(a)), we can write $\phi_F(\pi_F)|_{E}=\varphi_{E/F}$, where $\varphi_{E/F}$ is the Frobenius automorphism for $E/F$. Now we come to our special case. Since $E/F$ is unramified, we have $U_F\subset{\mathcal{N}}_{E/F}$, and we obtain (cf. [@Z4], pp. 46-47 of Section 4.4 and the diagram on p. 302 of [@Z2]): $$\label{eqn explicit lemma} N_{K/E}\circ c_{K/F}(\varepsilon\wedge\pi_F)=N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}}.$$ We also know (see the first two lines under the upper diagram on p. 302 of [@Z2]) that $E_F^\times\subseteq {\mathcal{N}}_{K/E}$. Here $$N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}}\in I_FE^\times/I_F{\mathcal{N}}_{K/E}=E_F^\times/I_F{\mathcal{N}}_{K/E},$$ because $E/F$ is cyclic, hence $E_F^\times=I_FE^\times$. Therefore from equation (\[eqn explicit lemma\]) we can conclude: $$c_{K/F}(\varepsilon\wedge\pi_F)=N_{K/E}^{-1}(N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}}).$$ [**(3).**]{} We know that $c_{K/F}(\varepsilon\wedge\pi_F)\in K_F^\times$ and $\chi_K:K^\times/I_FK^\times\to{\mathbb{C}}^\times$. Then we can write $$\begin{aligned} \chi_K\circ c_{K/F}(\varepsilon\wedge\pi_F) &=\chi_K(N_{K/E}^{-1}(N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}}))\\ &=\chi_E\circ N_{K/E}(N_{K/E}^{-1}(N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}})), \quad\text{since $\chi_K=\chi_E\circ N_{K/E}$}\\ &=\chi_E(N_{E/F}^{-1}(\varepsilon)^{1-\varphi_{E/F}})=X_\eta(\varepsilon,\pi_F)\\ &=\eta(\varepsilon).\end{aligned}$$ This is true for all $\varepsilon\in U_F$. Therefore we can conclude that $\chi_K|_{K_F^\times}=\eta$. \[Example for Heisenberg reps\] Let $F/{\mathbb{Q}}_p$ be a local field, and $G_F$ be the absolute Galois group of $F$.
Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of $G_F$ of dimension $m$ prime to $p$. Then from Corollary \[Corollary U-isotropic\] the alternating character $X=X_\eta$ is $U$-isotropic for a character $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$. Here from Lemma \[Lemma U-isotropic\] we can say $m=\sqrt{[F^\times:\rm{Rad}(X_\eta)]}=\#\eta$ divides $q_F-1$. Since $U_F^1$ is a pro-p-group and $gcd(m,p)=1$, we have $(U_F^1)^m=U_F^1\subset {F^\times}^m$, and therefore $$F^\times/{F^\times}^m\cong{\mathbb{Z}}_m\times{\mathbb{Z}}_m$$ is a bicyclic group of order $m^2$. So by class field theory there is precisely one extension $K/F$ such that $\rm{Gal}(K/F)\cong{\mathbb{Z}}_m\times{\mathbb{Z}}_m$ and the norm group ${\mathcal{N}}_{K/F}:=N_{K/F}(K^\times)={F^\times}^m$. We know that $U_F/U_F^1$ is a cyclic group of order $q_F-1$, hence $\widehat{U_F/U_F^1}\cong U_F/U_F^1$. By the given condition $m|(q_F-1)$, hence $U_F/U_F^1$ has exactly one subgroup of order $m$. Then the number of elements of order $m$ in $U_F/U_F^1$ is $\varphi(m)$, Euler's $\varphi$-function of $m$. In this setting, we have $\eta\in \widehat{U_F/U_F^1}\cong \widehat{FF^\times/U_F^1\wedge U_F^1}$ with $\#\eta=m$. This implies that up to $1$-dimensional character twist there are $\varphi(m)$ representations corresponding to $X_\eta$, where $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$ is of order $m$.
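The counting in this example can be illustrated numerically by modelling the cyclic group $U_F/U_F^1$ as ${\mathbb{Z}}_{q_F-1}$. The following sketch is our own illustration (the value $q_F=27$ is an arbitrary choice of prime power, not taken from the text): for each divisor $m$ of $q_F-1$ it confirms that there is exactly one subgroup of order $m$ and exactly $\varphi(m)$ elements (hence characters $\eta$) of order $m$.

```python
# Illustrative check: in a cyclic group of order q_F - 1, a divisor m gives
# exactly one subgroup of order m and phi(m) elements of order m.
from math import gcd

def phi(n):
    # Euler's totient, by brute force (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def orders_in_cyclic(n):
    # element k of Z_n has order n / gcd(k, n)
    return [n // gcd(k, n) for k in range(n)]

q_F = 27                        # hypothetical residue field size, p = 3
for m in (2, 13, 26):           # divisors of q_F - 1 = 26
    ords = orders_in_cyclic(q_F - 1)
    n_elts = ords.count(m)
    subgroups = {frozenset((k * g) % (q_F - 1) for k in range(q_F - 1))
                 for g, o in enumerate(ords) if o == m}
    assert n_elts == phi(m) and len(subgroups) == 1
    print(m, n_elts, len(subgroups))
```

For example, with $m=13$ (prime to $p=3$) the script counts $\varphi(13)=12$ elements of order $13$ and a single subgroup of that order, matching the $\varphi(m)$ twist classes described above.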
According to Corollary 1.2 of [@Z2], all dimension-$m$ Heisenberg representations of $G_F=\rm{Gal}(\overline{F}/F)$ are given as $$\rho=\rho(X_\eta,\chi_K),\tag{1H}$$ where $\chi_K: K^\times/ I_{F}K^\times\to\mathbb{C}^{\times}$ is a character such that the restriction of $\chi_K$ to the subgroup $K_{F}^{\times}$ corresponds to $X_\eta$ under the map (\[eqn 5.1.3\]), and $$F^\times/{F^\times}^m\wedge F^\times/{F^\times}^m\cong K_{F}^{\times}/I_{F}K^\times,\tag{2H}$$ which is given via the commutator in the relative Weil-group $W_{K/F}$ (for a detailed arithmetic description of Heisenberg representations of a Galois group, see [@Z2], pp. 301-304). The condition (2H) corresponds to (\[eqn 5.1.3\]). Here the above Explicit Lemma \[Explicit Lemma\] comes in. Due to our assumption, both sides of (2H) are groups of order $m$. And if one choice $\chi_K=\chi_0$ has been fixed, then all other $\chi_K$ are given as $$\label{eqn 4.20} \chi_K=(\chi_F\circ N_{K/F})\cdot\chi_0,$$ for arbitrary characters $\chi_F$ of $F^\times$. For an optimal choice $\chi_K=\chi_0$ and the order of $\chi_0$, we need the following lemma. \[Lemma 5.3.3\] Let $K/F$ be the extension of $F/{\mathbb{Q}}_p$ for which $\rm{Gal}(K/F)={\mathbb{Z}}_m\times{\mathbb{Z}}_m$, and let $K_{F}^{\times}$ and $I_{F}K^\times$ be as above. Then the sequence $$\label{eqn 4.21} 1\to U_{K}^{1}K_{F}^{\times}/U_{K}^{1}I_{F}K^\times\to U_K/U_{K}^{1}I_{F}K^\times\xrightarrow{N_{K/F}} U_F/U_{F}^{1}\to U_F/U_F\cap {F^\times}^m\to 1$$ is exact, and the outer terms are both of order $m$, hence the inner terms are both cyclic of order $q_F-1$. The sequence is exact because ${F^\times}^m=N_{K/F}(K^\times)$ is the group of norms, and $F^\times/{F^\times}^m\cong {\mathbb{Z}}_m\times{\mathbb{Z}}_m$ implies that the right hand term[^3] is of order $m$. By our assumption the order of $K_{F}^{\times}/I_{F}K^\times$ is $m$.
Now we consider the exact sequence $$\label{sequence 5.1.25} 1\to U_{K}^{1}\cap K_{F}^{\times}/U_{K}^{1}\cap I_{F}K^\times\to K_{F}^{\times}/I_{F}K^\times\to U_{K}^{1}K_{F}^{\times}/U_{K}^{1}I_{F}K^\times\to 1.$$ Since the middle term has order $m$, the left term must have order $1$, because $U_{K}^{1}$ is a pro-p-group and $gcd(m,p)=1$. Hence the right term is also of order $m$. So the outer terms of the sequence (\[eqn 4.21\]) both have order $m$, hence the inner terms must have the same order $q_F-1=[U_F:U_{F}^{1}]$, and they are cyclic, because the groups $U_F/U_{F}^{1}$ and $U_K/U_{K}^{1}$ are both cyclic. [**Optimal choice of $\chi_0$**]{}: 1. we take $\chi_0$ as a character of $K^\times/U_{K}^{1}I_{F}K^\times$, 2. we take it on $U_{K}^{1}K_{F}^{\times}/U_{K}^{1}I_{F}K^\times$ as prescribed by the above Explicit Lemma \[Explicit Lemma\]; in particular, $\chi_0$ restricted to that subgroup (which is cyclic of order $m$) will be faithful, 3. we take it trivial on all primary components of the cyclic group $U_{K}/U_{K}^{1}I_{F}K^\times$ which are not $p_i$-primary, where $m=\prod_{i=1}^{n}p_i^{a_i}$, 4. we take it trivial on a fixed prime element $\pi_K$. Under the above optimal choice of $\chi_0$, we have \[Lemma 5.1.17\] Denote by $\nu_p(n)$ the largest integer such that $p^{\nu_p(n)}|n$. The character $\chi_0$ must be a character of order $$m_{q_F-1}:=\prod_{l|m}l^{\nu_l(q_F-1)},$$ which we will call the $m$-primary part of $q_F-1$; so it determines a cyclic extension $L/K$ of degree $m_{q_F-1}$ which is totally tamely ramified, and we can consider the Heisenberg representation $\rho=\rho(X,\chi_0)$ of $G_F=\rm{Gal}(\overline{F}/F)$ as a representation of $\rm{Gal}(L/F)$, which is of order $m^2\cdot m_{q_F-1}$. By the given conditions, $m|q_F-1$.
Therefore we can write $$q_F-1=\prod_{l|m}l^{\nu_l(q_F-1)}\cdot \prod_{p|q_F-1,\; p\nmid m}p^{\nu_p(q_F-1)}= m_{q_F-1}\cdot \prod_{p|q_F-1,\;p\nmid m}p^{\nu_p(q_F-1)},$$ where $l, p$ are prime, and $m_{q_F-1}=\prod_{l|m}l^{\nu_l(q_F-1)}$. From the construction of $\chi_0$, $\pi_K\in\rm{Ker}(\chi_0)$, hence the order of $\chi_0$ comes from its restriction to $U_K$. Then the order of $\chi_0$ is $m_{q_F-1}$, because from Lemma \[Lemma 5.3.3\], the order of $U_K/U_{K}^{1}I_FK^\times$ is $q_F-1$. Since the order of $\chi_0$ is $m_{q_F-1}$, by class field theory $\chi_0$ determines a cyclic extension $L/K$ of degree $m_{q_F-1}$, hence $$N_{L/K}(L^\times)=\rm{Ker}(\chi_0)=\rm{Ker}(\rho).$$ This means $G_L$ is the kernel of $\rho(X,\chi_0)$, hence $\rho(X,\chi_0)$ is actually a representation of $G_F/G_L\cong\rm{Gal}(L/F)$. Since $G_L$ is a normal subgroup of $G_F$, $L/F$ is a normal extension of degree $[L:F]=[L:K]\cdot[K:F]=m_{q_F-1}\cdot m^2$. Thus $\rm{Gal}(L/F)$ is of order $m^2\cdot m_{q_F-1}$. Moreover, since $[L:K]=m_{q_F-1}$ and $gcd(m,p)=1$, $L/K$ is tame. By construction we have a prime element $\pi_K\in\rm{Ker}(\chi_0)=N_{L/K}(L^\times)$, hence $L/K$ is a totally ramified extension. (Here $L$, $K$, and $F$ are the same as in Lemma \[Lemma 5.1.17\].) Let $F^{ab}/F$ be the maximal abelian extension. Then we have $$L\supset L\cap F^{ab}\supset K\supset F, \quad\{1\}\subset G'\subset Z(G)\subset G=\rm{Gal}(L/F),$$ where $[L:L\cap F^{ab}]=|G'|=m$ and $[L:K]=|Z(G)|=m_{q_F-1}$. Here $L\cap F^{ab}/F$ is the maximal abelian subextension in $L/F$.
Then from Galois theory we can conclude $$\rm{Gal}(L/L\cap F^{ab})=[\rm{Gal}(L/F), \rm{Gal}(L/F)]=: G'.$$ Since $\rm{Gal}(L/F)=G_F/\rm{Ker}(\rho)$, and $[[G_F,G_F],G_F]\subseteq\rm{Ker}(\rho)$, from relation (\[eqn 5.1.3\]) we have $$G'=[G_F,G_F]/\rm{Ker}(\rho)\cap [G_F,G_F]=[G_F,G_F]/[[G_F,G_F],G_F]\cong K_F^\times/I_FK^\times.$$ Again from the sequence (\[sequence 5.1.25\]) we have $|U_K^1K_F^\times/U_K^1 I_FK^\times|=|K_F^\times/I_FK^\times|=m$. Hence $|G'|=m$. From the Heisenberg property of $\rho$, we have $[[G_F,G_F],G_F]\subseteq\rm{Ker}(\rho)$, hence $\rm{Gal}(L/F)=G_F/\rm{Ker}(\rho)$ is a two-step nilpotent group (cf. Remark \[Remark 2.10\]). This gives $[G',G]=1$, hence $G'\subseteq Z:=Z(G)$. Thus $G/Z$ is abelian. Moreover, here $Z$ is the scalar group of $\rho$, hence the dimension of $\rho$ is: $$\rm{dim}(\rho)=\sqrt{[G:Z]}=m.$$ Therefore the order of $Z$ is $m_{q_F-1}$ and $Z=\rm{Gal}(L/K)$. Now if we take $m=2$, hence $p\ne 2$, and choose $\chi_0$ as in the above optimal choice, then we will have $m_{q_F-1}=2_{q_F-1}$, the $2$-primary part of the number $q_F-1$, and $\rm{Gal}(L/F)$ is a $2$-group of order $4\cdot 2_{q_F-1}$. When $q_F\equiv -1\pmod{4}$, $q_F$ is of the form $q_F=4l-1$, where $l{\geqslant}1$. So we can write $q_F-1=2(2l-1)$. Since $2l-1$ is always odd, when $q_F\equiv-1\pmod{4}$ the order of $\chi_0$ is $2_{q_F-1}=2$. Then $\rm{Gal}(L/F)$ will be of order $8$ if and only if $q_F\equiv -1\pmod{4}$, i.e., if and only if $i\not\in F$. And if $q_F\equiv 1\pmod{4}$, then similarly we can write $q_F-1=4k$ for some integer $k{\geqslant}1$, hence $2_{q_F-1}{\geqslant}4$. Therefore when $q_F\equiv 1\pmod{4}$, the order of $\rm{Gal(L/F)}$ will be at least $16$. [**Artin conductors, Swan conductors, and the dimensions of Heisenberg representations**]{} ------------------------------------------------------------------------------------------- Let $G$ be a finite group and $R(G)$ be the complex representation ring of $G$.
For any two representations $\rho_1,\rho_2\in R(G)$ with characters $\chi_1,\chi_2$ respectively, we have the Schur inner product: $$<\rho_1,\rho_2>_G=<\chi_1,\chi_2>_G:=\frac{1}{|G|}\sum_{g\in G}\chi_1(g)\cdot\overline{\chi_2(g)}.$$ Let $K/F$ be a finite Galois extension with Galois group $G:=\rm{Gal}(K/F)$. For an element $g\in G$ different from the identity $1$, we define the non-negative integer (cf. [@JPS], Chapter IV, p. 62) $$i_G(g):=\rm{inf}\{\nu_K(x-g(x))|\; x\in O_K\}.$$ By using this integer $i_G(g)$ we define a function $a_G:G\to{\mathbb{Z}}$ as follows: $a_G(g)=-f_{K/F}\cdot i_G(g)$ when $g\ne 1$, and $a_G(1)=f_{K/F}\sum_{g\ne 1}i_G(g)$. Thus from this definition we can see that $\sum_{g\in G}a_G(g)=0$, hence $<a_G, 1_G>=0$. It can be proved (cf. [@JPS], p. 99, Theorem 1) that the function $a_G$ is the character of a linear representation of $G$; the corresponding linear representation is called the [**Artin representation**]{} $A_G$ of $G$. Similarly, for $g\ne 1\in G$, we define (cf. [@VS], p. 247) $$s_G(g)=\rm{inf}\{\nu_K(1-g(x)x^{-1})|\;x\in K^\times\},\qquad s_G(1)=-\sum_{g\ne 1}s_G(g),$$ and we define a function $\rm{sw}_G:G\to{\mathbb{Z}}$ as follows: $$\rm{sw}_G(g)=-f_{K/F}\cdot s_G(g).$$ It can also be shown that $\rm{sw}_G$ is the character of a linear representation of $G$; the corresponding representation is called the [**Swan representation**]{} $SW_G$ of $G$. From [@JP], p. 160, we have the relation between the Artin and Swan representations (cf. [@VS], p. 248, equation (6.1.9)) $$\label{eqn 5.1.22} SW_G=A_G+\rm{Ind}_{G_0}^{G}(1)-\rm{Ind}_{\{1\}}^{G}(1),$$ where $G_0$ is the $0$-th ramification group (i.e., the inertia group) of $G$. Now we are in a position to define the Artin and Swan conductors of a representation $\rho\in R(G)$. The Artin conductor of a representation $\rho\in R(G)$ is defined by $$a_F(\rho):=<A_G,\rho>_G=<a_G,\chi>_G,$$ where $\chi$ is the character of the representation $\rho$.
Similarly, for the representation $\rho$, the Swan conductor is: $$\rm{sw}_F(\rho):=<SW_G,\rho>_G=<\rm{sw}_G,\chi>_G.$$ For more details about the Artin and Swan conductors, see Chapter 6 of [@VS] and Chapter VI of [@JPS]. From equation (\[eqn 5.1.22\]) we obtain $$\label{eqn 5.1.23} a_F(\rho)=\rm{sw}_F(\rho)+\rm{dim}(\rho)-<1,\rho>_{G_0}.$$ Moreover, from the Corollary of Proposition 4 on p. 101 of [@JPS], for an induced representation $\rho:=\rm{Ind}_{\rm{Gal}(K/E)}^{\rm{Gal}(K/F)}(\rho_E)=\rm{Ind}_{E/F}(\rho_E)$, we have $$\label{eqn 5.1.24} a_F(\rho)=f_{E/F}\cdot \left( d_{E/F}\cdot \rm{dim}(\rho_E)+\textrm{a}_E(\rho_E)\right).$$ We apply this formula (\[eqn 5.1.24\]) for $\rho_E=\chi_E$ of dimension $1$, and then conversely $$a(\chi_E)=\frac{a_F(\rho)}{f_{E/F}}-d_{E/F}.$$ So if we know $a_F(\rho)$ then we can compute $a(\chi_E)$. Let $\{G^i\}$, $i\in{\mathbb{Q}}_{{\geqslant}0}$, be the ramification subgroups (in the upper numbering) of a local Galois group $G$. Now let $\rho$ be an irreducible representation of $G$. For this irreducible $\rho$ we define $$j(\rho):=\rm{max}\{ i\;|\; \rho|_{G^i}\not\equiv 1\}.$$ Now if $\rho$ is an irreducible representation of $G$, then $\rho|_{I}\not\equiv 1$, where $I=G^0=G_0$ is the inertia subgroup of $G$. Thus from the definition of $j(\rho)$ we can say: if $\rho$ is irreducible, then we always have $j(\rho){\geqslant}0$, i.e., $\rho$ is nontrivial on the inertia group $G_0$. Then from the definitions of Swan and Artin conductors, and equation (\[eqn 5.1.23\]), when $\rho$ is irreducible, we have the following relations $$\label{eqn 5.1.281} \rm{sw}_F(\rho)=\rm{dim}(\rho)\cdot j(\rho),\qquad a_F(\rho)=\rm{dim}(\rho)\cdot (j(\rho)+1).$$ From the Theorem of Hasse-Arf (cf. [@JPS], p. 76), if $\rm{dim}(\rho)=1$, i.e., $\rho$ is a character of $G/[G,G]$, we can say that $j(\rho)$ must be an integer, and then $\rm{sw}_F(\rho)=j(\rho), a_F(\rho)=j(\rho)+1$.
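To make the definitions above concrete, here is a toy computation (our own illustration, not taken from the text). For a tamely ramified quadratic extension such as $K=F(\sqrt{\pi_F})$ with $p$ odd and $G=\rm{Gal}(K/F)=\{1,s\}$, one has $i_G(s)=1$ and $s_G(s)=0$ (hand-computed valuations; the script only checks the resulting bookkeeping):

```python
# Toy example: Artin/Swan conductors for a tamely ramified quadratic extension,
# G = {1, s}, totally ramified, so f_{K/F} = 1.  The valuations i_G(s) = 1 and
# s_G(s) = 0 are assumed as hand-computed inputs for this tame case.
f_KF = 1
i_s, s_s = 1, 0

# the class functions a_G and sw_G on (1, s), per the definitions in the text:
# a_G(1) = f * sum_{g != 1} i_G(g),  a_G(g) = -f * i_G(g) for g != 1 (sw_G likewise)
a_G  = (f_KF * i_s, -f_KF * i_s)
sw_G = (f_KF * s_s, -f_KF * s_s)

def inner(u, v):
    # Schur inner product on the 2-element group (all characters here are real)
    return sum(x * y for x, y in zip(u, v)) / 2

chi = (1, -1)                   # the nontrivial (ramified) character of G
a_chi, sw_chi = inner(a_G, chi), inner(sw_G, chi)
print(a_chi, sw_chi)            # tame ramified character: Artin 1, Swan 0

# SW_G = A_G + Ind_{G_0}^G(1) - Ind_{1}^G(1), here with G_0 = G:
ind_G0 = (1, 1)                 # induction of 1 from G_0 = G: trivial character
ind_1  = (2, 0)                 # induction of 1 from {1}: regular character
rhs = tuple(a + b - c for a, b, c in zip(a_G, ind_G0, ind_1))
assert rhs == sw_G

# a_F(chi) = sw_F(chi) + dim(chi) - <1, chi>_{G_0}
assert a_chi == sw_chi + 1 - inner((1, 1), chi)
```

The assertions confirm, in this tame quadratic case, both displayed relations between the Artin and Swan data: the ramified character has Artin conductor $1$ and Swan conductor $0$.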
Moreover, by class field theory, $\rho$ corresponds to a linear character $\chi_F$, hence for a linear character $\chi_F$ we can write $$j(\chi_F):=\rm{max}\{i\;|\;\chi_F|_{U_F^i}\not\equiv1\},$$ because under class field theory (under the Artin isomorphism) the upper numbering filtration of $\rm{Gal}(F_{ab}/F)$ is compatible with the filtration (descending chain) of the group of units $U_F$. From equation (\[eqn 5.1.281\]), it is easy to see that for higher dimensional $\rho$, both $\rm{sw}_F(\rho)$ and $a_F(\rho)$ are multiples of $\rm{dim}(\rho)$ if and only if $j(\rho)$ is an integer. Now we come to our Heisenberg representations. For each $X\in\widehat{FF^\times}$ we define $$j(X):=\begin{cases} 0 & \text{when $X$ is trivial}\\ \rm{max}\{i\;|\; X|_{UU^i}\not\equiv 1\} & \text{when $X$ is nontrivial}, \end{cases}$$ where $UU^i\subseteq FF^\times$ is a subgroup which under (\[eqn 5.1.2\]) corresponds to $$G_F^i\cap[G_F,G_F]/G_F^i\cap[[G_F,G_F],G_F]\subseteq[G_F,G_F]/[[G_F,G_F],G_F].$$ Let $\rho=\rho(X_\rho,\chi_K)$ be the [**minimal conductor**]{} (i.e., a representation with the smallest Artin conductor) Heisenberg representation for $X_\rho$ of the absolute Galois group $G_F$. From Theorem 3 on p. 125 of [@Z5], we have $$\label{eqn 5.1.26} \rm{sw}_F(\rho)=\rm{dim}(\rho)\cdot j(X_\rho)=\sqrt{[F^\times:\rm{Rad}(X_\rho)]}\cdot j(X_\rho).$$ Let $\rho_0=\rho(X,\chi_0)$ be a minimal representation corresponding to $X$; then all other Heisenberg representations of dimension $\rm{dim}(\rho)$ are of the form $\rho=\chi_F\otimes \rho_0=(X, (\chi_F\circ N_{K/F})\chi_0)$, where $\chi_F:F^\times\to {\mathbb{C}}^\times$. Then we have (cf. [@Z2], p. 305, equation (5)) $$\label{eqn 5.1.27} \rm{sw}_F(\rho)=\rm{sw}_F(\chi_F\otimes\rho_0)=\sqrt{[F^\times:\rm{Rad}(X)]}\cdot\rm{max}\{j(\chi_F), j(X)\}.$$ For minimal conductor U-isotropic Heisenberg representations we have the following proposition.
\[Proposition conductor\] Let $\rho=\rho(X_\eta,\chi_K)$ be a U-isotropic Heisenberg representation of $G_F$ of minimal conductor. Then we have the following conductor relations: $j(X_\eta)=j(\eta)$, $\rm{sw}_F(\rho)=\rm{dim}(\rho)\cdot j(X_\eta)=\#\eta\cdot j(\eta)$, and $a_F(\rho)=\rm{sw}_F(\rho)+\rm{dim}(\rho)=\#\eta\cdot(j(\eta)+1)=\#\eta\cdot a_F(\eta)$. From [@Z5], p. 126, Proposition 4(i) and Proposition 5(ii), and $U\wedge U=U^1\wedge U^1$, we see that the injection $U^i\wedge F^\times\subseteq UU^i$ induces a natural isomorphism $$U^i\wedge<\pi_F>\cong UU^{i}/UU^i\cap (U\wedge U)$$ for all $i{\geqslant}0$. Now let $j(X_\eta)=n-1$, hence $X_\eta|_{UU^n}=1$ but $X_\eta|_{UU^{n-1}}\ne 1$. This gives $X_\eta|_{U^n\wedge<\pi_F>}=1$ but $X_\eta|_{U^{n-1}\wedge<\pi_F>}\ne 1$. Now from equation (\[eqn 5.1.25\]) we can conclude that $\eta(x)=1$ for all $x\in U^n$ but $\eta(x)\ne 1$ for some $x\in U^{n-1}$. Hence $$j(\eta)=n-1=j(X_\eta).$$ Again from the definition of $j(\chi)$, where $\chi$ is a character of $F^\times$, we can see that $j(\chi)=a(\chi)-1$, i.e., $a(\chi)=j(\chi)+1$. From equation (\[eqn 5.1.26\]) we obtain: $$\rm{sw}_F(\rho)=\rm{dim}(\rho)\cdot j(X_\eta)=\#\eta\cdot j(\eta),$$ since $\rm{dim}(\rho)=\#\eta$ and $j(X_\eta)=j(\eta)$. Finally, from equation (\[eqn 5.1.23\]) for $\rho$ (here $<1,\rho>_{G_0}=0$), we have $$\label{eqn 5.1.28} a_F(\rho)=\rm{sw}_F(\rho)+\rm{dim}(\rho)=\#\eta\cdot j(\eta)+\#\eta=\#\eta\cdot (j(\eta)+1)=\#\eta\cdot a_F(\eta).$$ By using equation (\[eqn 5.1.24\]) in our Heisenberg setting, we have the following proposition. \[Proposition 5.1.20\] Let $\rho=\rho(Z,\chi_\rho)=\rho(X,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$ of a field $F/{\mathbb{Q}}_p$ of dimension $m$. Let $E/F$ be any subextension in $K/F$ corresponding to a maximal isotropic subgroup for $X$.
Then $$a_F(\rho)=a_F(\rm{Ind}_{E/F}(\chi_E)),\qquad m\cdot a_F(\rho)=a_F(\rm{Ind}_{K/F}(\chi_K)).$$ As a consequence we have $$a(\chi_K)=e_{K/E}\cdot a(\chi_E)-d_{K/E}.$$ We know that $\rho=\rm{Ind}_{E/F}(\chi_E)$ and $m \cdot \rho=\rm{Ind}_{K/F}(\chi_K)$. By the definition of the Artin conductor we can write $$a_F(\rm{dim}(\rho)\cdot \rho)=\rm{dim}(\rho)\cdot a_F(\rho)=m\cdot a_F(\rm{Ind}_{E/F}(\chi_E)).$$ Since $K/E/F$ is a tower of Galois extensions with $[K:E]=m=e_{K/E}f_{K/E}$, we have the transitivity relation of the different (cf. [@JPS], p. 51, Proposition 8) $$\mathcal{D}_{K/F}=\mathcal{D}_{K/E}\cdot \mathcal{D}_{E/F}.$$ Now from the definition of the different of a Galois extension, taking the $K$-valuation we obtain: $$\label{eqn discriminant relation} d_{K/F}=d_{K/E}+e_{K/E}\cdot d_{E/F}.$$ Now by using equation (\[eqn 5.1.24\]) we have: $$\label{eqn 44} m\cdot a_F(\rm{Ind}_{E/F}(\chi_E))=m\cdot f_{E/F}\left(d_{E/F}+a(\chi_E)\right)=m\cdot f_{E/F}\cdot d_{E/F}+e_{K/E}\cdot f_{K/F} \cdot a(\chi_E),$$ and $$\label{eqn 45} a_F(\rm{Ind}_{K/F}(\chi_K))=f_{K/F}\cdot\left(d_{K/F}+a(\chi_K)\right)=f_{K/F}\cdot d_{K/F}+f_{K/F}\cdot a(\chi_K).$$ By using equation (\[eqn discriminant relation\]), from equations (\[eqn 44\]) and (\[eqn 45\]) we have $$a(\chi_K)=e_{K/E}\cdot a(\chi_E)-d_{K/E}.$$ Now by combining Proposition \[Proposition 5.1.20\] with Proposition \[Proposition conductor\], we get the following result. \[Lemma general conductor\] Let $\rho=\rho(X_\eta,\chi_K)$ be a U-isotropic Heisenberg representation of the absolute Galois group $G_F$ of a non-archimedean local field $F$. Let $K=K_\eta$ correspond to the radical of $X_\eta$, and let $E_1/F$ be the maximal unramified subextension, and $E/F$ be any maximal cyclic and totally ramified subextension in $K/F$. Let $m$ denote the order of $\eta$. Then $\rho$ is induced by $\chi_{E_1}$ or by $\chi_E$ respectively, and we have 1. $a_E(\chi_E)=m\cdot a(\eta)-d_{E/F}$, 2. $a_{E_1}(\chi_{E_1})=a(\eta)$, 3.
and for the character $\chi_K\in\widehat{K^\times}$, $$a_K(\chi_K)=m\cdot a(\eta)-d_{K/F}.$$ Moreover, $a_E(\chi_E)=a_K(\chi_K)$. The proof of these assertions follows from equation (\[eqn 5.1.24\]) and Proposition \[Proposition conductor\]. When $\rho=\rm{Ind}_{E/F}(\chi_E)$, where $E/F$ is a maximal cyclic and totally ramified subextension in $K/F$, from equation (\[eqn 5.1.24\]) we have $$\begin{aligned} a_F(\rho) &=m\cdot a(\eta)\quad\text{using Proposition $\ref{Proposition conductor}$},\\ &=f_{E/F}\cdot\left(d_{E/F}\cdot 1+a_E(\chi_E)\right),\quad\text{since $\rho=\rm{Ind}_{E/F}(\chi_E)$}\\ &=1\cdot\left(d_{E/F}+a_E(\chi_E)\right),\end{aligned}$$ because $E/F$ is totally ramified, hence $f_{E/F}=1$. This implies $a_E(\chi_E)=m\cdot a(\eta)-d_{E/F}$. Similarly, when $\rho=\rm{Ind}_{E_1/F}(\chi_{E_1})$, where $E_1/F$ is the maximal unramified subextension in $K/F$, hence $f_{E_1/F}=m$ and $d_{E_1/F}=0$, by using equation (\[eqn 5.1.24\]) we obtain $a_{E_1}(\chi_{E_1})= a(\eta)$. Again from Proposition \[Proposition 5.1.20\] we have $$a_K(\chi_K)=m\cdot a(\chi_{E_1})-d_{K/E_1}=m\cdot a(\eta)-d_{K/F}.$$ Finally, $E/F$ being maximal cyclic and totally ramified implies that $K/E$ is unramified, and therefore $$d_{E/F}=d_{K/F},\quad\text{and hence}\; a_E(\chi_E)=a_K(\chi_K).$$ \[Remark 5.1.22\] Assume that we are in the case of dimension $m=\#\eta$ prime to $p$. Then from Corollary \[Corollary U-isotropic\], $\eta$ must be a character of $U/U^1$ (for $U=U_F$), hence $$a(\eta)=1,\qquad a_F(\rho_0) =m.$$ Therefore in this case the minimal conductor of $\rho$ is $m$, hence it is equal to the dimension of $\rho$.
From the above Lemma \[Lemma general conductor\], in this case we have $$a_{E_1}(\chi_{E_1})=a(\eta)=1.$$ And $K/F, E/F$ are tamely ramified of ramification exponent $e_{K/F}=m$, hence $$a_E(\chi_E) = a_K(\chi_K) = m\cdot a(\eta)-d_{K/F}=m -(e_{K/F}-1)=m-(m-1)=1.$$ Thus we can conclude that in this case all three characters (i.e., $\chi_{E_1},\chi_E$, and $\chi_K$) are of conductor $1$. In the general case we have $a_{E_1}(\chi_{E_1}) = a(\eta)$ and $$a_E(\chi_E)= a_K(\chi_K) = m\cdot a(\eta)-d,$$ where $d=d_{E/F}=d_{K/F}$, so the conductors will in general be different. In general, if $\rho=\rho_0\otimes\chi_F$, where $\rho_0$ is a finite dimensional minimal conductor representation of $G_F$, and $\chi_F\in\widehat{F^\times}$, then we have the following result. \[Lemma 5.1.23\] Let $\rho_0$ be a finite dimensional representation of $G_F$ of minimal conductor. Then we have $$a_F(\rho)=\rm{dim}(\rho_0)\cdot a_F(\chi_F),$$ where $\rho=\rho_0\otimes\chi_F=\rho(X_\eta,(\chi_F\circ N_{K/F})\chi_0)$ and $\chi_F\in\widehat{F^\times}$ with $a(\chi_F)>\frac{a(\rho_0)}{\rm{dim}(\rho_0)}$. From equation (\[eqn 5.1.281\]) we have $a_F(\rho_0)=\rm{dim}(\rho_0)\cdot (1+j(\rho_0))$. By the given condition $\rho_0$ is of minimal conductor. So for the representation $\rho=\rho_0\otimes\chi_F$, we have $$\begin{aligned} a_F(\rho) &=a_F(\rho_0\otimes\chi_F)=\rm{dim}(\rho_0)\cdot\left(1+\rm{max}\{j(\rho_0),j(\chi_F)\}\right)\\ &=\rm{dim}(\rho_0)\cdot\rm{max}\{1+j(\chi_F), 1+j(\rho_0)\}\\ &=\rm{dim}(\rho_0)\cdot\rm{max}\{a(\chi_F), 1+j(\rho_0)\}\\ &=\rm{dim}(\rho_0)\cdot a_F(\chi_F),\end{aligned}$$ because by the given condition $$a(\chi_F)>\frac{a(\rho_0)}{\rm{dim}(\rho_0)}=\frac{\rm{dim}(\rho_0)\cdot(1+j(\rho_0))}{\rm{dim}(\rho_0)}=1+j(\rho_0).$$ \[Proposition 5.1.23\] Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of dimension $m$ of the absolute Galois group $G_F$ of a non-archimedean local field $F$.
Then $m\mid a_F(\rho)$ if and only if:\ $X$ is $U$-isotropic, or (if $X$ is not $U$-isotropic) $a_F(\rho)$ is not the minimal conductor with respect to $X$. From the above Lemma \[Lemma 5.1.23\] we know that if $\rho$ is not minimal, then $a_F(\rho)$ is always a multiple of the dimension $m$. So now we just have to check the minimal conductors. In the U-isotropic case the minimal conductor is a multiple of the dimension (cf. Proposition \[Proposition conductor\]). Finally, suppose that $X$ is not U-isotropic, i.e., $X|_{U\wedge U}=X|_{U^1\wedge U^1}\not\equiv1$, because $U\wedge U=U^1\wedge U^1$ (see the Remark on p. 126 of [@Z5]). We also know that $UU^i=(UU^i\cap U^1\wedge U^1)\times(U^i\wedge\langle\pi_F\rangle)$ (cf. [@Z5], p. 126, Proposition 5(ii)). In Proposition 5 of [@Z5], we observe that all the jumps $v$ in the filtration $\{UU^i\cap (U^1\wedge U^1)\}, i\in{\mathbb{R}}_{+}$ are not [**integers with $v>1$**]{}. This shows that $j(X)$ is also not an integer, hence $a_F(\rho_0)$ is not a multiple of the dimension. This implies the conductor $a_F(\rho)$ is not minimal. Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$. Then from equation (\[eqn dimension formula\]), we have $$\rm{dim}(\rho)=\sqrt{[K:F]}=\sqrt{[F^\times:{\mathcal{N}}_{K/F}]},$$ where ${\mathcal{N}}_{K/F}=\rm{Rad}(X)$. \[Lemma dimension equivalent\] Let $\rho=(Z_\rho,\chi)=\rho(X_\rho,\chi)$ be a Heisenberg representation of the absolute Galois group $G_F$ of a non-archimedean local field $F/{\mathbb{Q}}_p$. Then the following are equivalent: 1. $\rm{dim}(\rho)$ is prime to $p$. 2. $\rm{dim}(\rho)$ is a divisor of $q_F-1$. 3. The alternating character $X_\rho$ is $U$-isotropic and $X_\rho=X_\eta$ for a character $\eta$ of $U_F/U_F^1$.
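Before turning to the proof, a small illustration of the lemma (the base field here is a hypothetical choice of ours): for $F={\mathbb{Q}}_5$ we have $q_F=5$, so by the equivalence of (1) and (2) the possible prime-to-$p$ dimensions of Heisenberg representations of $G_F$ are exactly the divisors of $$q_F-1=4,\qquad\text{i.e.}\qquad \rm{dim}(\rho)\in\{1,2,4\}.$$ In particular, no Heisenberg representation of $G_{{\mathbb{Q}}_5}$ can have dimension $3$: such a dimension would be prime to $5$, but $3\nmid 4$.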
From Corollary \[Corollary U-isotropic\] we know that all Heisenberg representations of dimension prime to $p$ are U-isotropic representations of the form $\rho=\rho(X_\eta,\chi)$, where $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$, and the dimension is $\rm{dim}(\rho)=\#\eta$. Thus if $\rm{dim}(\rho)$ is prime to $p$, then $\rm{dim}(\rho)=\#\eta$ is a divisor of $q_F-1$. And if $\rm{dim}(\rho)$ is a divisor of $q_F-1$, then $\gcd(p,\rm{dim}(\rho))=1$. Then from Corollary \[Corollary U-isotropic\], the alternating character $X_\rho$ is U-isotropic and $X_\rho=X_\eta$ for a character $\eta\in\widehat{U_F/U_F^1}$. Finally, if $\rho=\rho(X_\eta,\chi_K)$ is a Heisenberg representation of $G_F$ with $X_\rho=X_\eta$ for a character $\eta$ of $U_F/U_F^1$, then from Corollary \[Corollary U-isotropic\], we know that $\rm{dim}(\rho)$ is prime to $p$. To give an invariant formula for $W(\rho)$, we need to know the explicit dimension formula for $\rho$. In the following theorem we give the general dimension formula for a Heisenberg representation. \[Dimension Theorem\] Let $F/{\mathbb{Q}}_p$ be a local field and $G_F$ be the absolute Galois group of $F$. If $\rho$ is a Heisenberg representation of $G_F$, then $\rm{dim}(\rho)=p^n\cdot d'$, where $n{\geqslant}0$ is an integer and the prime-to-$p$ factor $d'$ must divide $q_F-1$. By the definition of a Heisenberg representation $\rho$ we have the relation $$[[G_F,G_F],G_F]\subseteq\rm{Ker}(\rho).$$ Then we can consider $\rho$ as a representation of $G:=G_F/[[G_F,G_F],G_F]$. Since $[x,g]\in [[G_F,G_F],G_F]$ for all $x\in [G_F,G_F]$ and $g\in G_F$, we have $[G,G]=[G_F,G_F]/[[G_F,G_F],G_F]\subseteq Z(G)$, hence $G$ is a two-step nilpotent group. We know that each nilpotent group is isomorphic to the direct product of its Sylow subgroups. Therefore we can write $$G=G_p\times G_{p'},$$ where $G_p$ is the Sylow $p$-subgroup, and $G_{p'}$ is the direct product of all other Sylow subgroups.
Therefore each irreducible representation $\rho$ has the form $\rho=\rho_{p}\otimes\rho_{p'}$, where $\rho_{p}$ and $\rho_{p'}$ are irreducible representations of $G_p$ and $G_{p'}$ respectively. We also know that finite $p$-groups are nilpotent, and a direct product of finitely many nilpotent groups is again nilpotent. So $G_p$ and $G_{p'}$ are both two-step nilpotent groups, because $G$ is a two-step nilpotent group. Therefore the representations $\rho_p$ and $\rho_{p'}$ are both Heisenberg representations of $G_p$ and $G_{p'}$ respectively. Now to prove our assertion, we have to show that $\rm{dim}(\rho_p)$ is a power of $p$, whereas $\rm{dim}(\rho_{p'})$ must divide $q_F-1$. Since $\rho_p$ is an [**irreducible**]{} representation of the $p$-group $G_p$, the dimension of $\rho_p$ is a $p$-power. Again from the construction of $\rho_{p'}$ we can say that $\rm{dim}(\rho_{p'})$ is [**prime**]{} to $p$. Then from Lemma \[Lemma dimension equivalent\], $\rm{dim}(\rho_{p'})$ is a divisor of $q_F-1$. This completes the proof. \[Remark 5.1.3\] [**(1).**]{} Let $V_F$ be the wild ramification subgroup of $G_F$. We can show that $\rho|_{V_F}$ is irreducible if and only if $Z_\rho=G_K\subset G_F$ corresponds to an abelian extension $K/F$ which is totally ramified and wildly ramified[^4] (cf. [@Z2], p. 305). If $N:=N_{K/F}(K^\times)$ is the subgroup of norms, then this means that $N\cdot U_{F}^{1}=F^\times$, in other words, $$F^\times/N=N\cdot U_{F}^{1}/N=U_{F}^{1}/N\cap U_{F}^{1},$$ where $N$ can also be considered as the radical of $X_\rho$. So we can consider the alternating character $X_\rho$ on the principal units $U_{F}^{1}\subset F^\times$. Then $$\rm{dim}(\rho)=\sqrt{[F^\times:N]}=\sqrt{[U_F^1: N\cap U_F^1]}$$ is a power of $p$, because $U_F^1$ is a pro-$p$-group. Here we observe: if $\rho=\rho(X,\chi_K)$ with $\rho|_{V_F}$ irreducible, then $\rm{dim}(\rho)=p^n$, $n{\geqslant}1$, and $K/F$ is totally and [**wildly**]{} ramified.
But there is a [**big**]{} class of Heisenberg representations $\rho$ with $\rm{dim}(\rho)=p^n$ a $p$-power which are not wild representations (see the Definition \[Definition U-isotropic\] of U-isotropic).\ [**(2).**]{} Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$ of dimension $d>1$ which is prime to $p$. Then from the above Lemma \[Lemma dimension equivalent\], we have $d|q_F-1$. For this representation $\rho$, the extension $K/F$ must be tame if $\rm{Rad}(X)={\mathcal{N}}_{K/F}$ (cf. [@FV], p. 115). **Invariant formula for $W(\rho)$** =================================== \[Lemma 5.2.1\] Let $\rho=\rho(Z,\chi_Z)$ be a Heisenberg representation of the local Galois group $G=\mathrm{Gal}(L/F)$ of odd dimension. Let $H$ be a maximal isotropic subgroup for $\rho$ and $\chi_H\in\widehat{H}$ with $\chi_H|_{Z}=\chi_Z$. Then: $$\label{eqn 4.9} W(\rho)=W(\chi_H),\hspace{.5cm} W(\rho)^{\mathrm{dim}(\rho)}=W(\chi_Z),$$ and $$W(\chi_H)^{[H:Z]}=W(\chi_Z).$$ From the construction of the Heisenberg representation $\rho$ of $G$ we have $\rho=\rm{Ind}_{H}^{G}(\chi_H)$ and $\rm{dim}(\rho)\cdot\rho=\rm{Ind}_{Z}^{G}(\chi_Z)$.\ This implies that $W(\rho)=\lambda_{H}^{G}\cdot W(\chi_H)$ and $W(\rho)^{\rm{dim}(\rho)}=\lambda_{Z}^{G}\cdot W(\chi_Z).$ Since $\rm{dim}(\rho)$ is odd we may now apply Lemma 3.4 on p. 10 of [@SAB1], and we obtain $\lambda_{H}^{G}=\lambda_{Z}^{G}=1$. So we have $W(\rho)=\lambda_{H}^{G}\cdot W(\chi_H)=W(\chi_H)$. Similarly, we have $W(\rho)^{\mathrm{dim}(\rho)}=W(\chi_Z)$. Moreover, it is easy to see[^5] that $W(\rm{Ind}_{Z}^{H}(\chi_Z))=W(\chi_H)^{[H:Z]}$. By the given condition, $[H:Z]=\rm{dim}(\rho)$ is odd, hence $\lambda_{Z}^{H}=1$, and then we have $$\label{eqn 5.2.4} W(\chi_H)^{[H:Z]}=W(\rm{Ind}_{Z}^{H}(\chi_Z))=W(\chi_Z).$$ Related to $G\supset H\supset Z$ we have the base fields $F\subset E\subset K$, and $\chi_Z$ is the restriction of $\chi_H$.
In arithmetic terms this means: $$\chi_K=\chi_E\circ N_{K/E}.$$ So in arithmetic terms, $W(\rm{Ind}_{Z}^{G}(\chi_Z))=W(\rm{Ind}_{H}^{G}(\chi_H))^{[G:H]}$ reads as follows: $$W(\rm{Ind}_{K/F}(\chi_K),\psi)=W(\rm{Ind}_{E/F}(\chi_E),\psi)^{[K:E]}.$$ Then we can conclude that $$\lambda_{K/E}\cdot W(\chi_K,\psi_K)=W(\chi_E,\psi_E)^{[K:E]}.$$ If the dimension $\rm{dim}(\rho)=[K:E]$ is odd, we have $\lambda_{K/E}=1$, because $K/E$ is Galois. Then we obtain $$\label{eqn 5.2.5} W(\chi_E,\psi_E)^{[K:E]}=W(\chi_E\circ N_{K/E},\psi_E\circ\rm{Tr}_{K/E}).$$ The formula (\[eqn 5.2.5\]) is known as a [**Davenport-Hasse**]{} relation (cf. [@LN], p. 197, Theorem 5.14). Let $\rho=\rho(Z,\chi_Z)$ be a Heisenberg representation of a local Galois group $G$. Let $\rm{dim}(\rho)=d$ be odd. Let the order of $W(\chi_Z)$ be $n$ (i.e., $W(\chi_Z)^n=1$). If $d$ is prime to $n$, then $d^{\varphi(n)}\equiv 1\mod{n}$, and $$W(\rho)=W(\chi_Z)^{\frac{1}{d}}=W(\chi_Z)^{d^{\varphi(n)-1}},$$ where $\varphi$ is Euler’s totient function. By assumption, $d$ and $n$ are coprime. Therefore from [**Euler’s theorem**]{} we can write $$d^{\varphi(n)}\equiv 1\mod{n}.$$ This implies $d^{\varphi(n)}-1$ is a multiple of $n$. Since $d$ is odd, from the above Lemma \[Lemma 5.2.1\] we have $W(\rho)^d=W(\chi_Z)$. So we obtain $$W(\rho)=W(\chi_Z)^{\frac{1}{d}}=W(\chi_Z)^{d^{\varphi(n)-1}},$$ since $d^{\varphi(n)}-1$ is a multiple of $n$, and by assumption $W(\chi_Z)^n=1$. We observe that when $\rm{dim}(\rho)=d$ is odd, if we take the second part of equation (\[eqn 4.9\]), we have $W(\rho)=W(\chi_Z)^{\frac{1}{d}}$, but this expression is not well defined in general: we have to make precise which $d$-th root $W(\chi_Z)^{\frac{1}{d}}$ really occurs. That is why giving an invariant formula for $W(\rho)$ via $\lambda$-function computations is difficult. In the following theorem we give an invariant formula for the local constant of a Heisenberg representation.
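Before stating it, here is a quick numerical sanity check of the preceding root-extraction proposition (the values $d=3$, $n=8$ are illustrative choices of ours): we have $\varphi(8)=4$ and $3^4=81\equiv 1\mod{8}$, so the proposition gives $$W(\rho)=W(\chi_Z)^{3^{\varphi(8)-1}}=W(\chi_Z)^{27}=W(\chi_Z)^{3},$$ using $W(\chi_Z)^8=1$; and indeed $\left(W(\chi_Z)^{27}\right)^3=W(\chi_Z)^{81}=W(\chi_Z)$, consistent with $W(\rho)^d=W(\chi_Z)$.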
\[Theorem invariant odd\] Let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$ of dimension $d$. Let $\psi_F$ be the canonical additive character of $F$ and $\psi_K:=\psi_F\circ\rm{Tr}_{K/F}$. Denote by $\mu_{p^\infty}$ the group of roots of unity of $p$-power order and by $\mu_{n}$ the group of $n$-th roots of unity, where $n>1$ is an integer. 1. When the dimension $d$ is odd, we have $W(\rho)\equiv W(\chi_\rho)'\mod{\mu_{d}}$, where $W(\chi_\rho)'$ is any $d$-th root of $W(\chi_K,\psi_K)$. 2. When the dimension $d$ is even, we have $W(\rho)\equiv W(\chi_\rho)'\mod{\mu_{d'}}$, where $d'=\rm{lcm}(4,d)$. [**(1).**]{} We know that the lambda functions are always fourth roots of unity. In particular, when the degree of the Galois extension $K/F$ is odd, from Theorem \[General Theorem for odd case\] we have $\lambda_{K/F}=1$. For proving our assertions we will use these facts about $\lambda$-functions. We know that $\rm{dim}(\rho)\cdot\rho=\rm{Ind}_{K/F}(\chi_K)$, where by class field theory $\chi_K\leftrightarrow\chi_\rho$ is a character of $K^\times$. When $d$ is odd, we can write $$W(\rho)^d=\lambda_{K/F}\cdot W(\chi_K,\psi_K)= W(\chi_K,\psi_K).$$ Now let $W(\chi_\rho)'$ be any $d$-th root of $W(\chi_K,\psi_K)$. Then we have $$W(\rho)^d={W(\chi_\rho)'}^d,$$ hence $\frac{W(\rho)}{W(\chi_\rho)'}$ is a $d$-th root of unity. Therefore we have $$W(\rho)\equiv W(\chi_\rho)' \mod{\mu_{d}}.$$ [**(2).**]{} Similarly, we can give an invariant formula for even-dimensional Heisenberg representations. When the dimension $d$ of $\rho$ is even, we have $$\label{eqn 5.2.10} W(\rho)^d=\lambda_{K/F}\cdot W(\chi_K,\psi_K)\equiv W(\chi_K,\psi_K)\mod{\mu_4},$$ because $\lambda_{K/F}$ is a fourth root of unity. Now let $W(\chi_\rho)'$ be any $d$-th root of $W(\chi_K,\psi_K)$, hence $W(\chi_K,\psi_K)=W(\chi_\rho)'^d$.
Then from equation (\[eqn 5.2.10\]) we have $$\left(\frac{W(\rho)}{W(\chi_\rho)'}\right)^d\equiv 1\mod{\mu_4}.$$ Therefore we can conclude that $$W(\rho)\equiv W(\chi_\rho)'\mod{\mu_{d'}},$$ where $d'=\rm{lcm}(4, d)$.\ When the dimension of a Heisenberg representation $\rho=\rho(X,\chi_K)$ of $G_F$ is prime to $p$, from Lemma \[Lemma dimension equivalent\] we can say that $X=X_\eta$ is U-isotropic with $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$. Again from Remark \[Remark 5.1.22\] we observe that $a(\chi_K)=1$ when $\rho$ is of minimal conductor. In the following lemma we show that, for a minimal conductor representation $\rho$ of dimension prime to $p$, $W(\rho)$ is a root of unity. \[Lemma 5.2.12\] Let $\rho=\rho(X,\chi_K)$ be a minimal conductor Heisenberg representation with respect to $X$ of the absolute Galois group $G_F$ of a non-archimedean local field $F/{\mathbb{Q}}_p$. If the dimension $\rm{dim}(\rho)$ is prime to $p$, then $W(\rho)$ is always a root of unity.\ Assume that $\rm{dim}(\rho)=d$ and $\gcd(d,p)=1$. Then from Lemma \[Lemma dimension equivalent\], we can say that $\rho=\rho(X,\chi_K)=\rho(X_\eta,\chi_K)$ is U-isotropic with $a(\eta)=1$. Since $\rho$ is of minimal conductor, from Remark \[Remark 5.1.22\] we have $a(\chi_K)=1$. From equation (\[eqn 5.1.5\]) we also know that $d\cdot\rho=\rm{Ind}_{K/F}(\chi_K)$. Then we can write $$\begin{aligned} W(\rho)^d &=\lambda_{K/F}\cdot W(\chi_K)\nonumber\\ &=\lambda_{K/F}\cdot q_{K}^{-\frac{1}{2}}\sum_{x\in U_K/U_{K}^{1}}\chi_{K}^{-1}(x/c)\psi_K(x/c)\nonumber\\ &=\lambda_{K/F}\cdot q_{K}^{-\frac{1}{2}}\tau(\chi_K),\label{eqn 4.25}\end{aligned}$$ where $c=\pi_{K}^{1+n(\psi_K)}$, $\psi_K=\psi_F\circ\rm{Tr}_{K/F}$ is the canonical character of $K$, and $$\tau(\chi_K)=\sum_{x\in U_K/U_{K}^{1}}\chi_{K}^{-1}(x/c)\psi_K(x/c).$$ Since $U_K/U_{K}^{1}\cong k_{K}^{\times}$, $a(\chi_K)=1$, and $n(\frac{1}{c}\cdot\psi_K)=-1$, we can consider $\tau(\chi_K)$ as a classical Gauss sum of $\chi_K$.
We also know that $|\tau(\chi_K)|=q_{K}^{\frac{1}{2}}$ (cf. [@M], p. 30, Proposition 2.2(i)). Moreover, here we have $f_{K/F}=e_{K/F}=d$, hence $f_{K/{\mathbb{Q}}_p}{\geqslant}d$. So here we have $q_K=p^{f_{K/{\mathbb{Q}}_p}}{\geqslant}p^d$. Then from Theorem 1.6.2 on p. 33 of [@BRK], we can write $\tau(\chi_K)=q_{K}^{\frac{1}{2}}\cdot\gamma$, where $\gamma$ is a certain root of unity. We also know that $\lambda_{K/F}^{4}=1$, so from equation (\[eqn 4.25\]) we obtain: $$W(\rho)^{4 d n}=\gamma^{4n}=1,$$ where $n$ is the order of $\gamma$. This completes the proof. As to the computation of $W(\rho)=W(\rho(X,\chi_K))$, we can also say precisely what an unramified twist will do, by the formula for the local constant of an unramified character twist (cf. [@JT2], p. 15, (3.4.5)). Let $\omega_{K,s}$ be an unramified character of $K^\times$ such that $\omega_{K,s}|_{F^\times}=\omega_{F,s}$; then we have $$\omega_{F,s}\otimes\rho(X,\chi_K)=\rho(X,\omega_{K,s}\cdot\chi_K),\hspace{.3cm} W(\rho(X,\omega_{K,s}\cdot\chi_K))= \omega_{F,s}(c_{\rho,\psi})\cdot W(\rho(X,\chi_K)).$$ Therefore the question whether $W(\rho)$ is a root of unity or not is completely under control if we do unramified twists. In particular, unramified twists of finite order cannot change the answer. In the following theorem we give an invariant formula for $W(\rho,\psi)$, where $\rho=\rho(X,\chi_K)$ is a minimal conductor Heisenberg representation of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$ of dimension $m$ which is prime to $p$. \[invariant formula for minimal conductor representation\] Let $\rho=\rho(X,\chi_K)$ be a minimal conductor Heisenberg representation of the absolute Galois group $G_F$ of a non-archimedean local field $F/{\mathbb{Q}}_p$ of dimension $m$ with $\gcd(m,p)=1$. Let $\psi$ be a nontrivial additive character of $F$.
Then $$W(\rho,\psi)=R(\psi,c)\cdot L(\psi,c),$$ where $$R(\psi,c):=\lambda_{E/F}(\psi)\Delta_{E/F}(c)$$ is a fourth root of unity that depends on $c\in F^\times$ with $\nu_F(c)=1+n(\psi)$ but not on the totally ramified cyclic subextension $E/F$ in $K/F$, and $$L(\psi,c):=\det(\rho)(c)q_F^{-\frac{1}{2}}\sum_{x\in k_F^\times}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)\cdot (c^{-1}\psi)(mx),$$ where $E_1/F$ is the unramified extension of $F$ of degree $m$. Before proving this Theorem \[invariant formula for minimal conductor representation\] we need to prove the following lemma. \[Lemma 5.2.17\] (With the notation of the above theorem) 1. Let $E/F$ be any totally ramified cyclic extension of degree $m$ inside $K/F$. Then: $$\Delta_{E/F}(\epsilon)=:\Delta(\epsilon),\qquad\text{for all $\epsilon\in U_F$},$$ does not depend on $E$ if we restrict to units of $F$. 2. We have $L(\psi,\epsilon c)=\Delta(\epsilon)L(\psi,c)$, and therefore changing $c$ by a unit we see that $$\Delta_{E/F}(\epsilon c)\cdot L(\psi,\epsilon c)=\Delta(\epsilon)^2\Delta_{E/F}(c)\cdot L(\psi,c)=\Delta_{E/F}(c) L(\psi,c).$$ 3. We also have the transformation rule $R(\psi,\epsilon c)=\Delta(\epsilon)\cdot R(\psi,c)$. [**(1).**]{} Denote $G:=\rm{Gal}(E/F)$. By class field theory we know that $$\Delta_{E/F}=\begin{cases} \omega_{E'/F} & \text{when $\rm{rk}_2(G)=1$}\\ 1 & \text{when $\rm{rk}_2(G)=0$}, \end{cases}$$ where $E'/F$ is a uniquely determined quadratic extension inside $E/F$, and $\omega_{E'/F}$ is the quadratic character of $F^\times$ which corresponds to the extension $E'/F$ by class field theory. When $m$ is odd, i.e., $\rm{rk}_2(G)=0$, we have $\Delta_{E/F}\equiv 1$. So in the odd case assertion (1) is obvious. When $m$ is even, we choose two different totally ramified cyclic subextensions, namely $L_1/F, \;L_2/F$, in $K/F$ of degree $m$.
Then we can write for all $\epsilon\in U_F$, $$\Delta_{L_1/F}(\epsilon)=\omega_{E'/F}(\epsilon) =\eta(\epsilon)\cdot\omega_{E'/F}(\epsilon)=\omega_{E'/F}(\epsilon)=\Delta_{L_2/F}(\epsilon),$$ where $\eta$ is the unramified quadratic character of $F^\times$. This proves that $\Delta_{E/F}$ does not depend on $E$ if we restrict to $U_F$.\ [**(2).**]{} From Proposition \[Proposition arithmetic form of determinant\] we know that $\det(\rho)(x)=\Delta_{E/F}(x)\cdot \chi_K\circ N_{K/E}^{-1}(x)$ for all $x\in F^\times$. Let $E_1/F$ be the unramified subextension in $K/F$ of degree $m$. Then we have $EE_1=K$ and $$N_{K/E}|_{E_1}=N_{E_1/F}, \qquad (E_1^\times)_{F}\subseteq K_E^\times\subset\rm{Ker}(\chi_K).$$ Moreover $U_F\subset{\mathcal{N}}_{E_1/F}$ and therefore we may write $N_{K/E}^{-1}(\epsilon)=N_{E_1/F}^{-1}(\epsilon)$ for all $\epsilon\in U_F$. Now we can write: $$\begin{aligned} L(\psi,\epsilon c) &=\det(\rho)(\epsilon c)q_F^{-\frac{1}{2}}\sum_{x\in k_F^\times}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)\cdot (c^{-1}\psi)(mx/\epsilon)\\ &=\Delta_{E/F}(\epsilon)\chi_K\circ N_{K/E}^{-1}(\epsilon)\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{x\in k_F^\times}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(\epsilon x)\cdot (c^{-1}\psi)(mx)\\ &=\Delta(\epsilon)\chi_K\circ N_{E_1/F}^{-1}(\epsilon\epsilon^{-1})\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{x\in k_F^\times}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)\cdot (c^{-1}\psi)(mx)\\ &=\Delta(\epsilon)\cdot \det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{x\in k_F^\times}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)\cdot (c^{-1}\psi)(mx)\\ &=\Delta(\epsilon)\cdot L(\psi,c).\end{aligned}$$ This implies that $$\Delta_{E/F}(\epsilon c)\cdot L(\psi,\epsilon c)=\Delta(\epsilon)^2\Delta_{E/F}(c)\cdot L(\psi,c)=\Delta_{E/F}(c)L(\psi,c).$$ [**(3).**]{} By the definition of $R(\psi,c)$ we can write: $$\begin{aligned} R(\psi,\epsilon c) &=\lambda_{E/F}(\psi)\Delta_{E/F}(\epsilon c)=\lambda_{E/F}(\psi)\Delta_{E/F}(\epsilon)\Delta_{E/F}(c)\\ 
&=\Delta(\epsilon)\lambda_{E/F}(\psi)\Delta_{E/F}(c)=\Delta(\epsilon)\cdot R(\psi,c).\end{aligned}$$ Now we are in a position to give a proof of Theorem \[invariant formula for minimal conductor representation\] by using Lemma \[Lemma 5.2.17\]. By the given conditions, $\rho=\rho(X,\chi_K)$ is a minimal conductor Heisenberg representation of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$ of dimension $m$ which is prime to $p$. This means we are in the situation: $\rho=\rho(X,\chi_K)=\rho(X_\eta,\chi_K)$, where $\eta$ is a character of $U_F/U_F^1$, and $\rm{dim}(\rho)=\#\eta=m$. Since $\rho$ is of minimal conductor, we have $a(\rho)=m$. Then from Remark \[Remark 5.1.22\] we have $a(\chi_K)=1$. Now we choose $E/F\subset K/F$ a totally ramified cyclic subextension of degree $[E:F]=m$, hence $k_E=k_F$ (the same residue field), and $K/E$ is unramified of degree $m$. Then we can write $\rho=\rm{Ind}_{E/F}(\chi_E)$, and $a(\chi_E)=1$. Again, from Proposition \[Proposition arithmetic form of determinant\] we have $$\det(\rho)(x)=\Delta_{E/F}(x)\cdot \chi_K\circ N_{K/E}^{-1}(x)\quad\text{for all $x\in F^\times$}.$$ Then for all $x\in F^\times$, we can write $$\chi_K\circ N_{K/E}^{-1}(x)=\chi_E(x)=\Delta_{E/F}(x)\cdot \det(\rho)(x).$$ This is true for all subextensions[^6] $E/F$ in $K/F$ which are cyclic of degree $m$. Now we come to our particular choice: $\rho=\rm{Ind}_{E/F}(\chi_E)$, with $a(\chi_E)=1$ and $E/F$ totally ramified. We can write $$\begin{aligned} W(\rho,\psi) &=W(\rm{Ind}_{E/F}(\chi_E),\psi)=\lambda_{E/F}(\psi)\cdot W(\chi_E,\psi\circ\rm{Tr}_{E/F})\\ &=\lambda_{E/F}(\psi)\cdot q_E^{-\frac{1}{2}}\chi_E(c_E)\sum_{x\in U_E/U_E^1}\chi_E^{-1}(x)(c_E^{-1}\psi\circ\rm{Tr}_{E/F})(x),\end{aligned}$$ where $v_E(c_E)=1+n(\psi\circ\rm{Tr}_{E/F})=e_{E/F}(1+n(\psi))$. This implies that we can choose $c_E=c_F\in F^\times$ with $\nu_F(c_F)=1+n(\psi)$.
Let $E_1/F$ be the unramified subextension in $K/F$; then for each $\epsilon\in U_F$, we have $N_{K/E}^{-1}(\epsilon)=N_{E_1/F}^{-1}(\epsilon)$, where $N_{E_1/F}:=N_{K/E}|_{E_1}$. Since $E/F$ is totally ramified, we have $q_E=q_F$. And when $x\in F^\times$, we have $\rm{Tr}_{E/F}(x)=mx$. Then the above formula rewrites: $$\begin{aligned} W(\rho,\psi) &=\lambda_{E/F}(\psi)\cdot q_F^{-\frac{1}{2}}\chi_K\circ N_{K/E}^{-1}(c_F)\sum_{x\in k_F^\times} (\chi_K\circ N_{K/E}^{-1})^{-1}(x)(c_F^{-1}\psi)(mx)\\ &=\lambda_{E/F}(\psi)\cdot q_F^{-\frac{1}{2}}\Delta_{E/F}(c_F)\det(\rho)(c_F)\sum_{x\in k_F^\times} (\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)(c_F^{-1}\psi)(mx)\\ &=\lambda_{E/F}(\psi)\Delta_{E/F}(c_F)\cdot\left(\det(\rho)(c_F)q_F^{-\frac{1}{2}}\sum_{x\in k_F^\times} (\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)(c_F^{-1}\psi)(mx)\right)\\ &=R(\psi,c)\cdot L(\psi,c), \end{aligned}$$ where $c_F=c\in F^\times$ with $\nu_F(c)=1+n(\psi)$, $R(\psi,c)=\lambda_{E/F}(\psi)\Delta_{E/F}(c)$, and $$L(\psi,c)=\det(\rho)(c_F)q_F^{-\frac{1}{2}}\sum_{x\in k_F^\times} (\chi_K\circ N_{E_1/F}^{-1})^{-1}(x)(c^{-1}\psi)(mx).$$ Now it is clear that $L(\psi,c)$ depends on $c$ but not on the totally ramified cyclic extension $E/F$ which we have chosen. Again, we know that $\lambda_{E/F}(\psi)$ is a fourth root of unity and $\Delta_{E/F}(c)\in\{\pm 1\}$. Therefore it is easy to see that $R(\psi,c)$ is a fourth root of unity. So to call our expression $$W(\rho,\psi)=R(\psi,c)\cdot L(\psi,c)$$ invariant, we are left to show that $R(\psi,c)$ does not depend on the totally ramified cyclic subextension $E/F$ in $K/F$. Moreover, we can write (cf. Lemma 3.2 of [@SAB1]) $$R(\psi,c)=\lambda_{E/F}(\psi)\Delta_{E/F}(c)=\lambda_{E/F}(c\psi)=\lambda_{E/F}(\psi'),$$ where $\psi'=c\psi$, hence $n(\psi')=\nu_F(c)+n(\psi)=1+n(\psi)+n(\psi)=2n(\psi)+1$. When $m(=[E:F])$ is odd, we have $\lambda_{E/F}(\psi')=1$, hence $R(\psi,c)=\lambda_{E/F}(c\psi)=1$.
Thus in the odd case $R(\psi,c)$ is independent of the choice of the totally ramified subextension $E/F$ in $K/F$. When $m$ is even, we have $$\begin{aligned} R(\psi,c) &=\lambda_{E/F}(\psi')=\lambda_{E/E'}(\psi'_{E'})\cdot \lambda_{E'/F}(\psi')^{[E:E']}\\ &=\lambda_{E'/F}(\psi')^{\pm 1},\end{aligned}$$ where $\psi'_{E'}:=\psi'\circ\rm{Tr}_{E'/F}$, and $[E':F]$ is the $2$-primary part of $m$, hence $[E:E']$ is odd and $\lambda_{E/E'}(\psi'_{E'})=1$. Here the sign only depends on $m$ but not on $E$. So we can restrict to the case where $m=[E:F]$ is a power of $2$. Let $E_2/F$ be the unique quadratic subextension in $E/F$. Since $E/F$ is a cyclic tame extension, from Theorem \[Theorem 2.5\], we obtain: $$\lambda_{E/F}(\psi')=\begin{cases} \lambda_{E_2/F}(\psi') & \text{if $[E:F]\ne 4$}\\ \beta(-1)\cdot\lambda_{E_2/F}(\psi') & \text{if $[E:F]=4$}, \end{cases}$$ where $\beta$ is the character of $F^\times/{\mathcal{N}}_{E/F}$ of order $4$. Since here $n(\psi')=2 n(\psi)+1$ is [**odd**]{}[^7], from Remark 5.10 of [@SAB1] we can tell that $\lambda_{E_2/F}(\psi')$ is invariant.\ Finally, we have to see that $\beta(-1)$ does not depend on $E$ if $[E:F]=4$. Since $E/F$ is totally ramified of degree $4$, we have $F^\times=U_F\cdot N$, hence $F^\times/N=U_F N/N=U_F/U_F\cap N\cong{\mathbb{Z}}_4$, where $N=N_{E/F}(E^\times)$. Again $U_F^1\subset U_F$, and $U_F^1\subset N$, hence $U_{F}^{1}\subset N\cap U_F\subset U_F$. We know that $U_F/U_F^1$ is a cyclic group. Therefore $N\cap U_F$ is determined by its index in $U_F$, which does not depend on $E$. Hence $U_F\cap N$ does not depend on $E$. We also know that there are two characters of $U_F/U_F\cap N$ of order $4$, and they are inverse to each other. Then $$\beta(-1)=\beta(-1)^{-1}=\beta^{-1}(-1)$$ is the same in both cases. Since $\beta$ is the character which corresponds to $E/F$ by class field theory, we can say $\beta$ is a character of $F^\times/U_F^1$, hence $a(\beta)=1$. This clearly shows that $\beta(-1)$ does not depend on $E$. So we can conclude that $R(\psi,c)$ does not depend on $E$.
Thus our expression $W(\rho,\psi)=R(\psi,c)\cdot L(\psi,c)$ does not depend on the choice of the totally ramified cyclic subextension $E/F$ in $K/F$. Moreover, we notice that we have the transformation rules $$R(\psi,\epsilon c)=\Delta(\epsilon)R(\psi,c),\qquad L(\psi,\epsilon c)=\Delta(\epsilon)L(\psi,c),$$ for all $\epsilon\in U_F$. Again $\Delta(\epsilon)^2=1$, hence the product $R(\psi,\epsilon c)\cdot L(\psi,\epsilon c)= R(\psi,c)\cdot L(\psi,c)=W(\rho,\psi)$ does not depend on the choice of $c$. Therefore, finally, we can conclude that our formula $W(\rho,\psi)=R(\psi,c)L(\psi,c)$ is an invariant expression. Now let $\rho=\rho(X,\chi_K)$ be a Heisenberg representation of dimension prime to $p$ whose conductor is [**not**]{} minimal. In the following theorem we give an invariant formula for $W(\rho,\psi)$. \[Theorem invariant for non minimal representation\] Let $\rho=\rho(X_\rho,\chi_K)$ be a Heisenberg representation of the absolute Galois group $G_F$ of a local field $F/{\mathbb{Q}}_p$ of dimension $m$ prime to $p$. Let $\psi$ be a nontrivial additive character of $F$. Suppose that the conductor of $\rho$ is not minimal, $\rho=\rho_0\otimes\widetilde{\chi_F}$ and $a(\rho)=m\cdot a(\chi_F)$, where $\widetilde{\chi_F}:W_F\to{\mathbb{C}}^\times$ corresponds to $\chi_F:F^\times\to{\mathbb{C}}^\times$, and $h=a(\chi_F){\geqslant}2$.\ [**Case-1:**]{} If $m$ is odd, then 1. when $1+m(h-1)=2d$ is even, we have $$W(\rho,\psi)=\det(\rho)(c) \psi(mc^{-1}),$$ 2. when $1+m(h-1)=2d+1$ is odd, we have $$W(\rho,\psi)=\det(\rho)(c)\cdot H(\psi,c),$$ where $$H(\psi,c)=q_F^{-\frac{1}{2}}\sum_{y\in U_F^{h'}/U_F^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my),$$ and $h'=[\frac{h}{2}]$, where $[x]$ denotes the largest integer ${\leqslant}x$.\ [**Case-2:**]{} If $m$ is even, then 1. when $h$ is odd, we have $$W(\rho,\psi)=R(\psi,c)\cdot\det(\rho)(c)\cdot H(\psi,c),$$ where $H(\psi,c)$ is the same as in Case-1(2). 2.
when $h$ is even, we have $$W(\rho,\psi)=R(\psi,c)\cdot\det(\rho)(c)\cdot q_F^{\frac{1}{2}}\cdot\psi(c^{-1}m),$$ where $R(\psi,c)=\lambda_{E/F}(\psi)\cdot \Delta_{E/F}(c)$.\ Here $E_1/F$ is the maximal unramified subextension in $K/F$, $E/F$ is a totally ramified cyclic subextension in $K/F$, and $c\in F^\times$ with $\nu_F(c)=h+n(\psi)$ and $$\chi_F(1+x)=\psi(x/c),\qquad \text{for all $x\in P_F^{h-h'}/P_F^h$}.$$ [**Step-1:**]{} By the given condition, $\rm{dim}(\rho)=m$ is prime to $p$, and the Artin conductor is $a_F(\rho)=m h$ where $h{\geqslant}2$; then from Lemma \[Lemma general conductor\], we have $a(\chi_E)=mh-d_{E/F}=mh-m+1=1+m(h-1)$, where $E/F$ is a totally ramified cyclic subextension in $K/F$, and $\rho=\rm{Ind}_{E/F}(\chi_E)$. Since by the given condition $\rho$ is not of minimal conductor, we can write $$\label{eqn 55} \rho=\rho_0\otimes\widetilde{\chi_F},$$ where $\rho_0=\rho(X,\chi_0)$ is a minimal conductor Heisenberg representation of dimension $m$, and $\widetilde{\chi_F}: W_F\to{\mathbb{C}}^\times$ corresponds to $\chi_F:F^\times\to{\mathbb{C}}^\times$ by class field theory.\ Then we have $X_\rho=X_\eta$ for $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$, $\#\eta=m$, and: $$\rho_0=\rm{Ind}_{E/F}(\chi_{E,0})\qquad\rho=\rm{Ind}_{E/F}(\chi_E),$$ where $E/F$ is a cyclic totally ramified extension of degree $m$.\ Because of (\[eqn 55\]) we may assume now that $$\label{eqn 66} \chi_E=\chi_{E,0}\cdot (\chi_F\circ N_{E/F}), \quad a(\chi_{E,0})=1,\quad a(\chi_E)=a(\chi_F\circ N_{E/F})=1+m(h-1).$$ From the first and second of the equalities (\[eqn 66\]) we deduce $$\label{eqn 77} \chi_E|_{U_E^1}=(\chi_F\circ N_{E/F})|_{U_E^1}, \quad N_{E/F}(U_E^1)=U_F^1,$$ where the second equality holds because $E/F$ is totally ramified, and it implies that conversely $\chi_E|_{U_E^1}$ determines $\chi_F|_{U_F^1}$.\ [**Step-2:**]{} Now for $d{\geqslant}1$ we put: $$A_E:=U_E^d/U_E^{d+1},$$ which we consider as a $\rm{Gal}(E/F)$-module.
We also know that $A_E/I_{E/F}A_E\cong A_E^{\rm{Gal}(E/F)}$, where $I_{E/F}A_E$ is the augmentation submodule with respect to the extension $E/F$. We also know that for any finite extension $E/F$, we have $$\label{eqn intersection} U_E^d\cap F^\times=\begin{cases} U_{F}^{\frac{d}{e_{E/F}}} & \text{if $e_{E/F}$ divides $d$}\\ U_{F}^{[\frac{d}{e_{E/F}}]+1} & \text{if $e_{E/F}$ does not divide $d$}. \end{cases}$$ Again we also have $$A_E^{\rm{Gal}(E/F)}=(U_E^d/U_E^{d+1})\cap F^\times=(U_E^{d}\cap F^\times)/(U_E^{d+1}\cap F^\times).$$ [**Step-3:**]{} If $1+m(h-1)=2d+1$, then $\frac{d}{m}=\frac{h-1}{2}$. Let $h':=[\frac{h}{2}]$. For $A_E=U_E^d/U_E^{d+1}$, if $h$ is odd, then we have: $$U_E^{d}\cap F^\times/U_E^{d+1}\cap F^\times=U_F^{\frac{h-1}{2}}/U_F^{\frac{h-1}{2}+1}=U_F^{h'}/U_F^{h'+1},$$ and if $h$ is even, hence $2$ does not divide $h-1$, then we can write $$U_E^{d}\cap F^\times/U_E^{d+1}\cap F^\times=U_F^{[\frac{h-1}{2}]+1}/U_F^{[\frac{h-1}{2}]+1}\cong\{1\}.$$ Since $A_E^{\rm{Gal}(E/F)}\cong A_E/I_{E/F}A_E$, we can uniquely write any element $x\in U_E^d/U_E^{d+1}$ as $x=yz$, where $y\in A_E^{\rm{Gal}(E/F)}$ and $z\in I_{E/F}A_E$. We also know that $U_E^d/U_E^{d+1}\cong k_E$, hence $|A_E|=|A_E^{\rm{Gal}(E/F)}|\cdot|I_{E/F}A_E|=q_E=q_F$. We also observe that when $h$ is even, we have $A_E^{\rm{Gal}(E/F)}\cong\{1\}$, hence $|A_E|=|I_{E/F}A_E|=q_F$. And when $h$ is odd, we have $A_E^{\rm{Gal}(E/F)}=U_F^{h'}/U_F^{h'+1}$, and hence $|A_E^{\rm{Gal}(E/F)}|=q_F$. So this implies $|I_{E/F}A_E|=1$.
Now set: $$S(\psi,c):=\sum_{x\in A_E}\chi_E^{-1}(x)(c^{-1}\psi)(\rm{Tr}_{E/F}(x)).$$ Then we can write $$\begin{aligned} S(\psi,c) &=\sum_{y\in A_E^{\rm{Gal}(E/F)},\;z\in I_{E/F}A_E}\chi_E^{-1}(yz)\cdot (c^{-1}\psi)(\rm{Tr}_{E/F}(yz))\\ &=\sum_{y\in A_E^{\rm{Gal}(E/F)}}\sum_{z\in I_{E/F}A_E}\chi_E^{-1}(yz)(c^{-1}\psi)(\rm{Tr}_{E/F}(yz))\\ &=|I_{E/F}A_E|\cdot \sum_{y\in A_E^{\rm{Gal}(E/F)}}\chi_E^{-1}(y)(c^{-1}\psi)(my)\\ &=|I_{E/F}A_E|\cdot\sum_{y\in A_{E}^{\rm{Gal}(E/F)}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my)\\ &=\begin{cases} \sum_{y\in U_F^{h'}/U_{F}^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my) & \text{when $h$ is odd}\\ q_F\cdot (c^{-1}\psi)(m)=q_F\cdot\psi(mc^{-1}) & \text{when $h$ is even}, \end{cases} \end{aligned}$$ since $\chi_E(yz)=\chi_E(y)$ and $\rm{Tr}_{E/F}(yz)=y\rm{Tr}_{E/F}(z)=ym$. [**Step-4:**]{} Again, we have $\rho=\rm{Ind}_{E/F}(\chi_E)$. Then $$W(\rho,\psi)=W(\rm{Ind}_{E/F}(\chi_E),\psi)=\lambda_{E/F}(\psi)\cdot W(\chi_E,\psi\circ\rm{Tr}_{E/F}).$$ [**Case-1: Suppose that $m$ is odd:**]{}\ [**(1) When $a(\chi_E)=1+m(h-1)=2d$:**]{} In this situation, $h$ must be even and we take $h=2h'$, hence $d=mh'-\frac{m-1}{2}$. Since $m(h'-1)<d{\leqslant}mh'$, we have $P_E^d\cap F=P_F^{h'}$. Now we choose $c\in F^\times$ such that $$\label{eqn 991} \chi_F(1+y)=\psi(c^{-1}y), \quad\text{for all $y\in P_F^{h-h'}/P_F^h$},$$ hence $\nu_F(c)=a(\chi_F)+n(\psi)=h+n(\psi)$. Now if we take an element $y_E\in P_E^{a(\chi_E)-d}=P_E^{d}$, then $\rm{Tr}_{E/F}(y_E)\in P_F^{h'}=P_F^{h-h'}$ because $m(h'-1)<d{\leqslant}mh'=m(h-h')$. Since $E/F$ is cyclic, from Proposition 1.1 on p. 68 of [@FV], we have: $$N_{E/F}(1+y_E)=1+\rm{Tr}_{E/F}(y_E)+N_{E/F}(y_E)+\rm{Tr}_{E/F}(\delta),$$ where $\nu_E(\delta){\geqslant}2d=a(\chi_E)$. 
Then for all $y_E\in P_E^{a(\chi_E)-d}/P_E^{a(\chi_E)}$, we can write $$\begin{aligned} \label{eqn 1001} \chi_E(1+y_E)\nonumber &=\chi_F\circ N_{E/F}(1+y_E)=\chi_F(1+\rm{Tr}_{E/F}(y_E))\\ &=\psi(c^{-1}\rm{Tr}_{E/F}(y_E))=(c^{-1}\psi_E)(y_E),\end{aligned}$$ because $N_{E/F}(y_E)+\rm{Tr}_{E/F}(\delta)\in P_F^{h}$. This verifies that our choice of $c$ is the right one for applying the Lamprecht-Tate formula to $W(\chi_E,\psi_E)$. Now we apply the Lamprecht-Tate formula (cf. Theorem 6.1.1 and its Corollary of [@SAB2]) and we obtain: $$W(\chi_E,\psi_E)=\chi_E(c)\cdot (c^{-1}\psi_E)(1)=\Delta_{E/F}(c)\det(\rho)(c)\psi(mc^{-1}).$$ Therefore $$\begin{aligned} W(\rho,\psi) &=\lambda_{E/F}(\psi)\cdot W(\chi_E,\psi_E)\\ &=\lambda_{E/F}(\psi)\cdot \Delta_{E/F}(c)\det(\rho)(c)\psi(mc^{-1})\\ &=R(\psi,c)\cdot\det(\rho)(c)\cdot\psi(mc^{-1})\\ &=\det(\rho)(c)\cdot\psi(mc^{-1}),\end{aligned}$$ where $R(\psi,c)=\lambda_{E/F}(\psi)\Delta_{E/F}(c)=\lambda_{E/F}(c\psi)=1$ because $E/F$ is a Galois extension of odd degree.\ [**(2) When $a(\chi_E)=1+m(h-1)=2d+1$:**]{} Since $m$ is odd, here $h$ must be odd. Let $h':=[\frac{h}{2}]$. Then from Step-3 we have $A_E^{\rm{Gal}(E/F)}=U_F^{h'}/U_{F}^{h'+1}$. Now choose $c\in F^\times$ such that $$\chi_F(1+y)=\psi(c^{-1}y),\qquad\text{for all $y\in P_F^{h-h'}/P_F^{h}$}.$$ Then this $c$ also satisfies the following relation $$\chi_E(1+y_E)=\psi_E(c^{-1}y_E),\qquad \text{for all $y_E\in P_E^{a(\chi_E)-d}/P_E^{a(\chi_E)}$},$$ because $d=\frac{m(h-1)}{2}$, and hence $m(h'-1)<d{\leqslant}mh'$.
Then by the Lamprecht-Tate formula we have $$\begin{aligned} W(\chi_E,\psi_E) &=\chi_E(c)\psi_E(c^{-1})q_E^{-\frac{1}{2}}\sum_{x\in P_E^d/P_E^{d+1}}\chi_E^{-1}(1+x)\cdot(c^{-1}\psi_E)(x)\\ &=\chi_E(c)\cdot q_F^{-\frac{1}{2}}\sum_{x\in U_E^d/U_E^{d+1}}\chi_E^{-1}(x)\cdot (c^{-1}\psi)(\rm{Tr}_{E/F}(x))\\ &=\Delta_{E/F}(c)\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{y\in U_F^{h'}/U_F^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my),\end{aligned}$$ because $h$ is odd, and we use Step-3. Thus we obtain $$\begin{aligned} W(\rho,\psi) &=W(\rm{Ind}_{E/F}(\chi_E),\psi)=\lambda_{E/F}(\psi)\cdot W(\chi_E,\psi_E)\\ &=\lambda_{E/F}(\psi)\cdot \Delta_{E/F}(c)\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{y\in U_F^{h'}/U_F^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my)\\ &=R(\psi,c)\cdot\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{y\in U_F^{h'}/U_F^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my)\\ &=\det(\rho)(c)q_F^{-\frac{1}{2}} \sum_{y\in U_F^{h'}/U_F^{h'+1}}(\chi_K\circ N_{E_1/F}^{-1})^{-1}(y)(c^{-1}\psi)(my),\end{aligned}$$ because $m$ is odd, hence $R(\psi,c)=\lambda_{E/F}(c\psi)=1$.\ [**Case-2: Suppose that $m$ is even.**]{} If $m$ is even, then $1+m(h-1)=2d+1$ is always an odd number and $d=\frac{m(h-1)}{2}$. But here $h$ could be any number ${\geqslant}2$, i.e., $h$ is not fixed, and we put $h':=[\frac{h}{2}]$. This implies $m(h'-1)<d{\leqslant}mh'$ and $P_E^{d}\cap F=P_F^{h'}$. Now we take $c\in F^\times$ such that (\[eqn 991\]) holds, and this again satisfies equation (\[eqn 1001\]). Therefore we can use the Lamprecht-Tate formula and we have two cases:\ 1. When $h$ is odd, we are in the same situation as in Case-1(2), and we have $$W(\rho,\psi)=R(\psi,c)\cdot\det(\rho)(c)\cdot H(\psi,c).$$ 2. When $h$ is even, from Step-3 we know that $A_E^{\rm{Gal}(E/F)}\cong\{1\}$ and $$\begin{aligned} \sum_{x\in A_E}\chi_E^{-1}(x)(c^{-1}\psi)(\rm{Tr}_{E/F}(x)) &=q_F\cdot \psi(mc^{-1}).
\end{aligned}$$ Therefore in this situation we have $$\begin{aligned} W(\rho,\psi) &=R(\psi,c)\cdot\det(\rho)(c)q_F^{-\frac{1}{2}}\cdot\sum_{x\in A_E}\chi_E^{-1}(x)(c^{-1}\psi)(\rm{Tr}_{E/F}(x))\\ &=R(\psi,c)\cdot\det(\rho)(c)q_F^{-\frac{1}{2}}\cdot q_F\psi(mc^{-1})\\ &=R(\psi,c)\cdot\det(\rho)(c)q_F^{\frac{1}{2}}\psi(mc^{-1}).\end{aligned}$$ Furthermore, in the proof of Theorem \[invariant formula for minimal conductor representation\], we observe that $R(\psi,c)$ does not depend on $E$. Hence our above computations are invariant. This completes the proof. By using the following lemma, we can also give an invariant formula for $W(\rho)$ without using the $\lambda$-function, when $\rm{dim}(\rho)$ is prime to $p$, for a twisting character $\chi_F$ of sufficiently large conductor. \[Lemma Deligne-Henniart\] Let $F$ be a non-archimedean local field and $\psi$ be a nontrivial additive character of $F$. Let $\rho$ be a finite dimensional representation of $G_F$. There is a sufficiently large integer $m_\rho$ such that if $\chi_F$ is a character of $F^\times$ of conductor $a(\chi_F){\geqslant}m_\rho$, then $$\label{eqn DH} W(\rho\otimes\chi_F,\psi)=W(\chi_F,\psi)^{\rm{dim}(\rho)}\cdot \det(\rho)(c),$$ for any $c:=c(\chi_F,\psi)\in F^\times$ such that $\chi_F(1+x)=\psi(c^{-1}x)$, $x\in P_F^{[\frac{a(\chi_F)}{2}]+1}$. By using the above Lemma \[Lemma Deligne-Henniart\], we obtain the following theorem. \[Theorem using Deligne-Henniart\] Let $\rho=\rho_0\otimes\widetilde{\chi_F}$ be a Heisenberg representation of $G_F$ of dimension $d$ with $gcd(d,p)=1$, where $\rho_0=\rho_0(X_\eta,\chi_0)$ is a minimal conductor Heisenberg representation.
If $a(\chi_F){\geqslant}m_\rho {\geqslant}2$, where $m_\rho$ is a sufficiently large number which depends on $\rho$, then we have $$\label{eqn 5.4.9} W(\rho,\psi)=W(\rho_0\otimes\widetilde{\chi_F},\psi)=W(\chi_F,\psi)^d\cdot\det(\rho_0)(c),$$ where $\psi$ is a nontrivial additive character of $F$, and $c:=c(\chi_F,\psi)\in F^\times$ satisfies $\chi_F(1+x)=\psi(c^{-1}x)$ for all $x\in P_{F}^{[\frac{a(\chi_F)}{2}]+1}$. From Corollary \[Corollary U-isotropic\] we know that all Heisenberg representations $\rho$ of $G_F$ of dimension prime to $p$ are precisely given as $\rho=\rho(X_\eta,\chi)$ for characters $\eta$ of $U_F/U_F^1$. Then from Remark \[Remark 5.1.22\] we have here $a_K(\chi_0)=1$. This implies that we can always choose a character $\chi_0$ of $K^\times$ with $a(\chi_0)=1$ such that all other $\chi_K$ are given as $$\chi_K=(\chi_F\circ N_{K/F})\cdot\chi_0,$$ for arbitrary characters $\chi_F$ of $F^\times$. Therefore the whole set of Heisenberg (U-isotropic) representations of $G_F$ of dimension prime to $p$ is: $\rho_0=\rho_0(G_K,\chi_0)$ and $\rho=\rho(G_K,\chi_K)$, where $\chi_K=(\chi_F\circ N_{K/F})\cdot \chi_0$, and $\chi_F\in\widehat{F^\times}$. We also know that there are $d^2$ characters of $F^\times/{F^\times}^d$ such that $\rho_0\otimes\widetilde{\chi}=\rho_0$ (cf. [@Z2], p. 303, Proposition 1.4). So we always have: $$\rho=\rho_0\otimes\widetilde{\chi_F}=\rho_0\otimes\widetilde{\chi\chi_F},$$ where $\chi\in\widehat{F^\times/{F^\times}^d}$, and $\widetilde{\chi_F}:W_F\to{\mathbb{C}}^\times$ corresponds to $\chi_F$ by class field theory. Let $\zeta$ be a $(q_F-1)$-st root of unity. Since $U_F^1$ is a pro-p-group and $gcd(p,d)=1$, we have $$\label{eqn 5.2.23} F^\times/{F^\times}^d=(<\pi_F>\times<\zeta>\times U_F^1)/(<\pi_F^d>\times<\zeta>^d\times U_F^1)\cong {\mathbb{Z}}_d\times{\mathbb{Z}}_d,$$ that is, a direct product of two cyclic groups of the same order. Hence $F^\times/{F^\times}^d\cong\widehat{F^\times/{F^\times}^d}$.
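The unit-group factor in (\[eqn 5.2.23\]) can be checked numerically on the residue field: since $U_F^1\subset{F^\times}^d$ when $gcd(p,d)=1$, the unit contribution reduces to $d$-th power classes in $k_F^\times$. The following sketch (an editorial illustration, taking $q_F$ prime for simplicity) confirms that there are exactly $d$ such classes whenever $d$ divides $q_F-1$:

```python
def unit_power_classes(q, d):
    """Number of classes of (Z/qZ)^x modulo d-th powers, for q prime."""
    dth_powers = {pow(x, d, q) for x in range(1, q)}  # subgroup of d-th powers
    return (q - 1) // len(dth_powers)                 # index = number of classes

# for d | q-1 the unit part contributes a cyclic factor Z_d;
# together with <pi_F> mod <pi_F^d> this gives Z_d x Z_d, as in the text
for q, d in [(5, 2), (7, 3), (13, 4), (31, 5)]:
    assert unit_power_classes(q, d) == d
```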
Since ${F^\times}^d=<\pi_F^d>\times<\zeta>^d\times U_F^1$, and $F^\times/{F^\times}^d\cong {\mathbb{Z}}_d\times{\mathbb{Z}}_d,$ we have $a(\chi){\leqslant}1$ and $\#\chi$ is a divisor of $d$ for all $\chi\in\widehat{F^\times/{F^\times}^d}$. Now we take a character $\chi_F$ of $F^\times$ of conductor ${\geqslant}m_\rho{\geqslant}2$, hence $a(\chi_F){\geqslant}2 a(\chi)$ for all $\chi\in \widehat{F^\times/{F^\times}^d}$. Then by using Deligne’s formula (cf. [@D1], Lemma 4.16) we have $$W(\chi_F\chi,\psi)^d=\chi(c)^d\cdot W(\chi_F,\psi)^d=W(\chi_F,\psi)^d,$$ where $c\in F^\times$ with $\nu_F(c)=a(\chi_F)+n(\psi)$, satisfies $$\chi_F(1+x)=\psi(c^{-1}x),\quad\text{for all $x\in F^\times$ with $2\nu_F(x){\geqslant}a(\chi_F)$},$$ and $\chi(c)^d=1$ because $\#\chi$ divides $d$. Finally, by using Lemma \[Lemma Deligne-Henniart\] we can write $$\begin{aligned} W(\rho,\psi) &=W(\rho_0\otimes\widetilde{\chi_F\chi},\psi)= W(\chi_F\chi,\psi)^{\rm{dim}(\rho_0)}\cdot\det(\rho_0)(c(\chi_F,\psi))\\ &=W(\chi_F,\psi)^d\cdot\det(\rho_0)(c).\end{aligned}$$ **Applications of Tate’s root-of-unity criterion** ================================================== Let $K/F$ be a finite Galois extension of the non-archimedean local field $F$, and $\rho:\mathrm{Gal}(K/F)\to \mathrm{Aut}_{\mathbb{C}}(V)$ a representation of $\mathrm{Gal}(K/F)$ on a complex vector space $V$. Let $P(K/F)$ denote the first [**wild**]{} ramification group of $K/F$. Let $V^{P}$ be the subspace of all elements of $V$ fixed by $\rho(P(K/F))$. Then $\rho$ induces a representation: $\rho^{P}:\mathrm{Gal}(K/F)/P(K/F)\to\mathrm{Aut}_{\mathbb{C}}(V^{P})$. Let $\overline{F}$ be an algebraic closure of the local field $F$, and $G_F=\rm{Gal}(\overline{F}/F)$ be the absolute Galois group of $\overline{F}/F$. Let $\rho$ be a representation of $G_F$.\ [**Then by Tate, $W(\rho)/W(\rho^{P})$ is a root of unity (cf. [@JT1], p.
112, Corollary 4).**]{}\ Now let $\rho$ be an irreducible representation of $G_F$; then either $\rho^P=\rho$, in which case $\frac{W(\rho)}{W(\rho^P)}=1$, or else $\rho^P=0$, in which case from Tate’s result we can say $W(\rho)$ is a root of unity. Equivalently:\ If $W(\rho)$ is not a root of unity, then $\rho^P\ne 0$, hence $\rho^P=\rho$ because $\rho$ is irreducible. This means that all vectors $v\in V$ of the representation space are fixed under the action of $P$ on $V$.\ In other words, if we consider $\rho$ as a homomorphism $\rho:G_F\to\rm{Aut}_{\mathbb{C}}(V)$, then the elements of $P$ are mapped to the identity, hence $\rho^P=\rho$ means $P\subset\rm{Ker}(\rho)$. Therefore we can state the following lemma. If $\rho$ is an irreducible representation of $G_F$ such that the subgroup $P\subset G_F$ of wild ramification does [**not**]{} act trivially on the representation space $V$ (this gives $\rho^P\ne \rho$, i.e., $\rho^P=0$), then $W(\rho)$ is a root of unity. Before going to our next results we need to recall some facts from class field theory. Let $F$ be a non-archimedean local field. Let $F^{ab}$ be the maximal abelian extension of $F$ and $F_{nr}$ be the maximal unramified extension of $F$. Then by local class field theory there is a unique homomorphism $$\theta_{F}:F^\times\to \rm{Gal}(F^{ab}/F)$$ having certain properties (cf. [@JM], p. 20, Theorem 1.1). This local reciprocity map $\theta_F$ is continuous and injective with dense image. From class field theory we have the following commutative diagram $$\begin{array}{ccccccccc} &&& && v_F && \\ 0 & \to & U_F & \to & F^\times & \to & {\mathbb{Z}}& \to & 0 \\ && \quad\downarrow \theta_F && \quad\downarrow\theta_F && \quad\downarrow \rm{id} \\ 0 & \to & I_F & \to & \rm{Gal}(F^{ab}/F) & \to & \widehat{{\mathbb{Z}}} & \to & 0, \end{array}$$ where $I_F:=\rm{Gal}(F^{ab}/F_{nr})$ is the inertia subgroup of $\rm{Gal}(F^{ab}/F)$, and $\rm{Gal}(F_{nr}/F)$ is identified with $\widehat{{\mathbb{Z}}}$ (cf. [@CF], p. 144).
We also know that $\theta_F:U_F\to I_F$ is an isomorphism. Moreover the descending chain $$U_F\supset U_{F}^{1}\supset U_{F}^{2}\cdots$$ is mapped isomorphically by $\theta_F$ to the descending chain of ramification subgroups of $\rm{Gal}(F^{ab}/F)$ in the upper numbering. Now let $I$ be the inertia subgroup of $G_F$. Let $P$ be the wild ramification subgroup of $G_F$. Then we have $G_F\supset I\supset P$. Parallel with this we have $F^\times\supset U_F\supset U_{F}^{1}$. Then we have $$\label{sequence 5.3.1} 1\to I/P\cdot[G_F,G_F]\to G_F/P\cdot[G_F,G_F]\to G_F/I\to 1,$$ and in parallel $$\label{sequence 5.3.2} 1\to U_F/U_{F}^{1}\to F^\times/U_{F}^{1}\to F^\times/U_F\to 1.$$ Now by class field theory the left terms of sequences (\[sequence 5.3.1\]) and (\[sequence 5.3.2\]) are isomorphic, but for the right terms we have that $G_F/I$ is isomorphic to the profinite completion $\widehat{{\mathbb{Z}}}$ of ${\mathbb{Z}}$ (because here $G_F/I$ is a profinite group, hence compact). We also have $F^\times/U_F=(<\pi_F>\times U_F)/U_F\cong{\mathbb{Z}}$. Therefore sequence (\[sequence 5.3.2\]) is dense in (\[sequence 5.3.1\]) because ${\mathbb{Z}}$ is dense in the profinite completion $\widehat{{\mathbb{Z}}}$. But ${\mathbb{Z}}$ and $\widehat{{\mathbb{Z}}}$ have the same finite factor groups. [**As a consequence $F^\times/U_{F}^{1}$ is also dense in $G_F/P\cdot[G_F,G_F]$.**]{} Let $\rho$ be a Heisenberg representation of the absolute Galois group $G_F$. In the following proposition we show that if $W(\rho)$ is not a root of unity, then $\rm{dim}(\rho)|(q_F-1)$ and $a_F(\rho)$ is not minimal. \[Proposition 4.12\] Let $F/{\mathbb{Q}}_p$ be a local field and let $q_F=p^s$ be the order of its finite residue field. If $\rho=(Z_\rho,\chi_\rho)=\rho(X_\rho,\chi_K)$ is a Heisenberg representation of the absolute Galois group $G_F$ such that $W(\rho)$ is not a root of unity, then $\rm{dim}(\rho)|(q_F-1)$ and $a_F(\rho)$ is not minimal. Let $P$ denote the wild ramification subgroup of $G_F$.
By Tate’s root-of-unity criterion, we know that $\gamma:=\frac{W(\rho)}{W(\rho^P)}$ is a root of unity. If $W(\rho)$ is not a root of unity, then $\rho^P=\rho$; otherwise $\rho^P=0$, and then $W(\rho)$ would be a root of unity by Tate’s result. Again, $\rho^P=\rho$ implies $P\subset\rm{Ker}(\rho)\subset Z_\rho\subset G_F$. So $G_F/Z_\rho$ is a quotient of $G_F/P$, hence a quotient of $F^\times/U_F^1$. Moreover, from the dimension formula (\[eqn dimension formula\]), we have $$\rm{dim}(\rho)=\sqrt{[G_F:Z_\rho]}=\sqrt{[K:F]}=\sqrt{[F^\times:{\mathcal{N}}_{K/F}]},$$ where $Z_\rho=G_K$ and $\rm{Rad}(X)={\mathcal{N}}_{K/F}$, hence $F^\times/{\mathcal{N}}_{K/F}$ is a quotient group of $F^\times/U_F^1$. Therefore the alternating character $X_\rho$ induces an alternating character $X$ on $F^\times/U_F^1$. We also know that $F^\times=<\pi_F>\times<\zeta>\times U_{F}^{1}$, where $\zeta$ is a root of unity of order $q_F-1$. This implies $F^\times/U_{F}^{1}=<\pi_F>\times<\zeta>$. So each element $x\in F^\times/U_F^1$ can be written as $x= \pi_{F}^a\cdot \zeta^b$, where $a,b\in{\mathbb{Z}}$.
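As a toy illustration (an editorial addition, not from the original text): any alternating bimultiplicative pairing on the two-generator group $<\pi_F>\times<\zeta>$ is determined by the single root of unity $\omega=X(\pi_F,\zeta)$, and $\zeta^{q_F-1}=1$ forces $\omega^{q_F-1}=1$, so the pairing is trivial on $(q_F-1)$-th powers. Written in exponent form (with $n$ standing for $q_F-1$):

```python
n = 6  # n stands for q_F - 1; omega = X(pi_F, zeta) satisfies omega^n = 1

def X_exp(x1, x2):
    # exponent of omega in X(x1, x2) for x_i = pi_F^{a_i} * zeta^{b_i};
    # bimultiplicativity and X(g, g) = 1 give the determinant form
    (a1, b1), (a2, b2) = x1, x2
    return (a1 * b2 - a2 * b1) % n

for a in range(n):
    for b in range(n):
        assert X_exp((a, b), (a, b)) == 0          # X is alternating
        assert X_exp((n * a, n * b), (1, 1)) == 0  # n-th powers pair trivially
```

This is the mechanism behind the computation that follows, which shows $X^{q_F-1}\equiv 1$ and hence bounds $\rm{dim}(\rho)$ by the order $(q_F-1)^2$ of the quotient.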
We now take $x_1=\pi_{F}^{a_1}\zeta^{b_1}, x_2=\pi_{F}^{a_2}\zeta^{b_2}\in F^\times/U_{F}^{1}$, where $a_i,b_i\in{\mathbb{Z}}$ $(i=1,2)$; then $$\begin{aligned} X(x_1,x_2) &= X(\pi_{F}^{a_1}\zeta^{b_1},\; \pi_{F}^{a_2}\zeta^{b_2})\\ &= X(\pi_{F}^{a_1},\zeta^{b_2})\cdot X(\zeta^{b_1},\pi_{F}^{a_2})\\ &=\chi_\rho([\pi_{F}^{a_1},\zeta^{b_2}])\cdot\chi_\rho([\zeta^{b_1},\pi_{F}^{a_2}]).\end{aligned}$$ But this implies $X^{q_F-1}\equiv 1$ because $\zeta^{q_F-1}=1$, which means that $X$ is actually an alternating character on $F^\times/({F^\times}^{(q_F-1)} U_F^1),$ and therefore $G_F/G_K$ is actually a quotient of $F^\times/({F^\times}^{(q_F-1)} U_F^1).$ We also know that $U_F^1$ is a pro-p-group and therefore $$U_F^1=(U_F^1)^{q_F-1}\subset {F^\times}^{q_F-1}.$$ Thus the cardinality of $F^\times/({F^\times}^{(q_F-1)} U_F^1)$ is $(q_F-1)^2$ because $$F^\times/({F^\times}^{(q_F-1)} U_F^1)\cong {\mathbb{Z}}/(q_F-1){\mathbb{Z}}\times<\zeta>\cong {\mathbb{Z}}_{q_F-1}\times{\mathbb{Z}}_{q_F-1}.$$ Therefore $\rm{dim}(\rho)$ divides $q_F-1.$ Since $\rm{dim}(\rho)|q_F-1$, from Lemma \[Lemma dimension equivalent\] the alternating character $X_\rho$ is U-isotropic and $X_\rho=X_\eta$ for a character $\eta:U_F/U_F^1\to{\mathbb{C}}^\times$. Since $\rho=\rho(X_\eta,\chi_K)$ is U-isotropic, from Proposition \[Proposition conductor\], $a_F(\rho)$ is a multiple of $\rm{dim}(\rho)$. Moreover, by the given condition, $W(\rho)$ is not a root of unity, hence $a_F(\rho)$ is not minimal; otherwise, if $a_F(\rho)$ were minimal, then from Lemma \[Lemma 5.2.12\] $W(\rho)$ would be a root of unity. **Acknowledgements.** I would like to thank Prof. E.-W. Zink for suggesting this problem and for his constant valuable advice, and I thank my adviser Prof. Rajat Tandon for his continuous encouragement. I extend my gratitude to Prof. Elmar Grosse-Klönne for providing a very good mathematical environment during my stay in Berlin. I am also grateful to the Berlin Mathematical School for their financial help. [99]{} B.C. Berndt, R.J.
Evans, K.S. Williams, Gauss and Jacobi Sums, John Wiley and Sons, Inc., 1998. C.J. Bushnell, G. Henniart, The local Langlands conjecture for $GL(2)$, Springer-Verlag, 2006. C.W. Curtis, I. Reiner, Representation theory of finite groups and associative algebras, Interscience Publishers, a division of John Wiley and Sons, 1962. E.-W. Zink, Weil-Darstellungen und lokale Galoistheorie, Math. Nachr. [**92**]{}, 265-288, 1979. , Ramification in local Galois groups-the second central step, Pure and Applied Mathematics Quarterly, Volume 5, Number 1 (special issue: In honor of Jean-Pierre Serre, part 2 of 2), 295-338, 2009, <http://www.math.hu-berlin.de/~zyska/zink/PAMQ-2009-0005-0001-a009.pdf>. , Representation filters and their application in the theory of local fields, J. Reine Angew. Math. [**387**]{} (1988), 182-208. , Lokale projektive Klassenkörpertheorie: Grundbegriffe und erste Resultate, Akademie der Wissenschaften der DDR, Institut für Mathematik, Berlin, 1982, iv+106 pp. , Lokale projektive Klassenkörpertheorie. II, Math. Nachr. [**114**]{} (1983), 123-150. G. Karpilovsky, Group representations, Volume 1, part B: Introduction to group representations and characters, North-Holland Mathematics Studies, 175. H. Koch, Extendible Functions, Centre Interuniversitaire en Calcul Mathématique Algébrique, Concordia University. , Classification of the primitive representations of the Galois group of local fields, Inventiones math. [**40**]{}, 195-216 (1977). I.B. Fesenko, S.V. Vostokov, Local fields and their extensions, Second edition, 2001. I.M. Isaacs, Finite group theory, Graduate Studies in Mathematics, Volume 92, AMS, 2008. J. Martinet, Character theory of Artin $L$-functions, Algebraic Number Fields (L-functions and Galois properties), Proceedings of Symposium, Edited by A. Fröhlich, pp. 1-88. J-P. Serre, Local fields, Springer-Verlag, 1979. J-P. Serre, Linear representations of finite groups, Graduate Texts in Mathematics 42, Springer-Verlag. J-P.
Serre, Local class field theory, Algebraic number theory, Proceedings of an instructional conference, Edited by J.W.S. Cassels and A. Fröhlich, Chapter VI, Academic Press, 1967. J.S. Milne, Class field theory, <http://www.jmilne.org/math/CourseNotes/CFT.pdf>. J. Tate, Local Constants, Algebraic Number Fields (L-functions and Galois properties), Proceedings of Symposium, Edited by A. Fröhlich, pp. 89-131. , Number theoretic background, Proceedings of Symposia in Pure Mathematics, Vol. 33 (1979), Vol. [**2**]{}, pp. 3-26. , Fourier analysis in number fields, and Hecke’s zeta-functions (Thesis), Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), Thompson, Washington, D.C., pp. 305-347. P.L. Clark, Wilson’s theorem in a finite commutative group: An elementary proof, <http://www.math.uga.edu/~pete/wilson_easy.pdf>. R. Lidl, H. Niederreiter, Finite fields, Encyclopedia of Mathematics and its Applications, Cambridge University Press, 2000. R.P. Langlands, On the functional equation of the Artin $L$-functions, unpublished article, <https://publications.ias.edu/sites/default/files/a-ps.pdf>. S.A. Biswas, Computation of $\lambda_{K/F}$-function where $K/F$ is a finite Galois extension, <http://arxiv.org/pdf/1506.02847.pdf>. , Determinant of Heisenberg representations of finite groups, <http://arxiv.org/pdf/1505.03184.pdf>. , Local constants for Galois representations - some explicit results, Ph.D. thesis. P. Deligne, Les constantes des équations fonctionnelles des fonctions L, in Modular functions of one variable II, Lecture Notes in Mathematics [**349**]{} (1972), 501-597, Springer-Verlag, Berlin-Heidelberg-New York. V.P. Snaith, Explicit Brauer induction with applications to algebra and number theory, Cambridge University Press, 1994. [^1]: A subgroup of a group which lies inside the center of the group, i.e., a subgroup $H$ of $G$ is central if $H\subseteq Z(G)$.
[^2]: This condition $F^\times\subseteq{\mathcal{N}}_{K/E}$ implies that every $x\in F^\times$ must have a preimage under $N_{K/E}$, but the preimage is not unique. [^3]: Since $gcd(m,p)=1$, we have $U_F\cdot{F^\times}^m=(<\zeta>\times U_F^1)(<\pi_F^m>\times<\zeta^m>\times U_F^1)=<\pi_F^m>\times<\zeta>\times U_F^1$, where $\zeta$ is a $(q_F-1)$-st root of unity. Then $U_F/U_F\cap {F^\times}^m=U_F\cdot {F^\times}^m/{F^\times}^m= <\pi_F^m>\times<\zeta>\times U_F^1/<\pi_F^m>\times<\zeta^m>\times U_F^1\cong{\mathbb{Z}}_m$. Hence $|U_F/U_F\cap{F^\times}^m|=m$. [^4]: Group theoretically, if $\rho|_{V_F}=\rm{Ind}_{H}^{G_F}(\chi_H)|_{V_F}$ is irreducible, then from Section 7.4 of [@JP], we can say $G_F=H\cdot V_F$. Here $H=G_L$, where $L$ is a certain extension of $F$, and $V_F=G_{F_{mt}}$, where $F_{mt}/F$ is the maximal tame extension of $F$. Therefore $G_F=H\cdot V_F$ is equivalent to $F=L\cap F_{mt}$, which means that the extension $L/F$ must be totally and wildly ramified, and $[G_F:H]=[L:F]=|V_F|$. We know that the wild ramification subgroup $V_F$ is a pro-p-group (cf. [@FV], p. 106). Then $\rm{dim}(\rho)$ is a power of $p$. [^5]: We have $d\cdot\rho=\mathrm{Ind}_{Z}^{G}\chi_Z=\mathrm{Ind}_{H}^{G}(\mathrm{Ind}_{Z}^{H}\chi_Z)$, and $\mathrm{Ind}_{Z}^{H}\chi_Z$ is of dimension $d=[H:Z]$. Therefore: $W(\rho)^d=(\lambda_{H}^{G})^d\cdot W(\mathrm{Ind}_{Z}^{H}\chi_Z)$. On the other hand, $W(\rho)=\lambda_{H}^{G}\cdot W(\chi_H)$ implies $W(\rho)^d=(\lambda_{H}^{G})^d\cdot W(\chi_H)^d$. Now comparing these two expressions for $W(\rho)^d$ we see that $W(\chi_H)^d=W(\mathrm{Ind}_{Z}^{H}\chi_Z)$. [^6]: In $K/F$ of type ${\mathbb{Z}}_m\times{\mathbb{Z}}_m$ any cyclic subextension $E/F$ in $K/F$ of degree $m$ will correspond to a maximal isotropic subgroup. But we restrict ourselves to choosing $E$ totally ramified or unramified.
[^7]: If $n(\psi')$ is even, then from the table of Remark 5.10 of [@SAB1], $\lambda_{E_2/F}(\psi')=-\lambda_{E_2'/F}(\psi')$, where $E_2'/F$ is the totally ramified quadratic extension different from $E_2/F$. Therefore $\lambda_{E/F}(\psi')$ depends on $\psi'$.
--- abstract: 'Several relevant thermodynamic observables obtained within the (2+1) flavor and spin zero NJL and PNJL models with inclusion of the ’t Hooft determinant and $8q$ interactions are compared with lattice-QCD (lQCD) results. In the case that a small ratio $R=\frac{\mu_B}{T_c}\sim 3$ at the critical end point (CEP) associated with the hadron gas to quark-gluon plasma transition is considered, combined with fits to the lQCD data for the trace anomaly [@Bazavov:2009], the subtracted light quark condensate [@Bazavov:2009] and the continuum extrapolated data for the light quark chiral condensate [@Bazavov:2012], a reasonable description is obtained for the PNJL model with a strength $g_1\sim 5...6 \times 10^3$ GeV$^{-8}$ of the $8q$ interactions. The dependence on the further model parameters is discussed.' address: 'Departamento de Física, CFC, Faculdade de Ciências e Tecnologia da Universidade de Coimbra, P-3004-516 Coimbra, Portugal' author: - 'B. Hiller[^1], J. Moreira, A. A. Osipov, A. H. Blin' title: 'Observables in the 3 Flavor PNJL Model and their Relation to Eight Quark Interactions.[^2]' --- In recent years the role of effective chiral Lagrangians has grown as an important indicator of the order and universality class of phase transitions, as well as of the nature and location of the related CEP that may occur for the ground state of QCD in the presence of external parameters, such as finite temperature $T$, baryonic chemical potential $\mu_B$, and magnetic field $B$ [@Pawlowski:2011]. In parallel, lQCD advances at zero and moderate chemical potential, with masses approaching the physical values of the light quarks [@Bazavov:2012] and of the pion [@Ratti:2012], strongly indicate a crossover transition from the hadronic to the quark-gluon phase at finite $T$ and $\mu_B=0$.
Combining lQCD and chemical freeze-out data from relativistic heavy-ion collision facilities, the location of the CEP is presently conjectured to occur at $R=\frac{\mu_B}{T_c}\sim 2$ and $\frac{T}{T_c}\sim 1$, [@Stephanov:2004],[@Gupta:2011]. We consider the $SU(3)$ flavor and spin-0 Nambu-Jona-Lasinio model (NJL) [@Nambu:1961] with inclusion of the $U(1)_A$ breaking ’t Hooft flavor determinant [@Hooft:1976]-[@Reinhardt:1988] and eight quark ($8q$) interactions [@Osipov:2006b],[@Osipov:2007a] (of which there exist two types, one of them violating the OZI rule, with strength $g_1$), and extend it to include the Polyakov loop (PNJL) [@Fukushima:2004]-[@Moreira:2011]. The $8q$ interactions were first introduced to stabilize the effective potential of the model [@Osipov:2006b]. Their role turned out to be of significant importance for the behavior of model observables in the presence of external parameters [@Osipov:2007b]-[@Hiller:2010],[@Kashiwa:2008a]-[@Gatto]. Of particular interest is that the $8q$ coupling strength $g_1$ can be varied in tune with the $4q$ interaction strength $G$ without changing the vacuum condensates and low energy meson spectra, except for the $\sigma$-meson mass $m_\sigma$, which decreases with increasing $g_1$. Fits to the low lying pseudoscalar and scalar meson spectra yield $m_\sigma\sim 560$ MeV for $g_1=6000$ GeV$^{-8}$ and $m_\sigma\sim 690$ MeV for $g_1=1500$ GeV$^{-8}$ [@Osipov:2007a]. In the $\mu,T$ plane (where $\mu=\frac{\mu_B}{3}$) the $g_1,G$ interplay gives rise to a line of CEPs, starting from the regime of large ratios $R\sim 20$ (NJL) and $R\sim 10$ (PNJL) in the case of weak $8q$ coupling $g_1$, down to small ratios for strong $g_1$. In the first case the chiral condensate is related with spontaneous symmetry breaking (SSB) driven by the $4q$ interactions; in the second scenario SSB is induced by the $6q$ ’t Hooft strength [@Osipov:2007a],[@Osipov:2008],[@Hiller:2010].
This continuous set of CEPs is particular to the $8q$ extension of the model. However, a correlation between $m_\sigma$ and the location of the CEP is also observed in the (2+1) - flavor quark-meson Lagrangian, where besides the ’t Hooft term a quartic mesonic contribution is present [@Schaefer:2010],[@Chatterjee], thus bearing a resemblance to the semi-bosonized version of the $8q$ NJL Lagrangian [@Osipov:2007a]. In order to restrict the $g_1$ values one may: i) calculate decays and scattering in the vacuum, which are expected to narrow the choice, and ii) compare with available lQCD data at finite $T$ and moderate $\mu$. In the present study we explore the second option. For the PNJL case an extra uncertainty arises due to the parameters related to the choice of the Polyakov potential ${\cal U_{P}}$. In particular the $T_0$ parameter of [@Ratti:2006],[@Roessner:2007] has a sizeable effect on the transition temperature. First we show in Fig. 1 the CEP lines in a $(\mu,T)$ vs. $g_1$ diagram. The PNJL model (solid lines) enhances the effect of pushing $R$ to small values as a function of $g_1$ in comparison with the NJL case (dashed lines). The crossing of the $CEP(T)$ and $CEP(\mu)$ lines (yielding $R=3$, since the lines cross at $\mu=T$ and $\mu_B=3\mu$) is reached for the PNJL at $g_1\sim 6.4 \times 10^3$GeV$^{-8}$, for the choice $T_0=190$ MeV, whereas it occurs for the NJL only at a much larger value, $g_1\sim 8.4\times 10^3$GeV$^{-8}$ (we recall that with increasing $g_1$ the crossover becomes sharper and eventually gives rise to a first order transition at $\mu=0$, which happens at $g_1\sim 9\times 10^3$GeV$^{-8}$ in the NJL case). Changing $T_0$, the $CEP(T)$ is shifted up (down) with increasing (decreasing) $T_0$ (see caption of Fig. 1), while $CEP(\mu)$ remains essentially unaltered. In Fig. 2 the chiral condensates and dressed Polyakov loop for $u,s$ quarks are shown for the NJL as functions of $T$ for $\mu=0$ and with varying strength $g_1$ (see caption).
For $g_1=1;5;6.5;8\times 10^3$ GeV$^{-8}$ the transition temperatures $T_t$, defined at the corresponding inflection points of the curves, are $T_t=192;163;147;135$ MeV for the u-condensate, $T_t=197;163;150;135$ MeV for the u-quark dressed Polyakov loop, $T_t=197;160;147;135$ MeV at the first inflection point of the s-quark condensate, $T_t=270;240;235;225$ MeV at its 2nd inflection point, $T_t= - ;166;150;135$ MeV at the 1st inflection point of the dressed s-quark Polyakov loop and $T_t=270;240;235;225$ MeV at its 2nd inflection point. The 1st set of inflection points in the case of the s-quark condensate and dressed Polyakov loop occurs due to the gap equations that correlate the $u$ and $s$ variables, yielding similar $T_t$ for the $u$ and $s$ observables. The 2nd inflection point occurs at temperatures $T_t$ larger by $\sim 80$ MeV. A similar pattern is observed for the PNJL model in Fig. 3; the second inflection points occur at roughly $55$ MeV higher $T_t$ values. Visually these 2nd inflection points can barely be detected; the transition is very slow and smooth. This behavior can be traced back to the fact that for large $T$ the $s$-quark constituent mass approaches asymptotically its current quark mass value, which is much larger than for the $u$-quark.[^3] It is debatable which temperature should be taken to characterize the transition for the $s$-quark in these observables. A calculation of the chiral and quark number susceptibilities associated with the s-quark in the NJL model displays only one peak characterizing the transition temperature. In Fig. 4 one sees however that in the PNJL case two peaks can again occur for the s-quark chiral susceptibility. Fig. 5 (a) shows the trace anomaly calculated for $g_1=6000$ GeV$^{-8}$, for various values of the parameter $T_0$, in comparison with lattice data. In Fig. 5 (b) the subtracted condensate $\Delta_{ls}$ is shown for several values of $g_1$, calculated with $T_0=.19$ GeV, and compared to lQCD. In Fig.
6 the light chiral condensate is compared with lQCD data extrapolated to the continuum limit [@Bazavov:2012] for different values of $g_1$ and with ${\cal U_{P}}$ of [@Ratti:2006] for the cases $T_0=.15$ GeV (left) and $T_0=.19$ GeV (right). ![Pairs $(T,\mu)$ of CEP as function of the $8q$ interaction $g_1$. Positive slope lines (red online) show T-dependence, negative slope lines (blue online) show $\mu$ dependence. All model parameters fixed as in [@Hiller:2010], except for $g_1,G$. PNJL potential from [@Ratti:2006]. Intersection ($R=3$) of PNJL curves (solid lines) occurs at ($\mu=T=158;167;188$ MeV) with $g_1=6436;6251;6127$ GeV$^{-8}$ for $T_0=190;210;270$ MeV respectively (shown only for $T_0=190$ MeV). Intersection of NJL curves (dashed lines) at $\mu=T=117$ MeV with $g_1=8372$ GeV$^{-8}$.](CEPNJLvsPNJL190.pdf){height="4cm" width="6cm"} \ \ From these comparisons we conclude (i) that the smaller the ratio $R=\frac{\mu_B}{T_c}$ related with the CEP location, the larger the $8q$ interaction strength $g_1$ must be chosen; a sizeable dependence on the $T_0$ parameter of the Polyakov potentials can induce shifts of the order of several tens of MeV in $T_c$ (Fig. 1). For $R=3$ we get $g_1$ of the order of $6000$ GeV$^{-8}$ and $T_c=158-188$ MeV for the range $T_0=190-270$ MeV. (ii) Besides the $8q$ strength, the Polyakov loop also plays a substantial role in decreasing the ratio $R$. (iii) The observables calculated at $\mu=0$ related with the light quarks, chiral condensates, traced Polyakov loop and dressed Polyakov loop (Fig. 3), chiral and quark number susceptibilities (Fig. 4 (a), (d)), as well as the s-quark number susceptibility (Fig. 4 (e)) and Polyakov loop susceptibility (Fig. 4 (c)) yield a crossover temperature $T_t\sim 179$ MeV for $g_1=6000$ GeV$^{-8}$ and $T_0=.19$ GeV. (iv) Some of the s-quark observables show two possible transition temperatures (Figs. 2, 3, 4(b)), the first close to the u-quark transition, the second about 50 MeV higher for the PNJL model.
(v) The best fit to the trace anomaly is obtained for $g_1=6000$ GeV$^{-8}$ at $T_0=.21$ GeV (Fig. 5 (a)), and for the observable $\Delta_{ls}$ we obtain a reasonable fit with $g_1=5...6\times 10^3$ GeV$^{-8}$ and $T_0=.19$ GeV (Fig. 5 (b)). (vi) The peak positions and heights of the continuum extrapolated light quark chiral susceptibility vary considerably (Fig. 6). This large spread allows one to accommodate a wide range of $g_1$ values, whose peak positions in turn also depend on the choice of the $T_0$ parameter. The value $g_1\sim 5\times 10^3$ GeV$^{-8}$ is perhaps the best choice if one takes the height of the peak also into consideration. ![The lQCD data for the light quark chiral susceptibility $\chi^{l}$ in the continuum limit taken from [@Bazavov:2012], in comparison with the PNJL model with ${\cal U_{P}}$ [@Ratti:2006] at $T_0=.15$ GeV (left panel) and $T_0=.19$ GeV (right panel) for different $g_1$ strengths (solid lines, narrower peaks correspond to increasing $g_1$).](chiLmsoverT_T0150.pdf "fig:"){height="2.5cm" width="5cm"}![The lQCD data for the light quark chiral susceptibility $\chi^{l}$ in the continuum limit taken from [@Bazavov:2012], in comparison with the PNJL model with ${\cal U_{P}}$ [@Ratti:2006] at $T_0=.15$ GeV (left panel) and $T_0=.19$ GeV (right panel) for different $g_1$ strengths (solid lines, narrower peaks correspond to increasing $g_1$).](chiLmsoverT_T0190.pdf "fig:"){height="2.5cm" width="5cm"} [99]{} A. Bazavov et al., Phys. Rev. D 80, 014504 (2009) A. Bazavov et al. (HotQCD Collaboration), Phys. Rev. D 85, 054503 (2012); J. M. Pawlowski, AIP Conf. Proc. 1343, 75-80 (2011). C. Ratti, these proceedings. M. Stephanov, Acta Physica Polonica B  **35** (2004), 2939. S. Gupta, Science  **332** (2011), 1525. Y. Nambu and G. Jona-Lasinio, Phys. Rev. [**122**]{}, 345 (1961); [**124**]{}, 246 (1961); G. ’t Hooft, Phys. Rev. D [**14**]{}, 3432 (1976); G. ’t Hooft, Phys. Rev. D [**18**]{}, 2199 (1978). V. Bernard, R. L. Jaffe and U.-G. Meissner, Phys. Lett. 
B [**198**]{}, 92 (1987); H. Reinhardt and R. Alkofer, Phys. Lett. B [**207**]{}, 482 (1988). A. A. Osipov, B. Hiller, J. da Providencia, Phys. Lett. B **634** (2006), 48. A. A. Osipov, B. Hiller, A. H. Blin, J. da Providencia, Annals of Phys.  **322** (2007), 2021. K. Fukushima, Phys. Lett. B 591 (2004). E. Megías, E. Ruiz Arriola, L. L. Salcedo, Phys. Rev. D 74, 065005 (2006). C. Ratti, M. A. Thaler, and W. Weise, Phys. Rev. D  **73** (2006), 014019. S. Roessner, C. Ratti, W. Weise, Phys. Rev. D 75, 034007 (2007) K. Fukushima, Phys. Rev. D  **77** (2008), 114028. Y. Sakai, K. Kashiwa, H. Kouno, M. Yahiro, Phys. Rev. D [**77**]{}, 051901 (2008), T. Sasaki, Y. Sakai, H. Kouno, M. Yahiro, Phys. Rev. D  **82**, (2010) 116004. A. Bhattacharyya, P. Deb, S. K. Ghosh, R. Ray, Phys. Rev. D [**82**]{}, 014021 (2010), A. Bhattacharyya, P. Deb, A. Lahiri, R. Ray, Phys. Rev. D [**83**]{} (2011) 014011. R. Gatto and M. Ruggieri, Phys. Rev. D  **83** (2011), 034016, R. Gatto and M. Ruggieri, Phys. Rev. D  **82** (2010), 054027. P. Costa, M.C. Ruivo, C.A. de Sousa, H. Hansen, W.M. Alberico, Phys. Rev. D  **79** (2009) 116003. J. Moreira, B. Hiller, A. A. Osipov, A. H. Blin, Int. J. Mod. Phys. A  **27** (2012) 1250060. A. A. Osipov, B. Hiller, J. Moreira, A.H. Blin, J. da Providencia, Phys. Lett. B  **646** (2007), 91. A. A. Osipov, B. Hiller, J. Moreira, A.H. Blin, Phys. Lett. B  **659** (2008), 270. A.A. Osipov, B. Hiller, A.H. Blin, J. da Providencia, Phys. Lett. B **650** (2007) 262. B. Hiller, J. Moreira, A.A. Osipov, A.H. Blin, Phys. Rev. D  **81** (2010), 116005. B.-J. Schaefer, M. Wagner, J. Wambach, Phys. Rev. D  **81** (2010) 074013. S. Chatterjee, these proceedings. [^1]: Speaker [^2]: Presented at the workshop “Excited QCD 2012”, 06-12 May 2012 in Peniche, Portugal. Work supported by FCT, CERN/FP/116334/2010, QREN, UE/FEDER through COMPETE. 
Part of the EU Research Infrastructure Integrating Activity Study of Strongly Interacting Matter (HadronPhysics3) under the 7th Framework Programme of EU: Grant Agreement No. 283286. [^3]: We calculate the thermodynamic potential with the prescription of [@Hiller:2010],[@Moreira:2011] where we show that it leads to the correct large $T$ asymptotic behavior for the quark masses (condensates), traced Polyakov loop and number of degrees of freedom.
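As a closing numerical aside: the inflection-point criterion used throughout to define the transition temperatures $T_t$ can be implemented with a discrete derivative. A minimal sketch on a toy tanh-shaped crossover (an illustration only; the grid, the crossover parameters $T_c$ and `width`, and the observable are hypothetical stand-ins, not the model computed in this work):

```python
import numpy as np

def inflection_temperature(T, obs):
    """Return the temperature at which a smooth crossover curve obs(T)
    is steepest, i.e. its inflection point, located as the extremum of
    the numerical first derivative on the grid."""
    dobs = np.gradient(obs, T)          # d(obs)/dT via central differences
    return T[np.argmax(np.abs(dobs))]   # temperature of steepest change

# Toy condensate-like observable dropping around Tc = 160 MeV.
T = np.linspace(100.0, 250.0, 1501)     # temperature grid in MeV (0.1 MeV step)
Tc, width = 160.0, 20.0                 # hypothetical crossover parameters
sigma = 0.5 * (1.0 - np.tanh((T - Tc) / width))

print(inflection_temperature(T, sigma))
```

For a tanh profile the inflection point coincides with $T_c$, so the sketch recovers 160 MeV up to the grid spacing; in practice a second inflection point (as for the s-quark observables above) would show up as a second local extremum of the derivative.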
--- abstract: 'Let $E$ be an arbitrary directed graph and let $L$ be the Leavitt path algebra of the graph $E$ over a field $K$. It is shown that every ideal of $L$ is an intersection of primitive/prime ideals in $L$ if and only if the graph $E$ satisfies Condition (K). Uniqueness theorems in representing an ideal of $L$ as an irredundant intersection and also as an irredundant product of finitely many prime ideals are established. Leavitt path algebras containing only finitely many prime ideals and those in which every ideal is prime are described. Powers of a single ideal $I$ are considered and it is shown that the intersection ${\displaystyle\bigcap\limits_{n=1}^{\infty}}I^{n}$ is the largest graded ideal of $L$ contained in $I$. This leads to an analogue of Krull’s theorem for Leavitt path algebras.' author: - | Songul Esin$^{(1)}$, Muge Kanuni$^{(2)}$ and Kulumani M. Rangaswamy$^{(3)}$\ $^{(1)}$ Tuccarbasi sok. Kase apt. No: 10A/25 Erenkoy\ Kadikoy, Istanbul, Turkey.\ E-mail: songulesin@gmail.com\ $^{(2)\text{ }}$Department of Mathematics, Duzce University,\ Konuralp Duzce, Turkey.\ E-mail: mugekanuni@duzce.edu.tr\ $^{(3)}$Department of Mathematics, University of Colorado,\ Colorado Springs, CO. 80919, USA.\ E-mail: krangasw@uccs.edu title: 'On intersections of two-sided ideals of Leavitt path algebras' --- Introduction and Preliminaries ============================== Leavitt path algebras $L_{K}(E)$ of directed graphs $E$ over a field $K$ are algebraic analogues of graph $C^{\ast}$-algebras $C^{\ast}(E)$ and have recently been actively investigated in a series of papers (see e.g. [@AA], [@AAPS], [@ABCR], [@R-1], [@T]). These investigations showed, in a number of cases, how an algebraic property of $L_{K}(E)$ and the corresponding analytical property of $C^{\ast}(E)$ are both implied by the same graphical property of $E$, though the techniques of proofs are often different. 
The initial investigation of special types of ideals such as the graded ideals, the corresponding quotient algebras and the prime ideals of a Leavitt path algebra was essentially inspired by the analogous investigation done for graph $C^{\ast}$-algebras. But an extensive investigation of the ideal theory of Leavitt path algebras, as has been done for commutative rings, is yet to happen. This paper may be considered as a small step in exploring the multiplicative ideal theory of Leavitt path algebras and was triggered by a question raised by Professor Astrid an Huef. In the theory of $C^{\ast}$-algebras and, in particular, that of the graph $C^{\ast}$-algebras, every (closed) ideal $I$ of $C^{\ast}(E)$ is the intersection of all the primitive/prime ideals containing $I$ (Theorem 2.9.7, [@D]). In a recent 2015 CIMPA research school in Turkey on Leavitt path algebras and graph $C^{\ast}$-algebras, Professor an Huef raised the question whether the preceding statement is true for ideals of Leavitt path algebras. We first construct examples showing that this property does not hold in general for Leavitt path algebras. We then prove that, for a given graph $E$, every ideal of the Leavitt path algebra $L_{K}(E)$ is an intersection of primitive/prime ideals if and only if the graph $E$ satisfies Condition (K). A uniqueness theorem is proved in representing an ideal of $L_{K}(E)$ as the irredundant intersection of finitely many prime ideals. As a corollary, we show that every ideal of $L_{K}(E)$ is a prime ideal if and only if (i) Condition (K) holds in $E$, (ii) for each hereditary saturated subset $H$ of vertices, $|B_{H}|\leq1$ and $E^{0}\backslash H$ is downward directed and (iii) the admissible pairs $(H,S)$ (see definition below) form a chain under a defined partial order. Equivalently, all the ideals of $L_{K}(E)$ are graded and form a chain under set inclusion. Following this, Leavitt path algebras possessing finitely many prime ideals are described. 
We also give conditions under which every ideal of a Leavitt path algebra is an intersection of maximal ideals. The graded ideals of a Leavitt path algebra possess many interesting properties. Using these, we examine the uniqueness of factorizing a graded ideal as a product of prime ideals. A perhaps interesting result is that if $I$ is a graded ideal and $I=P_{1}\cdot\cdot\cdot P_{n}$ is a factorization of $I$ as an irredundant product of prime ideals $P_{i}$, then necessarily all the ideals $P_{i}$ must be graded ideals and moreover, $I=P_{1}\cap\ldots\cap P_{n}$. We also prove a weaker version of this result for non-graded ideals. Finally, powers of an ideal in $L_{K}(E)$ are studied. While $I^{2}=I$ for any graded ideal $I$, it is shown that, for a non-graded ideal $I$ of $L_{K}(E)$, its powers $I^{n}$ $(n\geq1)$ are all non-graded and distinct, but the intersection of the powers ${\displaystyle\bigcap\limits_{n=1}^{\infty}}I^{n}$ is always a graded ideal and is indeed the largest graded ideal of $L_{K}(E)$ contained in $I$. As a corollary, we obtain an analogue of Krull’s theorem (Theorem 12, section 7, [@ZS]) for Leavitt path algebras: For an ideal $I$ of $L_{K}(E)$, the intersection ${\displaystyle\bigcap\limits_{n=1}^{\infty}}I^{n}=0$ if and only if $I$ contains no vertices. **Preliminaries**: For the general notation, terminology and results in Leavitt path algebras, we refer to [@AA], [@R-1] and [@T]. For basic results in associative rings and modules, we refer to [@AF]. We give below a short outline of some of the needed basic concepts and results. A (directed) graph $E=(E^{0},E^{1},r,s)$ consists of two sets $E^{0}$ and $E^{1}$ together with maps $r,s:E^{1}\rightarrow E^{0}$. The elements of $E^{0}$ are called *vertices* and the elements of $E^{1}$ *edges*. A vertex $v$ is called a *sink* if it emits no edges and a vertex $v$ is called a *regular* *vertex* if it emits a non-empty finite set of edges. 
An *infinite emitter* is a vertex which emits infinitely many edges. For each $e\in E^{1}$, we call $e^{\ast}$ a ghost edge. We let $r(e^{\ast})$ denote $s(e)$, and we let $s(e^{\ast})$ denote $r(e)$. A *path* $\mu$ of length $|\mu|=n>0$ is a finite sequence of edges $\mu =e_{1}e_{2}\cdot\cdot\cdot e_{n}$ with $r(e_{i})=s(e_{i+1})$ for all $i=1,\cdot\cdot\cdot,n-1$. In this case $\mu^{\ast}=e_{n}^{\ast}\cdot \cdot\cdot e_{2}^{\ast}e_{1}^{\ast}$ is the corresponding ghost path. A vertex is considered a path of length $0$. The set of all vertices on the path $\mu$ is denoted by $\mu^{0}$. A path $\mu=e_{1}\dots e_{n}$ in $E$ is *closed* if $r(e_{n})=s(e_{1})$, in which case $\mu$ is said to be *based at the vertex* $s(e_{1})$. A closed path $\mu$ as above is called *simple* provided it does not pass through its base more than once, i.e., $s(e_{i})\neq s(e_{1})$ for all $i=2,...,n$. The closed path $\mu$ is called a *cycle* if it does not pass through any of its vertices twice, that is, if $s(e_{i})\neq s(e_{j})$ for every $i\neq j$. An *exit* for a path $\mu=e_{1}\dots e_{n}$ is an edge $e$ such that $s(e)=s(e_{i})$ for some $i$ and $e\neq e_{i}$. We say the graph $E$ satisfies *Condition (L)* if every cycle in $E$ has an exit. The graph $E$ is said to satisfy *Condition (K)* if every vertex which is the base of a closed path $c$ is also a base of another closed path $c^{\prime}$ different from $c$. If there is a path from a vertex $u$ to a vertex $v$, we write $u\geq v$. A subset $D$ of vertices is said to be *downward directed* if for any $u,v\in D$, there exists a $w\in D$ such that $u\geq w$ and $v\geq w$. A subset $H$ of $E^{0}$ is called *hereditary* if, whenever $v\in H$ and $w\in E^{0}$ satisfy $v\geq w$, then $w\in H$. A hereditary set $H$ is *saturated* if, for any regular vertex $v$, $r(s^{-1}(v))\subseteq H$ implies $v\in H$. 
Given an arbitrary graph $E$ and a field $K$, the *Leavitt path algebra* $L_{K}(E)$ is defined to be the $K$-algebra generated by a set $\{v:v\in E^{0}\}$ of pair-wise orthogonal idempotents together with a set of variables $\{e,e^{\ast}:e\in E^{1}\}$ which satisfy the following conditions: 1. $s(e)e=e=er(e)$ for all $e\in E^{1}$. 2. $r(e)e^{\ast}=e^{\ast}=e^{\ast}s(e)$ for all $e\in E^{1}$. 3. (The “CK-1 relations”) For all $e,f\in E^{1}$, $e^{\ast}e=r(e)$ and $e^{\ast}f=0$ if $e\neq f$. 4. (The “CK-2 relations”) For every regular vertex $v\in E^{0}$, $$v=\sum_{e\in E^{1},s(e)=v}ee^{\ast}.$$ Every Leavitt path algebra $L_{K}(E)$ is a $\mathbb{Z}$-graded algebra $L_{K}(E)={\displaystyle\bigoplus\limits_{n\in\mathbb{Z}}}L_{n}$ induced by defining, for all $v\in E^{0}$ and $e\in E^{1}$, $\deg (v)=0$, $\deg(e)=1$, $\deg(e^{\ast})=-1$. Further, for each $n\in \mathbb{Z}$, the homogeneous component $L_{n}$ is given by $$L_{n}=\left\{{\textstyle\sum} k_{i}\alpha_{i}\beta_{i}^{\ast}\in L:\text{ }|\alpha_{i}|-|\beta _{i}|=n\right\} .$$ An ideal $I$ of $L_{K}(E)$ is said to be a graded ideal if $I=$ ${\displaystyle\bigoplus\limits_{n\in\mathbb{Z}}}(I\cap L_{n})$. We shall be using the following concepts and results from [@T]. A *breaking vertex* of a hereditary saturated subset $H$ is an infinite emitter $w\in E^{0}\backslash H$ with the property that $0<|s^{-1}(w)\cap r^{-1}(E^{0}\backslash H)|<\infty$. The set of all breaking vertices of $H$ is denoted by $B_{H}$. For any $v\in B_{H}$, $v^{H}$ denotes the element $v-\sum_{s(e)=v,r(e)\notin H}ee^{\ast}$. Given a hereditary saturated subset $H$ and a subset $S\subseteq B_{H}$, $(H,S)$ is called an *admissible pair.* The set $\mathbf{H}$ of all admissible pairs becomes a lattice under a partial order $\leq^{\prime}$ under which $(H_{1},S_{1})\leq^{\prime} (H_{2},S_{2})$ if $H_{1}\subseteq H_{2}$ and $S_{1}\subseteq H_{2}\cup S_{2}$. 
Given an admissible pair $(H,S)$, the ideal generated by $H\cup\{v^{H}:v\in S\}$ is denoted by $I(H,S)$. It was shown in [@T] that the graded ideals of $L_{K}(E)$ are precisely the ideals of the form $I(H,S)$ for some admissible pair $(H,S)$. Moreover, $L_{K}(E)/I(H,S)\cong L_{K}(E\backslash (H,S))$. Here $E\backslash(H,S)$ is the *quotient graph of* $E$ in which $(E\backslash(H,S))^{0}=(E^{0}\backslash H)\cup\{v^{\prime }:v\in B_{H}\backslash S\}$ and $(E\backslash(H,S))^{1}=\{e\in E^{1} :r(e)\notin H\}\cup\{e^{\prime}:e\in E^{1},r(e)\in B_{H}\backslash S\}$ and $r,s$ are extended to $(E\backslash(H,S))^{0}$ by setting $s(e^{\prime})=s(e)$ and $r(e^{\prime})=r(e)^{\prime}$. For a description of non-graded ideals of $L_{K}(E)$, see [@R-2]. A useful observation is that every element $a$ of $L_{K}(E)$ can be written as $a={\textstyle\sum\limits_{i=1}^{n}}k_{i}\alpha_{i}\beta_{i}^{\ast}$, where $k_{i}\in K$, $\alpha_{i},\beta_{i}$ are paths in $E$ and $n$ is a suitable integer. Moreover, $L_{K}(E)={\textstyle\bigoplus\limits_{v\in E^{0}}} L_{K}(E)v={\textstyle\bigoplus\limits_{v\in E^{0}}}vL_{K}(E).$ Further, the Jacobson radical of $L_{K}(E)$ is always zero (see [@AA]). Another useful fact is that if $p^{\ast}q\neq0$, where $p,q$ are paths, then either $p=qr$ or $q=ps$ where $r,s$ are suitable paths in $E$. Let $\Lambda$ be an arbitrary infinite index set. For any ring $R$ with identity, we denote by $M_{\Lambda}(R)$ the ring of matrices over $R$ whose entries are indexed by $\Lambda\times\Lambda$ and whose entries, except for possibly a finite number, are all zero. It follows from the works in [@A], [@AM] that $M_{\Lambda}(R)$ is Morita equivalent to $R$. Throughout this paper, $E$ will denote an arbitrary graph (with no restriction on the number of vertices and the number of edges emitted by each vertex) and $K$ will denote an arbitrary field. For convenience in notation, we will denote, most of the time, the Leavitt path algebra $L_{K}(E)$ by $L$. 
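As a quick illustration of the defining axioms (a standard computation, anticipating the one-loop graph used in the next section): for the graph with a single vertex $v$ and a single loop $c$, the relations collapse to those of the Laurent polynomial ring.

```latex
\begin{align*}
&\text{Graph: one vertex } v,\ \text{one loop } c,\ s(c)=r(c)=v.\\
&\text{(CK-1)}\quad c^{\ast}c=r(c)=v, \qquad
  \text{(CK-2)}\quad v=cc^{\ast}\quad (v \text{ is regular and emits only } c).\\
&\text{Since } v \text{ is the identity of } L_{K}(E),\ c \text{ is invertible
  with inverse } c^{\ast}, \text{ so}\\
&\qquad L_{K}(E)\cong K[x,x^{-1}], \qquad
  v\mapsto 1,\quad c\mapsto x,\quad c^{\ast}\mapsto x^{-1},
\end{align*}
```

and this isomorphism respects the $\mathbb{Z}$-grading, since $\deg(c^{n})=n$ matches $\deg(x^{n})=n$.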
Intersections of prime ideals ============================= In this section, we give necessary and sufficient conditions under which every ideal of a Leavitt path algebra $L$ of an arbitrary graph $E$ is the intersection of prime/primitive ideals. As applications, conditions on the graph $E$ are obtained under which (a) every ideal of $L$ is a prime ideal and (b) $L$ contains only a finite number of prime ideals. A uniqueness theorem for irredundant intersections of prime ideals is also obtained. We also obtain conditions under which every ideal of $L$ is an intersection of maximal ideals. Remark: In this and the next section, by an ideal $I$ we mean an ideal $I$ of $L$ such that $I\neq L$. \[Graded =&gt; Primitive intersection\] *Let* $I$ *be a graded ideal of a Leavitt path algebra* $L$ of an arbitrary graph $E$*. Then* $I$ *is the intersection of all primitive (and hence prime) ideals containing* $I$*.* Let $H=I\cap E^{0}$ and $S=\{v\in B_{H}:v^{H}\in I\}$. By [@T], the graded ideal $I=I(H,S)$, the ideal generated by $H\cup\{v^{H}:v\in S\}$. Also, $L/I\cong L_{K}(E\backslash(H,S))$. Since the Jacobson radical of $L_{K}(E\backslash(H,S))$ is zero, we conclude that the intersection of all primitive ideals of $L_{K}(E\backslash(H,S))$ is $0$. This means that $I$ is the intersection of all the primitive ideals of $L$ containing $I$. Moreover, since every primitive ideal is prime, we conclude that $I$ is the intersection of all prime ideals containing $I$. The next example shows that a non-graded ideal of a Leavitt path algebra need not be an intersection of all the prime/primitive ideals containing it. \[Laurent =&gt; No Prime Intersection\] *Let* $E$ be a graph with one vertex $v$ and a loop $c$ so that $s(c)=v=r(c)$. Thus $E$ is the graph $$\xymatrix{ {\bullet}_v \ar@(ur,ul)_c }$$ *Consider the ideal* $B=\left\langle p(c)\right\rangle $ of $L_{K}(E)$ where $p(x)$ is *an irreducible polynomial in* $K[x,x^{-1}]$. 
We claim that $B^{2}$ is not an intersection of prime ideals in $L_{K}(E)$. To see this, first observe that $L_{K}(E)\overset{\theta}{\cong}R=K[x,x^{-1}]$ under the map $\theta$ mapping $v\mapsto1,c\mapsto x$ and $c^{\ast}\mapsto x^{-1}$. *Then* $B\cong A=\left\langle p(x)\right\rangle $*, the ideal generated by* $p(x)$ in $R$. So it is enough to show that $N=A^{2}$ *cannot be an intersection of prime ideals of* $R$*.* Suppose, on the contrary, $N={\displaystyle\bigcap\limits_{\lambda\in\Lambda}} M_{\lambda}$ where $\Lambda$ is an arbitrary index set and each $M_{\lambda}$ is a (non-zero) prime ideal of $R$. Note that each $M_{\lambda}$ is a maximal ideal of $R$, as $R$ is a principal ideal domain. Then there is a homomorphism $\phi:R\rightarrow{\displaystyle\prod\limits_{\lambda\in\Lambda}} R/M_{\lambda}$ given by $r\longmapsto(\cdot\cdot\cdot,r+M_{\lambda},\cdot \cdot\cdot)$ with $\ker(\phi)=N$. Now $\bar{A}=\phi(A)\cong A/N\neq0$ satisfies $(\bar{A})^{2}=0$ and this is impossible since ${\displaystyle\prod\limits_{\lambda\in\Lambda}}R/M_{\lambda}$, being a direct product of fields, does not contain any non-zero nilpotent elements. We wish to point out that, unlike the case of graded ideals, there are non-graded ideals in a Leavitt path algebra, some of which (such as the ideal $N$ in the above example) are not intersections of prime/primitive ideals, while there are also some non-graded ideals that are intersections of prime ideals. To obtain an example of the latter, different from the above, consider the Toeplitz algebra $L_{K}(E)$ where $E$ is a graph with two vertices $v,w$, a loop $c$ with $s(c)=v=r(c)$ and an edge $f$ with $s(f)=v$ and $r(f)=w$. Thus $E$ is the graph $$\xymatrix{ {\bullet}_v \ar@(ur,ul)_c \ar@{->}[r]^f & {\bullet}_w }$$ Then the ideals $A=\left\langle w,v+c\right\rangle $, $B=\left\langle w,v+c^{2}\right\rangle $ and $I=\left\langle w,(v+c)(v+c^{2})\right\rangle $ are all non-graded ideals of $L_{K}(E)$ (see Proposition 6, [@R-2]). 
Now $L_{K}(E)/\left\langle w\right\rangle \cong K[x,x^{-1}]$ under the map $v+\left\langle w\right\rangle \longmapsto1,c+\left\langle w\right\rangle \longmapsto x$ and $c^{\ast}+\left\langle w\right\rangle \longmapsto x^{-1}$. Since $1+x$ and $1+x^{2}$ are irreducible polynomials in the principal ideal domain $K[x,x^{-1}]$, $\left\langle 1+x\right\rangle \cap\left\langle 1+x^{2}\right\rangle =\left\langle (1+x)(1+x^{2})\right\rangle $ and moreover the ideals $\left\langle 1+x\right\rangle $ and $\left\langle 1+x^{2}\right\rangle $ are maximal ideals. Thus $A/\left\langle w\right\rangle $ and $B/\left\langle w\right\rangle $ are maximal ideals whose intersection is $I/\left\langle w\right\rangle $. Hence the non-graded ideal $I=A\cap B$ is an intersection of two primitive/prime ideals of $L_{K}(E)$. We next explore conditions under which every ideal in a Leavitt path algebra is an intersection of prime/primitive ideals. To begin with, we need a result on lattice isomorphisms. In general, a lattice isomorphism between two lattices need not preserve infinite infima. But for complete lattices, infima are preserved. This assertion is perhaps folklore and we need it in the proofs of a couple of statements below. Since we could not find this statement explicitly stated or proved in our literature search, we record it in the next Lemma and outline its easy proof. \[Lattice Iso\] *Let* $f:(\mathbf{L,\leq)}\longrightarrow (\mathbf{L}^{\prime},\leq)$ *be an isomorphism of two complete lattices. Let* $Y$ be *any finite or infinite index set and let* $P_{i}\in\mathbf{L}$ for each $i\in Y$. *Then* $f({\displaystyle\bigwedge\limits_{i\in Y}}P_{i})= {\displaystyle\bigwedge\limits_{i\in Y}}f(P_{i})$*.* Let $A={\displaystyle\bigwedge\limits_{i\in Y}}P_{i}$. Clearly $f(A)\leq{\displaystyle\bigwedge\limits_{i\in Y}}f(P_{i})$. Suppose $B\leq f(P_{i})$ for all $i\in Y$. Then $f^{-1}(B)\leq f^{-1}f(P_{i})=P_{i}$ for all $i$. 
Hence $f^{-1}(B)\leq {\displaystyle\bigwedge\limits_{i\in Y}}P_{i}=A$. Then $B=f(f^{-1}(B))\leq f(A)$. Consequently, $f(A)={\displaystyle\bigwedge\limits_{i\in Y}}f(P_{i})$. \[No L =&gt; No prime intersection\] Suppose $E$ is an arbitrary graph which does not satisfy Condition (L). Then there is an ideal $I$ of the corresponding Leavitt path algebra $L:=L_{K}(E)$ which is not an intersection of prime/primitive ideals of $L$. Since Condition (L) does not hold, there is a cycle $c$ based at a vertex $v$ having no exits in $E$. Now the ideal $A$ generated by the vertices on $c$ is isomorphic to $M_{\Lambda}(K[x,x^{-1}])$ where $\Lambda$ is an index set representing the set of all paths that end at $c$ but do not include all the edges of $c$ (see [@AAPS]). Also $M_{\Lambda}(K[x,x^{-1}])$ is Morita equivalent to $K[x,x^{-1}]$ (see [@A], [@AM]), so its lattice of ideals is isomorphic to the lattice of ideals of $K[x,x^{-1}]$, with prime (primitive) ideals corresponding to prime (primitive) ideals under this isomorphism (see [@AF]).  Then, in view of Lemma \[Lattice Iso\], the ideal $\overline{N}$ of $M_{\Lambda}(K[x,x^{-1}])$ that corresponds to the ideal $N$ of $K[x,x^{-1}]$ constructed in Example \[Laurent =&gt; No Prime Intersection\] is not an intersection of prime ideals and hence not an intersection of primitive ideals containing it. Let $I$ denote the ideal of $A$ that corresponds to the ideal $\overline{N}$ under the isomorphism $A\longrightarrow M_{\Lambda}(K[x,x^{-1}])$. Now $M_{\Lambda}(K[x,x^{-1}])$, and hence $A$, is a ring with local units and so every ideal of $A$ is also an ideal of $L$. The existence of local units in $A$ also implies that if $P$ is a prime ideal of $L$, then $P\cap A$ is a prime ideal of $A$. Consequently, $I$ will be an ideal of $L$ which is not an intersection of prime ideals of $L$. This implies that $I$ is also not an intersection of primitive ideals. 
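The two $K[x,x^{-1}]$ computations behind the examples above reduce, since $x$ is a unit, to ideal arithmetic in the principal ideal domain $K[x]$, and can be sanity-checked mechanically. A small sketch using SymPy (choosing $K=\mathbb{Q}$, over which $1+x$ and $1+x^{2}$ are indeed irreducible, for concreteness; the paper's field $K$ is arbitrary):

```python
from sympy import symbols, expand, lcm, rem

x = symbols('x')

# Irredundant intersection: in a PID, <f> ∩ <g> = <lcm(f, g)>, so for the
# coprime irreducibles 1+x and 1+x^2 the intersection is <(1+x)(1+x^2)>.
f, g = 1 + x, 1 + x**2
assert expand(lcm(f, g) - f * g) == 0

# The non-example: N = A^2 = <p(x)^2> is not an intersection of primes,
# because the image of p(x) in R/N is a non-zero nilpotent element.
p = 1 + x                       # any irreducible p(x) works here
assert rem(p, p**2, x) != 0     # p is non-zero modulo p^2 ...
assert rem(p**2, p**2, x) == 0  # ... while its square vanishes there
```

The two assertions in the second block exhibit exactly the non-zero nilpotent element $\bar{A}$ used in the proof of Example \[Laurent =&gt; No Prime Intersection\].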
The next theorem describes conditions under which every ideal of a Leavitt path algebra is an intersection of prime ideals. \[Prime Intersection &lt;=&gt; Condition K\] Let $E$ be an arbitrary graph. Then the following properties are equivalent for $L:=L_{K}(E)$: 1. *Every ideal* $I$ *of* $L$ *is the intersection of all the primitive ideals containing* $I$*;* 2. *Every ideal* $I$ *of* $L$ *is the intersection of all the prime ideals containing* $I$*;* 3. *The graph* $E$ *satisfies Condition (K).* Now (i)$\Rightarrow$(ii) holds, since every primitive ideal is also a prime ideal. Assume (ii). Suppose, on the contrary, that $E$ does not satisfy Condition (K). Then, by (Proposition 6.12, [@T]), there is an admissible pair $(H,S)$, where $H$ is a hereditary saturated subset of vertices and $S\subseteq B_{H}$, such that the quotient graph $E\backslash(H,S)$ does not satisfy Condition (L). By Lemma \[No L =&gt; No prime intersection\], there is an ideal $\bar{I}$ in $L_{K}(E\backslash(H,S))$ which is not an intersection of all prime ideals containing $\bar{I}$ in $L_{K}(E\backslash(H,S))$. Now $L/I(H,S)\cong L_{K}(E\backslash(H,S))$ and so the ideal $A/I(H,S)$ that corresponds to $\bar{I}$ under the preceding isomorphism is not an intersection of prime ideals in $L/I(H,S)$. This implies that the ideal $A$ is not an intersection of all the prime ideals containing $A$ in $L$. This contradiction shows that $E$ must satisfy Condition (K), thus proving (iii). Assume (iii), so that $E$ satisfies Condition (K). Then, by (Theorem 6.16, [@T]), every ideal of $L$ is graded. By Lemma \[Graded =&gt; Primitive intersection\], every ideal of $L$ is then an intersection of primitive ideals. This proves (i). We next prove the uniqueness of representing an ideal of $L$ as an irredundant intersection of finitely many prime ideals. We use the known ideas of proving such statements. 
Recall that an intersection $P_{1}\cap\cdot\cdot\cdot\cap P_{m}$ of ideals is *irredundant* if no $P_{i}$ contains the intersection of the other $m-1$ ideals $P_{j}$, $j\neq i$. \[Uniqueness\] *Suppose* $A=P_{1}\cap\ldots\cap P_{m}=Q_{1} \cap\ldots\cap Q_{n}$ *are two representations of an ideal* $A$ *of* $L$ *as irredundant intersections of finitely many prime ideals* $P_{i}$ *and* $Q_{j}$ *of* $L$*. Then* $m=n$ *and* $\{P_{1},\ldots,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}$*.* Now the product $Q_{1}Q_{2}\cdot\cdot\cdot Q_{n}\subseteq A\subseteq P_{1}$ and $P_{1}$ is a prime ideal. Hence $Q_{j_{1}}\subseteq P_{1}$ for some $j_{1}\in\{1,\ldots,n\}$. Similarly, the product $P_{1}\cdot\cdot\cdot P_{m}\subseteq A\subseteq Q_{j_{1}}$ and since $Q_{j_{1}}$ is prime, $P_{i}\subseteq Q_{j_{1}}$ for some $i\in\{1,\ldots,m\}$. Thus $P_{i}\subseteq P_{1}$ and, by irredundancy, $i=1$. Hence $P_{1}=Q_{j_{1}}$. Starting with $P_{2}$ and using similar arguments, we obtain $P_{2}=Q_{j_{2}}$ for some $Q_{j_{2}}$. Now $Q_{j_{2}}\neq Q_{j_{1}}$, since otherwise $P_{1}=P_{2}$, which is not possible by irredundancy. Proceeding like this, we reach the conclusion that $\{P_{1},\ldots,P_{m}\}\subseteq\{Q_{1},\ldots,Q_{n}\}$. Reversing the roles of the $P_{i}$ and $Q_{j}$, starting with the $Q_{j}$ and proceeding as before, we conclude that $\{Q_{1},\ldots,Q_{n} \}\subseteq\{P_{1},\ldots,P_{m}\}$. Thus $m=n$ and $\{P_{1},\ldots ,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}$*.* We next explore the conditions on the graph $E$ under which every ideal of $L_{K}(E)$ is a prime ideal. \[Everything Prime\] Let $E$ be an arbitrary graph. Then the following are equivalent for $L:=L_{K}(E)$: 1. *Every ideal of* $L$ *is a prime ideal;* 2. *The graph* $E$ *satisfies Condition (K), and* 1. *the set* $(\mathbf{H},\leq^{\prime})$ *of all the admissible pairs* $(H,S)$ *in* $E$ *is a chain under the defined partial order* $\leq^{\prime}$*,* 2. *for each hereditary saturated set* $H$ *of vertices,* $|B_{H}|\leq1$ *and* 3. 
*for each* $(H,S)\in\mathbf{H}$*,* $(E\backslash (H,S))^{0}$ *is downward directed;* 3. *All the ideals of* $L$ *are graded and form a chain under set inclusion.* Assume (a). By Theorem \[Prime Intersection &lt;=&gt; Condition K\], the graph $E$ satisfies Condition (K). Suppose there are two admissible pairs $(H_{1} ,S_{1})$ and $(H_{2},S_{2})$ such that $$(H_{1},S_{1})\nleqslant^{\prime}(H_{2},S_{2})\text{ and }(H_{2},S_{2} )\nleqslant^{\prime}(H_{1},S_{1}).$$ Then the ideal $Q=I(\overline{H},\overline{S})$, where $(\overline {H},\overline{S})=(H_{1},S_{1})\wedge(H_{2},S_{2})$, is not a prime ideal. To see this, observe that the lattice $(\mathbf{H,\leq}^{\prime})$ of all the admissible pairs is isomorphic to the lattice of graded ideals of $L$ (see Theorem 5.7, [@T]) and that $$I(H_{1},S_{1})\cdot I(H_{2},S_{2})\subseteq I(H_{1},S_{1})\cap I(H_{2} ,S_{2})=I(\overline{H},\overline{S})$$ but $I(H_{1},S_{1})\nsubseteq I(\overline{H},\overline{S})$ and $I(H_{2} ,S_{2})\nsubseteq I(\overline{H},\overline{S})$. Thus the set $\mathbf{H}$ of all the admissible pairs must form a chain under the defined partial order $\leq^{\prime}$. Moreover, for any given hereditary saturated set $H$ of vertices, $|B_{H}|\leq1$: otherwise, $B_{H}$ would contain two subsets $S_{1}$ and $S_{2}$ such that $S_{1}\nsubseteq S_{2}$ and $S_{2}\nsubseteq S_{1}$, and this would give rise to admissible pairs $(H,S_{1})\nleqslant^{\prime} (H,S_{2})$ and $(H,S_{2})\nleqslant^{\prime}(H,S_{1})$, a contradiction. Also, since for each $(H,S)\in\mathbf{H}$, $I(H,S)$ is a prime ideal, it follows from Theorem 3.12 of [@R-1] that $(E\backslash(H,S))^{0}$ is downward directed. This proves (b). Assume (b). Since the graph satisfies Condition (K), every ideal of $L$ is graded and so is of the form $I(H,S)$ for some admissible pair $(H,S)\in \mathbf{H}$. 
Since $(\mathbf{H,\leq}^{\prime})$ is a chain and is isomorphic to the lattice of ideals of $L$, it follows that the ideals of $L$ form a chain under set inclusion. This proves (c). Assume (c). Let $P$ be any ideal of $L$. Suppose $I,J$ are two ideals such that $IJ\subseteq P$. By hypothesis, one of them is contained in the other, say $I\subseteq J$. Moreover, since the ideals are graded, $I=I^{2}$ (by Corollary 2.5, [@ABCR]) and so $I=I^{2}\subseteq IJ\subseteq P$. Thus $P$ is a prime ideal and this proves (a). The following example illustrates the conditions of Proposition \[Everything Prime\]. \[Example-everyone prime\] Let $E$ be a graph with $E^{0}=\{v_{i}:i=1,2,\cdot\cdot\cdot\}$. For each $i$, there is an edge $e_{i}$ with $r(e_{i})=v_{i}$ and $s(e_{i})=v_{i+1}$, and at each $v_{i}$ there are two loops $f_{i},g_{i}$ so that $v_{i}=s(f_{i})=r(f_{i})=s(g_{i})=r(g_{i})$. Thus $E$ is the graph $$\xymatrix{ \ar@{.>}[r] & \bullet_{v_3}\ar@(u,l)_{f_3} \ar@(u,r)^{g_3} \ar@/_.3pc/[rr]_{e_2} & & \bullet_{v_2}\ar@(u,l)_{f_2} \ar@(u,r)^{g_2} \ar@/_.3pc/[rr]_{e_1} && \bullet_{v_1}\ar@(u,l)_{f_1} \ar@(u,r)^{g_1} }$$ Clearly $E$ is a row-finite graph and the non-empty proper hereditary saturated subsets of vertices in $E$ are the sets $H_{n}=\{v_{1},\cdot \cdot\cdot,v_{n}\}$ for some $n\geq1$ and form a chain under set inclusion. Clearly, $(E\backslash(H_{n},\emptyset))^{0}$ is downward directed for each $n$. Thus the ideals of $L$ are graded prime ideals of the form $I(H_{n},\emptyset)$ and they form a chain under set inclusion. It may be worth noting that $L_{K}(E)$ does not contain maximal ideals. \[Primes always exist\] As a property that is diametrically opposite to the property of $L$ stated in Proposition \[Everything Prime\], one may ask under what conditions a Leavitt path algebra contains no prime ideals. 
It may be of some interest to note that, while a Leavitt path algebra $L$ may not contain a maximal ideal, as indicated in Example \[Example-everyone prime\], $L$ will always contain a prime ideal. Indeed, if the graph $E$ satisfies Condition (K), then Theorem \[Prime Intersection &lt;=&gt; Condition K\] implies that $L$ contains prime ideals. Suppose $E$ does not satisfy Condition (K). Then there will be a closed path $c$ based at a vertex $v$ in $E$ such that no vertex on $c$ is the base of another closed path in $E$. If $H=\{u\in E^{0}:u\ngeqq v\}$, then $E^{0}\backslash H$ is downward directed and, by Theorem 3.12 of [@R-1], $I(H,B_{H})$ will be a prime ideal of $L$. Thus a Leavitt path algebra $L$ always contains a prime ideal. Another consequence of Theorem \[Prime Intersection &lt;=&gt; Condition K\] is the following Proposition. \[Finitely many primes\] Let $E$ be an arbitrary graph. Then the following properties are equivalent for $L:=L_{K}(E)$: 1. $L$ *contains at most finitely many prime ideals;* 2. $L$ *contains at most finitely many prime ideals and all of them are graded ideals;* 3. *The graph* $E$ *satisfies Condition (K) and there are only finitely many hereditary saturated subsets* $H$ *of vertices with the corresponding set of breaking vertices* $B_{H}$ *finite;* 4. $L$ *has at most finitely many ideals.* Assume (a). If $L$ contains a non-graded prime ideal $P$ and $H=P\cap E^{0}$, then, by Theorem 3.12 of [@R-1], $P=\left\langle I(H,B_{H} ),f(c)\right\rangle $, where $f(x)$ is an irreducible polynomial belonging to the Laurent polynomial ring $K[x,x^{-1}]$. Then, for each of the infinitely many irreducible polynomials $g(x)\in K[x,x^{-1}]$, we will have a prime ideal $P_{g}=\left\langle I(H,B_{H}),g(c)\right\rangle $ of $L$, a contradiction. So all the prime ideals of $L$ are graded. This proves (b). Assume (b). Since every prime ideal of $L$ is graded, $E$ satisfies Condition (K), by Corollary 3.13 of [@R-1]. 
Then, by Theorem \[Prime Intersection <=> Condition K\], every ideal of $L$ is an intersection of prime ideals. Since there are only finitely many distinct possible intersections of finitely many prime ideals, $L$ contains only a finite number of ideals, all of which are graded. The conclusion of (c) then follows from Corollary 12 of [@R-2]. It is clear that (c)$\Rightarrow$(d)$\Rightarrow$(a). \[Finite intersection of primes\] As a natural follow-up of Theorem \[Prime Intersection <=> Condition K\], one may wish to explore the conditions on $E$ under which every ideal of $L$, instead of being an intersection of possibly infinitely many prime ideals, is just an intersection of no more than a finite number of prime ideals of $L$. In this case, $\{0\}$ will be the intersection of only a finite number of prime ideals. This means $L$ itself contains only finitely many prime ideals. Since every ideal of $L$ is an intersection of prime ideals belonging to this finite set, it is clear that $L$ must then contain only a finite number of ideals. Thus, this property is equivalent to the graph $E$ satisfying condition (c) of the above Proposition \[Finitely many primes\]. Another question, which is naturally related to Theorem \[Prime Intersection <=> Condition K\], is to find conditions under which every ideal of a Leavitt path algebra $L_{K}(E)$ is an intersection of maximal ideals. For finite graphs $E$, more specifically when $E^{0}$ is finite, we have a complete, easily derivable, answer. \[Finite intersection of maximals\] Let $E$ be a graph with $E^{0}$ finite. Then the following are equivalent for $L:=L_{K}(E)$: 1. *Every ideal of* $L$ *is an intersection of maximal ideals of* $L$*;* 2. $L=S_{1}\oplus\cdot\cdot\cdot\oplus S_{n}$ *where* $n>0$ *is an integer, each* $S_{i}$ *is a graded ideal which is a simple ring with identity and* $\oplus$ *is ring direct sum;* 3. *Every ideal of* $L$ *is graded and is a ring direct summand of* $L$*;* 4.
*The graph* $E$ *satisfies Condition (K) and* $E^{0}$ *is the disjoint union of a finite number of hereditary saturated subsets* $H_{i}$ *each of which contains no non-empty proper hereditary saturated subsets of vertices.* Assume (a). Since every maximal ideal is prime, Theorem \[Prime Intersection <=> Condition K\] implies that Condition (K) holds in $E$ and so, by Theorem 6.16 of [@T], every ideal of $L$ is graded. Since $E^{0}$ is finite, we conclude, from the description of the graded ideals in Theorem 5.17, [@T], that $L$ contains only a finite number of ideals and, in particular, finitely many maximal ideals. So we can write, by hypothesis, $\{0\}=M_{1}\cap\cdot\cdot\cdot\cap M_{n}$ where the $M_{i}$ are all the maximal ideals of $L$. Now apply the Chinese Remainder Theorem to conclude that the map $a\mapsto(a+M_{1},\cdot\cdot\cdot,a+M_{n})$ is an epimorphism and hence an isomorphism $\theta$ from $L$ to $L/M_{1}\oplus\cdot\cdot\cdot\oplus L/M_{n}$. For each $i$, let $S_{i}$ be the (graded) ideal of $L$ isomorphic to $L/M_{i}$ under the isomorphism $\theta$. Then $S_{i}$ is a simple ring with identity and $L=\bigoplus\limits_{i=1}^{n}S_{i}$. This proves (b). Assume (b). Let $A$ be a non-zero ideal of $L$. Then $A=(A\cap S_{1})\oplus\cdot\cdot\cdot\oplus(A\cap S_{n})$. Since the $S_{i}$ are all simple rings, $A\cap S_{i}=0$ or $S_{i}$. Hence $A=S_{i_{1}}\oplus\cdot\cdot\cdot\oplus S_{i_{k}}$, where $\{i_{1},\cdot\cdot\cdot,i_{k}\}\subseteq\{1,\cdot\cdot\cdot,n\}$ and $A$ is a direct summand of $L$. Also $A$ is clearly a graded ideal of $L$. This proves (c). Assume (c). Since $L$ is a ring with identity $1$, $L$ contains maximal ideals $M$ which are all direct summands of $L$ and whose complements $S$ will be ideals containing no other non-zero ideals of $L$. Now the graded ideal $S$ contains local units, as it is isomorphic to a Leavitt path algebra, by Theorem 6.1 of [@RT].
This implies that every ideal of $S$ is also an ideal of $L$ and so the ideal $S$ will be a simple ring. Moreover, $S$ is generated by a central idempotent, as it is a ring direct summand of a ring with identity. By Zorn’s Lemma choose a maximal family $\{S_{i}:i\in I\}$ of distinct ideals $S_{i}$ of $L$ each of which is a simple ring. We claim that $L=\sum\limits_{i\in I}S_{i}$. Otherwise, choose an ideal $M$ maximal with respect to the property that $\sum\limits_{i\in I}S_{i}\subseteq M$, but $1\notin M$. $M$ is clearly a maximal ideal of $L$. Since $M$ is a direct summand, $L=M\oplus S$ and in this case, $\{S\}\cup\{S_{i}:i\in I\}$ violates the maximality of $\{S_{i}:i\in I\}$. Hence $L=\sum\limits_{i\in I}S_{i}$. Actually, $L=\bigoplus\limits_{i\in I}S_{i}$ as each $S_{i}$ is generated by a central idempotent. Since $L$ is a ring with identity, $I$ must be a finite set. This proves (b). Assume (b). For each $j$, let $M_{j}=\bigoplus\limits_{i\neq j,1\leq i\leq n}S_{i}$, which is a maximal ideal of $L$. Clearly $\bigcap\limits_{j=1}^{n}M_{j}=0$. Moreover, if $A$ is any ideal of $L$, then $A$ is a direct sum of a subcollection of the summands $S_{i}$ and then it is easy to see that $A=\bigcap\limits_{A\subseteq M_{j}}M_{j}$. This proves (a). We prove (b) $\Leftrightarrow$ (d). Assume (b). It is clear that each ideal $A$ of $L$ is a direct sum of a subset of the graded ideals $S_{i}$ and hence is graded. This implies that the graph $E$ satisfies Condition (K). Now for each $i$, the graded ideal $S_{i}$ is a simple ring and so contains no non-zero proper ideal of $L$. Hence $H_{i}=S_{i}\cap E^{0}$ is a hereditary saturated set and contains no proper non-empty hereditary saturated subset of vertices. Also $S_{i}S_{j}=0$ for all $i\neq j$ and this implies that the sets $H_{i}$ are all pair-wise disjoint. Further $E^{0}=\bigcup\limits_{i=1}^{n}H_{i}$. This proves (d).
Assume (d), so that, for some $n>0$, $E^{0}=\bigcup\limits_{i=1}^{n}H_{i}$, where the $H_{i}$ are pair-wise disjoint hereditary saturated subsets having no proper non-empty hereditary saturated subsets of vertices in $E$. It is clear that $$E^{0}\backslash H_{i}=\{u\in E^{0}:u\ngeqq v\text{ for any }v\in H_{i}\}.$$ For each $i=1,\cdot\cdot\cdot,n$, let $E_{i}$ be the subgraph with $(E_{i})^{0}=H_{i}$ and $(E_{i})^{1}=\{e\in E^{1}:s(e)\in H_{i}\}$. Clearly each $E_{i}$ is a complete subgraph of $E$ satisfying Condition (K) and having no proper non-empty hereditary saturated subsets of vertices. Moreover, the graphs $E_{i}$ are all pair-wise disjoint. It then follows that $L\cong\bigoplus\limits_{i=1}^{n}L_{K}(E_{i})$, where each $L_{K}(E_{i})$ is a simple ring with identity (see [@AA]). This proves (b). \[Intersection of maximals when E arbitrary\] In the case when $E$ is an arbitrary graph, we have the following (perhaps not entirely satisfactory) answer to the above question: Every ideal of $L$ is an intersection of maximal ideals if and only if every ideal of $L$ is graded and, for each ideal $A$ of $L$ (including the zero ideal), $L/A$ is a subdirect product of simple Leavitt path algebras. To see this, note that if $A=\bigcap\limits_{i\in I}M_{i}$ where the $M_{i}$ are maximal ideals, then we get a homomorphism $\theta:L\longrightarrow\prod\limits_{i\in I}L/M_{i}$ given by $x\longmapsto(\cdot\cdot\cdot,x+M_{i},\cdot\cdot\cdot)$ with $\ker(\theta)=A$. Clearly $\theta(L)$ maps onto $L/M_{i}$ under the coordinate projection $\eta_{i}:\prod\limits_{i\in I}L/M_{i}\longrightarrow L/M_{i}$. This shows that $L/A$ is a subdirect product of the simple rings $L/M_{i}$ each of which can be realized as a Leavitt path algebra as each $M_{i}$ is a graded ideal of $L$.
Conversely, suppose $L/A\subseteq\prod\limits_{i\in I}L_{i}$ is a subdirect product of simple rings $L_{i}$ and, for each $i\in I$, $\eta _{i}:\prod\limits_{i\in I}L_{i}\longrightarrow L_{i}$ is the coordinate projection. For each $i\in I$, let $A_{i}\supseteq A$ denote the ideal of $L$ such that $A_{i}=L\cap\ker(\eta_{i})$. Then it is easy to see that each $A_{i}$ is a maximal ideal of $L$ and $A=\bigcap\limits_{i\in I}A_{i}$. Prime factorization and powers of an ideal ========================================== In this section, we consider the question of factorizing an ideal of a Leavitt path algebra $L$ as a product of prime ideals. We first obtain a unique factorization theorem for a graded ideal of $L$ as a product of prime ideals. A perhaps interesting result is that if $I$ is a graded ideal and $I=P_{1}\cdot\cdot\cdot P_{n}$ is a factorization of $I$ as an irredundant product of prime ideals $P_{i}$, then necessarily all the ideals $P_{i}$ must be graded ideals and moreover, $I=P_{1}\cap\ldots\cap P_{n}$. We also point out a weaker factorization theorem for non-graded ideals as products of primes. We end this section by showing that, given any non-graded ideal $I$ in a Leavitt path algebra $L$, its powers $I^{n}$ $(n\geq1)$ are all non-graded and distinct, but the intersection of its powers $\bigcap\limits_{n=1} ^{\infty}I^{n}$ is a graded ideal and is indeed the largest graded ideal of $L$ contained in $I$. As a corollary, we obtain an analogue of Krull’s theorem for Leavitt path algebra (see [@ZS]): The intersection ${\displaystyle\bigcap\limits_{n=1}^{\infty}} I^{n}=0$ for an ideal $I$ of $L$ if and only if $I$ contains no vertices of $E$. We begin with a useful property of graded ideals. \[Intersection = product\] Let $E$ be an arbitrary graph and let $I$ be a graded ideal of $L:=L_{K}(E)$. Then $I=P_{1}\cdot\cdot\cdot P_{n}$ is a product of arbitrary ideals $P_{i}$ if and only if $I=P_{1}\cap\ldots\cap P_{n}$. Suppose $I=P_{1}\cap\ldots\cap P_{n}$. 
Clearly $P_{1}\cdot\cdot\cdot P_{n}\subseteq P_{1}\cap\ldots\cap P_{n}=I$. To prove the reverse inclusion, note that, by (Theorem 6.1, [@RT]), the graded ideal $I$ is isomorphic to a Leavitt path algebra of a suitable graph and so it contains local units. Let $a\in P_{1}\cap\ldots\cap P_{n}$. Then there is a local unit $u=u^{2}\in I$ such that $ua=a=au$. Since, for each $i$, $u\in P_{i}$, multiplying $a$ by $u$ on the right $(n-1)$ times, we obtain $a=au\ldots u\in P_{1}\cdot\cdot\cdot P_{n}$. Hence $I=P_{1}\cdot\cdot\cdot P_{n}$. Conversely, suppose $I=P_{1}\cdot\cdot\cdot P_{n}$. If $I\neq P_{1}\cap \ldots\cap P_{n}$, then, in $L/I$, $(P_{1}\cap\ldots\cap P_{n})/I$ is a non-zero nilpotent ideal whose $n-$th power is zero. This is a contradiction since $L/I$ is isomorphic to a Leavitt path algebra, as $I$ is a graded ideal. Hence $I=P_{1}\cap\ldots\cap P_{n}$. As  the operation $\cap$ is commutative, we obtain from the preceding Lemma the following corollary. \[Permuted product\] If a graded ideal $I$ of $L_{K}(E)$ is a product of ideals $I=P_{1}\cdot\cdot\cdot P_{n}$, then $I$ is equal to any permuted product of these ideals, that is, $I=P_{\sigma(1)}\cdot\cdot\cdot P_{\sigma(n)}$ where $\sigma$ is a permutation of the set $\{1,\ldots,n\}$. We now are ready to prove a uniqueness theorem in factorizing a graded ideal of a Leavitt path algebra as a product of prime ideals. Recall that $I=P_{1}\cdot\cdot\cdot P_{n}$ is an **irredundant product** of the ideals $P_{i}$, if $I$ is not the product of a proper subset of this set of $n$ ideals $P_{i}$. \[Graded Prime factorization\]Let $E$ be an arbitrary graph and let $I$ be a graded ideal of $L:=L_{K}(E)$. 1. *If* $I=P_{1}\cdot\cdot\cdot P_{m}$ *is an irredundant product of prime ideals* $P_{i}$*, then all the ideals* $P_{i}$ *are graded and* $I=P_{1}\cap\ldots\cap P_{m}$*.* 2. 
*If* $$I=P_{1}\cdot\cdot\cdot P_{m}=Q_{1}\cdot\cdot\cdot Q_{n}$$ *are two irredundant products of prime ideals* $P_{i}$ *and* $Q_{j}$*, then* $m=n$ *and* $$\{P_{1},\ldots,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}\text{\textit{.}}$$ \(a) Now, by Lemma \[Intersection = product\], $I=P_{1}\cap\ldots\cap P_{m}$ which is an irredundant intersection as $P_{1}\cdot\cdot\cdot P_{m}$ is an irredundant product. Note that the graded ideal $I\subseteq gr(P_{i})$ for all $i=1,\ldots,m$, where $gr(P_{i})$ denotes the largest graded ideal contained in $P_{i}$ (see Lemma 3.6, [@R-1]). Now $$I\subseteq gr(P_{1})\cap\ldots\cap gr(P_{m})\subseteq P_{1}\cap\ldots\cap P_{m}=I\text{.}$$ So we conclude that $I=gr(P_{1})\cap\ldots\cap gr(P_{m})$. A priori, it is not clear whether $I=gr(P_{1})\cap\ldots\cap gr(P_{m})$ is an irredundant intersection. Suppose $\{gr(P_{i_{1}}),\ldots,gr(P_{i_{k}})\}$ is a subcollection of the ideals $gr(P_{i})$ such that $I=gr(P_{i_{1}})\cap\ldots\cap gr(P_{i_{k}})$ is an irredundant intersection. Note that each $gr(P_{i_{r}})$, for $r=1,\ldots,k$, is a prime ideal, as each $P_{i_{r}}$ is a prime ideal (Lemma 3.8, [@R-1]). Then, by Proposition \[Uniqueness\], $k=m$ and $\{gr(P_{i_{1}}),\ldots,gr(P_{i_{k}})\}=\{P_{1},\ldots,P_{m}\}$. By irredundancy, each $P_{i}=gr(P_{i})$ and hence is a graded ideal. \(b) If $I=P_{1}\cdot\cdot\cdot P_{m}=Q_{1}\cdot\cdot\cdot Q_{n}$ are two irredundant products of prime ideals $P_{i}$ and $Q_{j}$, then, by Lemma \[Intersection = product\], $$I=P_{1}\cap\ldots\cap P_{m}=Q_{1}\cap\ldots\cap Q_{n}$$ are two irredundant intersections of prime ideals and so, by Proposition \[Uniqueness\], $m=n$ and $\{P_{1},\ldots,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}$*.* As noted earlier, for graded ideals $A$ and $B$, the property that $A\cap B=AB$ came in handy in proving the uniqueness theorem. This property does not always hold for non-graded ideals. For an easy example consider Example \[Laurent => No Prime Intersection\].
Observe that every non-zero ideal of $L_{K}(E)\cong K[x,x^{-1}]$ is non-graded. Let $A=\left\langle (v+c)^{2} \right\rangle $ and $B=\left\langle v-c^{2}\right\rangle $. Then $A\cap B\neq AB$ as $A\cap B=\left\langle (v+c)(v-c^{2})\right\rangle $ while $AB=\left\langle (v+c)^{2}(v-c^{2})\right\rangle $. For ideals which are not necessarily graded, we next prove a weaker version of a uniqueness theorem. For convenience, we call a product of ideals $P_{1} \cdot\cdot\cdot P_{m}$ **tight** if $P_{i}\nsubseteq P_{j}$ for all $i\neq j$. Note that a tight product of prime ideals is necessarily an irredundant product. \[Non-graded factorization\] Let $E$ be an arbitrary graph. Suppose $$A=P_{1}\cdot\cdot\cdot P_{m}=Q_{1}\cdot\cdot\cdot Q_{n}$$ are two representations of an ideal $A$ of $L_{K}(E)$ as tight products of prime ideals $P_{i}$ and $Q_{j}$. Then $m=n$ and $\{P_{1},\ldots ,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}$*.* Now the prime ideal $P_{1}\supseteq Q_{1}\cdot\cdot\cdot Q_{n}$ and so $P_{1}\supseteq Q_{i_{1}}$ for some $i_{1}$. By a similar argument, $Q_{i_{1} }\supseteq P_{j}$ for some $j$. Then $P_{1}\supseteq Q_{i_{1}}\supseteq P_{j}$ and since the product is tight, $P_{1}=P_{j}$. So $P_{1}=Q_{i_{1}}$. Next start with $P_{2}$ and proceed as before to conclude that $P_{2}=Q_{i_{2}}\neq Q_{i_{1}}$. Proceeding like this we conclude that $\{P_{1},\ldots ,P_{m}\}\subseteq\{Q_{1},\ldots,Q_{n}\}$. Reversing the role and starting with the $Q$’s and proceeding as before, we get $\{Q_{1},\ldots,Q_{n} \}\subseteq\{P_{1},\ldots,P_{m}\}$. Thus $m=n$ and $\{P_{1},\ldots ,P_{m}\}=\{Q_{1},\ldots,Q_{m}\}$. Next we consider the powers of an ideal $I$. We begin with the following useful Lemma. \[Power of a non-graded ideal\] Suppose $E$ is an arbitrary graph and $L:=L_{K}(E)$. Let $c$ be a cycle in $E$ with no exits based at a vertex $v$ and let $B=\left\langle p(c)\right\rangle $ be the ideal generated by $p(c)$ in $L$, where $p(x)=1+k_{1}x+\cdot\cdot\cdot+k_{n}x^{n}\in K[x]$. Then 1. 
$vB^{m}v=(vBv)^{m}$*, for any* $m>0$*;* 2. $B^{m}\neq B^{n}$ *for all* $0<m<n$*.* \(a) Clearly $(vBv)^{m}\subseteq vB^{m}v$. We show that $vB^{m}v\subseteq(vBv)^{m}$. Now a typical element of $vB^{m}v$ is a $K$-linear sum of finitely many terms each of which, being a product of $m$ elements of $B$ followed by multiplication by $v$ on both sides, is of the form $$v[\alpha_{1}\beta_{1}^{\ast}p(c)\gamma_{1}\delta_{1}^{\ast}][\alpha_{2}\beta_{2}^{\ast}p(c)\gamma_{2}\delta_{2}^{\ast}]\cdot\cdot\cdot\lbrack\alpha_{m}\beta_{m}^{\ast}p(c)\gamma_{m}\delta_{m}^{\ast}]v\tag{$\ast$}$$ where the $\alpha_{i},\beta_{i},\gamma_{i},\delta_{i}$ are all paths in $E$. Now $(\ast)$ can be rewritten as $$\lbrack v\alpha_{1}\beta_{1}^{\ast}p(c)v][v\gamma_{1}\delta_{1}^{\ast}\alpha_{2}\beta_{2}^{\ast}p(c)v][v\gamma_{2}\delta_{2}^{\ast}\alpha_{3}\beta_{3}^{\ast}p(c)v]\cdot\cdot\cdot\lbrack v\gamma_{m-1}\delta_{m-1}^{\ast}\alpha_{m}\beta_{m}^{\ast}p(c)\gamma_{m}\delta_{m}^{\ast}v]$$ which is clearly a product of $m$ elements of $vBv$ and hence belongs to $(vBv)^{m}$. Thus $vB^{m}v\subseteq(vBv)^{m}$ and we are done. \(b) Since $c$ has no exits, $vLv\overset{\theta}{\cong}K[x,x^{-1}]$ where $\theta$ maps $v$ to $1$, $c$ to $x$ and $c^{\ast}$ to $x^{-1}$. As $vp(c)v=p(c)$, $vBv=B\cap vLv$ contains $p(c)$ and is the ideal generated by $p(c)$ in $vLv$. Thus $vBv$ is isomorphic to the ideal $\left\langle p(x)\right\rangle$ in $K[x,x^{-1}]$ under the map $\theta$. If $B^{m}=B^{n}$ for some $0<m<n$, then $vB^{m}v=vB^{n}v$. By (a), we then get $(vBv)^{m}=(vBv)^{n}$ and this implies that, in the principal ideal domain $K[x,x^{-1}]$, $\left\langle p(x)^{m}\right\rangle=\left\langle p(x)^{n}\right\rangle$ for $0<m<n$, a contradiction. Hence $B^{m}\neq B^{n}$ for all $0<m<n$. Observe that if $I$ is an ideal of $L_{K}(E)$ such that $I^{n}$ is a graded ideal for some $n>1$, then $I$ must be a graded ideal. Indeed $I=I^{n}$.
Because, if $I\neq I^{n}$, then $I/I^{n}$ becomes a non-zero nilpotent ideal in $L_{K}(E)/I^{n}$, a contradiction as $L_{K}(E)/I^{n}$ is isomorphic to a Leavitt path algebra and its Jacobson radical is zero. It then follows from the preceding observation that if $I$ is a non-graded ideal of $L_{K}(E)$, then for any integer $n>0$, $I^{n}$ must also be a non-graded ideal. But, as Theorem \[Intersection of powers\] below shows, the intersection of its powers $\bigcap\limits_{n=1}^{\infty}I^{n}$ must be a graded ideal. \[Intersection of powers\]Let $I$ be a non-graded ideal of a Leavitt path algebra $L$ of an arbitrary graph $E$. If $H=I\cap E^{0}$ and $S=\{v\in B_{H}:v^{H}\in I\}$, then $\bigcap\limits_{n=1}^{\infty}I^{n}=I(H,S)$, the largest graded ideal contained in $I$. Now $L/I(H,S)\cong L_{K}(E\backslash(H,S))$. Identifying $\bar{I}=I/I(H,S)$ with its isomorphic image in $L_{K}(E\backslash(H,S))$, $\bar{I}$ is an ideal containing no vertices and so, by (Proposition 2, [@R-2]), $\bar{I}$ is generated by an orthogonal set $\{p_{j}(c_{j}):j\in Y\}$, where $Y$ is an index set and, for each $j\in Y$, $c_{j}$ is a cycle without exits in $E\backslash(H,S)$ and $p_{j}(x)\in K[x]$. For each $j\in Y$, let $A_{j}$ be the ideal generated by the vertices on $c_{j}$. It was shown in (Proposition 3.5(ii), [@AAPS]) that the ideal sum $\sum\limits_{j\in Y}A_{j}=\bigoplus\limits_{j\in Y}A_{j}$. Since $p_{j}(c_{j})\in A_{j}$, $\bar {I}=\bigoplus\limits_{j\in Y}B_{j}$ where $B_{j}$ is the ideal generated by $p_{j}(c_{j})$. By Proposition 3.5(iii) of [@AAPS], each $A_{j}$ is isomorphic to $M_{\Lambda_{j}}(K[x,x^{-1}])$, where $\Lambda_{j}$ is a suitable index set representing the number of paths that end at $c_{j}$ but do not contain all the edges of $c_{j}$. So $B_{j}$ is isomorphic to an ideal $N_{j}$ of $M_{\Lambda_{j}}(K[x,x^{-1}])$. 
As $M_{\Lambda_{j}}(K[x,x^{-1}])$ is Morita equivalent to $K[x,x^{-1}]$ ([@A], [@AM]), there is a lattice isomorphism $\phi:\mathbf{L\longrightarrow L}^{\prime}$, where $\mathbf{L,L}^{\prime}$ are the lattices of ideals of $M_{\Lambda_{j}}(K[x,x^{-1}])$ and $K[x,x^{-1}]$, respectively [@AF]. Now, for a fixed $j$, $B_{j}^{m}\neq B_{j}^{n}$ for all $0<m<n$, by Lemma \[Power of a non-graded ideal\]. Hence the corresponding ideal $N_{j}$ also satisfies $N_{j}^{m}\neq N_{j}^{n}$ for all $0<m<n$. So we get an infinite descending chain of ideals $$\phi(N_{j})\supset\phi(N_{j}^{2})\supset\ldots\supset\phi(N_{j}^{n})\supset\ldots\tag{$\ast\ast$}$$ in $K[x,x^{-1}]$. Let $N=\bigcap\limits_{n=1}^{\infty}\phi(N_{j}^{n})$. If $N\neq0$, then $K[x,x^{-1}]/N$ satisfies the descending chain condition, as $K[x,x^{-1}]$ is a principal ideal domain (see e.g. Theorem 32, Ch. IV-15, [@ZS]). This is a contradiction since the chain $(\ast\ast)$ induces an infinite descending chain of ideals in $K[x,x^{-1}]/N$. Hence $N=\bigcap\limits_{n=1}^{\infty}\phi(N_{j}^{n})=0$. Then Lemma \[Lattice Iso\] implies that $\bigcap\limits_{n=1}^{\infty}B_{j}^{n}=0$ for each $j$. Since $\bar{I}$ is a (ring) direct sum of the ideals $B_{j}$, $\bigcap\limits_{n=1}^{\infty}(\overline{I})^{n}=0$. This means that $\bigcap\limits_{n=1}^{\infty}I^{n}=I(H,S)$. As noted in Lemma 3.6 of [@R-1], $I(H,S)$ is the largest graded ideal of $L$ contained in $I$. This completes the proof. In the proof of Theorem \[Intersection of powers\], observe that the direct sum of ideals $\bar{I}=\bigoplus\limits_{j\in Y}B_{j}$ satisfies $(\bar{I})^{m}\neq(\bar{I})^{n}$ for all positive integers $m\neq n$, as each $B_{j}$ satisfies the same property. Clearly, the same holds for the ideal $I$. Thus every non-graded ideal $I$ of a Leavitt path algebra satisfies $I^{m}\neq I^{n}$ for all integers $0<m<n$. Moreover, by the statements preceding Theorem \[Intersection of powers\], each $I^{n}$ must be a non-graded ideal. W.
Krull showed that if $I$ is an ideal of a noetherian integral domain $R$, then $\bigcap\limits_{n=1}^{\infty}I^{n}=0$ and more generally, if $R$ is a commutative noetherian ring, then $\bigcap\limits_{n=1}^{\infty}I^{n}=0$ if and only if $1-x$ is not a zero divisor for all $x\in I$ (see Theorem 12, Section 7 in [@ZS]). From Theorem \[Intersection of powers\] and its proof, using the fact that a non-zero graded ideal always contains a vertex, one can easily obtain the following analogue of Krull’s theorem for Leavitt path algebras. \[Krull’s theorem\] Suppose $I$ is an ideal of a Leavitt path algebra $L_{K}(E)$ of an arbitrary graph $E$. Then the intersection ${\displaystyle\bigcap\limits_{n=1}^{\infty}}I^{n}=0$ if and only if $I$ contains no vertices of $E$. **Acknowledgement:** We thank Professor Astrid an Huef for raising the question of intersections of primitive ideals in a Leavitt path algebra and Professor Pere Ara for helpful preliminary discussion on this question during the CIMPA conference. The first and the second authors also thank Duzce University Distance Learning Center (UZEM) for using UZEM facilities while conducting this research. [99]{} G. Abrams, Morita equivalence for rings with local units, Commun. Algebra, **11** (1983)**,** 801 - 837. G. Abrams, G. Aranda Pino, The Leavitt path algebra of a graph, J. Algebra **293** (2005)**,** 319-334. G. Abrams, G. Aranda Pino, F. Perera and M. Siles Molina, Chain conditions for Leavitt path algebras, Forum Math. **22** (2010), 95 - 114. G. Abrams, J. Bell, P. Colak and K.M. Rangaswamy, Two-sided chain conditions in Leavitt path algebras over arbitrary graphs, Journal Alg. Applications, **11** (2012), 125044 (23 pages). G. Abrams, J. Bell and K.M. Rangaswamy, On prime non-primitive von Neumann regular algebras, Trans. Amer. Math. Soc. **366** (2014), 2375 - 2392. F.W. Anderson and K.R. Fuller, Rings and categories of modules, Graduate Texts in Math. 
**13**, Springer-Verlag, Berlin-Heidelberg-New York (1974). P.N. Anh and L. Marki, Morita equivalence for rings without identity, Tsukuba J. Math. **11** (1987), 1 - 16. J. Dixmier, C\*-algebras, North Holland Publ. Co., Amsterdam (1977). K.M. Rangaswamy, The theory of prime ideals of Leavitt path algebras over arbitrary graphs, J. Algebra **375** (2013), 73 - 96. K.M. Rangaswamy, On generators of two-sided ideals of Leavitt path algebras over arbitrary graphs, Communications in Algebra **42** (2014), 2859 - 2868. E. Ruiz and M. Tomforde, Ideals in graph algebras, Algebr. Represent. Theory **17** (2014), 849 - 861. M. Tomforde, Uniqueness theorems and ideal structure of Leavitt path algebras, J. Algebra **318** (2007), 270 - 299. O. Zariski and P. Samuel, Commutative algebra, vol. 1, van Nostrand Company (1969).
--- abstract: 'We propose the application of multiple-bases belief-propagation, an optimized iterative decoding method, to a set of rate-1/2 LDPC codes from the IEEE 802.16e WiMAX standard. The presented approach allows for improved decoding performance when signaling over the AWGN channel. As all required operations for this method can be run in parallel, the decoding delay of this method and standard belief-propagation decoding are equal. The obtained results are compared to the performance of LDPC codes optimized with the progressive edge-growth algorithm and to bounds from information theory. It will be shown that the discussed method mitigates the gap to the well-known random coding bound by about 20 percent.' author: - | Thorsten Hehn, Johannes B. Huber, Stefan Laendner\ Chair for Information Transmission (LIT)\ University of Erlangen-Nuremberg, Germany\ $\{$hehn, huber, laendner$\}$@LNT.de bibliography: - 'LDPC\_Group\_Bibfile.bib' title: '[**[MBBP for improved iterative channel decoding in 802.16e WiMAX systems]{}**]{}' --- Introduction {#sec:introduction} ============ The use of belief-propagation (BP) decoding [@pearl88] with redundant parity-check matrix representations has drawn a lot of attention. Several authors [@schwartzetal06; @hanetal07; @hollmannetal07; @weberetal05; @hehnetal07b] presented pioneering work on the binary erasure channel (BEC) and provided results on the number of redundant parity-check equations required to prevent certain decoder failures. The concepts used on the BEC cannot be transferred to the additive white Gaussian noise (AWGN) channel in a straightforward manner. For this reason, several authors designed algorithms to use redundant code descriptions for BP decoding of data signaled over the AWGN channel. A proof of concept using the extended Golay code of length $24$ was already given in [@andrewsetal02]. In [@kothiyaletal05] and [@jiangetal04a], adaptive BP algorithms were proposed. 
These algorithms adjust the parity-check information for each iteration, taking into account the current decoder state. They require additional operations which cannot be parallelized and hence increase the delay of the data stream. The random redundant decoding (RRD) algorithm [@halfordetal06] uses multiple parity-check matrix representations in a serial fashion to decode block codes in an iterative manner and achieves good performance improvements. After a given number of iterations it stores the current decoding state, changes the parity-check matrix and resumes decoding. For this reason it has to conduct many iterations and thus imposes a high decoding delay. A recent paper by the authors of the RRD algorithm [@halfordetal08] shows that the field of application of this algorithm is obviously restricted to algebraic codes. We proposed the multiple-bases belief-propagation (MBBP) [@hehnetal07] algorithm, which uses redundant parity information in a completely parallel setting and reported good results with algebraic codes [@hehnetal07], as well as LDPC codes optimized by the progressive edge-growth (PEG) algorithm [@hehnetal08; @hehn09]. In [@hehnetal08] we introduced the Leaking algorithm, which is a modified BP decoding algorithm. It was shown that the combination of MBBP and the Leaking algorithm is a valuable tool to improve the decoding performance if a low number of redundant parity checks is available. In this paper, we extend the field of application of MBBP to iteratively decoded channel codes from the Worldwide Interoperability for Microwave Access (WiMAX) standard [@ieee_std_802_16e_05] and demonstrate the effectiveness of the algorithm for this class of codes. Also, we compare the performance of these codes to the performance of optimized PEG codes of comparable length, both for BP and MBBP decoding. The paper is structured as follows. 
In Section \[sec:mbbp\_decoding\] we describe the transmission setup and review the basic principles of MBBP decoding. Section \[sec:matrix\_representations\] states how a set of different parity-check matrix representations is generated, and Section \[sec:results\] presents a selection of results including the comparison to PEG codes with equal rate and similar length. Transmission setup and channel coding {#sec:mbbp_decoding} ===================================== In this section we introduce a consistent notation and give a proper definition of the channel setup. A source emits non-redundant binary information symbols $u$, which are encoded and mapped to binary antipodal symbols $x$. As systematic encoding leads to several advantages [@hehnetal07e], this type of encoding is used throughout this work. Due to the fact that exclusively $[n,k,d]$ block codes are used in this investigation, the encoded symbols are denoted as vectors $\ve{x}$ of length $n$. These vectors are transmitted over the AWGN channel. In this context, $\ve{y}$ denotes the noisy received vector corresponding to $\ve{x}$. At the receiver, an iterative decoding scheme is used to estimate $\ve{x}$ and the corresponding source symbols. This scheme is either a standard BP decoder or the MBBP decoding setup. Let us briefly review the basic properties of MBBP, which allows for the performance improvements discussed in this paper. MBBP is an iterative decoding scheme, originally designed to decode block codes with dense parity-check matrices [@hehnetal07c]. To this end, it runs multiple instances of the BP decoding algorithm in parallel. Each of these decoders is provided with the received signal $\ve{y}$ and a different parity-check matrix for the code. In this context, we denote the $l$ parity-check matrix representations by $\ve{H}_1$ to $\ve{H}_{l}$. The corresponding codeword estimates are $\hat{\ve{x}}_1$ to $\hat{\ve{x}}_l$ and the candidate forwarded to the information sink is $\hat{\ve{x}}$. 
![MBBP decoding setup[]{data-label="fig:MBBP_NX_S"}](./eps/parallel_decoder_structure.eps "fig:") This algorithm is motivated by the fact that different parity-check matrices allow for decoding of different error patterns when the suboptimal BP decoding algorithm is used. This can be understood as “decoder diversity”. It is the task of the post-processing unit to combine these capabilities such that one single decoder estimate can be forwarded to the information sink. This is a degree of freedom in the MBBP design. Moreover, there is the option of allowing the decoders to communicate with each other. A multitude of variations of this type have been introduced in [@hehnetal07c]. In this work, we focus on non-communicating parallel BP decoders and a post-processing unit that deploys the usual Euclidean distance metric. It uses this metric to select the codeword which is closest to the received vector $\ve{y}$ from the decoder outputs (MBBP-NX-S setup in [@hehnetal07c]). Figure \[fig:MBBP\_NX\_S\] visualizes this approach. We will also make use of the Leaking algorithm to improve the decoding performance of WiMAX LDPC codes. This method keeps channel information from the decoder and allows it only to “leak” into the decoding process with rising iteration number. It was shown that this method mitigates the problems of BP decoding with short cycles. In order to use this approach, two variables need to be set: $p_{\mathcal{L}}$, the probability of a variable node being informed of the channel output in the first iteration, as well as the parameter $I'_{\mathrm{max}}$, the (hypothetical) iteration number for which all channel information is included in the decoding process.
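The Euclidean selection rule of the post-processing unit can be sketched in a few lines. This is a minimal illustration, not code from the paper: the function name `mbbp_select` and the toy vectors are ours, and we assume the decoder outputs are already available as binary antipodal ($\pm 1$) symbol vectors.

```python
import numpy as np

def mbbp_select(y, candidates):
    """Return the codeword estimate closest to the received vector y
    in squared Euclidean distance (the selection metric of MBBP-NX-S)."""
    distances = [float(np.sum((y - x_hat) ** 2)) for x_hat in candidates]
    return candidates[int(np.argmin(distances))]

# Toy example: noisy received vector and two parallel BP decoder outputs.
y = np.array([0.9, -1.1, 0.2, -0.8])
x_hat_1 = np.array([1.0, -1.0, 1.0, -1.0])   # squared distance 0.70
x_hat_2 = np.array([1.0, -1.0, -1.0, -1.0])  # squared distance 1.50
best = mbbp_select(y, [x_hat_1, x_hat_2])    # selects x_hat_1
```

In the actual setup, each candidate $\hat{\ve{x}}_l$ comes from a BP decoder run on its own parity-check matrix $\ve{H}_l$; since the decoders do not communicate, all $l$ instances can run in parallel and the final selection adds no iteration-dependent delay.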
Detailed information on this algorithm can be found in [@hehnetal08]. We refer to an MBBP setup using the Leaking algorithm by L-MBBP. The WiMAX standard provides a multitude of channel codes. A performance comparison for signaling over the AWGN channel, confirming that the class of LDPC codes is among the most powerful codes in this setup, can be found in [@baumgartneretal05]. Inspired by its short length and a discussion on the practical relevance of these codes [@kienle08], the rate-$1/2$ LDPC codes proposed in the IEEE 802.16e standard [@ieee_std_802_16e_05] are investigated in this work. As we are generally interested in codes of short length, we restrict our attention to codes of length $576\leq n\leq 960$. For comparison we also consider PEG-optimized codes of rate $1/2$ and length $500\leq n\leq 1000$. Parity-check matrix representations {#sec:matrix_representations} =================================== We review a general method to construct redundant parity-checks from a given matrix and present a novel method for LDPC codes from the WiMAX standard. A comparison shows the advantage of the second approach. Set of matrix representations for MBBP decoding {#sec:matrix_representations_for_mbbp} ----------------------------------------------- The most crucial parameter for the success of MBBP decoding is the set of parity-check matrices used in the decoding instances. Especially simulations with PEG codes [@hehnetal08] have shown that the applied matrices need to fulfill two criteria. First, the Tanner graphs [@tanner81] of the matrices need to differ sufficiently in their structure such that the decoders obtain a decoding diversity and a performance improvement. Second, the decoders running on the parity-check matrices need to obtain comparable performance results. Adding a representation to an existing MBBP setup can only increase the overall performance if its standard BP performance is comparable to the performance of the current MBBP setup. 
In [@hehnetal08] a general method to construct a set of redundant parity-check matrices for a given code was presented. This method was originally intended to provide good redundant parity-check matrices for PEG-constructed codes of short length and makes use of the fact that there exist cycles of length $4$ and $6$ in the Tanner graph. Especially for code lengths $n\leq 1000$, many additional parity-check equations can be found with this method. The aim of this approach, to be described in detail shortly, is to approximate the property “low-density” for the additional checks. Let $c$ be the length of the considered cycle and let ${\mathcal{G}}_c$ be one set of indices of parity checks closing a cycle of length $c$. A linear combination of the parity checks indexed by the set ${\mathcal{G}}_c$ leads to a novel parity-check equation with a (Hamming) weight of at most $$w_{\mathrm{r}}=\sum\limits_{i\in{\mathcal{G}}_c} w_i - c, \label{eq:weight_redundant_row}$$ where $w_i$ denotes the weight of parity check $i$. This is a general approach and can be used for any parity-check matrix with a local cycle length of $c$. It was shown in [@hehnetal08] and [@hehn09] that this approach leads to desirable performance results when using PEG codes. However, the parity-check matrices of the WiMAX codes show more structure, which allows for a better construction algorithm. Parity-check matrices for codes specified in the IEEE 802.16e standard {#sec:matrix_representations_80216e} ---------------------------------------------------------------------- The LDPC codes of rate $1/2$ standardized in [@ieee_std_802_16e_05] are all derived from one base matrix $\ve{H}'_{\mathrm{b}}$. The realizations of different lengths are created from this matrix by [*lifting*]{} [@tanner81]. Prior to this step, a renormalization is done, i.e.
the lifting procedure is applied to the elements $$H_{\mathrm{b}}(i,j)=\left\{\begin{array}{ccc}\left\lfloor\frac{H'_{\mathrm{b}}(i,j)\cdot z}{96}\right\rfloor&\mbox{ if }& H'_{\mathrm{b}}(i,j)>0\\H'_{\mathrm{b}}(i,j)&\mbox{ if }&H'_{\mathrm{b}}(i,j)\leq 0\end{array}\right.$$ of the matrix $\ve{H}_{\mathrm{b}}$. $$\arraycolsep0.65mm \ve{H}'_{\mathrm{b}}=\left( \begin{array}{rrrrr rrrrr rrrrr rrrrr rrrr} -1 & 94 & 73 & -1 & -1 & -1 & -1 & -1 & 55 & 83 & -1 & -1 & 7 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & 27 & -1 & -1 & -1 & 22 & 79 & 9 & -1 & -1 & -1 & 12 & -1 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & 24 & 22 & 81 & -1 & 33 & -1 & -1 & -1 & 0 & -1 & -1 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ 61 & -1 & 47 & -1 & -1 & -1 & -1 & -1 & 65 & 25 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & 39 & -1 & -1 & -1 & 84 & -1 & -1 & 41 & 72 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & 46 & 40 & -1 & 82 & -1 & -1 & -1 & 79 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & 95 & 53 & -1 & -1 & -1 & -1 & -1 & 14 & 18 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 & -1 & -1 \\ -1 & 11 & 73 & -1 & -1 & -1 & 2 & -1 & -1 & 47 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 & -1 \\ 12 & -1 & -1 & -1 & 83 & 24 & -1 & 43 & -1 & -1 & -1 & 51 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 94 & -1 & 59 & -1 & -1 & 70 & 72 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & -1 \\ -1 & -1 & 7 & 65 & -1 & -1 & -1 & -1 & 39 & 49 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 & 0 \\ 43 & -1 & -1 & -1 & -1 & 66 & -1 & 41 & -1 & -1 & -1 & 26 & 7 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & 0 \\ \end{array}\right) \label{eq:base_matrix}$$ In this context, $z$ is the *expansion factor* and depends on the code realization. 
The lifting procedure, from which the parity-check matrix $\ve{H}$ results, is described as follows. Each negative entry in the base matrix $\ve{H}_{\mathrm{b}}$ is replaced by a $z\times z$ zero matrix and each non-negative element $H_{\mathrm{b}}(i,j)$ is substituted by an identity matrix which is cyclically shifted to the right by $H_{\mathrm{b}}(i,j)$ positions. Equation (\[eq:base\_matrix\]) specifies the base matrix $\ve{H}_{\mathrm{b}}'$ for the rate-$1/2$ LDPC code [@ieee_std_802_16e_05 p. 628]. Performing the lifting approach leads to the binary matrix $\ve{H}$. Considering that any entry in $\ve{H}_{\mathrm{b}}$ is replaced by a permutation matrix with constant row weight one, it is easy to see from Equation (\[eq:base\_matrix\]) that the weight of any parity check of $\ve{H}$ is $6$ or $7$, regardless of the actual length $n$ of the code. The girth of the code was found to be $6$ for all lengths considered. It is now our task to determine parity checks which are linear combinations of the given parity checks and have as low a weight as possible. Redundant parity checks from $\ve{H}_{\mathrm{b}}$ {#sec:redundant_checks_from_base_matrix} -------------------------------------------------- The novel approach for creating redundant parity-check equations uses the base matrix $\ve{H}_{{\mathrm{b}}}$ instead of the binary matrix $\ve{H}$ to find valid linear combinations. Subsequently it performs the lifting operation on the redundant checks. Let us elaborate on the generation of these checks. In a binary matrix, a redundant check can be found as a linear combination of two or more existing checks. This procedure is in general not possible when the base matrix $\ve{H}_{\mathrm{b}}$ is considered, as the addition of two entries is not defined. However, the addition of a negative and a non-negative element, as well as the addition of two zero elements, is a straightforward task.
The result of the addition is the non-negative element and the element $-1$, respectively. Using this approach, redundant checks can be created by the linear combination of two existing checks which do not share a positive element in any column. Lifting a redundant check leads to a set of $z$ checks for the binary matrix $\ve{H}$, which are subsequently used to create sets of non-equal, binary parity-check matrices. As an example, we state that the linear combination of rows $11$ and $12$ in $\ve{H}_{\mathrm{b}}$ leads to $z$ binary redundant checks of weight $10$, since the non-negative entries in rows $11$ and $12$ have disjoint column positions, except for the last column, which contains zero entries. Depending on the length of the code, we replace $10$ to $16$ parity checks in the existing parity-check matrix to generate a new representation. At this step, we ensure that the resulting parity-check matrix has full rank. Let us now compare this result to the approach from Section \[sec:matrix\_representations\_for\_mbbp\] by means of the WiMAX code of length $n=576$. The local girth of its parity-check matrix varies between $c=6$ and $c=8$. Using Equation (\[eq:weight\_redundant\_row\]) and $c=6$, it can be deduced that additional parity-check representations have a weight of at most $12$ to $15$, depending on the actual parity checks used to create the linear combinations. It was verified by computer simulations that this bound is met by the realizations. The authors are aware of the fact that this novel method is still a suboptimal approach and therefore assess it in a more general manner. The methods provided in [@Huetal04] allow for an efficient search of low-weight codewords. Applying these methods to the dual of the IEEE 802.16e rate-$1/2$ code of length $576$ did not return any codewords of weight below $10$ which are not already present in the rows of the original parity-check matrix.
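The rows-$11$-and-$12$ example can be checked numerically. The sketch below is our own minimal illustration (helper names are ours, and we fix $z=24$, i.e. the length-$576$ realization); it lifts the two base rows and verifies that each of the $z$ combined binary checks indeed has weight $10$:

```python
import numpy as np

z = 24  # expansion factor of the length-576 code (n = 24 * z)

# Rows 11 and 12 of the base matrix H'_b in Equation (eq:base_matrix);
# -1 stands for a z-by-z all-zero block.
row11 = [-1, -1,  7, 65, -1, -1, -1, -1, 39, 49, -1, -1,
         -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,  0,  0]
row12 = [43, -1, -1, -1, -1, 66, -1, 41, -1, -1, -1, 26,
          7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,  0]

def renormalize(row, z):
    """Rescale the shift values from the mother expansion factor 96 to z."""
    return [(e * z) // 96 if e > 0 else e for e in row]

def lift_row(row, z):
    """Expand one base row into z binary parity checks (circulant lifting)."""
    blocks = []
    for e in row:
        if e < 0:
            blocks.append(np.zeros((z, z), dtype=np.uint8))
        else:
            # identity matrix cyclically shifted to the right by e positions
            blocks.append(np.roll(np.eye(z, dtype=np.uint8), e, axis=1))
    return np.hstack(blocks)

H11 = lift_row(renormalize(row11, z), z)
H12 = lift_row(renormalize(row12, z), z)

# The non-negative entries of rows 11 and 12 overlap only in the last
# column (both shifts are 0), so the aligned identity blocks cancel and
# every one of the z combined checks has Hamming weight 6 + 6 - 2 = 10.
redundant = H11 ^ H12
print(sorted({int(w) for w in redundant.sum(axis=1)}))  # -> [10]
```

The same routine applied to a pair of base rows sharing a positive element would not yield a valid redundant check, in line with the restriction stated above.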
These findings allow us to conclude that the proposed method is well suited for the considered class of codes. Results and Comparison {#sec:results} ====================== We present simulation results for codes from the WiMAX standard of different length. In this context, we apply standard BP decoding as well as the (L-)MBBP approach to show the performance improvements obtained with this method. We allow all BP decoding units to perform at most $200$ iterations. This is a suitable choice, as a further increase does not improve the standard BP decoding performance significantly. We also limit the number of different parity-check matrices in an MBBP setup to $15$ and allow leaking with an initial setting of $p_{\mathcal{L}}=0.9$, which results in a maximum number of $30$ decoders in parallel. The current development of multiprocessor techniques [@vangaletal07] allows us to state that this setting can easily be parallelized with upcoming microcontroller techniques. Furthermore we set the parameter $I'_{\mathrm{max}}=300$, as this setting leads to desirable results in our computer simulations. Figure \[fig:ieee\_80216\_15\_decoders\] shows performance results for the WiMAX codes of length $n=576$ and $n=960$. In order to emphasize that the bigger part of the decoding gain is already obtained by a low number of decoder representations, we show different MBBP settings. To be precise, we allow $l=7$, $l=15$, and $l=30$ representations to run in parallel. In Figure \[fig:ieee\_80216\_15\_decoders\] we observe that the most prominent part of the decoding gain is already achieved with $7$ decoders in parallel and another small gain is achieved for $l=15$. The setup using L-MBBP and utilizing $30$ decoders in total compares favorably, but the difference is small in relation to the number of decoders additionally required. Using all decoding units, the proposed multi-decoding approach improves the performance of WiMAX codes by about $0.15$ dB.
The random coding bound (Gallager bound) [@gallager68] marks desirable performance values and is shown for comparison reasons. In order to provide performance results on the ${{\mathrm{BER}}}$, we estimate the minimum distance $d$ of random codes of given length and rate by means of the Gilbert–Varshamov bound [@macwilliamsetal77] and assume for the random coding bound that $d$ errors occur in an erroneously decoded frame. Details on this approach can be found in [@hehnetal07e]. In a next step, we assess our results in a more general setting and compare the codes defined in the WiMAX standard to PEG-optimized codes of equal rate and length $500\leq n \leq 1000$. We also compare these results to the Gallager bound and the sphere packing bound. Detailed information on the latter bound can be found in [@shannon59]. For the PEG codes, we use the optimized degree distribution $$\begin{aligned} L(x)&=&0.5043865558\cdot x^2+0.2955760529\cdot x^3+\nonumber\\&&0.0572634080\cdot x^5+0.0362602194\cdot x^6+\nonumber\\&&0.0049622081\cdot x^7+0.0292344776\cdot x^9+\nonumber\\&&0.0650312477\cdot x^{11}+0.0072858305\cdot x^{12} \label{eq:degree_distribution}\end{aligned}$$ from [@urbanke], which has a gap to capacity of about $0.2$ dB and leads to desirable results for the code lengths of interest [@hehnetal07e]. Within the error region of interest, the created ensembles show strictly concentrated behavior, which allows us to study the subsequent results independent of the random seed used for the construction algorithm.
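The minimum-distance estimate mentioned above can be sketched as follows. The sketch uses the textbook Gilbert–Varshamov condition $\sum_{i=0}^{d-2}\binom{n-1}{i} < 2^{n-k}$, which may differ in detail from the variant used in [@hehnetal07e]:

```python
from math import comb

def gv_distance(n, k):
    """Largest d for which the Gilbert-Varshamov condition
    sum_{i=0}^{d-2} C(n-1, i) < 2^(n-k) guarantees an (n, k, d) code."""
    bound = 2 ** (n - k)
    d, partial = 1, 0  # the sum is empty for d = 1
    while partial + comb(n - 1, d - 1) < bound:
        partial += comb(n - 1, d - 1)
        d += 1
    return d

# Sanity checks against classical codes:
print(gv_distance(7, 4))      # -> 3 (matches the [7,4,3] Hamming code)
print(gv_distance(23, 12))    # -> 5 (the binary Golay code exceeds this, d = 7)
print(gv_distance(576, 288))  # estimate for the rate-1/2, n = 576 setting
```

The guaranteed distance is then combined with the assumption that an erroneously decoded frame contains $d$ bit errors to translate the frame-error bound into a ${{\mathrm{BER}}}$ bound.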
![image](./eps/wimax_and_peg_required_SNR_over_codelength_ber.eps) ![image](./eps/wimax_and_peg_required_SNR_over_codelength_fer.eps) ![image](./eps/wimax_and_peg_external_legend.eps) (Legend: WiMAX codes with $n=576$, $672$, $768$, $864$, $960$; PEG codes with $n=500$, $600$, $700$, $800$, $900$, $1000$.) Figure \[fig:wimax\_and\_peg\_required\_SNR\_over\_codelength\] shows the signal-to-noise ratio [$10\cdot\log_{10}(E_{\mathrm{b}}/N_0)$]{} which is required to obtain the reliability criterion ${{\mathrm{BER}}}=10^{-5}$ and ${{\mathrm{FER}}}=10^{-3}$, respectively. Plotted are results for WiMAX codes and PEG-optimized codes for both BP and L-MBBP decoding, as well as the Gallager bound. In order to keep an appropriate presentation of the numerical results, but still provide the reader with an idea of the position of the sphere packing bound (SPB), we choose to plot only its left-most part and state that its shape is shown to be similar to the shape of the Gallager bound in [@hehnetal07e]. Let us first consider the results for the WiMAX codes. In our simulations, the codes showed slightly different error-floor behavior. This holds in particular for the code of length $n=864$. We observe that a gain of about $0.15$ dB is achieved for all code lengths considered. From the plot for ${{\mathrm{FER}}}=10^{-3}$ and the code of length $n=960$ we observe that the gap to the Gallager bound is about $0.7$ dB, which can be lowered by $0.14$ dB (or $20$ $\%$) with the L-MBBP approach. Similar results are presented for the PEG-optimized LDPC codes, where we also restrict the maximum number of decoders in parallel to $30$.
The actual number is however lower due to a lack of well-performing representations. The PEG codes show the desired performance results at about $0.15$ dB lower signal-to-noise ratios. Again, the L-MBBP approach reduces the gap to the random coding bound by about $20$ $\%$. It is worth mentioning that the codes defined in the WiMAX standard have a significantly lower density compared to the considered PEG codes. This allows for faster decoding with the BP algorithm. If one considers not only the length but also the decoding speed as a system parameter, the standardized codes are comparable to the PEG-optimized codes discussed in this work. Detailed results on this comparison can be found in [@hehn09]. Conclusions =========== The contribution of this paper is two-fold. First, we adapted the scheme of MBBP decoding to codes from the WiMAX standard, which allowed us to improve the decoding performance by about $0.15$ dB. As a second contribution, we compared the performance of the WiMAX codes with the performance of PEG codes. Both for BP decoding and MBBP decoding, the PEG codes obtained a measurable performance improvement compared to the codes in the WiMAX standard.
--- abstract: 'We study the K-theory of actions of diagonalizable group schemes on noetherian regular separated algebraic spaces: our main result shows how to reconstruct the K-theory ring of such an action from the K-theory rings of the loci where the stabilizers have constant dimension. We apply this to the calculation of the equivariant K-theory of toric varieties, and give conditions under which the Merkurjev spectral sequence degenerates, so that the equivariant K-theory ring determines the ordinary K-theory ring. We also prove a very refined localization theorem for actions of this type.' address: - | Dipartimento di Matematica Applicata\ Università di Firenze\ I-50139 Firenze\ Italy - | Dipartimento di Matematica\ Università di Bologna\ 40126 Bologna\ Italy author: - Gabriele Vezzosi - Angelo Vistoli date: 'September 18, 2004' title: | Higher algebraic K-theory for actions\ of diagonalizable groups --- [^1] Introduction {#introduction .unnumbered} ============ Fix a base noetherian separated connected scheme $S$, and let $G$ be a diagonalizable group scheme of finite type over $S$ (see [@sga3 Exposé VII]); recall that this means that $G$ is the product of finitely many multiplicative groups ${\mathbb{G}_{\mathrm{m},S}}$ and group schemes $\boldsymbol{\mu}_{n,S}$ of $n^\mathrm{th}$ roots of 1 for various values of $n$. Suppose that $G$ acts on a separated noetherian regular algebraic space $X$ over $S$. If $G$ acts on $X$ with finite stabilizers, then [@vevi] gives a decomposition theorem for the equivariant higher K-theory ring ${\operatorname{K}_{*}}(X,G)$; it says that, after inverting some primes, ${\operatorname{K}_{*}}(X,G)$ is a product of certain factor rings ${\operatorname{K}_{*}}(X^\sigma,G)_\sigma$, one for each subgroup scheme $\sigma \subseteq G$ with $\sigma \simeq \boldsymbol{\mu}_n$ for some $n$ and $X^\sigma \neq \emptyset$ (the primes to be inverted are precisely the ones dividing the orders of the $\sigma$).
A slightly weaker version of this theorem was given in [@toen]. From this one can prove analogous formulas assuming that the stabilizers are of constant dimension (Theorem \[thm:refinedconstant\]). This paper deals with the general case, when the dimensions of the stabilizers are allowed to jump. In this case one sees already in the simplest examples that ${\operatorname{K}_{*}}(X,G)$ will not decompose as a product, not even after tensoring with $\mathbb{Q}$; for example, if $S$ is a field, $G$ is a torus and $X$ is a representation of $G$, then ${\operatorname{K}_{0}}(X,G)$ is the ring of representations ${\mathrm{R}}G$, which is a ring of Laurent polynomials over $\mathbb{Z}$. However, we show that the ring ${\operatorname{K}_{*}}(X,G)$ has a canonical structure of fibered product. More precisely, for each integer $s$ we consider the locus $X_s$ of $X$ where the stabilizers have dimension precisely equal to $s$; this is a locally closed regular subspace of $X$. For each $s$ consider the normal bundle ${\mathrm{N}}_s$ of $X_s$ in $X$, and the subspace ${\mathrm{N}}_{s,s-1}$ where the stabilizers have dimension precisely $s-1$. There is a pullback map ${\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$; furthermore in Section \[sec:specializations\] we define a specialization homomorphism ${\operatorname{Sp}}_{X,s}^{s-1} \colon {\operatorname{K}_{*}}(X_{s-1},G) \to {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$, via deformation to the normal bundle. Our first main result (Theorem \[thm:maintheorem\]) shows that these specialization homomorphisms are precisely what is needed to reconstruct the equivariant K-theory of $X$ from the equivariant K-theory of the strata. \[main:maintheorem\] Let $n$ be the dimension of $G$.
The restriction homomorphisms $${\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(X_s,G)$$ induce an isomorphism $$\begin{aligned} {\operatorname{K}_{*}}(X,G) \simeq {}&{\operatorname{K}_{*}}(X_n,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1},G)} {\operatorname{K}_{*}}(X_{n-1},G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n-1,n-2},G)}\\ &\quad\ldots \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{2,1},G)} {\operatorname{K}_{*}}(X_1,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{1,0},G)} {\operatorname{K}_{*}}(X_0,G). \end{aligned}$$ In other words: the restrictions ${\operatorname{K}_{*}}(X,G) \to {\operatorname{K}_{*}}(X_s,G)$ induce an injective homomorphism ${\operatorname{K}_{*}}(X,G) \to \prod_s {\operatorname{K}_{*}}(X_s,G)$, and an element $(\alpha_n, \ldots, \alpha_0)$ of the product $\prod_s {\operatorname{K}_{*}}(X_s,G)$ is in the image of ${\operatorname{K}_{*}}(X,G)$ if and only if the pullback of $\alpha_s \in {\operatorname{K}_{*}}(X_s,G)$ to ${\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$ coincides with ${\operatorname{Sp}}_{X,s}^{s-1}(\alpha_{s-1}) \in {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$ for all $s = 1$, …, $n$. This theorem is a powerful tool in studying the K-theory of diagonalizable group actions. From it one gets easily a description of the higher equivariant K-theory of regular toric varieties (Theorem \[thm:describetoric\]). This is analogous to the description of their equivariant Chow ring in [@brion1 Theorem 5.4]. One can put Theorem \[main:maintheorem\] above together with the main result of [@vevi] to give a very refined description of ${\operatorname{K}_{*}}(X,G)$; this is Theorem \[thm:refineddecomposition\], which can be considered the ultimate localization theorem for actions of diagonalizable groups. 
However, notice that it does not supersede Theorem \[main:maintheorem\], because Theorem \[main:maintheorem\] holds with integral coefficients, while for the formula of Theorem \[thm:refineddecomposition\] to be correct we have to invert some primes. Many results are known for equivariant intersection theory, or for equivariant cohomology; often one can use our theorem to prove their K-theoretic analogues. For example, consider the following theorem of Brion, inspired in turn by results in equivariant cohomology due to Atiyah ([@atiyah]), Bredon ([@bredon]), Hsiang ([@hsiang]), Chang and Skjelbred ([@ch-sk]), Kirwan ([@kirwan]), Goresky, Kottwitz and MacPherson ([@gkmp]); see also the very useful discussion in [@brion2]. [Theorem]{}\[[[@brion1 3.2, 3.3]]{}\] \[thm:brion\] Suppose that $X$ is a smooth projective algebraic variety over an algebraically closed field with an action of an algebraic torus $G$. \[[testing]{};1\] The rational equivariant Chow ring ${\operatorname{A}^{*}}_G(X)_{\mathbb{Q}}$ is free as a module over ${\operatorname{A}^{*}}_G(\mathrm{pt})_{\mathbb{Q}}$. \[[testing]{};2\] The restriction homomorphism $${\operatorname{A}^{*}}_G(X)_{\mathbb{Q}} \longrightarrow {\operatorname{A}^{*}}_G(X^G)_{\mathbb{Q}} = {\operatorname{A}^{*}}(X^G)_{\mathbb{Q}} \otimes {\operatorname{A}^{*}}_G(\mathrm{pt})_{\mathbb{Q}}$$ is injective, and its image is the intersection of all the images of the restriction homomorphisms ${\operatorname{A}^{*}}_G(X^T)_{\mathbb{Q}} \to {\operatorname{A}^{*}}_G(X^G)_{\mathbb{Q}}$, where $T$ ranges over all the subtori of codimension 1. From this one gets a very simple description of the rational equivariant Chow ring when the fixed point locus $X^G$ is zero dimensional, and the fixed point set $X^T$ is at most 1-dimensional for any subtorus $T \subseteq G$ of codimension 1 ([@brion1 Theorem 3.4]). In this paper we prove a version of Brion’s theorem for algebraic K-theory. 
Remarkably, it holds with integral coefficients: we do not need to tensor with $\mathbb{Q}$. This confirms the authors’ impression that when it comes to torsion, K-theory tends to be better behaved than cohomology, or intersection theory. The following is a particular case of Corollary \[cor:maincorollary\]; when $G$ is a torus, it is an analogue of part [(\[thm:brion;2\])]{} of Brion’s theorem. \[main:theorem2\] Suppose that $G$ is a diagonalizable group acting on a smooth proper scheme $X$ over a perfect field; denote by $G_0$ the toral component of $G$, that is, the largest subtorus contained in $G$. Then the restriction homomorphism ${\operatorname{K}_{*}}(X, G) \to {\operatorname{K}_{*}}(X^{G_0}, G)$ is injective, and its image equals the intersection of all the images of the restriction homomorphisms ${\operatorname{K}_{*}}(X^T, G) \to {\operatorname{K}_{*}}(X^{G_0}, G)$ for all the subtori $T \subseteq G$ of codimension 1. From this one gets a very complete description of ${\operatorname{K}_{*}}(X,G)$ when $G$ is a torus and $X$ is smooth and proper over an algebraically closed field, in the “generic” situation when $X$ contains only finitely many invariant points and finitely many invariant curves (Corollary \[cor:generic\]). We also analyze the case of smooth toric varieties in detail in Section \[sec:toric\]. The analogue of Theorem \[main:theorem2\] should hold for the integral equivariant topological K-theory of a compact differentiable manifold with the action of a compact torus. Some related topological results are contained in [@rokn]. Description of contents {#description-of-contents .unnumbered} ----------------------- Section \[sec:notation\] contains the setup that will be used throughout this paper. The K-theory that we use is the one described in [@vevi]: see the discussion in Subsection \[sub:equiK-theory\].
Section \[sec:preliminary\] contains some preliminary technical results; the most substantial of these is a very general self-intersection formula, proved by following closely Thomason’s proof of the analogous formula in the non-equivariant case ([@th1 Théorème 3.1]). Here we also discuss the stratification by dimensions of stabilizers, which is our basic object of study. In Section \[sec:specializations\] we define various types of specializations to the normal bundle in equivariant K-theory. This is easy for ${\operatorname{K}_{0}}$, but for the whole higher K-theory ring we do not know how to give a definition in general without using the language of spectra. Section \[sec:reconstruction\] contains the proof of Theorem \[main:maintheorem\]. Section \[sec:limits\] is dedicated to the analysis of the case when $X$ is complete, or, more generally, admits enough limits (Definition \[def:admitslimits\]). The condition that ${\operatorname{K}_{*}}(X,G)$ be free as a module over the representation ring ${\mathrm{R}}G$ is not adequate when working with integral coefficients: here we analyze a rather subtle condition on the ${\mathrm{R}}G$-module ${\operatorname{K}_{*}}(X,G)$ that ensures that the analogue of Brion’s theorem above holds; then we show, using a Białynicki-Birula stratification, that this condition is in fact satisfied when $X$ admits enough limits over a perfect field. We also apply our machinery to show that the degeneracy of the Merkurjev spectral sequence in [@merk], which he proves when $X$ is smooth and projective, in fact happens for torus actions with enough limits. Section \[sec:toric\] is dedicated to the K-theory of smooth toric varieties. For any smooth toric variety $X$ for a torus $T$, we give two descriptions of ${\operatorname{K}_{*}}(X,T)$.
First of all, we show how Theorem \[thm:maintheorem\] in this case gives a simple description of it as a subring of a product of representation rings, analogous to the description of its equivariant Chow ring in [@brion1 Theorem 5.4]. Furthermore, we give a presentation of ${\operatorname{K}_{*}}(X,T)$ by generators and relations over the K-theory ring of the base field (Theorem \[thm:SR\]), analogous to the classical Stanley–Reisner presentation for its equivariant cohomology first obtained in [@bdcp]. For ${\operatorname{K}_{0}}$ the result is essentially stated in [@kly]. In Section \[sec:decomposition\] we generalize the result of [@vevi] by giving a formula that holds for all actions of diagonalizable groups on regular noetherian algebraic spaces, irrespective of the dimensions of the stabilizers (Theorem \[thm:refineddecomposition\]). Acknowledgments {#acknowledgments .unnumbered} --------------- The first author is thankful for the hospitality at the University of Utah and the Université de Grenoble, where some of the work on this paper has been done. During the writing of the first draft of this paper the second author held a visiting position at the Department of Mathematics of the University of Utah: he is very grateful for the hospitality. He would like to thank Jim Carlson, Steve Gersten, Dragan Milicic, Anne and Paul Roberts, Paula and Domingo Toledo, and, most particularly, Aaron Bertram, Herb Clemens and their families for making his stay such an enjoyable experience. Both of us would like to thank Michel Brion for his help, and for many very helpful conversations; among other things, he suggested the possibility that we might be able to prove Corollary \[cor:maincorollary\]. Also, his excellent articles [@brion1] and [@brion2] were a source of inspiration. 
The results of Subsection \[subsec:multSR\] were obtained after stimulating conversations with Bernd Sturmfels, Howard Thompson, Allen Knutson and Corrado De Concini, that took place while the second author was a visitor at the Mathematical Sciences Research Institute in Berkeley. Knutson also pointed out the references [@atiyah] and [@bredon] to us. We are grateful to all of them, and to MSRI for the hospitality. We also want to thank the referee, who did an unusually careful and fair job. Finally, we would like to remark how much we owe to the articles of the late Robert W. Thomason: without his groundbreaking work on K-theory, equivariant and non, this paper might never have been written. His premature death has been a great blow to the mathematical community. Notations and conventions {#sec:notation} ========================= Throughout the paper we fix a base scheme $S$, which is assumed to be connected, separated and noetherian. We will denote by $G$ a diagonalizable group scheme of finite type over $S$ (see [@sga3]), except when otherwise mentioned. Its group of characters is $\widehat G {\overset{\mathrm{def}} =}{\operatorname{Hom}}_S(G, {\mathbb{G}_{\mathrm{m},S}})$; the contravariant functor from the category of diagonalizable group schemes of finite type over $S$ to the category of finitely generated abelian groups given by $G \mapsto \widehat G$ is an antiequivalence of categories. The ring of representations of $G$ is, by definition, ${\mathrm{R}}G = \mathbb{Z} \widehat G$, and furthermore $G = {\operatorname{Spec}}{\mathrm{R}}G \times_{{\operatorname{Spec}}\mathbb{Z}} S$. We will denote by $G_0$ the *toral part* of $G$, that is, the largest subtorus of $G$. The group of characters $\widehat G_0$ is the quotient of $\widehat G$ by its torsion subgroup. A $G$-space will always be a regular separated noetherian algebraic space over $S$ over which $G$ acts; sometimes we will talk about a *regular* $G$-space, for emphasis.
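As a concrete illustration of these definitions (our own example, not taken from the references): for $G = {\mathbb{G}_{\mathrm{m},S}} \times \boldsymbol{\mu}_{n,S}$ one has

```latex
% character group, representation ring, and toral part of
% G = G_m x mu_n, computed from the definitions above
\widehat{G} \simeq \mathbb{Z} \oplus \mathbb{Z}/n\mathbb{Z},
\qquad
\mathrm{R}G = \mathbb{Z}\widehat{G}
  \simeq \mathbb{Z}[t^{\pm 1}][s]/(s^{n}-1),
\qquad
G_0 = {\mathbb{G}_{\mathrm{m},S}},
\quad
\widehat{G_0} \simeq \mathbb{Z},
```

so that $\widehat{G_0}$ is indeed the quotient of $\widehat G$ by its torsion subgroup $\mathbb{Z}/n\mathbb{Z}$, and for a torus ${\mathrm{R}}G$ is the ring of Laurent polynomials mentioned in the Introduction.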
We notice explicitly that if $S' \to S$ is a morphism of schemes, with $S'$ connected, then every diagonalizable subgroup scheme of $G\times_S S'$ is obtained by base change from a unique diagonalizable subgroup scheme of $G$. This will be used as follows: if $p: {\operatorname{Spec}}\Omega \to X$ is a geometric point, then we will refer to its stabilizer, which is a priori a subgroup scheme of $G\times_S {\operatorname{Spec}}\Omega$, as a subgroup scheme of $G$. If $Y {\hookrightarrow}X$ is a regular embedding, we denote by ${\mathrm{N}}_Y X$ the normal bundle. Equivariant K-theory {#sub:equiK-theory} -------------------- In this subsection $G$ will be a group scheme over $S$ that is flat, affine and of finite type. We use the same K-theoretic setup as in [@vevi], which uses the language of [@thtr]. The following is a slight extension of [@vevi Theorem 6.4]. \[prop:K-theory\] Let $G$ be a flat affine separated group scheme of finite type over $S$, acting on a noetherian regular separated scheme $X$ over $S$. Consider the following complicial bi-Waldhausen categories: \[[testing]{};qc\] the category ${\operatorname{W}}_1(X,G)$ of complexes of quasicoherent $G$-equivariant $\mathcal{O}_X$-modules with bounded coherent cohomology; \[[testing]{};c\] the category ${\operatorname{W}}_2(X,G)$ of bounded complexes of coherent $G$-equivariant $\mathcal{O}_X$-modules; \[[testing]{};fqc\] the category ${\operatorname{W}}_3(X,G)$ of complexes of flat quasicoherent $G$-equivariant $\mathcal{O}_X$-modules with bounded coherent cohomology, and \[[testing]{};fqcba\] the category ${\operatorname{W}}_4(X,G)$ of bounded-above complexes of $G$-equivariant quasi-coherent flat $\mathcal{O}_X$-Modules with bounded coherent cohomology.
Then the inclusions $${\operatorname{W}}_2(X,G) \subseteq {\operatorname{W}}_1(X,G)\quad \text{and}\quad{\operatorname{W}}_4(X,G) \subseteq{\operatorname{W}}_3(X,G) \subseteq {\operatorname{W}}_1(X,G)$$ induce isomorphisms on the corresponding Waldhausen K-theories. Furthermore the K-theory of any of the categories above coincides with the Quillen K-theory ${\operatorname{K}_{*}}'(X,G)$ of the category of $G$-equivariant coherent $\mathcal{O}_X$-modules. For the first three categories, and the Quillen K-theory, the statement is precisely [@vevi Theorem 6.4]. Let us check that the inclusion ${\operatorname{W}}_4(X,G) \subseteq {\operatorname{W}}_1(X,G)$ induces an isomorphism in K-theory. By [@vevi Proposition 6.2], which shows that hypothesis 1.9.5.1 is satisfied, we can apply [@thtr Lemma 1.9.5], in the situation where $\mathcal{A}$ is the category of $G$-equivariant quasicoherent $\mathcal{O}_X$-Modules, $\mathcal{C}$ the category of cohomologically bounded complexes in $\mathcal{A}$, $\mathcal{D}$ the category of $G$-equivariant quasicoherent flat $\mathcal{O}_X$-Modules, $F:\mathcal{D} {\hookrightarrow}\mathcal{A}$ is the natural inclusion. In particular, any complex in ${\operatorname{W}}_1(X,G)$ receives a quasi-isomorphism from a complex in ${\operatorname{W}}_4(X,G)$. That is, [@thtr 1.9.7.1], applied to the inclusion ${\operatorname{W}}_4(X,G){\hookrightarrow}{\operatorname{W}}_1(X,G)$, is satisfied; since the other hypothesis 1.9.7.0 of [@thtr 1.9.7] is obviously satisfied, we conclude by [@thtr Theorem 1.9.8]. We will denote by ${\operatorname{\mathbb{K}}}(X,G)$ the Waldhausen K-theory spectrum and by ${\operatorname{K}_{*}}(X,G)$ the Waldhausen K-theory group of any of the categories above. As observed in [@vevi p. 
39], it follows from results of Thomason that ${\operatorname{K}_{*}}(-,G)$ is a covariant functor for proper maps of noetherian regular separated $G$-algebraic spaces over $S$; furthermore, each ${\operatorname{K}_{*}}(X,G)$ has a natural structure of a graded ring, and each equivariant morphism $f \colon X \to Y$ of noetherian regular separated $G$-algebraic spaces over $S$ induces a pullback $f^* \colon {\operatorname{K}_{*}}(Y,G) \to {\operatorname{K}_{*}}(X,G)$, making ${\operatorname{K}_{*}}(-,G)$ into a contravariant functor from the category of noetherian regular separated $G$-algebraic spaces over $S$ to graded-commutative rings. Furthermore, if $i \colon Y {\hookrightarrow}X$ is a closed embedding of noetherian regular separated $G$-algebraic spaces and $j \colon X \setminus Y {\hookrightarrow}X$ is the open embedding, then ${\operatorname{\mathbb{K}}}(X\setminus Y, G)$ is the cone of the pushforward map $i_* \colon {\operatorname{\mathbb{K}}}(Y,G) \to {\operatorname{\mathbb{K}}}(X,G)$ ([@th5 Theorem 2.7]), so there is an exact localization sequence $$\cdots\longrightarrow {\operatorname{K}_{n}}(Y,G) \overset{i_*} \longrightarrow {\operatorname{K}_{n}}(X,G) \overset{j^*} \longrightarrow {\operatorname{K}_{n}}(X \setminus Y,G) \overset{\partial} \longrightarrow {\operatorname{K}_{n-1}}(Y,G) \longrightarrow \cdots$$ Furthermore, if $\pi\colon E \to X$ is a $G$-equivariant vector bundle, the pullback $$\pi^*\colon {\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(E,G)$$ is an isomorphism ([@th5 Theorem 4.1]). Preliminary results {#sec:preliminary} =================== The self-intersection formula ----------------------------- Here we generalize Thomason’s self-intersection formula ([@th1 Théorème 3.1]) to the equivariant case. \[thm:selfintersection\] Suppose that a flat group scheme $G$, separated and of finite type over $S$, acts on a noetherian regular separated algebraic space $X$.
Let $i \colon Z {\hookrightarrow}X$ be the inclusion of a regular $G$-invariant closed subspace of $X$. Then $$i^*i_* \colon {\operatorname{\mathbb{K}}}(Z,G) \longrightarrow {\operatorname{\mathbb{K}}}(Z,G),$$ coincides up to homotopy with the cup product $$\lambda_{-1}({\mathrm{N}}_Z^\vee X)\smile(-) \colon {\operatorname{\mathbb{K}}}(Z,G) \longrightarrow {\operatorname{\mathbb{K}}}(Z,G),$$ where ${\mathrm{N}}_Z^\vee X$ is the conormal sheaf of $Z$ in $X$. In particular, we have the equality $$i^*i_* = \lambda_{-1}({\mathrm{N}}_Z^\vee X)\smile(-) \colon {\operatorname{K}_{*}}(Z,G)\longrightarrow {\operatorname{K}_{*}}(Z,G).$$ The proof closely follows Thomason’s; therefore we will only indicate the changes needed to adapt that proof to our situation. Let us denote by ${\operatorname{W}}'(Z,G)$ the Waldhausen category consisting of pairs $(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })$ where $E^{\bullet }$ is a bounded above complex of $G$-equivariant quasi-coherent flat $\mathcal{O}_Z$-Modules with bounded coherent cohomology, $L^{\bullet \bullet }$ is a bicomplex of $G$-equivariant quasi-coherent flat $\mathcal{O}_X$-Modules with bounded coherent total cohomology such that $L^{ij}=0$ for $j> 0$ and any $i$, and also $L^{ij}=0$ for $i>N$ and any $j$, for some integer $N$; finally $\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet }$ is an exact augmentation of the bicomplex $L^{\bullet \bullet }$. In particular, for any $i$, the horizontal complex $L^{i\bullet }$ is a flat resolution of $i_{*}E^i$. The morphisms, cofibrations and weak equivalences in ${\operatorname{W}}'(Z,G)$ are as in [@th1 3.3, p. 209].
Thomason [@th1 3.3] shows that the forgetful functor $(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })\mapsto E^{\bullet }$ from ${\operatorname{W}}'(Z,G)$ to the category ${\operatorname{W}}_4(Z,G)$ of bounded above complexes of $G$-equivariant quasi-coherent flat $\mathcal{O}_Z$-Modules with bounded coherent cohomology induces a homotopy equivalence between the associated Waldhausen $K$-theory spectra. In other words, by Proposition \[prop:K-theory\], we can (and will) use ${\operatorname{W}}'(Z,G)$ as a “model” for ${\operatorname{\mathbb{K}}}(Z,G)$. With these choices, the morphism of spectra $i^*i_*:{\operatorname{\mathbb{K}}}(Z,G) \longrightarrow {\operatorname{\mathbb{K}}}(Z,G)$ can be represented by the exact functor ${\operatorname{W}}'(Z,G)\longrightarrow {\operatorname{W}}_1(Z,G)$ which sends $(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })$ to the total complex of the bicomplex $i^{*}(L^{\bullet \bullet })$. The rest of the proof is exactly the same as in [@th1 3.3, pp. 210-212]. One first considers the functors $T_k:{\operatorname{W}}'(Z,G)\to {\operatorname{W}}_1(Z,G)$ sending an object $(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })$ to the total complex of the bicomplex $$\xymatrix@-6pt{&{}\vdots\ar[d]&{}\vdots\ar[d] &&{}\vdots\ar[d]&{}\vdots\ar[d]\\ 0\ar[r]& {}{\operatorname{im}}\partial_h^{i,-k-1}\ar[d]\ar[r]& i^{*}L^{i,-k}\ar[d]\ar[r] & {}\cdots\ar[r]& i^*L^{i,-1}\ar[d]\ar[r] & i^*{L^{i,0}}\ar[d]\ar[r]& 0\\ 0\ar[r]&{} {\operatorname{im}}\partial_h^{i+1,-k-1}\ar[r]\ar[d]& i^{*}L^{i+1,-k}\ar[r]\ar[d]& {}\cdots\ar[r]& i^*L^{i+1,-1}\ar[r]\ar[d] &{} i^*{L^{i+1,0}}\ar[r]\ar[d]& 0\\ &{}\vdots&{}\vdots&&{}\vdots&{}\vdots }$$ which results from truncating all the horizontal complexes of $i^{*}L^{\bullet \bullet }$ at the $k$-th level.
The functors $T_k$ are zero for $k<0$ and come naturally equipped with functorial epimorphisms $T_k\twoheadrightarrow T_{k-1}$ whose kernel $h_k$ has the property that $h_k(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })$ is quasi-isomorphic to $\Lambda ^k(\mathrm{N}_Z^{\vee }X)\otimes _{\mathcal{O}_Z}i^{*}E^{\bullet }[k]$ ([@th1 3.4.4]); this is essentially because each horizontal complex $L^{i\bullet }$ is a flat resolution of $i_{*}E^i$. Therefore, by induction on $k\geq -1$, starting from $T_{-1}=0$, each $T_k$ has values in ${\operatorname{W}}_1(Z,G)$ and preserves quasi-isomorphisms. Moreover, the arguments in [@th1 3.4, pp. 211-212] show that $T_k$ actually preserves cofibrations and pushouts along cofibrations; hence each $T_k:{\operatorname{W}}'(Z,G)\longrightarrow {\operatorname{W}}_1(Z,G)$ is an exact functor of Waldhausen categories. As in [@th1 3.4, p. 212], the quasi-isomorphism $$h_k(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })\simeq \Lambda ^k({\mathrm{N}}_Z^{\vee }X)\otimes _{\mathcal{O} _Z}i^{*}E^{\bullet }[k]$$ shows that the canonical truncation morphism $i^{*}(L^{\bullet \bullet })\rightarrow T_d(E^{\bullet },\lambda :L^{\bullet \bullet }\rightarrow i_{*}E^{\bullet })$, $d$ being the codimension of $Z$ in $X$, is a quasi-isomorphism, i.e. the morphism of spectra $i^{*}i_{*}:{\operatorname{\mathbb{K}}}\left( Z,G\right) \longrightarrow {\operatorname{\mathbb{K}}}\left( Z,G\right) $ can also be represented by the exact functor $T_d:{\operatorname{W}}'(Z,G)\longrightarrow {\operatorname{W}}_1(Z,G)$. Now, the Additivity Theorem ([@thtr 1.7.3 and 1.7.4]) shows that the canonical exact sequences of functors $ h_k{\hookrightarrow}T_k\twoheadrightarrow T_{k-1}$ yield up-to-homotopy equalities $T_k=T_{k-1}+h_k$ between the induced maps of spectra.
And finally, recalling that a shift $[k]$ induces multiplication by $(-1)^k$ at the level of spectra, by induction on $k\geq -1$ we get equalities up to homotopy $$\begin{aligned} i^{*}i_{*} &= T_d(-)\\ &= \sum_kh_k(-)\\ &= \sum_k[\Lambda ^k({\mathrm{N}}_Z^{\vee }X)]\otimes _{\mathcal{O}_Z}i^{*}(-)[k]\\ &= \lambda _{-1}({\mathrm{N}}_Z^{\vee }X)\smile (-) \end{aligned}$$ of morphisms of spectra ${\operatorname{\mathbb{K}}}(Z,G) \to {\operatorname{\mathbb{K}}}(Z,G)$ ([@th1 p. 212]). Stratification by dimensions of stabilizers {#subsec:stratdimension} ------------------------------------------- Let $G$ be a diagonalizable group scheme of finite type acting on $X$ as usual. Consider the group scheme $H \to X$ of stabilizers of the action. Since for a point $x \in X$ the dimension of the fiber $H_x$ equals its dimension at the point $\gamma(x)$, where $\gamma\colon X \to H$ is the unit section, it follows from Chevalley’s theorem ([@ega4 13.1.3]) that there is an open subset $X_{\le s}$ where the fibers of $H$ have dimension at most $s$. We will also use $X_{<s}$, with the analogous meaning. We denote by $X_s$ the locally closed subset $X_{\le s} \setminus X_{<s}$; we will think of it as a subspace of $X$ with the reduced scheme structure. Finally, we call ${\mathrm{N}}_s$ the normal bundle of $X_s$ in $X$, and ${\mathrm{N}}^0_s$ the complement of the 0-section in ${\mathrm{N}}_s$. Notice that $G$ acts on ${\mathrm{N}}_s$, so we may consider the subscheme $({\mathrm{N}}_s)_{<s} \subseteq {\mathrm{N}}_s$. \[prop:firststratification\] Let $s$ be a nonzero integer. \[[testing]{};3\] There exists a finite number of $s$-dimensional subtori $T_1$, …, $T_r$ in $G$ such that $X_s$ is the disjoint union of the $X_{\le s}^{T_j}$. \[[testing]{};1\] $X_s$ is a regular locally closed subspace of $X$. \[[testing]{};2\] ${\mathrm{N}}_s^0 = ({\mathrm{N}}_s)_{<s}$. To prove part [(\[prop:firststratification;3\])]{} we may restrict the action of $G$ to its toral component.
By Thomason’s generic slice theorem ([@th2 Proposition 4.10]) there are only finitely many possible diagonalizable subgroup schemes of $G$ that appear as stabilizers of a geometric point of $X$. Then we can take the $T_j$ to be the toral components of the $s$-dimensional stabilizers. Parts [(\[prop:firststratification;1\])]{} and [(\[prop:firststratification;2\])]{} follow from [(\[prop:firststratification;3\])]{} and [@th3 Proposition 3.1]. Specializations {#sec:specializations} =============== In this section $G$ will be a flat, affine and separated group scheme of finite type over $S$, acting on a noetherian regular separated algebraic space $Y$ over $S$. A $G$-invariant morphism $Y \to \mathbb{P}^1_S$ is *regular at infinity* if the inverse image $Y_\infty$ of the section at infinity in $\mathbb{P}^1_S$ is a regular effective Cartier divisor on $Y$. \[thm:specializations\] Let $\pi\colon Y \to \mathbb{P}^1_S$ be a $G$-invariant morphism over $S$ that is regular at infinity. Denote by $i_\infty \colon Y_\infty {\hookrightarrow}Y$ the inclusion of the fiber at infinity, and by $j_\infty \colon Y \setminus Y_\infty {\hookrightarrow}Y$ the inclusion of the complement. Then there exists a specialization homomorphism of graded groups $${\operatorname{Sp}}_Y \colon {\operatorname{K}_{*}}(Y \setminus Y_\infty,G) \longrightarrow {\operatorname{K}_{*}}(Y_\infty,G)$$ such that the composition $${\operatorname{K}_{*}}(Y,G) \overset{j_\infty^*}\longrightarrow {\operatorname{K}_{*}}(Y \setminus Y_\infty,G) \overset{{\operatorname{Sp}}_Y}\longrightarrow {\operatorname{K}_{*}}(Y_\infty,G)$$ coincides with $i_\infty^* \colon {\operatorname{K}_{*}}(Y,G) \to {\operatorname{K}_{*}}(Y_\infty,G)$.
Furthermore, if $Y'$ is another noetherian separated regular algebraic space over $S$ and $f \colon Y' \to Y$ is a $G$-equivariant morphism over $S$ such that the composition $\pi f \colon Y' \to \mathbb{P}^1_S$ is regular at infinity, then the diagram $$\xymatrix{{}{\operatorname{K}_{*}}(Y \setminus Y_\infty,G)\ar[r]^>>>>>{{\operatorname{Sp}}_Y} \ar[d]^{f^*} & {}{\operatorname{K}_{*}}(Y_\infty,G)\ar[d]^{f_\infty^*}\\ {}{\operatorname{K}_{*}}(Y' \setminus Y'_\infty,G) \ar[r]^>>>>>{{\operatorname{Sp}}_{Y'}}& {}{\operatorname{K}_{*}}(Y'_\infty,G)}$$ commutes; here $f_\infty$ is the restriction of $f$ to $Y'_\infty \to Y_\infty$. We refer to this last property as *the compatibility of specializations*. Let us denote by ${\operatorname{\mathbb{K}}}(X,G)$ the Quillen K-theory spectrum associated with the category of coherent equivariant $G$-sheaves on a noetherian separated algebraic space $X$. There is a homotopy equivalence $${\operatorname{Cone}}\bigl({\operatorname{\mathbb{K}}}(Y_\infty,G) \xrightarrow {i_{\infty *}} {\operatorname{\mathbb{K}}}(Y,G)\bigr) \simeq {\operatorname{\mathbb{K}}}(Y\setminus Y_\infty, G).$$ The commutative diagram $$\xymatrix{{}{\operatorname{\mathbb{K}}}(Y_\infty, G)\ar[r]^{i_{\infty*}} \ar@{=} [d]& {}{\operatorname{\mathbb{K}}}(Y,G) \ar[d]^{i_\infty^*}\\ {}{\operatorname{\mathbb{K}}}(Y_\infty, G)\ar[r]^{i_\infty^* i_{\infty*}} & {}{\operatorname{\mathbb{K}}}(Y_\infty, G)}$$ induces a morphism of spectra $$\begin{split}\label{morspectra} {\operatorname{\mathbb{K}}}(Y \setminus Y_\infty, G) {}&\simeq {\operatorname{Cone}}\bigl({\operatorname{\mathbb{K}}}(Y_\infty,G) \overset {i_{\infty *}}\longrightarrow{\operatorname{\mathbb{K}}}(Y,G)\bigr)\\ &\longrightarrow {\operatorname{Cone}}\bigl({\operatorname{\mathbb{K}}}(Y_\infty,G) \overset {i_\infty^* i_{\infty*}}\longrightarrow {\operatorname{\mathbb{K}}}(Y_\infty,G)\bigr).
\end{split}$$ By the self-intersection formula (Theorem \[thm:selfintersection\]) there is a homotopy $$i_\infty^* i_{\infty*} \simeq \lambda_{-1}({\mathrm{N}}^\vee_{Y_\infty})\smile(-)\colon {\operatorname{\mathbb{K}}}(Y_\infty,G) \to{\operatorname{\mathbb{K}}}(Y_\infty,G);$$ on the other hand $\lambda_{-1}({\mathrm{N}}^\vee_{Y_\infty})\smile(-)$ is homotopic to zero, because ${\mathrm{N}}^\vee_{Y_\infty}$ is trivial. So we have that $${\operatorname{Cone}}\bigl({\operatorname{\mathbb{K}}}(Y_\infty,G) \xrightarrow{i_\infty^* i_{\infty*}} {\operatorname{\mathbb{K}}}(Y_\infty,G)\bigr) \simeq {\operatorname{\mathbb{K}}}(Y_\infty,G)[1] \oplus {\operatorname{\mathbb{K}}}(Y_\infty,G),$$ where $(-)[1]$ is the suspension of $(-)$. We define the specialization morphism of spectra $$\mathcal{S}_Y \colon {\operatorname{\mathbb{K}}}(Y \setminus Y_\infty, G) \longrightarrow {\operatorname{\mathbb{K}}}(Y_\infty, G)$$ by composing the morphism above with the canonical projection $${\operatorname{\mathbb{K}}}(Y_\infty,G)[1] \oplus {\operatorname{\mathbb{K}}}(Y_\infty,G) \longrightarrow {\operatorname{\mathbb{K}}}(Y_\infty,G).$$ Finally, ${\operatorname{Sp}}_Y$ is defined to be the homomorphism induced by $\mathcal{S}_Y$ on homotopy groups. Let us check compatibility; it suffices to show that the diagram of spectra $$\xymatrix{{}{\operatorname{\mathbb{K}}}\bigl(Y_\infty,G\bigr)\ar[r]^{i_{\infty*}} \ar[d]^{f_\infty^*} & {}{\operatorname{\mathbb{K}}}\bigl(Y,G\bigr)\ar[d]^{f^*}\\ {}{\operatorname{\mathbb{K}}}\bigl(Y'_\infty,G\bigr)\ar[r]^{i'_{\infty*}} & {}{\operatorname{\mathbb{K}}}\bigl(Y',G\bigr)}$$ commutes up to homotopy. The essential point is that the diagram of algebraic spaces $$\xymatrix{Y'_\infty\ar[r]^{i'_{\infty}}\ar[d]^{f_\infty} & Y'\ar[d]^{f}\\ Y_\infty\ar[r]^{i_\infty} & Y}$$ is Tor-independent, that is, ${\operatorname{Tor}}_i^{\mathcal{O}_Y}(\mathcal{O}_{Y'}, \mathcal{O}_{Y_\infty}) = 0$ for all $i>0$; this holds because $Y_\infty$ is locally cut out in $Y$ by a single nonzerodivisor, whose pullback to $Y'$ is again a nonzerodivisor, since $Y'_\infty$ is a regular effective Cartier divisor on $Y'$.
Write ${\operatorname{W}}(Y',G)$ and ${\operatorname{W}}(Y'_\infty,G)$ for the Waldhausen categories of $G$-equivariant complexes of quasicoherent $\mathcal{O}_{Y'}$-modules and $\mathcal{O}_{Y'_\infty}$-modules with bounded coherent cohomology, while ${\operatorname{W}}(Y,G)$ and ${\operatorname{W}}(Y_\infty,G)$ will denote the Waldhausen categories of complexes of $G$-equivariant quasicoherent $\mathcal{O}_Y$-modules and $\mathcal{O}_{Y_{\infty}}$-modules with bounded coherent cohomology that are respectively degreewise $f^*$-acyclic and degreewise $f_\infty^*$-acyclic. By the Tor-independence of the diagram above, we have that $i_{\infty *}$ gives a functor ${\operatorname{W}}(Y_\infty,G) \to {\operatorname{W}}(Y,G)$, and the diagram $$\xymatrix{{}{\operatorname{W}}(Y_\infty, G)\ar[r]^{i_{\infty*}} \ar[d]^{f_\infty^*}& {}{\operatorname{W}}(Y,G) \ar[d]^{f^*}\\ {}{\operatorname{W}}(Y'_\infty,G)\ar[r]^{i'_{\infty*}} & {}{\operatorname{W}}(Y',G)}$$ commutes. By [@thtr 1.5.4] this concludes the proof of the theorem. \[rmrk:stupidspecialization\] For the projection $\mathrm{pr}_2 \colon X \times_S \mathbb{P}^1_S \to \mathbb{P}^1_S$ the specialization homomorphism $${\operatorname{Sp}}_{X \times_S \mathbb{P}^1_S} \colon {\operatorname{K}_{*}}(X \times_S \mathbb{A}^1_S,G) \longrightarrow {\operatorname{K}_{*}}(X,G)$$ coincides with the pullback $s_0^* \colon {\operatorname{K}_{*}}(X \times_S \mathbb{A}^1_S,G) \longrightarrow {\operatorname{K}_{*}}(X,G)$ via the zero-section $s_0 \colon X \to X \times_S \mathbb{A}^1_S$.
In fact, the pullback $j_\infty^* \colon {\operatorname{K}_{*}}(X \times_S \mathbb{P}^1_S,G) \to {\operatorname{K}_{*}}(X \times_S \mathbb{A}^1_S,G)$ is surjective, and we have $${\operatorname{Sp}}_{X \times_S \mathbb{P}^1_S} \circ j_\infty^* = s_0^* \circ j_\infty^* = i_\infty^* \colon {\operatorname{K}_{*}}(X \times_S \mathbb{P}^1_S,G) \longrightarrow {\operatorname{K}_{*}}(X,G).$$ Since the restriction homomorphism ${\operatorname{K}_{0}}(Y,G) \to {\operatorname{K}_{0}}(Y \setminus Y_\infty,G)$ is a surjective ring homomorphism, and its composition with ${\operatorname{Sp}}_Y \colon {\operatorname{K}_{0}}(Y \setminus Y_\infty,G) \to {\operatorname{K}_{0}}(Y_\infty,G)$ is a ring homomorphism, it follows that the degree-$0$ specialization ${\operatorname{Sp}}_Y \colon {\operatorname{K}_{0}}(Y \setminus Y_\infty,G) \to {\operatorname{K}_{0}}(Y_\infty,G)$ is also a ring homomorphism. This should be true for the whole specialization homomorphism ${\operatorname{Sp}}_Y \colon {\operatorname{K}_{*}}(Y \setminus Y_\infty,G) \to {\operatorname{K}_{*}}(Y_\infty,G)$, but this is not obvious from the construction, and we do not know how to prove it. From the proof of Theorem \[thm:specializations\] we see that one can define a specialization homomorphism ${\operatorname{K}_{*}}(Y \setminus Z,G) \to {\operatorname{K}_{*}}(Z,G)$ whenever $Z$ is a regular effective $G$-invariant divisor on $Y$ whose normal sheaf is $G$-equivariantly trivial. Specializations to the normal bundle {#subsec:specializationsnormal} ------------------------------------ Let us go back to our standard situation, in which $G$ is a diagonalizable group scheme of finite type acting on a regular separated noetherian algebraic space $X$. Fix a nonnegative integer $s$, and consider the closed immersion $X_s {\hookrightarrow}X_{\le s}$; denote by ${\mathrm{N}}_s$ its normal bundle.
Consider the deformation to the normal cone $\pi\colon \mathrm{M}_s \to \mathbb{P}^1_S$, the one denoted by $\mathrm{M}^0_{X_s} X_{\le s}$ in [@fulton Chapter 5]. The morphism $\pi\colon \mathrm{M}_s \to \mathbb{P}^1_S$ is flat and $G$-invariant. Furthermore $\pi^{-1}(\mathbb{A}^1_S) = X_{\le s} \times_S \mathbb{A}^1_S$, while the fiber at infinity of $\pi$ is ${\mathrm{N}}_s$. Consider the restriction $\pi^0 \colon \mathrm{M}_s^0 \to \mathbb{P}^1_S$ to the open subset $\mathrm{M}_s^0 = (\mathrm{M}_s)_{<s}$; then $(\pi^0)^{-1}(\mathbb{A}^1_S) = X_{< s} \times_S \mathbb{A}^1_S$, while the fiber at infinity of $\pi^0$ is ${\mathrm{N}}_s^0 = ({\mathrm{N}}_s)_{<s}$. We define a specialization homomorphism $${\operatorname{Sp}}_{X,s} \colon {\operatorname{K}_{*}}(X_{<s},G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_s^0, G)$$ by composing the pullback $${\operatorname{K}_{*}}(X_{<s},G) \longrightarrow {\operatorname{K}_{*}}(X_{<s}\times_S \mathbb{A}^1_S,G) = {\operatorname{K}_{*}}(\mathrm{M}_s^0 \setminus{\mathrm{N}}_s^0, G)$$ with the specialization homomorphism $${\operatorname{Sp}}_{\mathrm{M}_s^0} \colon {\operatorname{K}_{*}}(\mathrm{M}_s^0 \setminus{\mathrm{N}}_s^0, G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_s^0, G)$$ defined in the previous subsection. We can also define more refined specializations. \[prop:restrictregularatinfinity\] Let $Y \to \mathbb{P}^1_S$ be regular at infinity. \[[testing]{};1\] If $H\subseteq G$ is a diagonalizable subgroup scheme, then the restriction $Y^H \to \mathbb{P}^1_S$ is regular at infinity. \[[testing]{};2\] If $s$ is a nonnegative integer, the restriction $Y_s \to \mathbb{P}^1_S$ is also regular at infinity. Part [(\[prop:restrictregularatinfinity;1\])]{} and Proposition [\[prop:firststratification\] (\[prop:firststratification;3\])]{} imply part [(\[prop:restrictregularatinfinity;2\])]{}. To prove [(\[prop:restrictregularatinfinity;1\])]{}, notice that, by [@th3 Prop.
3.1], $Y^H$ is regular, and so is $Y_\infty^H$. Let $f$ be the pullback to $Y$ of a local equation for the section at infinity of $\mathbb{P}^1_S \to S$, and let $p$ be a point of $Y^H_\infty$. Since the conormal space to $Y^H$ in $Y$ has no nontrivial $H$-invariants, the differential of $f$ at $p$ cannot lie in this conormal space, hence $f$ does not vanish identically on any neighborhood of $p$ in $Y^H$. This implies that $Y^H_\infty$ is a regular Cartier divisor on $Y^H$, as claimed. If $t$ is an integer with $t < s$, let us set ${\mathrm{N}}_{s,t} {\overset{\mathrm{def}} =}({\mathrm{N}}_s)_t$. We have that the restriction $(\mathrm{M}_s)_t \to \mathbb{P}^1_S$ is still regular at infinity, by Proposition [\[prop:restrictregularatinfinity\] (\[prop:restrictregularatinfinity;2\])]{}; so we can also define a specialization homomorphism $${\operatorname{Sp}}_{X,s}^t \colon {\operatorname{K}_{*}}(X_{t},G) \longrightarrow {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,t}, G\bigr)$$ by composing the pullback $${\operatorname{K}_{*}}(X_t,G) \longrightarrow {\operatorname{K}_{*}}(X_t\times_S \mathbb{A}^1_S,G) = {\operatorname{K}_{*}}\bigl((\mathrm{M}_s^0)_t \setminus{\mathrm{N}}_{s,t}, G\bigr)$$ with the specialization homomorphism $${\operatorname{Sp}}_{(\mathrm{M}_s^0)_t} \colon {\operatorname{K}_{*}}\bigl((\mathrm{M}_s^0)_t \setminus {\mathrm{N}}_{s,t}, G\bigr) \longrightarrow {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,t}, G\bigr).$$ The specializations above are compatible, in the following sense. \[prop:compspecializations\] In the situation above, the diagram $$\xymatrix{{}{\operatorname{K}_{*}}(X_{<s},G)\ar[r]\ar[d]^{{\operatorname{Sp}}_{X,s}}& {}{\operatorname{K}_{*}}(X_t,G)\ar[d]^{{\operatorname{Sp}}_{X,s}^t}\\ {}{\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)\ar[r]& {}{\operatorname{K}_{*}}({\mathrm{N}}_{s,t},G)}$$ where the rows are restriction homomorphisms, commutes. This follows immediately from the compatibility of specializations (Theorem \[thm:specializations\]).
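To fix ideas, here is a small worked example of the stratification and of the specialization homomorphism ${\operatorname{Sp}}_{X,s}$. The example is our own, not taken from the text; the identifications are the standard ones for $\mathbb{G}_m$-equivariant K-theory.

```latex
% Example (ours): G = \mathbb{G}_m acting on X = \mathbb{A}^1 over a field
% by scaling.  The strata by dimension of stabilizers are
%   X_1 = \{0\}             (stabilizer all of \mathbb{G}_m),
%   X_0 = X_{<1} = \mathbb{G}_m   (trivial stabilizers),
% and \mathrm{N}_1, the normal bundle of X_1 in X_{\le 1} = X, is \mathbb{A}^1
% with the weight-one action, so \mathrm{N}_1^0 = (\mathrm{N}_1)_{<1}.
% Writing \mathrm{R}(\mathbb{G}_m) = \mathbb{Z}[t^{\pm 1}], in degree zero:
\operatorname{K}_0(X_1,\mathbb{G}_m) = \mathbb{Z}[t^{\pm 1}], \qquad
\operatorname{K}_0(X_{<1},\mathbb{G}_m) = \mathbb{Z}, \qquad
\operatorname{K}_0(\mathrm{N}_1^0,\mathbb{G}_m)
  \cong \mathbb{Z}[t^{\pm 1}]\big/\bigl(1 - t^{-1}\bigr) \cong \mathbb{Z}.
% The last isomorphism holds because \lambda_{-1}(\mathrm{N}_1^\vee)
% = 1 - t^{-1} and the zero section gives a localization sequence.
% Since \{0\} \to \mathbb{A}^1 is a linear embedding, the deformation to the
% normal cone is the trivial family, and \operatorname{Sp}_{X,1} is the
% identity of \mathbb{Z}.
```

In particular $1 - t^{-1}$ is not a zero-divisor in $\mathbb{Z}[t^{\pm 1}]$, in contrast with the non-equivariant situation.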
Reconstruction from the strata {#sec:reconstruction} ============================== K-rigidity ---------- Let $Y$ be a $G$-invariant regular locally closed subspace of $X$. We say that $Y$ is *K-rigid* inside $X$ if $Y$ is regular and $\lambda_{-1}({\mathrm{N}}_Y^\vee X)$ is not a zero-divisor in the ring ${\operatorname{K}_{*}}(Y,G)$. This condition may seem unlikely to ever be verified: in the non-equivariant case $\lambda_{-1}({\mathrm{N}}_Y^\vee X)$ is always a nilpotent element, since it has rank zero over each component of $Y$. However, in the equivariant case this is not necessarily true. Here is the basic criterion that we will use to check that a subspace is K-rigid. \[lem:criterionK-rigid\] Let $Y$ be a $G$-space, $E$ an equivariant vector bundle on $Y$. Suppose that there is a subtorus $T$ of $G$ acting trivially on $Y$, such that in the eigenspace decomposition of $E$ with respect to $T$ the subbundle corresponding to the trivial character is 0. Then $\lambda_{-1}(E)$ is not a zero-divisor in ${\operatorname{K}_{*}}(Y,G)$. Choose a splitting $G \simeq D\times T$; by [@th4 Lemme 5.6], we have $${\operatorname{K}_{*}}(Y,G) = {\operatorname{K}_{*}}(Y,D) \otimes {\mathrm{R}}T = {\operatorname{K}_{*}}(Y,D) \otimes \mathbb{Z}[t_1^{\pm 1}, \ldots, t_n^{\pm 1}].$$ If $E = \bigoplus_{\chi\in \widehat T}E_\chi$ is the eigenspace decomposition of $E$, we have that $\lambda_{-1}(E)$ corresponds to the element $\prod_{\chi\in \widehat T}\lambda_{-1}(E_\chi \otimes \chi)$ of ${\operatorname{K}_{*}}(Y,D) \otimes {\mathrm{R}}T$, so it is enough to show that $\lambda_{-1}(E_\chi \otimes \chi)$ is not a zero-divisor in ${\operatorname{K}_{*}}(Y,D) \otimes {\mathrm{R}}T$. But we can write $$\lambda_{-1}(E_\chi \otimes \chi) = 1 + r_1\chi + r_2\chi^2+ \cdots + r_n\chi^n \in {\operatorname{K}_{*}}(Y,D) \otimes {\mathrm{R}}T,$$ where $r_n = (-1)^n[\det E_\chi]$ is a unit in ${\operatorname{K}_{*}}(Y,D)$.
Now we can apply the following elementary fact: suppose that $A$ is a ring, $r_1$, …, $r_n$ central elements of $A$ such that $r_n$ is a unit, and $\chi \in A[t_1^{\pm 1}, \ldots, t_n^{\pm 1}]$ a monomial different from 1. Then the element $1 + r_1\chi + r_2\chi^2+ \cdots + r_n\chi^n$ is not a zero-divisor in $A[t_1^{\pm 1}, \ldots, t_n^{\pm 1}]$. The next Proposition is a K-theoretic variant of [@brion2 Proposition [3.2]{}]. \[prop:K-rigid-&gt;\] Let $Y$ be a closed K-rigid subspace of $X$, and set $U = X \setminus Y$. Call $i \colon Y {\hookrightarrow}X$ and $j \colon U {\hookrightarrow}X$ the inclusions. \[[testing]{};1\] The sequence $$0 \longrightarrow {\operatorname{K}_{*}}(Y,G) \overset {i_*} \longrightarrow {\operatorname{K}_{*}}(X,G)\overset{j^*} \longrightarrow {\operatorname{K}_{*}}(U,G) \longrightarrow 0$$ is exact. \[[testing]{};2\] The two restriction maps $$i^* \colon {\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(Y,G)\quad \mbox{and} \quad j^* \colon {\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(U,G)$$ induce a ring isomorphism $$(i^*,j^*) \colon {\operatorname{K}_{*}}(X,G) \overset{\sim}\longrightarrow {\operatorname{K}_{*}}(Y,G) \displaytimes_{{\operatorname{K}_{*}}(Y,G)/(\lambda_{-1}({\mathrm{N}}_Y^\vee X))} {\operatorname{K}_{*}}(U,G)$$ where ${\mathrm{N}}_Y^\vee X$ is the conormal bundle of $Y$ in $X$, the homomorphism $${\operatorname{K}_{*}}(Y,G) \longrightarrow{\operatorname{K}_{*}}(Y,G)/(\lambda_{-1}({\mathrm{N}}_Y^\vee X))$$ is the projection, while the homomorphism $${\operatorname{K}_{*}}(U,G) \simeq {\operatorname{K}_{*}}(X,G)/i_*{\operatorname{K}_{*}}(Y,G) \longrightarrow {\operatorname{K}_{*}}(Y,G)/(\lambda_{-1}({\mathrm{N}}_Y^\vee X))$$ is induced by $i^* \colon {\operatorname{K}_{*}}(X,G) \to {\operatorname{K}_{*}}(Y,G)$.
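Before the proof, a quick illustration (our own example, not from the text) of Lemma \[lem:criterionK-rigid\] and of the Proposition in the simplest case: $G = \mathbb{G}_m$ acting on $X = \mathbb{A}^1$ by scaling, with $Y = \{0\}$ and $U = \mathbb{G}_m$.

```latex
% Y is fixed by T = \mathbb{G}_m and \mathrm{N}_Y^\vee X has weight -1, so
% its trivial eigenspace is 0; by the Lemma,
%     \lambda_{-1}(\mathrm{N}_Y^\vee X) = 1 - t^{-1}
% is not a zero-divisor in \operatorname{K}_*(Y,\mathbb{G}_m)
% = \mathbb{Z}[t^{\pm 1}], i.e. Y is K-rigid in X.  The fiber product of
% the Proposition then reads, in degree zero, with
% \operatorname{K}_0(U,\mathbb{G}_m) = \mathbb{Z}:
\operatorname{K}_0(\mathbb{A}^1,\mathbb{G}_m)
  \cong \mathbb{Z}[t^{\pm 1}]
        \times_{\mathbb{Z}[t^{\pm 1}]/(1 - t^{-1})} \mathbb{Z}
  = \bigl\{(p,n) : p(1) = n\bigr\}
  \cong \mathbb{Z}[t^{\pm 1}],
% which agrees with \operatorname{K}_0(\mathbb{A}^1,\mathbb{G}_m)
% = \mathrm{R}(\mathbb{G}_m) computed directly by homotopy invariance.
```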
From the self-intersection formula (Theorem \[thm:selfintersection\]) we see that the composition $i^*i_* \colon {\operatorname{K}_{*}}(Y,G) \to {\operatorname{K}_{*}}(Y,G)$ is multiplication by $\lambda_{-1}({\mathrm{N}}_Y^\vee X)$, so $i_*$ is injective. We get part [(\[prop:K-rigid-&gt;;1\])]{} from this and from the localization sequence. Part [(\[prop:K-rigid-&gt;;2\])]{} follows easily from part [(\[prop:K-rigid-&gt;;1\])]{}, together with the following elementary fact. Let $A$, $B$ and $C$ be rings, $f \colon B \to A$ and $g \colon B \to C$ ring homomorphisms. Suppose that there exists a homomorphism of abelian groups $\phi\colon A \to B$ such that: the sequence $$0 \longrightarrow A \overset{\phi}\longrightarrow B \overset g \longrightarrow C \longrightarrow 0$$ is exact, and the composition $f\circ \phi \colon A \to A$ is multiplication by a central element $a \in A$ which is not a zero-divisor. Then $f$ and $g$ induce an isomorphism of rings $$(f,g) \colon B \to A \displaytimes_{A/(a)} C,$$ where the homomorphism $A \to A/(a)$ is the projection, and the one $C \to A/(a)$ is induced by the isomorphism $C \simeq B/{\operatorname{im}}\phi$ and the homomorphism $f \colon B \to A$. The theorem of reconstruction from the strata {#sec:maintheorem} --------------------------------------------- This subsection is entirely dedicated to the proof of our main theorem. Let us recall what it says. Let $G$ act on $X$ with the usual hypotheses. Consider the strata $X_s$ defined in Subsection \[subsec:stratdimension\], and the specialization homomorphisms $${\operatorname{Sp}}_{X,s}^t \colon {\operatorname{K}_{*}}(X_{t},G) \longrightarrow {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,t}, G\bigr)$$ defined in Subsection \[subsec:specializationsnormal\].
\[thm:maintheorem\] The homomorphism $${\operatorname{K}_{*}}(X,G) \longrightarrow \prod_{s=0}^n {\operatorname{K}_{*}}(X_s,G)$$ obtained from the restrictions ${\operatorname{K}_{*}}(X,G) \to {\operatorname{K}_{*}}(X_s,G)$ is injective. Its image consists of the sequences $(\alpha_s) \in \prod_{s=0}^n {\operatorname{K}_{*}}(X_s,G)$ with the property that for each $s= 1$, …, $n$ the pullback of $\alpha_s \in {\operatorname{K}_{*}}(X_s,G)$ to ${\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,s-1}, G\bigr)$ coincides with ${\operatorname{Sp}}_{X,s}^{s-1}(\alpha_{s-1}) \in {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,s-1}, G\bigr)$. In other words, we can view ${\operatorname{K}_{*}}(X,G)$ as a fiber product $$\begin{aligned} {\operatorname{K}_{*}}(X,G) \simeq {}&{\operatorname{K}_{*}}(X_n,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1},G)} {\operatorname{K}_{*}}(X_{n-1},G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n-1,n-2},G)}\\ &\quad\ldots \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{2,1},G)} {\operatorname{K}_{*}}(X_1,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{1,0},G)} {\operatorname{K}_{*}}(X_0,G). \end{aligned}$$ Here is our starting point: $X_s$ is K-rigid in $X$. This follows from Proposition [\[prop:firststratification\] (\[prop:firststratification;2\])]{} and Lemma \[lem:criterionK-rigid\]. So from Proposition [\[prop:K-rigid-&gt;\] (\[prop:K-rigid-&gt;;2\])]{} applied to the closed embedding $i_s \colon X_s {\hookrightarrow}X_{\le s}$, we get an isomorphism $${\operatorname{K}_{*}}(X_{\le s},G) \simeq {\operatorname{K}_{*}}(X_s,G) \displaytimes_{{\operatorname{K}_{*}}(X_s,G)/(\lambda_{-1}({\mathrm{N}}_s^\vee))} {\operatorname{K}_{*}}(X_{<s},G).$$ We can improve on this.
\[prop:fiberproduct-1stratum\] The restrictions $${\operatorname{K}_{*}}(X_{\le s},G) \longrightarrow {\operatorname{K}_{*}}(X_s,G)\quad \text{and} \quad {\operatorname{K}_{*}}(X_{\le s},G)\longrightarrow {\operatorname{K}_{*}}(X_{<s},G)$$ induce an isomorphism $${\operatorname{K}_{*}}(X_{\le s},G) {\overset{\sim}\longrightarrow}{\operatorname{K}_{*}}(X_s,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)} {\operatorname{K}_{*}}(X_{<s},G),$$ where the homomorphism ${\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$ is the pullback, while $${\operatorname{Sp}}_{X,s} \colon {\operatorname{K}_{*}}(X_{<s},G) \to {\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$$ is the specialization. Let us start with a lemma. \[lem:zerosection\] The restriction homomorphism ${\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$ is surjective, and its kernel is the ideal $\bigl(\lambda_{-1}({\mathrm{N}}_s^\vee)\bigr) \subseteq {\operatorname{K}_{*}}(X_s,G)$. Since the complement of the zero section $s_0 \colon X_s {\hookrightarrow}{\mathrm{N}}_s$ coincides with $({\mathrm{N}}_s)_{<s}$ (Proposition [\[prop:firststratification\] (\[prop:firststratification;2\])]{}), we can apply Proposition [\[prop:K-rigid-&gt;\] (\[prop:K-rigid-&gt;;1\])]{} to the normal bundle ${\mathrm{N}}_s$, and conclude that there is an exact sequence $$0 \longrightarrow {\operatorname{K}_{*}}(X_s,G) \overset{s_{0*}}\longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_s,G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_s^0,G) \longrightarrow 0.$$ Now, $s_0^* \colon {\operatorname{K}_{*}}({\mathrm{N}}_s,G) \to {\operatorname{K}_{*}}(X_s,G)$ is an isomorphism, and the composition $s_0^*s_{0*} \colon {\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}(X_s,G)$ is multiplication by $\lambda_{-1}({\mathrm{N}}_s^\vee)$ by the self-intersection formula (Theorem \[thm:selfintersection\]), and this implies the claim.
Therefore the restriction homomorphism ${\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$ induces an isomorphism of ${\operatorname{K}_{*}}(X_s,G)/\bigl(\lambda_{-1}({\mathrm{N}}_s^\vee)\bigr)$ with ${\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$; the Proposition follows from this, and from Proposition \[prop:compspecializations\]. Now we proceed by induction on the largest integer $s$ such that $X_s \neq \emptyset$. If $s = 0$ there is nothing to prove. If $s > 0$, by the induction hypothesis the homomorphism $${\operatorname{K}_{*}}(X_{<s},G) \longrightarrow {\operatorname{K}_{*}}(X_{s-1},G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{s-1,s-2},G)} \dots \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{1,0},G)} {\operatorname{K}_{*}}(X_0,G)$$ induced by restrictions is an isomorphism; so from Proposition \[prop:fiberproduct-1stratum\] we see that to prove Theorem \[thm:maintheorem\] it is sufficient to show that if $\alpha_s \in {\operatorname{K}_{*}}(X_s,G)$, $\alpha_{<s} \in {\operatorname{K}_{*}}(X_{<s},G)$, $\alpha_{s-1}$ is the restriction of $\alpha_{<s}$ to ${\operatorname{K}_{*}}(X_{s-1},G)$, $\alpha_s^0$ is the pullback of $\alpha_s$ to ${\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)$ and $\alpha_{s,s-1}$ is the pullback of $\alpha_s$ to ${\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$, then ${\operatorname{Sp}}_{X,s}(\alpha_{<s}) = \alpha_s^0$ if and only if ${\operatorname{Sp}}_{X,s}^{s-1}(\alpha_{s-1}) = \alpha_{s,s-1}$.
But the diagram $$\xymatrix{ {}{\operatorname{K}_{*}}(X_{<s},G)\ar[r]^{{\operatorname{Sp}}_{X,s}}\ar[d]& {}{\operatorname{K}_{*}}({\mathrm{N}}_s^0,G)\ar[d]\\ {}{\operatorname{K}_{*}}(X_{s-1},G)\ar[r]^{{\operatorname{Sp}}_{X,s}^{s-1}}&{}{\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G) },$$ where the columns are restriction homomorphisms, is commutative (Proposition \[prop:compspecializations\]); hence it suffices to show that the restriction homomorphism $${\operatorname{K}_{*}}({\mathrm{N}}_s^0,G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G)$$ is injective. To prove this we may suppose that the action of $G$ on $X_s$ is connected, that is, $X_s$ is not a nontrivial disjoint union of open invariant subspaces. In this case the toral component of the isotropy group of a point of $X_s$ is constant. Set $E = {\mathrm{N}}_s$, and consider the eigenspace decomposition $E = \bigoplus_{\chi \in \widehat T} E_\chi$. We obtain a decomposition $E = \bigoplus_i E_i$ by grouping together $E_\chi$ and $E_{\chi'}$ when the characters $\chi$ and $\chi'$ are multiples of a common primitive character in $\widehat T$. Then clearly a geometric point of $E$ is in $E_{s-1}$ if and only if exactly one of its components according to the decomposition $E = \bigoplus_i E_i$ above is nonzero. In other words, ${\mathrm{N}}_{s,s-1}$ is the disjoint union $\coprod_i E_i^0$, where $E_i^0$ is embedded in $E$ by setting all the other components equal to 0. The same argument as in the proof of Lemma \[lem:zerosection\] shows that the kernel of the pullback ${\operatorname{K}_{*}}(X_s, G) \to {\operatorname{K}_{*}}(E_i^0,G)$ is generated by $\lambda_{-1} E_i$, so the kernel of the pullback $${\operatorname{K}_{*}}(X_s,G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1},G) = \bigoplus_i{\operatorname{K}_{*}}(E_i^0,G)$$ equals $\cap_i (\lambda_{-1}E_i)$; hence we need to show that $(\lambda_{-1} E) = \cap_i (\lambda_{-1} E_i)$. This is done as follows.
Choose a splitting $G = D\times T$: we have ${\operatorname{K}_{*}}(X_s,G) = {\operatorname{K}_{*}}(X_s,D) \otimes {\mathrm{R}}T$, as in the beginning of the proof of Lemma \[lem:criterionK-rigid\]. First of all, we have $\lambda_{-1}E = \prod_i \lambda_{-1} E_i$. Furthermore, for each $i$ we can choose a primitive character $\chi_i$ in $\widehat T$ such that all the characters which appear in the decomposition of $E_i$ are multiples of $\chi_i$; from this we see that $\lambda_{-1}E_i$ is of the form $\sum_{k=m_i}^{n_i} r_{i,k} \chi_i^k$, where $m_i$ and $n_i$ are (possibly negative) integers, $r_{i,k} \in {\operatorname{K}_{0}}(X_s,G)$, $r_{i,m_i}$ and $r_{i,n_i}$ are invertible. Then the conclusion of the proof follows from the following fact. \[lem:int=prod\] Let $A$ be a ring, $H$ a free finitely generated abelian group, $\chi_1$, …, $\chi_r$ linearly independent elements of $H$. Let $\gamma_1$, …, $\gamma_r$ be elements of the group ring $AH$ of the form $\gamma_i = \sum_{k=m_i}^{n_i} r_{i,k} \chi_i^k$, where the $r_{i,k}$ are central elements of $A$ such that $r_{i,m_i}$ and $r_{i,n_i}$ are invertible. Then we have an equality of ideals $(\gamma_1 \ldots \gamma_r) = (\gamma_1)\cap \ldots\cap(\gamma_r)$ in $AH$. By multiplying each $\gamma _i$ by $r_{i,m_i}^{-1}\chi _i^{-m_i}$ we may assume that $ \gamma _i$ has the form $1+a_{i,1}\chi _i+\cdots +a_{i,s_i}\chi _i^{s_i}$ with $s_i\geq 0$ and $a_{i,s_i}$ a central unit in $A$. We will show that for any $i\neq j$ the relation $\gamma _i\mid q\gamma _j$ for $q\in AH$ implies $\gamma _i\mid q$; from this the thesis follows with a straightforward induction. We may assume that $r =2$, $i = 1$ and $j = 2$. Since $\chi _1$, $\chi_2$ are $\mathbb{Z}$-linearly independent elements of $H$, we may complete them to a maximal $\mathbb{Z}$-independent sequence $\chi _1,\dots ,~\chi _n$ of $H$; this sequence generates a subgroup $H' \subseteq H$ of finite index.
Suppose at first that $H' = H$, so that $AH = A\bigl[\chi _1^{\pm 1},...,\chi _n^{\pm 1}\bigr]$. Replacing $A$ by $A\bigl[\chi _3^{\pm 1}, \dots, \chi_n^{\pm 1}\bigr]$, we may assume that $AH = A\bigl[\chi _1^{\pm 1},\chi _2^{\pm 1}\bigr]$. If $p\gamma _1=q\gamma _2$, we can multiply this equality by a sufficiently high power of $\chi_1 \chi_2$ and assume that $p$ and $q$ are polynomials in $A[\chi_1, \chi_2]$. Since $\gamma_2$ is a polynomial in $A[\chi_2]$ with central coefficients and invertible leading coefficient, the usual division algorithm allows us to write $p = s \gamma_2 + r \in A[\chi_1, \chi_2]$, where $r$ is a polynomial whose degree in $\chi_2$ is less than $s_2 = \deg_{\chi_2}\gamma_2$. By comparing the degrees in $\chi_2$ in the equality $r\gamma_1 = (q - s\gamma_1)\gamma_2$ we see that $q - s\gamma_1$ must be zero, and this proves the result. In the general case, choose representatives $u_1$, …, $u_r$ for the cosets of $H'$ in $H$; then any element $f$ of $AH$ can be written uniquely as $\sum_{i=1}^r u_i f_i$ with $f_i \in AH'$. Writing $p = \sum_i u_i p_i$ and $q = \sum_i u_i q_i$, from the equality $\bigl(\sum_i u_i p_i\bigr) \gamma_1 = \bigl(\sum_i u_i q_i\bigr)\gamma_2$ we get $p_i \gamma_1 = q_i \gamma_2$ for all $i$, because $\gamma_1$ and $\gamma_2$ are in $AH'$; hence the thesis follows from the previous case. Actions with enough limits {#sec:limits} ========================== Let us start with some preliminaries in commutative algebra. Sufficiently deep modules ------------------------- Let $A$ be a finitely generated flat [Cohen–Macaulay]{} $\mathbb{Z}$-algebra, such that each of the fibers of the morphism ${\operatorname{Spec}}A \to {\operatorname{Spec}}\mathbb{Z}$ has pure dimension $n$. If $V$ is a closed subset of ${\operatorname{Spec}}A$, we define the *fiber dimension* of $V$ to be the largest of the dimensions of the fibers of $V$ over ${\operatorname{Spec}}\mathbb{Z}$, and its *fiber codimension* to be $n$ minus its fiber dimension.
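To fix ideas, here is a small worked example of these notions (an illustration added here, not part of the argument), with $A = \mathbb{Z}[x^{\pm 1}, y^{\pm 1}]$, so that every fiber of ${\operatorname{Spec}}A \to {\operatorname{Spec}}\mathbb{Z}$ has pure dimension $n = 2$:

```latex
% Fiber dimension and codimension of some closed subsets of
% Spec Z[x^{\pm 1}, y^{\pm 1}]  (n = 2):
\begin{align*}
V(x-1) &: \text{every fiber is a $1$-dimensional torus,}\\
       &\quad \text{so fiber dimension $1$ and fiber codimension $1$;}\\
V(x-1,\,y-1) &\simeq \operatorname{Spec}\mathbb{Z}: \text{every fiber is a point,}\\
       &\quad \text{so fiber dimension $0$ and fiber codimension $2$;}\\
V(p),\ p \text{ a rational prime} &: \text{the whole characteristic-$p$ fiber,}\\
       &\quad \text{so fiber dimension $2$ and fiber codimension $0$.}
\end{align*}
```

Note that the fiber codimension may differ from the height of the corresponding ideal: the ideal $(p)$ has height $1$ but fiber codimension $0$.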
We say that $V$ has *pure fiber dimension* if all the fibers of $V$ have the same fiber dimension at all points of $V$ (of course some of the fibers may be empty). The fiber dimension and codimension of an ideal in $A$ will be the fiber dimension and codimension of the corresponding closed subset of ${\operatorname{Spec}}A$. \[def:suff-deep\] Let $M$ be an $A$-module. Then we say that $M$ is *sufficiently deep* if the following two conditions are satisfied. (1) All associated primes of $M$ have fiber codimension 0. (2) ${\operatorname{Ext}}_A^1(A/\mathfrak{p}, M) = 0$ for all primes $\mathfrak{p}$ in $A$ of fiber codimension at least 2. Here are the properties that we need. \[prop:suff-deep\] (1) If $0 \to M' \to M \to M'' \to 0$ is an exact sequence of $A$-modules and $M'$ and $M''$ are sufficiently deep, then $M$ is sufficiently deep. (2) Direct limits and direct sums of sufficiently deep modules are sufficiently deep. (3) If $N$ is an abelian group, then $N\otimes_\mathbb{Z} A$ is sufficiently deep. (4) If $M$ is a sufficiently deep $A$-module, then ${\operatorname{Ext}}_A^1(N,M) = 0$ for all $A$-modules $N$ whose support has fiber codimension at least 2. Part [(\[prop:suff-deep;1\])]{} is obvious. Part [(\[prop:suff-deep;2\])]{} follows from the fact that $A$ is noetherian, so formation of ${\operatorname{Ext}}_A^1(A/\mathfrak{p}, -)$ commutes with direct sums and direct limits. Let us prove part [(\[prop:suff-deep;3\])]{}. From part [(\[prop:suff-deep;2\])]{} we see that we may assume that $N$ is cyclic. If $N = \mathbb{Z}$, then $M = A$, and the statement follows from the facts that $A$ is [Cohen–Macaulay]{}, and that the height of a prime ideal is at least equal to its fiber codimension. Assume that $N = \mathbb{Z}/m\mathbb{Z}$, so that $M = A/mA$.
The associated primes of $M$ are the generic points of the fibers of $A$ over the primes dividing $m$, so condition [\[def:suff-deep\] (\[def:suff-deep;1\])]{} is satisfied. Take a prime $\mathfrak{p}$ of $A$ of fiber codimension at least 2, and consider the exact sequence $$0 = {\operatorname{Ext}}_A^1(A/\mathfrak{p}, A) \to {\operatorname{Ext}}_A^1(A/\mathfrak{p}, A/mA) \to {\operatorname{Ext}}_A^2(A/\mathfrak{p}, A) \overset m{\to} {\operatorname{Ext}}_A^2(A/\mathfrak{p}, A).$$ If the characteristic of $A/\mathfrak{p}$ is positive, then the height of $\mathfrak{p}$ is at least 3, so ${\operatorname{Ext}}_A^2(A/\mathfrak{p}, A) =0$, because $A$ is [Cohen–Macaulay]{}, and we are done. Otherwise, we have an exact sequence $$0 \longrightarrow A/\mathfrak{p} \overset m{\longrightarrow} A/\mathfrak{p} \longrightarrow A/\bigl((m) + \mathfrak{p}\bigr)\longrightarrow 0;$$ but the height of $(m) + \mathfrak{p}$ is at least 3, so ${\operatorname{Ext}}_A^2\bigl(A/\bigl((m) + \mathfrak{p} \bigr), A \bigr) = 0$. From this we deduce that multiplication by $m$ is injective on ${\operatorname{Ext}}_A^2(A/\mathfrak{p}, A)$, and this concludes the proof of part [(\[prop:suff-deep;3\])]{}. For part [(\[prop:suff-deep;4\])]{}, notice first of all that if $N$ is a finitely generated $A$-module of fiber codimension at least 2 then we can filter $N$ with successive quotients of type $A/\mathfrak{p}$, where $\mathfrak{p}$ is a prime of fiber codimension at least 2, so ${\operatorname{Ext}}_A^1(N,M) = 0$. If $N$ is not finitely generated and $0 \to M \to E \to N \to 0$ is an exact sequence of $A$-modules, $N'$ is a finitely generated submodule of $N$, and $E'$ is the pullback of $E$ to $N'$, then the sequence $0 \to M \to E' \to N' \to 0$ splits; but because of part [(\[def:suff-deep;1\])]{} of the definition we have ${\operatorname{Hom}}_A(N',M) = 0$, hence there is a unique copy of $N'$ inside $E'$. Hence there is a unique copy of $N$ inside $E$, and the sequence splits.
This completes the proof of the Proposition. Sufficiently deep actions ------------------------- Let $G$ be a diagonalizable group scheme of finite type over $S$; all the actions will be upon noetherian separated regular algebraic spaces, as in our setup. The ring of representations ${\mathrm{R}}G = \mathbb{Z} \widehat G$ is a finitely generated flat [Cohen–Macaulay]{} $\mathbb{Z}$-algebra, and each of the fibers of the morphism ${\operatorname{Spec}}{\mathrm{R}}G \to {\operatorname{Spec}}\mathbb{Z}$ has pure dimension equal to the dimension of $G$. We say that the action of $G$ on $X$ is *sufficiently deep* when the $\mathrm{R}G$-module ${\operatorname{K}_{*}}(X,G)$ is sufficiently deep. \[thm:suff-deep-&gt;\] Suppose that a diagonalizable group scheme of finite type $G$ acts on a noetherian regular separated algebraic space $X$, and that the action is sufficiently deep. Then the restriction homomorphism ${\operatorname{K}_{*}}(X,G) \to {\operatorname{K}_{*}}(X^{G_0},G)$ is injective, and its image is the intersection of the images of the restriction homomorphisms ${\operatorname{K}_{*}}(X^T,G)\to {\operatorname{K}_{*}}(X^{G_0},G)$, where $T$ ranges over all subtori of $G$ of codimension 1. We need some preliminaries. \[lem\] Suppose that $G$ acts on $X$ with stabilizers of constant dimension $s$. Then the support of ${\operatorname{K}_{*}}(X,G)$ as an $\mathrm{R}G$-module has pure fiber dimension $s$, and any associated prime of ${\operatorname{K}_{*}}(X,G)$ has fiber dimension $s$. Suppose first of all that $s$ is 0. Then it follows easily from Thomason’s localization theorem ([@th3]) that the support of ${\operatorname{K}_{*}}(X,G)$ has fiber dimension 0, and from this we see that every associated prime must have fiber dimension 0.
In the general case, we may assume that the action is connected (that is, $X$ is not a nontrivial disjoint union of open invariant subspaces); then there will be a splitting $G = H \times_S T$, where $H$ is a diagonalizable group scheme of finite type acting on $X$ with finite stabilizers, and $T$ is a totally split torus that acts trivially on $X$. In this case $${\operatorname{K}_{*}}(X,G) = {\operatorname{K}_{*}}(X,H)\otimes_\mathbb{Z} {\mathrm{R}}T = {\operatorname{K}_{*}}(X,H)\otimes_{\mathrm{R}H} \mathrm{R}G.$$ The proof is concluded by applying the following lemma. Let $A$ be a flat [Cohen–Macaulay]{} $\mathbb{Z}$-algebra of finite type, $A\to B$ a smooth homomorphism of finite type with fibers of pure dimension $s$. Suppose that $M$ is an $A$-module whose support has fiber dimension 0. Then $M\otimes_A B$ has support of pure fiber dimension $s$, and each of its associated primes has fiber dimension $s$. Since tensor product commutes with taking direct limits and $B$ is flat over $A$, we may assume that $M$ is of finite type over $A$. By an obvious filtration argument, we may assume that $M$ is of the form $A/\mathfrak{p}$, where $\mathfrak{p}$ is a prime ideal of fiber dimension 0. In this case the only associated primes of $M \otimes_A B$ are the generic points of the fiber of ${\operatorname{Spec}}B$ over $\mathfrak{p}$, and this proves the result. \[lem:unmixing\] Suppose that $X$ and $Y$ are algebraic spaces on which $G$ acts with stabilizers of constant dimension respectively $s$ and $t$. If $N$ is an $\mathrm{R}G$-submodule of ${\operatorname{K}_{*}}(Y,G)$ and $t<s$, then there is no nontrivial homomorphism of ${\mathrm{R}}G$-modules from $N$ to ${\operatorname{K}_{*}}(X,G)$. Given such a nontrivial homomorphism $N \to {\operatorname{K}_{*}}(X,G)$, call $I$ its image. The support of $I$ has fiber dimension at most $t$, so there is an associated prime of fiber dimension at most $t$ in ${\operatorname{K}_{*}}(X,G)$, contradicting Lemma \[lem\].
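Before proceeding, it may help to see Lemma \[lem\] in the two extreme cases for $G = \mathbb{G}_{\mathrm{m}}$, so that ${\mathrm{R}}G = \mathbb{Z}[t^{\pm 1}]$ (an illustration only, under the standing assumptions):

```latex
% G = G_m, R G = Z[t^{\pm 1}]:
% (a) trivial action on X, stabilizer dimension s = 1:
{\operatorname{K}_{*}}(X,G)
  = {\operatorname{K}_{*}}(X)\otimes_{\mathbb{Z}}\mathbb{Z}[t^{\pm 1}],
\qquad \operatorname{supp} = \operatorname{Spec}\mathbb{Z}[t^{\pm 1}]
\quad (\text{fiber dimension } 1);
% (b) translation action on X = G, s = 0:
{\operatorname{K}_{*}}(G,G) \simeq {\operatorname{K}_{*}}(S),
\quad t \text{ acting through } t \mapsto 1,
\qquad \operatorname{supp} = V(t-1)
\quad (\text{fiber dimension } 0).
```

In both cases the support, and every associated prime, has pure fiber dimension equal to the dimension of the stabilizers, as the Lemma predicts.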
Now we prove Theorem \[thm:suff-deep-&gt;\]. Let $n$ be the dimension of $G$, so that $X_n = X^{G_0}$. First of all, let us show that the natural projection $${\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(X_n,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1}, G)} {\operatorname{K}_{*}}(X_{n-1},G)$$ is an isomorphism. This will be achieved by showing that for all $s$ with $0 \le s \le n-1$ the natural projection ${\operatorname{K}_{*}}(X,G) \longrightarrow P_s$ is an isomorphism, where we have set $$\begin{aligned} P_s = {}&{\operatorname{K}_{*}}(X_n,G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1}, G)} {\operatorname{K}_{*}}(X_{n-1},G) \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{n-1,n-2},G)}\\ &\quad\cdots \displaytimes_{{\operatorname{K}_{*}}({\mathrm{N}}_{s+1,s},G)} {\operatorname{K}_{*}}(X_s,G). \end{aligned}$$ For $s = 0$ this is our main Theorem \[thm:maintheorem\], so we proceed by induction. If $s < n-1$ and the projection above is an isomorphism, we have an exact sequence $$0 \longrightarrow {\operatorname{K}_{*}}(X,G) \longrightarrow P_{s+1} \times {\operatorname{K}_{*}}(X_s,G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_{s+1,s},G),$$ where the last arrow is the difference of the composition of the projection $P_{s+1} \to {\operatorname{K}_{*}}(X_{s+1}, G)$ with the pullback ${\operatorname{K}_{*}}(X_{s+1}, G) \to {\operatorname{K}_{*}}({\mathrm{N}}_{s+1,s},G)$, and of the specialization homomorphism ${\operatorname{K}_{*}}(X_s,G) \to {\operatorname{K}_{*}}({\mathrm{N}}_{s+1,s},G)$. If we call $N$ the image of this difference, we have an exact sequence of $\mathrm{R}G$-modules $$0 \longrightarrow {\operatorname{K}_{*}}(X,G) \longrightarrow P_{s+1} \times {\operatorname{K}_{*}}(X_s,G) \longrightarrow N \to 0,$$ and the support of $N$ is of fiber dimension at most $s \le n-2$ by Lemma \[lem\], hence it is of fiber codimension at least 2. 
It follows from the fact that ${\operatorname{K}_{*}}(X,G)$ is sufficiently deep and from Proposition [\[prop:suff-deep\] (\[prop:suff-deep;4\])]{} that this sequence splits. From the fact that ${\operatorname{K}_{*}}(X,G)$ has only associated primes of fiber codimension 0 we see that the pullback map ${\operatorname{K}_{*}}(X_s,G) \to N$ must be injective, and from Lemma \[lem:unmixing\] that a copy of $N$ living inside $ P_{s+1} \times {\operatorname{K}_{*}}(X_s,G)$ must in fact be contained in ${\operatorname{K}_{*}}(X_s,G)$; this implies that the projection ${\operatorname{K}_{*}}(X,G) \to P_{s+1}$ is an isomorphism. So the projection $${\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(X_n,G) \times_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1}, G)} {\operatorname{K}_{*}}(X_{n-1},G)$$ is an isomorphism. Then the kernel of the specialization homomorphism from ${\operatorname{K}_{*}}(X_{n-1},G)$ to ${\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1}, G)$ maps injectively into ${\operatorname{K}_{*}}(X,G)$, so it must be 0, again because ${\operatorname{K}_{*}}(X,G)$ has only associated primes of fiber codimension 0. Furthermore $X_{n-1}$ is the disjoint union of the $X_{n-1}^T$ when $T$ ranges over all subtori of $G$ of codimension 1, and similarly for ${\mathrm{N}}_{n,n-1}$. On the other hand, because of our main theorem applied to the action of $G$ on $X^T$, we have the natural isomorphism ${\operatorname{K}_{*}}(X^T,G) \to {\operatorname{K}_{*}}(X^{G_0}, G) \times_{{\operatorname{K}_{*}}({\mathrm{N}}_{n,n-1}^T,G)}{\operatorname{K}_{*}}(X_{n-1}^T,G)$, and this completes the proof of Theorem \[thm:suff-deep-&gt;\].
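For instance, when $\mathbb{G}_{\mathrm{m}}$ acts on $X = \mathbb{P}^1$ in the standard way, the fixed locus is $X^{G_0} = \{0, \infty\}$, and the injectivity statement recovers the classical description of the equivariant K-theory of $\mathbb{P}^1$ (recorded here only as an illustration):

```latex
% K-theory of P^1 with the standard G_m-action, read off from
% the two fixed points; t denotes the standard character.
{\operatorname{K}_{*}}(\mathbb{P}^1, \mathbb{G}_{\mathrm{m}})
\simeq \bigl\{\, (f,g) \in
  \bigl({\operatorname{K}_{*}}(k)\otimes\mathbb{Z}[t^{\pm 1}]\bigr)^{2}
  \;:\; f \equiv g \pmod{(1-t)} \,\bigr\}.
```

Here the congruence modulo $(1-t)$ expresses the condition that the two restrictions agree in ${\operatorname{K}_{*}}(k)\otimes{\mathrm{R}}D$, where $D$ is the (trivial) kernel of the action.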
Actions with enough limits are sufficiently deep ------------------------------------------------ For the rest of this section $S$ will be the spectrum of a field $k$, $G$ is a diagonalizable group scheme of finite type acting on a smooth separated scheme $X$ of finite type over $k$; call $M$ the group of one-parameter subgroups ${\mathbb{G}_{\mathrm{m},k}} \to G$ of $G$. There is a natural Zariski topology on $M \simeq \mathbb{Z}^n$ in which the closed subsets are the loci of zeros of sets of polynomials in the symmetric algebra $\mathop{\mathrm{Sym}}_{\mathbb{Z}}^{\bullet} M^\vee$; we refer to this as the *Zariski topology on $M$*. We will denote, as usual, by $G_0$ the toral component of $G$. If $n$ is the dimension of $G$, then $X_n = X^{G_0}$. Furthermore, if we choose a splitting $G \simeq G_0 \times G/G_0$ we obtain an isomorphism of rings $${\operatorname{K}_{*}}(X^{G_0},G) \simeq {\operatorname{K}_{*}}(X^{G_0}, G/G_0) \otimes {\mathrm{R}}G_0$$ ([@th4 Lemme 5.6]). \[def:admitslimits\] Suppose that $k$ is algebraically closed. Consider a one parameter subgroup $H = {\mathbb{G}_{\mathrm{m},k}} \to G$, with the corresponding action of ${\mathbb{G}_{\mathrm{m},k}}$ on $X$. We say that this one parameter subgroup *admits limits* if for every closed point $x \in X$, the morphism ${\mathbb{G}_{\mathrm{m},k}} \to X$ which sends $t$ to $tx$ extends to a morphism $\mathbb{A}^1 \to X$. The image of $0 \in \mathbb{A}^1(k)$ in $X$ is called *the limit of $x$ for the one parameter subgroup $H$*. We say that the action of $G$ on $X$ *admits enough limits* if the one parameter subgroups of $G$ which admit limits form a Zariski-dense subset of $M$. If $k$ is not algebraically closed, then we say that the action admits enough limits if the action obtained after base change to the algebraic closure of $k$ does.
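Two one-dimensional examples (added for illustration) may clarify the definition; here $G = {\mathbb{G}_{\mathrm{m},k}}$, $M \simeq \mathbb{Z}$, and the one-parameter subgroups are $t \mapsto t^n$:

```latex
% Which one-parameter subgroups of G_m admit limits?
\begin{align*}
X = \mathbb{A}^1,\ t\cdot x = tx: &\quad
  t \mapsto t^n \text{ admits limits if and only if } n \ge 0;\\
&\quad \{\, n \ge 0 \,\} \text{ is Zariski-dense in } \mathbb{Z},
  \text{ so the action has enough limits.}\\
X = \mathbb{G}_{\mathrm{m}},\ t\cdot x = tx: &\quad
  \text{only } n = 0 \text{ admits limits;}\\
&\quad \{0\} \text{ is Zariski-closed, so the action does not have
  enough limits.}
\end{align*}
```

Note that any infinite subset of $\mathbb{Z}$ is Zariski-dense, since a nonzero polynomial has only finitely many integer zeros.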
One can show that the locus of one-parameter subgroups of $G$ admitting limits is defined by linear inequalities, so the definition can be stated in more down-to-earth terms (we are grateful to the referee for pointing this out). The notion of action with enough limits is a weakening of the notion of *filtrable* action due to Brion. More precisely, an action has enough limits if it satisfies condition (i) in [@brion1 Definition 3.2]; there is also a condition (ii) on closures of strata. The main case when the action admits enough limits is when $X$ is complete; in this case of course every one-parameter subgroup admits limits. Another case is when the action of $G_0 = {\mathbb{G}_{\mathrm{m},k}}^n$ on $X$ extends to an action of the multiplicative monoid $\mathbb{A}^n$. Also, we give a characterization of toric varieties with enough limits in Proposition \[charact\]. \[thm:-&gt;suff-deep\] Suppose that a diagonalizable group scheme of finite type $G$ over a perfect field $k$ acts on a smooth separated scheme of finite type $X$ over $k$. If the action of $G$ admits enough limits, then it is sufficiently deep. By putting this together with Theorem \[thm:suff-deep-&gt;\] we get the following. \[cor:maincorollary\] Suppose that a diagonalizable group scheme of finite type $G$ over a perfect field $k$ acts on a smooth separated scheme of finite type $X$ over $k$. If the action of $G$ admits enough limits, then the restriction homomorphism $${\operatorname{K}_{*}}(X,G) \longrightarrow {\operatorname{K}_{*}}(X^{G_0},G)$$ is injective, and its image is the intersection of the images of the restriction homomorphisms ${\operatorname{K}_{*}}(X^T,G)\to {\operatorname{K}_{*}}(X^{G_0},G)$, where $T$ ranges over all subtori of $G$ of codimension 1. For example, consider the following situation, completely analogous to the one considered in [@brion2 Corollary 7] and in [@gkmp].
Let $G$ be an $n$-dimensional torus acting on a smooth complete variety $X$ over an algebraically closed field $k$. Assume that the fixed point set $X^{G_0} = X_n$ is zero-dimensional, while $X_{n-1}$ is 1-dimensional. Set $X^{G_0} = \{x_1, \dots, x_t\}$, and call $P_1$, …, $P_r$ the closures in $X$ of the connected components of $X_{n-1}$. Then each $P_j$ is isomorphic to $\mathbb{P}^1$, and contains precisely two of the fixed points, say $x_i$ and $x_{i'}$. Call $D_j$ the kernel of the action of $G$ on $P_j$; then the image of the restriction homomorphism $${\operatorname{K}_{*}}(P_j,G) \to {\operatorname{K}_{*}}(x_i,G) \times {\operatorname{K}_{*}}(x_{i'},G) = {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}G \times {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}G$$ consists of the pairs of elements $$( \alpha, \beta) \in {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}G \times {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}G$$ whose images in ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}D_j$ coincide (this follows immediately from Theorem \[thm:maintheorem\]). From this and from Corollary \[cor:maincorollary\] we get the following. \[cor:generic\] In the situation above, the restriction map $${\operatorname{K}_{*}}(X,G) \longrightarrow \prod_{i=1}^t {\operatorname{K}_{*}}(x_i,G) = \prod_{i=1}^t {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}G$$ is injective. Its image consists of all elements $(\alpha_i)$ such that if $x_i$ and $x_{i'}$ are contained in some $P_j$, then the restrictions of $\alpha_i$ and $\alpha_{i'}$ to ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}D_j$ coincide. Theorem \[thm:-&gt;suff-deep\] is proved in the next subsection. Białynicki-Birula stratifications --------------------------------- Let us prove Theorem \[thm:-&gt;suff-deep\]: like in [@brion1], the idea is to use a Białynicki-Birula stratification. We will prove the following.
\[prop:describesufflimits\] Suppose that a diagonalizable group scheme of finite type $G$ over a perfect field $k$ acts with enough limits on a smooth separated scheme of finite type $X$ over $k$. Then the ${\mathrm{R}}G$-module ${\operatorname{K}_{*}}(X,G)$ is obtained by taking finitely many successive extensions of ${\mathrm{R}}G$-modules of the form $N \otimes_\mathbb{Z}{\mathrm{R}}G$, where $N$ is an abelian group. Theorem \[thm:-&gt;suff-deep\] follows from this, in view of Proposition \[prop:suff-deep\], parts [(\[prop:suff-deep;1\])]{} and [(\[prop:suff-deep;3\])]{}. Let us prove the Proposition. First of all, let us assume that $k$ is algebraically closed. We will only consider closed points, and write $X$ for $X(k)$. It is a standard fact that the one-parameter subgroups $H = {\mathbb{G}_{\mathrm{m},k}} \to G_0$ with the property that $X^{G_0} = X^H$ form a nonempty Zariski open subset of $M$, so we can choose one with this property that admits limits. There is a (discontinuous) function $X \to X^{G_0}$ sending each point to its limit. Let $T_1$, …, $T_s$ be the connected components of $X^{G_0}$; call $E_j$ the inverse image of $T_j$ in $X$, and $\pi_j\colon E_j \to T_j$ the restriction of the limit function. The following is a fundamental result of Białynicki-Birula. \[thm:BB\] In the situation above: (1) The $E_j$ are smooth locally closed $G$-invariant subvarieties of $X$. (2) The functions $\pi_j\colon E_j \to T_j$ are $G$-invariant morphisms. (3) For each $j$ there is a representation $V_j$ of $H$ and an open cover $\{U_\alpha\}$ of $T_j$, together with equivariant isomorphisms $\pi_j^{-1}(U_\alpha) \simeq U_\alpha \times V_j$, such that the restriction $\pi_j\colon \pi_j^{-1}(U_\alpha) \to U_\alpha$ corresponds to the projection $U_\alpha \times V_j \to U_\alpha$. (4) If $x$ is a point of $T_j$, then the normal space to $E_j$ in $X$ at $x$ is the sum of the negative eigenspaces in the tangent space to $X$ at $x$ under the action of $H$.
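In the simplest example (an illustration only), take $X = \mathbb{P}^1$ with $H = {\mathbb{G}_{\mathrm{m},k}}$ acting by $t\cdot[x_0:x_1] = [tx_0:x_1]$; the fixed components are $T_1 = \{[0:1]\}$ and $T_2 = \{[1:0]\}$, and the decomposition reads:

```latex
% Bialynicki-Birula strata of P^1 for t.[x_0:x_1] = [t x_0 : x_1]:
\begin{align*}
E_1 &= \mathbb{P}^1 \smallsetminus \{[1:0]\} \simeq \mathbb{A}^1,
  & \pi_1 &\colon E_1 \to T_1 \text{ constant},
  & V_1 &= \mathbb{A}^1 \text{ with weight } 1;\\
E_2 &= \{[1:0]\},
  & \pi_2 &= \operatorname{id},
  & V_2 &= 0.
\end{align*}
```

The normal space to $E_2$ in $X$ at $[1:0]$ is the tangent line there, on which $H$ acts with weight $-1$: this is the negative eigenspace, as in the last statement of the theorem.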
Of course in part [(\[thm:BB;1\])]{} we may take $V_j$ to be the normal bundle to $E_j$ in $X$ at any point of $T_j$. This theorem is proved in [@b-b]; we should notice that the condition that $X$ is covered by open invariant quasiaffine subsets is always verified, thanks to a result of Sumihiro ([@sumihiro]). Now we remove the hypothesis that $k$ is algebraically closed: here is the variant of Białynicki-Birula’s theorem that we need. \[thm:BBvariant\] Suppose that a diagonalizable group scheme of finite type $G$ over a perfect field $k$ acts with enough limits on a smooth separated scheme of finite type $X$ over $k$. Let $Y_1$, …, $Y_r$ be the connected components of $X^{G_0}$; there exists a stratification $X_1$, …, $X_r$ of $X$ in locally closed $G$-invariant smooth subvarieties, together with $G$-equivariant morphisms $\rho_i\colon X_i \to Y_i$, such that: (1) $X_i$ contains $Y_i$ for all $i$, and the restriction of $\rho_i$ to $Y_i$ is the identity. (2) If $U$ is an open affine subset of $Y_i$ and ${\mathrm{N}}_U$ is the restriction of the normal bundle ${\mathrm{N}}_{Y_i}{X_i}$ to $U$, then there is a $G$-equivariant isomorphism $\rho_i^{-1}(U) \simeq {\mathrm{N}}_U$ of schemes over $U$. (3) In the eigenspace decomposition of the restriction of ${\mathrm{N}}_{X_i}{X}$ to $Y_i$, the subbundle corresponding to the trivial character of $G_0$ is 0. Let $\overline X = X \times_{{\operatorname{Spec}}k} {\operatorname{Spec}}\overline k$, and call $\Gamma$ the Galois group of $\overline k$ over $k$. Choose a one parameter subgroup $H = {\mathbb{G}_{\mathrm{m},k}} \to G$ as before. Let $T_1$, …, $T_s$ be the connected components of $\overline X^{G_0}$, $\pi_j\colon E_j \to T_j$ as in Białynicki-Birula’s theorem.
The $Y_i$ correspond to the orbits of the action of $\Gamma$ on $\{T_1, \ldots, T_s\}$; obviously $\Gamma$ also permutes the $E_j$, so we let $X_1$, …, $X_r$ be the smooth subvarieties of $X$ corresponding to the orbits of $\Gamma$ on $\{E_1, \ldots, E_s\}$. The $\pi_j\colon E_j \to T_j$ descend to morphisms $X_i \to Y_i$. Properties [(\[thm:BBvariant;1\])]{} and [(\[thm:BBvariant;3\])]{} are obviously satisfied, because they are satisfied after passing to $\overline k$. We have to prove [(\[thm:BBvariant;2\])]{}. Let $E$ be the inverse image of $U$ in $X_i$, $I$ the ideal of $U$ in the algebra $k[E]$. Because $U$ is affine, $I/I^2$ is a projective $k[U]$-module, and $G$ is diagonalizable, the projection $I \to I/I^2$ has a $k[U]$-linear and $G$-equivariant section $I/I^2 \to I$. This induces a $G$-equivariant morphism of $U$-schemes $E \to {\mathrm{N}}_U$, sending $U$ to the 0-section, whose differential at the zero section is the identity (notice that ${\mathrm{N}}_U$ is also the restriction to $U$ of the relative tangent bundle ${\rm T}_{E/U}$). We want to show that this is an isomorphism; it is enough to check that this is true on the fibers, so let $V$ be one of the fibers of ${\mathrm{N}}_U$ over some point $p \in U$. According to the local triviality statement in the theorem of Białynicki-Birula, the fiber of $E$ over $p$ is $H$-equivariantly isomorphic to $V$; hence an application of the following elementary lemma concludes the proof of Theorem \[thm:BBvariant\]. Suppose that ${\mathbb{G}_{\mathrm{m},k}}$ acts linearly with positive weights on a finite dimensional vector space $V$ over a field $k$. If $f \colon V \to V$ is an equivariant polynomial map whose differential at the origin is an isomorphism, then $f$ is also an isomorphism. First of all, notice that because of the positivity of the weights, we have $f(0) = 0$. By composing $f$ with the inverse of the differential of $f$ at the origin, we may assume that the differential of $f$ at the origin is the identity.
Consider the eigenspace decomposition $V = V_1 \oplus V_2 \oplus \dots \oplus V_r$, where ${\mathbb{G}_{\mathrm{m},k}}$ acts on $V_i$ with a character $t \mapsto t^{m_i}$, and $0 < m_1 < m_2 < \dots < m_r$. Choose a basis of eigenvectors of $V$; we will use groups of coordinates $x_1$, …, $x_r$, where $x_i$ represents the group of elements of the dual basis corresponding to basis elements in $V_i$, so that the action of ${\mathbb{G}_{\mathrm{m},k}}$ is described by $t\cdot(x_1, \dots, x_r) = (t^{m_1}x_1, \dots, t^{m_r}x_r)$. Then it is a simple matter to verify that $f$ is given by a formula of the type $$f(x_1, \dots, x_r) = \bigl(x_1, x_2 + f_2(x_1), x_3+ f_3(x_1, x_2), \dots, x_r + f_r(x_1, \dots, x_{r-1})\bigr)$$ and that every polynomial map of this form is an isomorphism. Now let us show that Theorem \[thm:BBvariant\] implies Proposition \[prop:describesufflimits\]. First of all, Theorem [\[thm:BBvariant\] (\[thm:BBvariant;2\])]{} and a standard argument with the localization sequence imply that the pullback map ${\operatorname{K}_{*}}(Y_i) \otimes_\mathbb{Z} \mathrm{R}G = {\operatorname{K}_{*}}(Y_i,G) \to {\operatorname{K}_{*}}(X_i,G)$ is an isomorphism. Now, let us order the strata $X_1$, …, $X_r$ by decreasing dimension, and let us set $U_i = X_1 \cup \ldots \cup X_i$. Clearly $X_i$ is closed in $U_i$. We claim that $X_i$ is K-rigid in $U_i$ for all $i$. In fact, it is enough to show that the restriction of $\lambda_{-1}({\mathrm{N}}_{X_i}X)$ to $Y_i$ is not a zero-divisor, and this follows from Lemma \[lem:criterionK-rigid\] and Theorem [\[thm:BBvariant\] (\[thm:BBvariant;3\])]{}.
Then by Proposition [\[prop:K-rigid-&gt;\] (\[prop:K-rigid-&gt;;1\])]{} we have an exact sequence $$0 \longrightarrow {\operatorname{K}_{*}}(X_i,G) \longrightarrow {\operatorname{K}_{*}}(U_i, G) \longrightarrow {\operatorname{K}_{*}}(U_{i-1}, G) \longrightarrow 0;$$ so each ${\operatorname{K}_{*}}(U_i, G)$ is obtained by finitely many successive extensions of ${\mathrm{R}}G$-modules of the form $N \otimes_\mathbb{Z}{\mathrm{R}}G$, and $X = U_r$. This concludes the proof of Proposition \[prop:describesufflimits\], and of Theorem \[thm:-&gt;suff-deep\]. Comparison with ordinary K-theory for torus actions with enough limits ---------------------------------------------------------------------- Assume that $T$ is a totally split torus over a perfect field $k$, acting on a separated scheme $X$ of finite type over $k$. We write $T$ instead of $G$ for conformity with the standard notation. The following is a consequence of Proposition \[prop:describesufflimits\]. \[cor:vanishtor\] If $X$ is smooth and the action has enough limits, we have $${\operatorname{Tor}}_p^{{\mathrm{R}}T}\bigl({\operatorname{K}_{*}}(X, T), \mathbb{Z}\bigr) = 0 \text{ for all $p > 0$.}$$ The interest of this comes from the following result of Merkurjev. \[thm:merk\] There is a homology spectral sequence $$E^2_{pq} = {\operatorname{Tor}}_p^{{\mathrm{R}}T}\bigl(\mathbb{Z}, {\operatorname{K}_{q}}(X, T)\bigr) \Longrightarrow {\operatorname{K}_{p+q}}(X)$$ such that the edge homomorphisms $$\mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{q}}(X,T) \longrightarrow {\operatorname{K}_{q}}(X)$$ are induced by the forgetful homomorphism ${\operatorname{K}_{*}}(X,T) \to {\operatorname{K}_{*}}(X)$. In particular the ring homomorphism $\mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{0}}(X,T) \to{\operatorname{K}_{0}}(X)$ is an isomorphism. 
Furthermore, if $X$ is smooth and projective we have $E^2_{pq} = 0$ for all $p>0$, so the homomorphism $\mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{*}}(X,T) \to {\operatorname{K}_{*}}(X)$ is an isomorphism. More generally, Merkurjev produces his spectral sequence for actions of reductive groups whose fundamental group is torsion-free. From Corollary \[cor:vanishtor\] we get the following extension of Merkurjev’s degeneracy result. \[thm:enough-&gt;degenerates\] Suppose that $T$ is a totally split torus over a perfect field $k$, acting with enough limits on a smooth separated scheme $X$ of finite type over $k$. Then the forgetful homomorphism ${\operatorname{K}_{*}}(X, T) \to {\operatorname{K}_{*}}(X)$ induces an isomorphism $$\mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{*}}(X, T) {\overset{\sim}\longrightarrow}{\operatorname{K}_{*}}(X).$$ The K-theory of smooth toric varieties {#sec:toric} ====================================== Our reference for the theory of toric varieties will be [@fultontoric]. In this section we take $T$ to be a totally split torus over a fixed field $k$, $$N = {\operatorname{Hom}}({\mathbb{G}_{\mathrm{m},k}}, T) = \widehat T^\vee$$ its lattice of one-parameter subgroups, $\Delta$ a fan in $N \otimes\mathbb{R}$, $X = X(\Delta)$ the associated toric variety. We will always assume that $X$ is smooth; this is equivalent to saying that every cone in $\Delta$ is generated by a subset of a basis of $N$. We will give two different descriptions of the equivariant K-theory ring of $X$, one as a subring of a product of representation rings, and the second by generators and relations, analogously to what has been done for equivariant cohomology in [@bdcp].
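As a first example of the smoothness condition (recorded here for illustration), take $N = \mathbb{Z}^2$ with standard basis $e_1$, $e_2$, and let $\Delta$ be the fan of $\mathbb{P}^2$, whose maximal cones are

```latex
% Maximal cones of the fan of P^2; Delta also contains all
% their faces (three rays and the zero cone).
\sigma_0 = \langle e_1,\, e_2 \rangle, \qquad
\sigma_1 = \langle e_2,\, -e_1 - e_2 \rangle, \qquad
\sigma_2 = \langle -e_1 - e_2,\, e_1 \rangle.
```

Each cone is generated by a subset of a basis of $N$ (for instance $\{e_2, -e_1-e_2\}$ is again a basis of $\mathbb{Z}^2$), so $X(\Delta) = \mathbb{P}^2$ is smooth.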
The equivariant K-theory ring as a subring of a product of rings of representations ----------------------------------------------------------------------------------- There is one orbit $O_\sigma$ of $T$ on $X$ for each cone $\sigma\in \Delta$, containing a canonical rational point $x_\sigma \in O_\sigma(k)$. The dimension of $O_\sigma$ is the codimension ${\operatorname{codim}}\sigma {\overset{\mathrm{def}} =}\dim T - \dim \sigma$, and the stabilizer of any of its geometric points is the subtorus $T_\sigma \subseteq T$ whose group of one-parameter subgroups is precisely the subgroup ${\langle \sigma \rangle} = \sigma + (-\sigma)\subseteq N$; the dimension of $T_\sigma$ is equal to the dimension of $\sigma$ (see [@fultontoric 3.1]). Hence $X_s$ is the disjoint union of the orbits $O_\sigma = T/T_\sigma$ with $\dim\sigma = s$. Given a cone $\sigma$ in $N\otimes \mathbb{R}$, we denote by $\partial\sigma$ the union of all of its faces of codimension 1. Since $${\operatorname{K}_{*}}(O_\sigma, T) = {\operatorname{K}_{*}}(T/T_\sigma, T) = {\operatorname{K}_{*}}({\operatorname{Spec}}k, T_\sigma) = {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma$$ we have that $$\prod_{s} {\operatorname{K}_{*}}(X_s, T) = \prod_{\sigma \in \Delta} {\operatorname{K}_{*}}(O_\sigma, T)\ = \prod_{\sigma \in \Delta} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma.$$ Fix a positive integer $s$.
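To make the product above concrete, here is the smallest example; the choice $X = \mathbb{P}^1$ with $T = \mathbb{G}_{\mathrm{m}}$ and the identification ${\mathrm{R}}T = \mathbb{Z}[t^{\pm 1}]$ are our own illustrative assumptions, spelled out in the notation just introduced.

```latex
% X = P^1, T = G_m, N = Z. The fan has the cones {0}, sigma_+ = R_{>=0}
% and sigma_- = R_{<=0}. The open orbit O_0 = T has trivial stabilizer
% T_{0} = 1, while the two fixed points O_{sigma_+}, O_{sigma_-} have
% stabilizer T_{sigma_+} = T_{sigma_-} = T. Hence
\prod_{\sigma \in \Delta} \operatorname{K}_*(O_\sigma, T)
  \;=\; \operatorname{K}_*(k)
  \times \bigl(\operatorname{K}_*(k) \otimes \mathbb{Z}[t^{\pm 1}]\bigr)^2 .
```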
Then there is a canonical isomorphism $${\mathrm{N}}_{s, s-1} \simeq \coprod_{\substack{\sigma\in \Delta\\ \dim \sigma = s}} \coprod_{\tau \in \partial\sigma} O_\tau.$$ Furthermore, for each pair $\sigma$, $\tau$ such that $\sigma$ has dimension $s$, $\tau$ has dimension $s-1$, and $\tau$ is a face of $\sigma$, the composition of the specialization homomorphism $$\begin{aligned} {\operatorname{Sp}}_{X,s}^{s-1} \colon {}&{\operatorname{K}_{*}}(X_{s-1},T) = \prod_{\substack{\tau \in \Delta\\ \dim \tau = {s-1}}}{\operatorname{K}_{*}}(O_\tau,T)\\ &\qquad\longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_{s, s-1},T) = \prod_{\substack{\sigma\in \Delta\\ \dim \sigma = s}}\prod_{\tau \in \partial\sigma}{\operatorname{K}_{*}}(O_\tau,T), \end{aligned}$$ with the projection $${\operatorname{pr}}_{\sigma, \tau}\colon \prod_{\substack{\sigma\in \Delta\\ \dim \sigma = s}} \prod_{\tau \in \partial\sigma}{\operatorname{K}_{*}}(O_\tau,T) \longrightarrow {\operatorname{K}_{*}}(O_\tau,T)$$ is the projection $${\operatorname{pr}}_\tau \colon \prod_{\substack{\tau \in \Delta\\ \dim \tau = {s-1}}}{\operatorname{K}_{*}}(O_\tau,T) \longrightarrow {\operatorname{K}_{*}}(O_\tau,T).$$ We use the same notation as in [@fultontoric]; in particular, for each cone $\sigma$ of the fan of $X$, we denote by $U_\sigma$ the corresponding affine open subscheme of $X$. First of all, assume that the fan $\Delta$ consists of all the faces of an $s$-dimensional cone $\sigma$. Call $B$ a part of a basis of $N$ that spans $\sigma$: we have an action of $T_\sigma$ on the $k$-vector space $V_\sigma$ generated by $B$, by letting each 1-parameter subgroup $\mathbb{G}_\mathrm{m} \to T$ in $B$ act by multiplication on the corresponding line in $V_\sigma$, and an equivariant embedding $T_\sigma \subseteq V_\sigma$.
Then $X =U_\sigma$ is $T$-equivariantly isomorphic to the $T$-equivariant vector bundle $$T \times^{T_\sigma} V_\sigma = (T \times V_\sigma)/T_\sigma \longrightarrow O_\sigma = T/T_\sigma$$ in such a way that the zero section corresponds to $O_\sigma \subseteq U_\sigma$. Since $X_s = O_\sigma$ and $U_\sigma$ is a vector bundle over $O_\sigma$, we get a canonical isomorphism $U_\sigma \simeq {\mathrm{N}}_s$, and from this a canonical isomorphism $${\mathrm{N}}_{s, s-1} \simeq (U_\sigma)_{s-1} = \coprod_{\tau\in \partial\sigma}O_\tau.$$ It follows that the deformation to the normal bundle of $O_\sigma$ in $U_\sigma$ is also isomorphic to the product $U_\sigma \times_k \mathbb{P}^1$, and from this we get the second part of the statement. In the general case we have $X_s = \coprod_{\dim\sigma = s}O_\sigma$, and if $\sigma$ is a cone of dimension $s$ in $\Delta$, the intersection of $X_s$ with $U_\sigma$ is precisely $O_\sigma$. From this we get the first part of the statement in general. The second part follows by applying the compatibility of specializations to the morphism of deformation to the normal bundle induced by the equivariant morphism $\coprod_{\dim\sigma = s}U_\sigma \to X_s$. Using this lemma together with Theorem \[thm:maintheorem\] we get that ${\operatorname{K}_{*}}(X, T)$ is the subring of $$\prod_{\sigma\in \Delta} {\operatorname{K}_{*}}(O_\sigma,T) = \prod_{\sigma\in \Delta} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma$$ consisting of elements $(a_\sigma)$ with the property that the restriction of $a_\sigma \in {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\sigma$ to $ {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\tau$ coincides with $a_\tau \in {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\tau$ every time $\tau$ is a face of codimension 1 in $\sigma$. 
Since every face of a cone is contained in a face of codimension 1, this can also be described as the subring of $\prod_{\sigma} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma$ consisting of elements $(a_\sigma)$ with the property that the restriction of $a_\sigma \in {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\sigma$ to $ {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\tau$ coincides with $a_\tau \in {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\tau$ every time $\tau$ is a face of $\sigma$. But every cone in $\Delta$ is contained in a maximal cone in $\Delta$, so we get the following description of the equivariant K-theory of a smooth toric variety. \[thm:describetoric\] If $X(\Delta)$ is a smooth toric variety associated with a fan $\Delta$ in $N \otimes\mathbb{R}$, there is an injective homomorphism of ${\mathrm{R}}T$-algebras $${\operatorname{K}_{*}}\bigl(X(\Delta),T\bigr) {\hookrightarrow}\prod_{\sigma \in \Delta_{\max}}{\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma,$$ where $\Delta_{\max}$ is the set of maximal cones in $\Delta$. An element $(a_\sigma) \in \prod_{\sigma}{\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma$ is in the image of this homomorphism if and only if for any two maximal cones $\sigma_1$ and $\sigma_2$, the restrictions of $a_{\sigma_1}$ and $a_{\sigma_2}$ to ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1\cap \sigma_2}$ coincide. This description of the ring ${\operatorname{K}_{*}}(X, T)$ is analogous to the description of its equivariant cohomology in [@bdcp], and of its equivariant Chow ring in [@brion1 Theorem 5.4]. The multiplicative Stanley–Reisner presentation {#subsec:multSR} ----------------------------------------------- From the description above it is easy to get a presentation of the equivariant K-theory ring of the smooth toric variety $X(\Delta)$, analogous to the Stanley–Reisner presentation of its equivariant cohomology ring obtained in [@bdcp].
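As an illustration of the subring description (a worked example of ours, writing ${\mathrm{R}}T = \mathbb{Z}[t^{\pm 1}]$): for $X = \mathbb{P}^1$ with the standard $\mathbb{G}_{\mathrm{m}}$-action, the two maximal cones $\sigma_\pm$ satisfy $T_{\sigma_\pm} = T$ and $\sigma_+ \cap \sigma_- = \{0\}$, so ${\mathrm{R}}T_{\sigma_+\cap\sigma_-} = \mathbb{Z}$ and the compatibility condition is agreement of augmentations:

```latex
% Restriction to R T_{0} = Z is induced by t -> 1 (augmentation), so
\operatorname{K}_*(\mathbb{P}^1, T) \;\simeq\;
  \bigl\{ (a_+, a_-) \in
    \bigl(\operatorname{K}_*(k) \otimes \mathbb{Z}[t^{\pm 1}]\bigr)^2
    \;\bigm|\; a_+\big|_{t=1} = a_-\big|_{t=1} \bigr\}.
```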
Denote by $\Delta_1$ the subset of $\Delta$ consisting of 1-dimensional cones. We will use the following notation: if $\sigma \in \Delta_{\max}$, call $N_\sigma \subseteq N$ the group of 1-parameter subgroups of $T_\sigma$, so that $\widehat T_\sigma = (N_\sigma)^\vee$. We will use multiplicative notation for $\widehat T_\sigma$. Furthermore, for any $\rho \in \Delta_1$ we call $v_\rho \in N$ the generator for the monoid $\rho \cap N$. For each $\rho \in \Delta_1$ we define an element $u_\rho$ of the product $\prod_{\sigma\in \Delta_{\max}} \widehat T_{\sigma}$, as follows. If $\rho$ is not a face of $\sigma$, we set $(u_\rho)_\sigma = 1$. If $\rho \in \Delta_{1}$ is a face of $\sigma \in \Delta_{\max}$ and $\{\rho = \rho_1, \rho_2, \dots, \rho_t\}$ is the set of 1-dimensional faces of $\sigma$, then, since the variety $X(\Delta)$ is smooth, we have that $v_{\rho_1}$, …, $v_{\rho_t}$ form a basis for $N_\sigma$. If $v^\vee_{\rho_1}$, …, $v^\vee_{\rho_t}$ is the dual basis in $\widehat T_\sigma = N_\sigma^\vee$, we set $u_\rho = v^\vee_{\rho_1}$. Denote by $V_\Delta \subseteq \prod_{\sigma\in \Delta_{\max}} \widehat T_\sigma$ the subgroup consisting of the elements $(x_\sigma)$ with the property that for all $\sigma_1$, $\sigma_2$ in $\Delta_{\max}$ the restrictions of $x_{\sigma_1} \in \widehat T_{\sigma_1}$ and $x_{\sigma_2} \in \widehat T_{\sigma_2}$ to $\widehat T_{\sigma_1 \cap \sigma_2}$ coincide. Then we have the following fact. \[prop:basis\] The elements $u_\rho$ form a basis of $V_\Delta$. The proof is straightforward.
We have the inclusions $$\prod_{\sigma \in \Delta_{\max}} \widehat T_\sigma \subseteq \prod_{\sigma \in \Delta_{\max}} {\mathrm{R}}T_\sigma \subseteq \prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma;$$ because of the description of the ring ${\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr)$ as a subring of $\prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma$ given in Theorem \[thm:describetoric\], we see that we can consider the $u_\rho$ as elements of ${\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr)$. There are some obvious relations that the $u_\rho$ satisfy in $${\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr) \subseteq \prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma.$$ Suppose that $S$ is a subset of $\Delta_1$ not contained in any maximal cone of $\Delta$. Then for all $\sigma$ in $\Delta_{\max}$ there will be some $\rho \in S$ such that $(u_\rho)_\sigma = 1$ in $\widehat T_\sigma$; hence we have the relation $$\prod_{\rho \in S} (u_\rho - 1) = 0 \quad \text{in} \quad {\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr) \subseteq \prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_\sigma.$$ From this we get a homomorphism of ${\operatorname{K}_{*}}(k)$-algebras $$\label{eq:hom} \frac{{\operatorname{K}_{*}}(k)\bigl[x_\rho^{\pm 1}\bigr]} {\bigl(\prod_{\rho \in S} (x_\rho - 1)\bigr)} \longrightarrow {\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr)$$ sending each $x_\rho$ to $u_\rho$, where the $x_\rho$ are indeterminates indexed by the elements $\rho$ of $\Delta_1$, and $S$ varies over the subsets of $\Delta_1$ that are not contained in any maximal cone of $\Delta$. \[thm:SR\] Suppose that $X(\Delta)$ is a smooth toric variety associated with a fan $\Delta$ in $N \otimes\mathbb{R}$. Then the homomorphism (\[eq:hom\]) above is an isomorphism.
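Continuing our illustrative example $X = \mathbb{P}^1$ (our own addition, under the same conventions as above), the Stanley–Reisner presentation takes the following explicit form:

```latex
% For X = P^1: Delta_1 = {rho_+, rho_-}, and the only subset S of Delta_1
% not contained in a maximal cone is {rho_+, rho_-}; so the presentation is
\operatorname{K}_*(\mathbb{P}^1, T) \;\simeq\;
  \operatorname{K}_*(k)\bigl[x_+^{\pm 1}, x_-^{\pm 1}\bigr]
  \big/ \bigl( (x_+ - 1)(x_- - 1) \bigr),
\qquad
  x_+ \longmapsto u_{\rho_+} = (t, 1), \quad
  x_- \longmapsto u_{\rho_-} = (1, t^{-1}).
% Indeed (u_{rho_+} - 1)(u_{rho_-} - 1)
%   = ((t - 1)\cdot 0, \; 0\cdot(t^{-1} - 1)) = 0 componentwise.
```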
First of all, let us show that the $u_\rho$ and their inverses generate $${\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr) \subseteq \prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k) \otimes{\mathrm{R}}T_\sigma.$$ Set $\Delta_{\max} = \{\sigma_1, \dots, \sigma_r\}$, and let $\alpha$ be an element of ${\operatorname{K}_{*}}\bigl(X(\Delta), T\bigr)$; we want to show that $\alpha$ can be expressed as a Laurent polynomial in the $x_\rho$ evaluated in the $u_\rho$. The ring ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1}$ is a ring of Laurent polynomials in the images of the $u_\rho$ with $\rho \subseteq \sigma_1$, so we can find a Laurent polynomial $p_1(x_\rho)$, in which only the $x_\rho$ with $\rho \subseteq \sigma_1$ appear, such that the image of $p_1(u_\rho)$ in ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1}$ equals the image of $\alpha$ in the same ring. By subtracting $p_1(u_\rho)$, we see that we may assume that the projection of $\alpha$ into ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1}$ is zero. Now, let us repeat the procedure for the maximal cone $\sigma_2$: find a Laurent polynomial $p_2(x_\rho)$, in which only the $x_\rho$ with $\rho \subseteq \sigma_2$ appear, such that the image of $p_2(u_\rho)$ in ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_2}$ equals the image of $\alpha$ in the same ring. The key point is that the restriction of $\alpha$ to ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1 \cap \sigma_2}$ is zero, so in fact $p_2(x_\rho)$ can be chosen to contain only the variables $x_\rho$ with $\rho$ not in $\sigma_1$. Hence the restriction of $p_2(u_\rho)$ to ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1}$ is also zero, and after having subtracted $p_2(u_\rho)$ from $\alpha$ we may assume that the restriction of $\alpha$ to both ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_1}$ and ${\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T_{\sigma_2}$ is zero.
We can continue this process for the remaining cones $\sigma_3$, …, $\sigma_r$; at the end all the projections will be zero, and therefore $\alpha$ will be zero too. Now we have to show that the kernel of the homomorphism $$\label{eq:homo2} {\operatorname{K}_{*}}(k)\bigl[x_\rho^{\pm 1}\bigr] \longrightarrow \prod_{\sigma \in \Delta_{\max}} {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\sigma$$ sending each $x_\rho$ to $u_\rho$ equals the ideal $\bigl(\prod_{\rho \in S} (x_\rho - 1)\bigr)$, where $S$ varies over all subsets of $\Delta_1$ not contained in any maximal cone. The kernel of the projection $${\operatorname{K}_{*}}(k)\bigl[x_\rho^{\pm 1}\bigr] \longrightarrow {\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_\sigma$$ equals the ideal $I_\sigma$ generated by the $x_\rho - 1$, where $\rho$ varies over the set of cones in $\Delta_1$ not contained in $\sigma$; hence the kernel of the homomorphism (\[eq:homo2\]) is the intersection of the $I_\sigma$. The result is then a consequence of the following lemma. Let $R$ be a (not necessarily commutative) ring, $\{x_\rho\}_{\rho \in E}$ a finite set of central indeterminates; consider the ring of Laurent polynomials $R\bigl[x_\rho^{\pm 1} \bigr]$. Let $A_1$, …, $A_r$ be subsets of $E$; for each $j = 1$, …, $r$ call $I_j$ the ideal of $R\bigl[x_\rho^{\pm 1} \bigr]$ generated by the elements $x_\rho - 1$ with $\rho \in A_j$. Then the intersection $I_1 \cap \dots \cap I_r$ is the ideal of $R\bigl[x_\rho^{\pm 1} \bigr]$ generated by the elements $\prod_{\rho \in S}(x_\rho - 1)$, where $S$ varies over all subsets of $E$ that meet each $A_j$. When each $A_j$ contains a single element this is a particular case of Lemma \[lem:int=prod\]. The obvious common generalization should also hold. We proceed by induction on $r$; the case $r=1$ is clear.
In general, take $p \in I_1 \cap \dots \cap I_r \subseteq I_2 \cap \dots \cap I_r$; by induction hypothesis, we can write $$p = \sum_S \biggl(\prod_{\rho \in S}\ (x_\rho - 1) \biggr) q_S,$$ where $S$ varies over all subsets of $E$ whose intersection with each of the $A_2$, …, $A_r$ is not empty. We can split the sum as $$p = \sum_{S\cap A_1 \neq \emptyset} \biggl(\prod_{\rho \in S} (x_\rho - 1) \biggr) q_S + \sum_{S\cap A_1 = \emptyset}\biggl(\prod_{\rho \in S}(x_\rho - 1) \biggr) q_S;$$ the first summand is in $I_1 \cap \dots \cap I_r$ and is of the desired form, so we may subtract it from $p$ and suppose that $p$ is of the type $$p = \sum_{S\cap A_1 = \emptyset}\biggl(\prod_{\rho \in S}(x_\rho - 1) \biggr) q_S.$$ Now, consider the ring $R\bigl[x_\rho^{\pm 1} \bigr]_{\rho \notin A_1}$ of Laurent polynomials not involving the variables in $A_1$; it is a subring of $R\bigl[x_\rho^{\pm 1} \bigr]$, and there is also a retraction $$\pi \colon R\bigl[x_\rho^{\pm 1} \bigr] \longrightarrow R\bigl[x_\rho^{\pm 1}\bigr]_{\rho \notin A_1},$$ sending each $x_\rho$ with $\rho \in A_1$ to $1$, whose kernel is precisely $I_1$. The elements $\prod_{\rho \in S}(x_\rho - 1)$ are in $R\bigl[x_\rho^{\pm 1}\bigr]_{\rho \notin A_1}$, and $$\pi p = \sum_{S\cap A_1 = \emptyset}\biggl(\prod_{\rho \in S}(x_\rho - 1) \biggr) \pi q_S = 0 \in R\bigl[x_\rho^{\pm 1}\bigr]_{\rho \notin A_1}$$ so we can write $$p = \sum_{S\cap A_1 = \emptyset}\biggl(\prod_{\rho \in S}(x_\rho - 1) \biggr) (q_S - \pi q_S).$$ Then we write each $q_S - \pi q_S \in I_1$ as a linear combination of the polynomials $x_\rho - 1$ with $\rho \in A_1$; this concludes the proofs of the lemma and of the theorem. Ordinary K-theory of smooth toric varieties ------------------------------------------- From Merkurjev’s theorem (\[thm:merk\]) we get that ${\operatorname{K}_{0}}(X) = \mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{0}}(X, T)$. 
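A sanity check of the lemma in the smallest nontrivial case (an example of our own, not taken from the text):

```latex
% E = {1, 2}, A_1 = {1}, A_2 = {2}. The subsets S of E meeting both
% A_1 and A_2 must contain 1 and 2, so S = E, and the lemma asserts
(x_1 - 1) \,\cap\, (x_2 - 1)
  \;=\; \bigl( (x_1 - 1)(x_2 - 1) \bigr)
  \quad\text{in } R\bigl[x_1^{\pm 1}, x_2^{\pm 1}\bigr].
```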
If the Merkurjev spectral sequence degenerates then we also have ${\operatorname{K}_{*}}(X) = \mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{*}}(X, T)$; this gives a way to compute the whole K-theory ring of $X$. In general, the spectral sequence will not degenerate, and the ring ${\operatorname{K}_{*}}(X)$ tends to be rather complicated (for example, when $X = T$). When the toric variety has enough limits we can apply Theorem \[thm:enough-&gt;degenerates\]. Using the description of closures of orbits given in [@fultontoric 3.1], one shows that a point $x\in X\bigl(\,\overline k\,\bigr)$ lying in an orbit $O_\tau$ has a limit under the one-parameter subgroup corresponding to an element $v \in N$ if and only if $v$ lies in the subset $$\bigcup_{\sigma \in {\operatorname{Star}}\tau} (\sigma + {\langle \tau \rangle}) \subseteq N \otimes \mathbb{R},$$ where ${\langle \tau \rangle}$ denotes the subvector space $\tau + (-\tau) \subseteq N \otimes \mathbb{R}$, and ${\operatorname{Star}}\tau$ is the set of cones in $\Delta$ containing $\tau$ as a face. From this we obtain the following. \[charact\] $X$ has enough limits if and only if the subset $$\bigcap_{\tau\in \Delta}\, \bigcup_{\sigma \in {\operatorname{Star}}\tau} (\sigma + {\langle \tau \rangle}) \subseteq N \otimes \mathbb{R}$$ has nonempty interior.
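Two small computations with this criterion (our own worked examples, using the fan of $\mathbb{P}^1$ as before):

```latex
% X = T itself, Delta = {0}: the only tau is 0 and Star 0 = {0}, so
\bigcap_{\tau\in\Delta}\, \bigcup_{\sigma \in \operatorname{Star}\tau}
  (\sigma + \langle\tau\rangle) \;=\; \{0\} \subseteq N \otimes \mathbb{R},
% which has empty interior: T does not have enough limits.
% X = P^1: tau = 0 gives sigma_- \cup \{0\} \cup sigma_+ = R, and
% tau = sigma_{+-} gives sigma_{+-} + <sigma_{+-}> = R; the intersection
% is all of R, which has nonempty interior: P^1 has enough limits.
```

In particular every proper toric variety has enough limits, since limits of one-parameter subgroups always exist on a proper variety by the valuative criterion.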
\[rmk:describecombcomplete\] If $X = X(\Delta)$ is a smooth toric variety with enough limits, the K-theory ring of $X$ can be described in a slightly more efficient fashion: there is an injective homomorphism of ${\mathrm{R}}T$-algebras $${\operatorname{K}_{*}}(X,T) {\hookrightarrow}\prod_{\substack{\sigma \in \Delta\\ \dim \sigma = \dim T}}{\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T,$$ and an element $(a_\sigma) \in \prod_{\sigma}{\operatorname{K}_{*}}(k) \otimes {\mathrm{R}}T$ is in the image of this homomorphism if and only if for any two *adjacent* maximal cones $\sigma_1$ and $\sigma_2$, the restrictions of $a_{\sigma_1}$ and $a_{\sigma_2}$ to ${\operatorname{K}_{*}}(k)\otimes {\mathrm{R}}T_{\sigma_1\cap \sigma_2}$ coincide. \[thm:combcomplete-&gt;\] If $X$ is a smooth toric variety with enough limits, then ${\operatorname{K}_{0}}(X,T)$ is a projective module over ${\mathrm{R}}T$ of rank equal to the number of maximal cones in its fan; furthermore the natural ring homomorphism ${\operatorname{K}_{*}}(k) \otimes {\operatorname{K}_{0}}(X,T) \to {\operatorname{K}_{*}}(X,T)$ is an isomorphism. In particular we have $${\operatorname{Tor}}_p^{{\mathrm{R}}T}\bigl({\operatorname{K}_{*}}(X, T), \mathbb{Z}\bigr) = 0 \text{ for all $p > 0$;}$$ so from Merkurjev’s theorem (\[thm:merk\]) we get the following. Let $X$ be a smooth toric variety with enough limits. The natural homomorphism of rings $$\mathbb{Z}\otimes_{{\mathrm{R}}T} {\operatorname{K}_{*}}(X, T) \longrightarrow {\operatorname{K}_{*}}(X)$$ is an isomorphism of ${\mathrm{R}}T$-algebras. ${\operatorname{K}_{0}}(X)$ is a free abelian group of rank equal to the number of maximal cones in $\Delta$. The natural homomorphism ${\operatorname{K}_{*}}(k) \otimes {\operatorname{K}_{0}}(X) \to {\operatorname{K}_{*}}(X)$ is an isomorphism. Suppose that the base field is the field $\mathbb{C}$ of complex numbers.
The Merkurjev spectral sequence is an analogue of the Eilenberg–Moore spectral sequence ([@em]) $$E_2^{p,q} = {\operatorname{Tor}}_{p,q}^{H^*(BT, \mathbb{Z})}\bigl(H^*_T(X,\mathbb{Z}), \mathbb{Z}\bigr) \Longrightarrow {\operatorname{H}}^{p+q}(X, \mathbb{Z}).$$ Then [@bbfk] contains a description of the fans of the simplicial toric varieties for which this spectral sequence degenerates after tensoring with $\mathbb{Q}$. Presumably there should be a similar description for the case considered here. The refined decomposition theorem {#sec:decomposition} ================================= The main result of [@vevi] shows that if $G$ is an algebraic group acting with finite stabilizers on a noetherian regular algebraic space $X$ over a field, the equivariant K-theory ring of $X$, after inverting certain primes, splits as a direct product of rings related with the K-theory of certain fixed points subsets. For actions of diagonalizable groups it is not hard to extend this decomposition to the case when the stabilizers have constant dimension. So, in the general case when we do not assume anything about the dimension of the stabilizers, this theorem gives a description of the K-theory of each stratum $X_s$; it should clearly be possible to mix this with Theorem \[thm:maintheorem\] to give a result that expresses ${\operatorname{K}_{*}}(X,G)$, after inverting certain primes, as a fibered product. This is carried out in this section. The material in this section is organized as follows. We first extend the main result of [@vevi] over an arbitrary noetherian separated base scheme $S$ (Theorem \[thm:refined0\]) by giving a decomposition theorem in the case of an action with finite stabilizers of a diagonalizable group scheme $G$ of finite type over $S$ on a noetherian regular separated algebraic space $X$ over $S$. 
Next, we deduce from this a decomposition theorem in the case where the action of $G$ on $X$ has stabilizers of fixed constant dimension (Theorem \[thm:refinedconstant\]). Finally, we combine the analysis carried out in Section 4, and culminating with Theorem \[thm:maintheorem\], together with Theorem \[thm:refinedconstant\] to prove a general decomposition theorem (Theorem \[thm:refineddecomposition\]) where no restriction is imposed on the stabilizers. Actions with finite stabilizers ------------------------------- Here we recall the main result of [@vevi] for actions of diagonalizable groups, extending it over any noetherian separated base scheme $S$. Suppose that $G$ is a diagonalizable group scheme of finite type acting with finite stabilizers on a noetherian regular separated algebraic space over a noetherian separated scheme $S$. A diagonalizable group scheme of finite type $\sigma$ over $S$ is called *dual cyclic* if its Cartier dual is finite cyclic, that is, if $\sigma$ is isomorphic to a group scheme of the form $\boldsymbol{\mu}_{n,S}$ for some positive integer $n$. A subgroup scheme $\sigma\subseteq G$ is called *essential* if it is dual cyclic, and $X^\sigma \neq \emptyset$. There are only finitely many essential subgroups of $G$; we will fix a positive integer $N$ which is divisible by the least common multiple of their orders. Suppose that $\sigma$ is a dual cyclic group of order $n$. The ring of representations ${\mathrm{R}}\sigma$ is of the form $\mathbb{Z}[t]/(t^n - 1)$, where $t$ corresponds to a generator of the group of characters $\widehat\sigma$. Denote by ${\mathrm{\widetilde R}}\sigma$ the quotient of ${\mathrm{R}}\sigma$ corresponding to the quotient $${\mathrm{R}}\sigma = \mathbb{Z}[t]/(t^n - 1) \twoheadrightarrow \mathbb{Z}[t]/\bigl(\Phi_n(t)\bigr),$$ where $\Phi_n$ is the $n{^\text{th}}$ cyclotomic polynomial. This quotient ${\mathrm{\widetilde R}}\sigma$ is independent of the choice of a generator for $\widehat\sigma$. 
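Since the quotient ${\mathrm{\widetilde R}}\sigma$ plays a central role in what follows, here are its two smallest instances, worked out by us from the definition just given:

```latex
% n = 2: Phi_2(t) = t + 1, so
\widetilde{\mathrm{R}}\boldsymbol{\mu}_2
  = \mathbb{Z}[t]/(t + 1) \simeq \mathbb{Z}, \qquad t \longmapsto -1;
% n = 4: Phi_4(t) = t^2 + 1, so
\widetilde{\mathrm{R}}\boldsymbol{\mu}_4
  = \mathbb{Z}[t]/(t^2 + 1) \simeq \mathbb{Z}[i].
```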
We have a canonical homomorphism ${\mathrm{R}}G \to {\mathrm{R}}\sigma \twoheadrightarrow {\mathrm{\widetilde R}}\sigma$ induced by the embedding $\sigma \subseteq G$. We also define a multiplicative system $$\mathrm{S}_\sigma \subseteq {\mathrm{R}}G$$ as follows: an element of ${\mathrm{R}}G$ is in $\mathrm{S}_\sigma$ if its image in ${\mathrm{\widetilde R}}\sigma$ is a power of $N$. For any ${\mathrm{R}}G$-module $M$, we define the *$\sigma$-localization* $M_\sigma$ of $M$ to be $\mathrm{S}_\sigma^{-1} M$. Consider the $\sigma$-localization $${\operatorname{K}_{*}}(X^\sigma,G)_\sigma = \mathrm{S}_\sigma^{-1} {\operatorname{K}_{*}}(X^\sigma,G)$$ of the ${\mathrm{R}}G$-algebra ${\operatorname{K}_{*}}(X^\sigma,G)$. The tensor product ${\operatorname{K}_{*}}(X^\sigma,G)_\sigma \otimes \mathbb{Q}$ is the localization $$\bigl({\operatorname{K}_{*}}(X^\sigma,G)\otimes \mathbb{Q}\bigr)_{\mathfrak{m}_\sigma}$$ of the ${\mathrm{R}}\sigma$-algebra ${\operatorname{K}_{*}}(X^\sigma, G)\otimes\mathbb{Q}$ at the maximal ideal $$\mathfrak{m}_\sigma = \ker({\mathrm{R}}\sigma\otimes\mathbb{Q} \twoheadrightarrow {\mathrm{\widetilde R}}\sigma \otimes \mathbb{Q}).$$ We are particularly interested in the $\sigma$-localization when $\sigma$ is the trivial subgroup of $G$; in this case we denote it by ${\operatorname{K}_{*}}(X,G){_{\operatorname{geom}}}$, and call it *the geometric equivariant K-theory of $X$*. The localization homomorphism $${\operatorname{K}_{*}}(X,G)\otimes \mathbb{Z}[1/N] \longrightarrow {\operatorname{K}_{*}}(X,G){_{\operatorname{geom}}}$$ is surjective, and its kernel can be described as follows. Consider the kernel $\mathfrak{p} = \ker {\operatorname{rk}}$ of the localized rank homomorphism $${\operatorname{rk}}\colon {\operatorname{K}_{*}}(X,G)\otimes\mathbb{Z}[1/N] \longrightarrow \mathbb{Z}[1/N];$$ then the power $\mathfrak{p}^k$ is independent of $k$ if $k$ is large, and this power coincides with the kernel of the localization homomorphism. 
For each essential subgroup $\sigma\subseteq G$, consider the compositions $${\operatorname{loc}}_\sigma \colon {\operatorname{K}_{*}}(X,G)\otimes \mathbb{Z}[1/N] \longrightarrow {\operatorname{K}_{*}}(X^\sigma,G)\otimes\mathbb{Z}[1/N] \longrightarrow {\operatorname{K}_{*}}(X^\sigma,G)_\sigma,$$ where the first arrow is a restriction homomorphism, and the second one is the localization. There is also a homomorphism of ${\mathrm{R}}G$-algebras ${\operatorname{K}_{*}}(X^\sigma, G) \to {\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}\otimes {\mathrm{\widetilde R}}\sigma$, defined as the composition $${\operatorname{K}_{*}}(X^\sigma, G) \longrightarrow {\operatorname{K}_{*}}(X^\sigma, G\times\sigma) \simeq {\operatorname{K}_{*}}(X^\sigma, G)\otimes {\mathrm{R}}\sigma \longrightarrow {\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}\otimes{\mathrm{\widetilde R}}\sigma,$$ where the first morphism is induced by the multiplication $G \times \sigma \to G$, the second one is a natural isomorphism coming from the fact that $\sigma$ acts trivially on $X^\sigma$ ([@vevi Lemma 2.7]), and the third one is obtained from the localization homomorphism ${\operatorname{K}_{*}}(X^\sigma, G) \to {\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}$ and the projection ${\mathrm{R}}G \to {\mathrm{\widetilde R}}\sigma$. 
Then the homomorphism ${\operatorname{K}_{*}}(X^\sigma, G) \to {\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}\otimes {\mathrm{\widetilde R}}\sigma$ factors through ${\operatorname{K}_{*}}(X^\sigma, G)_\sigma$ ([@vevi Lemma 2.8]), inducing a homomorphism $$\theta_\sigma\colon {\operatorname{K}_{*}}(X^\sigma, G)_\sigma \longrightarrow{\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}\otimes{\mathrm{\widetilde R}}\sigma.$$ \[thm:refined0\] \[thm:refined0;1\] There are finitely many essential subgroup schemes in $G$, and the homomorphism $$\prod_\sigma {\operatorname{loc}}_\sigma\colon {\operatorname{K}_{*}}(X,G)\otimes \mathbb{Z}[1/N] \longrightarrow \prod_\sigma {\operatorname{K}_{*}}(X^\sigma,G)_\sigma,$$ where the product runs over all the essential subgroup schemes of $G$, is an isomorphism. \[thm:refined0;2\] The homomorphism $$\theta_\sigma\colon {\operatorname{K}_{*}}(X^\sigma, G)_\sigma \longrightarrow {\operatorname{K}_{*}}(X^\sigma, G){_{\operatorname{geom}}}\otimes{\mathrm{\widetilde R}}\sigma$$ is an isomorphism of ${\mathrm{R}}G$-algebras. If the base scheme $S$ is the spectrum of a field, this is a particular case of the main theorem of [@vevi]. If $G$ is a torus, the proof of this statement given in [@vevi] goes through without changes, because it only relies on Thomason’s generic slice theorem for torus actions ([@th2 Proposition 4.10]). In the general case, choose an embedding $G{\hookrightarrow}T$ into some totally split torus $T$ over $S$, and consider the quotient space $$Y {\overset{\mathrm{def}} =}X\times ^GT = (X \times T)/G$$ by the customary diagonal action of $G$; this exists as an algebraic space thanks to a result of Artin ([@lmb Corollaire 10.4]). The same argument as in the beginning of Section 5.1 of [@vevi] shows that $Y$ is separated.
Now observe that if $\sigma \subseteq T$ is an essential subgroup relative to the action of $T$ on $Y$, we have $Y^\sigma =\emptyset $ unless $\sigma \subseteq G$ is an essential subgroup; hence the least common multiples of the orders of all essential subgroups are the same for the action of $G$ on $X$ and the action of $T$ on $Y$. Also, if $\sigma \subseteq G$ is an essential subgroup we have $Y^\sigma =X^\sigma \times ^GT$, and therefore, by Morita equivalence ([@th5 Proposition 6.2]), we get an isomorphism $${\operatorname{K}_{*}}(Y^\sigma ,T)\simeq {\operatorname{K}_{*}}(X^\sigma ,G)$$ which is an isomorphism of ${\mathrm{R}}T$-algebras, if we view ${\operatorname{K}_{*}}(X^\sigma ,G)$ as an ${\mathrm{R}}T$-algebra via the restriction homomorphism ${\mathrm{R}}T \to {\mathrm{R}}G$. Moreover, $\mathrm{S}_\sigma ^T\subseteq \mathrm{R}(T)$ is exactly the preimage of $\mathrm{S}_\sigma ^G\subseteq \mathrm{R}(G)$ under the natural surjection ${\mathrm{R}}T\to {\mathrm{R}}G$; therefore we have compatible $\sigma $-localized Morita isomorphisms $$\left(\mathrm{S}_\sigma ^T\right)^{-1}{\operatorname{K}_{*}}(Y^\sigma,T)\simeq \left(\mathrm{S}_\sigma ^G\right)^{-1}{\operatorname{K}_{*}}(X^\sigma,G)$$ and (for $\sigma$ equal to the trivial subgroup) $${\operatorname{K}_{*}}(Y^\sigma ,T)_{\mathrm{geom}}\simeq {\operatorname{K}_{*}}(X^\sigma ,G)_{\mathrm{geom}};$$ hence the theorem for the action of $G$ on $X$ follows from the theorem for the action of $T$ on $Y$. Actions with stabilizers of constant dimension ---------------------------------------------- From the theorem on actions with finite stabilizers we can easily get a decomposition result when we assume that the stabilizers have constant dimension. Assume that $G$ is a diagonalizable group scheme of finite type over $S$, acting on a noetherian regular separated algebraic space $X$ over $S$ with stabilizers of constant dimension equal to $s$. 
A diagonalizable subgroup scheme $\sigma\subseteq G$ is *dual semicyclic* if $\sigma/\sigma_0$ is dual cyclic, where $\sigma_0$ is the toral component of $\sigma$. The *order* of a dual semicyclic group $\sigma$ is by definition equal to the order of $\sigma/\sigma_0$. Equivalently, $\sigma\subseteq G$ is dual semicyclic if it is isomorphic to ${\mathbb{G}_{\mathrm{m},S}}^r \times \boldsymbol{\mu}_{n,S}$ for some $r \ge 0$ and $n > 0$. A subgroup scheme $\sigma \subseteq G$ is called *essential* if it is dual semicyclic and $s$-dimensional, and $X^\sigma \neq \emptyset$. There are finitely many subtori $T_j \subseteq G$ of dimension $s$ with $X^{T_j} \neq \emptyset$, and $X$ is the disjoint union of the $X^{T_j}$. The toral part of an essential subgroup of $G$ coincides with one of the $T_j$; hence there are only finitely many essential subgroups of $G$. We fix a positive integer $N$ which is divisible by the least common multiple of the orders of the essential subgroups of $G$. For each dual semicyclic subgroup $\sigma \subseteq G$, we define a multiplicative system $$\mathrm{S}_\sigma {\overset{\mathrm{def}} =}\mathrm{S}_{\sigma/ \sigma_0} \subseteq {\mathrm{R}}(G/\sigma_0) \subseteq {\mathrm{R}}G$$ as the set of those elements of ${\mathrm{R}}(G/\sigma_0)$ whose image in ${\mathrm{\widetilde R}}(\sigma/\sigma_0)$ is a power of $N$. If $M$ is a module over ${\mathrm{R}}G$, we define, as before, the $\sigma$-localization of $M$ to be $M_\sigma = \mathrm{S}_\sigma^{-1}M$.
If $\sigma\subseteq G$ is an essential subgroup, we can choose a splitting $G \simeq (G/\sigma_0) \times \sigma_0$; according to [@th4 Lemme 5.6] this splitting induces an isomorphism $${\operatorname{K}_{*}}(X^\sigma, G) \simeq {\operatorname{K}_{*}}(X^\sigma, G/\sigma_0) \otimes {\mathrm{R}}\sigma_0$$ and also an isomorphism of $\sigma$-localizations $${\operatorname{K}_{*}}(X^\sigma, G)_\sigma \simeq {\operatorname{K}_{*}}(X^\sigma, G/\sigma_0)_{\sigma/\sigma_0} \otimes {\mathrm{R}}\sigma_0.$$ Fix one of the $T_j$, and choose a splitting $G \simeq G/T_j \times T_j$. We have a commutative diagram $$\xymatrix{ {}{\operatorname{K}_{*}}(X^{T_j}, G)\otimes \mathbb{Z}[1/N]\ar[r] \ar[d]^{\sim} & {}\prod\limits_{\substack{\sigma\text{ essential}\\ \sigma_0 = T_j}} \ar[d]^{\sim} {}{\operatorname{K}_{*}}(X^\sigma, G)_\sigma\\ {}{\operatorname{K}_{*}}(X^{T_j}, G/T_j) \otimes {\mathrm{R}}T_j \otimes \mathbb{Z}[1/N] \ar[r]^-{\sim} & {}\prod\limits_{\substack{\sigma\text{ essential}\\ \sigma_0 = T_j}} {}{\operatorname{K}_{*}}(X^\sigma, G/\sigma_0)_{\sigma/\sigma_0 } \otimes {\mathrm{R}}\sigma_0 }$$ where the two columns are isomorphisms induced by the choice of a splitting $G \simeq G/T_j \times T_j$, and the rows are induced by composing the restriction homomorphism from $X^{T_j}$ to $X^\sigma$ with the localization homomorphism. The bottom row is an isomorphism because of Theorem [\[thm:refined0\] (\[thm:refined0;1\])]{}. Since the product of the restriction homomorphisms $${\operatorname{K}_{*}}(X,G) \longrightarrow \prod_{j} {\operatorname{K}_{*}}(X^{T_j}, G)$$ is an isomorphism, we obtain the following generalization of Theorem \[thm:refined0\]. \[thm:refinedconstant\] Suppose that a diagonalizable group scheme $G$ of finite type over $S$ acts with stabilizers of constant dimension on a noetherian regular separated algebraic space $X$ over $S$.
\[thm:refinedconstant;1\] There are finitely many essential subgroup schemes in $G$, and the homomorphism $$\prod_\sigma {\operatorname{loc}}_\sigma\colon {\operatorname{K}_{*}}(X,G)\otimes \mathbb{Z}[1/N] \longrightarrow \prod_\sigma {\operatorname{K}_{*}}(X^\sigma,G)_\sigma,$$ where the product runs over all the essential subgroup schemes of $G$, is an isomorphism. \[thm:refinedconstant;2\] For any essential subgroup scheme $\sigma\subseteq G$, a choice of a splitting $G \simeq (G/\sigma_0) \times \sigma_0$ gives an isomorphism $${\operatorname{K}_{*}}(X^\sigma, G)_\sigma \longrightarrow {\operatorname{K}_{*}}(X^\sigma, G/\sigma_0){_{\operatorname{geom}}}\otimes{\mathrm{\widetilde R}}\sigma \otimes {\mathrm{R}}\sigma_0.$$ If $s = 0$, then $\sigma_0 = 1$ for each essential subgroup $\sigma\subseteq G$, so there is a unique splitting $G \simeq (G/\sigma_0) \times \sigma_0$, and the isomorphism in [(\[thm:refinedconstant;2\])]{} is canonical. More specializations -------------------- For the refined decomposition theorem we need more specialization homomorphisms. Let a diagonalizable group scheme $G$ of finite type over $S$ act on a noetherian regular separated algebraic space $X$ over $S$, with no restriction on the dimensions of the stabilizers. Given a diagonalizable subgroup scheme $\sigma\subseteq G$, we set $$X^{(\sigma)} = X^{\sigma} \cap X_{\le\dim \sigma}.$$ Equivalently, $X^{(\sigma)} = (X_{\dim \sigma})^{\sigma}$. Obviously $X^{(\sigma)}$ is a locally closed regular subspace of $X$. Let $\sigma$ and $\tau$ be two diagonalizable subgroup schemes of $G$. We say that $\tau$ is *subordinate to $\sigma$*, and we write $\tau\prec \sigma$, if $\tau$ is contained in $\sigma$, and the induced morphism $\tau \to \sigma/\sigma_0$ is surjective. Suppose that $\sigma$ and $\tau$ are diagonalizable subgroup schemes of $G$ of dimension $s$ and $t$ respectively, and that $\tau$ is subordinate to $\sigma$.
Consider the deformation to the normal cone $\mathrm{M}_s \to \mathbb{P}^1_S$ of $X_s$ in $X_{\le s}$, introduced in Subsection \[subsec:specializationsnormal\]. By Proposition \[prop:restrictregularatinfinity\], the restriction $\mathrm{M}_s^{(\tau)} \to \mathbb{P}^1_S$ is regular at infinity, so we can define a specialization homomorphism $${\operatorname{K}_{*}}(X^{(\tau)}, G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_s^{(\tau)}, G).$$ Denote by ${\mathrm{N}}_{\sigma}$ the restriction of ${\mathrm{N}}_s$ to $X^{(\sigma)}$. We define the specialization homomorphism $${\operatorname{Sp}}_{X, \sigma}^\tau \colon {\operatorname{K}_{*}}(X^{(\tau)}, G) \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)$$ as the composition of the homomorphism ${\operatorname{K}_{*}}(X^{(\tau)},G) \to {\operatorname{K}_{*}}({\mathrm{N}}_s^{(\tau)}, G)$ above with the restriction homomorphism ${\operatorname{K}_{*}}({\mathrm{N}}_s^{(\tau)}, G) \to {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)$. We also denote by $${\operatorname{Sp}}_{X, \sigma}^\tau \colon {\operatorname{K}_{*}}(X^{(\tau)}, G)_\tau \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau$$ the $\tau$-localization of this specialization homomorphism. Since $\tau$ is subordinate to $\sigma$, it is easy to see that ${\mathrm{N}}_\sigma^{(\tau)}$ is a union of connected components of ${\mathrm{N}}_s^{(\tau)}$. The general case ---------------- The hypotheses are the same as in the previous subsection: $G$ is a diagonalizable group scheme of finite type over $S$, acting on a noetherian regular separated algebraic space $X$ over $S$. An *essential subgroup of $G$* is a dual semicyclic subgroup scheme $\sigma \subseteq G$ such that $X^{(\sigma)} \neq \emptyset$. A dual semicyclic subgroup scheme of $G$ is essential if and only if it is essential for the action of $G$ on $X_s$ for some $s$; hence there are only finitely many essential subgroups of $G$.
We will fix a positive integer $N$ that is divisible by the orders of all the essential subgroups of $G$. If $\sigma$ is a dual semicyclic subgroup of $G$, we define the multiplicative system $\mathrm{S}_\sigma \subseteq {\mathrm{R}}(G/\sigma_0)\subseteq {\mathrm{R}}G$ as before, as the subset of ${\mathrm{R}}(G/\sigma_0) \subseteq {\mathrm{R}}G$ consisting of elements whose image in ${\mathrm{\widetilde R}}(\sigma/ \sigma_0)$ is a power of $N$. Also, $${\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma {\overset{\mathrm{def}} =}\mathrm{S}_\sigma^{-1}{\operatorname{K}_{*}}(X^{(\sigma)}, G),$$ as before. \[prop:inclusion\] Let $\sigma$ and $\tau$ be two dual semicyclic subgroups of $G$. If $\tau\prec \sigma$, then $\mathrm{S}_\sigma \subseteq \mathrm{S}_\tau$. Consider the commutative diagram of group schemes $$\xymatrix{ G/\tau_0\ar@{->>}[rr] && G/\sigma_0 \\ \tau/\tau_0\ar@{->>}[rr]\ar@{ >->}[u] && {}\sigma/\sigma_0 \; ;\ar@{ >->}[u] }$$ by taking representation rings we get a commutative diagram of rings $$\xymatrix{ {}{\mathrm{R}}(G/\tau_0)\ar@{->>}[d] && {}{\mathrm{R}}(G/\sigma_0)\ar@{->>}[d] \ar@{ >->}[ll]\\ {}{\mathrm{R}}(\tau/\tau_0)\ar@{->>}[d] && {}{\mathrm{R}}(\sigma/\sigma_0)\ar@{->>}[d]\ar@{ >->}[ll]\\ {\mathrm{\widetilde R}}(\tau/\tau_0) && {}{\mathrm{\widetilde R}}(\sigma/\sigma_0)\ar@{.>}[ll] }$$ (without the dotted arrow). But it is easy to see that in fact the composition ${\mathrm{R}}(\sigma/\sigma_0) \to {\mathrm{R}}(\tau/\tau_0) \to {\mathrm{\widetilde R}}(\tau/\tau_0)$ factors through ${\mathrm{\widetilde R}}(\sigma/\sigma_0)$, so in fact the dotted arrow exists; and this proves the claim. Now, consider the restriction $\pi_{\sigma, \tau}\colon {\mathrm{N}}_\sigma^{(\tau)} \to X^{(\sigma)}$ of the projection ${\mathrm{N}}_\sigma \to X^{(\sigma)}$, where $\sigma$ and $\tau$ are dual semicyclic subgroups of $G$, and $\tau$ is subordinate to $\sigma$.
Because of Proposition \[prop:inclusion\], we can consider the composition of the pullback ${\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma \to {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\sigma$ with the natural homomorphism ${\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\sigma \to {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau$ coming from the inclusion $\mathrm{S}_\sigma \subseteq \mathrm{S}_\tau$; we denote this homomorphism by $$\pi_{\sigma, \tau}^* \colon {\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma \longrightarrow {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau.$$ Suppose that $\sigma$ and $\tau$ are dual semicyclic subgroups of $G$ and that $\tau$ is subordinate to $\sigma$. Two elements $a_\sigma \in {\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma$ and $a_\tau \in {\operatorname{K}_{*}}(X^{(\tau)}, G)_\tau$ are *compatible* if $$\pi_{\sigma, \tau}^*a_\sigma = {\operatorname{Sp}}_{X, \sigma}^\tau a_\tau \in {\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau.$$ For each essential dual semicyclic subgroup $\sigma\subseteq G$ we denote by $${\operatorname{loc}}_\sigma \colon {\operatorname{K}_{*}}(X, G)\otimes\mathbb{Z}[1/N] \to {\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma$$ the composition of the restriction homomorphism $${\operatorname{K}_{*}}(X, G) \otimes\mathbb{Z}[1/N] \to {\operatorname{K}_{*}}(X^{(\sigma)},G) \otimes\mathbb{Z}[1/N]$$ with the localization homomorphism $${\operatorname{K}_{*}}(X^{(\sigma)},G) \otimes\mathbb{Z}[1/N] \to {\operatorname{K}_{*}}(X^{(\sigma)},G)_\sigma.$$ The following is the main result of this section.\ \[thm:refineddecomposition\] The ring homomorphism $$\prod_{\sigma}{\operatorname{loc}}_\sigma \colon {\operatorname{K}_{*}}(X, G) \otimes\mathbb{Z}[1/N] \longrightarrow\prod_{\substack{\sigma \subseteq G\\ \sigma \text{\normalfont{} essential}}} {\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma$$ is injective.
Its image consists of the elements $(a_\sigma)$ of $\prod_{\sigma} {\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma$ with the property that if $\sigma$ and $\tau$ are essential, $\tau\prec \sigma$, and $\dim \sigma = \dim \tau + 1$, then $a_{\tau}$ and $a_\sigma$ are compatible. Notice that in the particular case that the action has stabilizers of constant dimension, all essential subgroups of $G$ have the same dimension, and this reduces to Theorem \[thm:refinedconstant\]. Also, if $\sigma$ is an essential subgroup of $G$ then $X^{(\sigma)} = X_{\dim \sigma}^\sigma$, so it follows from Theorem [\[thm:refinedconstant\] (\[thm:refinedconstant;2\])]{} that a splitting $G \simeq (G/\sigma_0) \times \sigma_0$ gives an isomorphism of rings $${\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma \simeq {\operatorname{K}_{*}}(X^{(\sigma)}, G/\sigma_0){_{\operatorname{geom}}}\otimes{\mathrm{\widetilde R}}\sigma \otimes {\mathrm{R}}\sigma_0.$$ However, this isomorphism is not canonical in general, as it depends on the choice of a splitting. To simplify the notation, we will implicitly assume that everything has been tensored with $\mathbb{Z}[1/N]$. We apply Theorem \[thm:maintheorem\] together with Theorem \[thm:refinedconstant\]. According to Theorem \[thm:maintheorem\] we have an injection ${\operatorname{K}_{*}}(X,G)\hookrightarrow \prod_{s}{\operatorname{K}_{*}}(X_{s},G)$ whose image is the subring of sequences $(\alpha_s) \in \prod_{s=0}^n {\operatorname{K}_{*}}(X_s,G)$ with the property that for each $s= 1$, …, $n$ the pullback of $\alpha_s \in {\operatorname{K}_{*}}(X_s,G)$ to ${\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,s-1}, G\bigr)$ coincides with ${\operatorname{Sp}}_{X,s}^{s-1}(\alpha_{s-1}) \in {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s,s-1}, G\bigr)$. 
Moreover, by Theorem \[thm:refinedconstant\], we can decompose further each ${\operatorname{K}_{*}}(X_{s},G)$ as $\prod_{\sigma}{\operatorname{K}_{*}}(X^{(\sigma)},G)_{\sigma}$, where $\sigma$ varies in the (finite) set of essential subgroups of $G$ of dimension $s$. By compatibility of specializations, for any $s\geq 0$, the following diagram is commutative $$\xymatrix{ {\operatorname{K}_{*}}(X_{s-1}, G)\ar[r]^-\sim \ar[d]_{{\operatorname{Sp}}^{s-1}_{X,s}} & \prod\limits_{\substack{\tau \text{ essential}\\ \dim \tau = s-1}}{\operatorname{K}_{*}}(X^{(\tau)}, G)_\tau \ar[d]^{\prod_\tau{\operatorname{Sp}}^\tau_{X,s}}\ar[dr]^{\prod_{\tau, \sigma}{\operatorname{Sp}}^\tau_{X, \sigma}}\\ {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1}, G)\ar[r]^-\sim & \prod\limits_{\substack{\tau \text{ essential}\\ \dim \tau = s-1}}{\operatorname{K}_{*}}({\mathrm{N}}_s^{(\tau)}, G)_\tau\ar[r]^-\phi & \prod\limits_{\substack{\tau \text{ essential}\\ \dim \tau = s-1}}\prod\limits_{\substack{\sigma\succ \tau\\ \dim \sigma = s}}{\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau }$$ where $\phi$ is induced by the obvious pullbacks. On the other hand, the following diagram commutes by definition of $\pi_{\sigma, \tau}^*$ $$\xymatrix{ {\operatorname{K}_{*}}(X_s, G)\ar[r]^-\sim \ar[d]& \prod\limits_{\substack{\sigma \text{ essential}\\ \dim \sigma = s}}{\operatorname{K}_{*}}(X^{(\sigma)}, G)_\sigma \ar[dr]^{\prod_{\sigma, \tau}\pi^*_{\sigma, \tau}}\\ {\operatorname{K}_{*}}({\mathrm{N}}_{s,s-1}, G)\ar[r]^-\sim & \prod\limits_{\substack{\tau \text{ essential}\\ \dim \tau = s-1}}{\operatorname{K}_{*}}({\mathrm{N}}_s^{(\tau)}, G)_\tau\ar[r]^-\phi & \prod\limits_{\substack{\tau \text{ essential}\\ \dim \tau = s-1}}\prod\limits_{\substack{\sigma\succ \tau\\ \dim \sigma = s}}{\operatorname{K}_{*}}({\mathrm{N}}_\sigma^{(\tau)}, G)_\tau }.$$ Then the Theorem will immediately follow if we show that $\phi$ is an isomorphism. This is true because of the following Lemma.
Fix an essential subgroup $\tau$ of dimension $s-1$. Then for any $\sigma \succ \tau$ with $\dim \sigma = s$, the scheme ${\mathrm{N}}_\sigma^{(\tau)}$ is open in ${\mathrm{N}}_{s}^{(\tau)}$; furthermore, ${\mathrm{N}}_{s}^{(\tau)}$ is the disjoint union of the ${\mathrm{N}}_\sigma^{(\tau)}$ for all essential $\sigma$ with $\sigma \succ \tau$, $\dim \sigma = s$. We will show that ${\mathrm{N}}_{s}^{(\tau)}$ is the disjoint union of the ${\mathrm{N}}_\sigma^{(\tau)}$; since each ${\mathrm{N}}_\sigma^{(\tau)}$ is closed in ${\mathrm{N}}_s^{(\tau)}$, and there are only finitely many possible $\sigma$, it follows that each ${\mathrm{N}}_\sigma^{(\tau)}$ is also open in ${\mathrm{N}}_{s}^{(\tau)}$. Let us first observe that if $\sigma$ and $\sigma'$ are essential subgroups in $G$ of dimension $s$ to which $\tau$ is subordinate, and ${\mathrm{N}}_{\sigma}^{(\tau)}\bigcap{\mathrm{N}}_{\sigma'}^{(\tau)} \neq \emptyset$, then $X^{(\sigma)}\bigcap X^{(\sigma')}\neq \emptyset$, therefore $\sigma_{0}$ is equal to $\sigma'_{0}$. But this implies that $\sigma=\sigma'$, since $\sigma$ and $\sigma'$ are both equal to the inverse image in $G$ of the image of $\tau\rightarrow G{/}\sigma_{0} = G{/}\sigma'_{0}$. According to Proposition [\[prop:firststratification\] (\[prop:firststratification;3\])]{}, if $T_1$, …, $T_r$ are the essential $s$-dimensional subtori of $G$, then ${\mathrm{N}}_s$ is the disjoint union of the ${\mathrm{N}}_{T_j}$. Clearly, if $\tau_0$ is not contained in $T_j$, then ${\mathrm{N}}_{T_j}^{(\tau)}$ is empty, so ${\mathrm{N}}_s^{(\tau)}$ is the disjoint union of the ${\mathrm{N}}_T^{(\tau)}$, where $T$ ranges over the essential $s$-dimensional subtori of $G$ with $\tau_0 \subseteq T$. 
But there is a bijective correspondence between $s$-dimensional dual semicyclic subgroups $\sigma \subseteq G$ with $\sigma \succ \tau$ and $s$-dimensional subtori $T \subseteq G$ with $\tau_0 \subseteq T$: in one direction we associate with each $\sigma$ its toral part $\sigma_0$, in the other we associate with each $T$ the subgroup scheme $\tau + T \subseteq G$. The proof is concluded by noticing that if $\sigma$ and $\tau$ are as above, with $\sigma_0 = T$, then ${\mathrm{N}}_T^{(\tau)} = {\mathrm{N}}_\sigma^{(\tau)}$. Addendum: corrections (August 2004) {#addendum-corrections-august-2004 .unnumbered} =================================== Amnon Neeman has noticed a serious error in the proof of Theorem 3.2: the argument given does not yield a uniquely defined specialization map $\mathrm{Sp}_{Y}$, so that in particular compatibility with pullbacks does not hold. This is due to the elementary fact, overlooked in the paper, that if one has a fiber sequence of spectra $$\xymatrix{E \ar[r]^-{f} & E' \ar[r]^-{g} & E'' },$$ then a map $h:E'{\ifinner\to\else\longrightarrow\fi}E$ such that $h\circ f$ is homotopic to zero does induce a map of spectra $p:E''{\ifinner\to\else\longrightarrow\fi}E$, but this map is not unique, as it can be modified by using any map $E[1]\rightarrow E$ (by adding to any given $p$ the composite $E''\rightarrow E[1] \rightarrow E$). Of course if the nullhomotopy $h\circ f \sim 0$ is specified then this singles out a unique map $p:E''\rightarrow E$; but it is not clear to the authors how to choose such a homotopy; thus, they are unable to define a specialization map in the generality claimed in the statement of Theorem 3.2. Fortunately, it is still possible to define the specialization homomorphisms for higher K-theory in a generality that is sufficient for the rest of the paper: thus, all the results in sections 4, 5, 6 and 7, including the two main theorems, still hold unchanged.
Also, section 2, which is independent of section 3 (where specializations were defined), remains unchanged.\ In what follows we will work in the same setup as in the paper, to which we refer for the unexplained notation. If $Y$ is a closed subscheme of a scheme $X$ over a fixed base $S$, we denote by ${\mathrm{M}}^{0}_{Y}X {\ifinner\to\else\longrightarrow\fi}{\mathbb{P}}^{1}_{S}$ the deformation to the normal bundle, as in [@fulton Chapter 5], and in the paper. We denote by $\infty$ the closed subscheme of ${\mathbb{P}}^{1}_{S}$ that is the image of the section at infinity $S {\ifinner\to\else\longrightarrow\fi}{\mathbb{P}}^{1}_{S}$; the inverse image of $\infty$ in ${\mathrm{M}}^{0}_{Y}X$ is the normal bundle ${\mathrm{N}}_{Y}X$. Assume that $X$ is a regular noetherian algebraic space with the action of a diagonalizable group $G$, $Z$ a $G$-invariant regular Cartier divisor with trivial normal bundle, $i \colon Z {\hookrightarrow}X$ and $j \colon X \setminus Z \subseteq X$ the embeddings.
The composition $${\operatorname{K}_{*}}(Z,G) \stackrel{i_{*}} \longrightarrow {\operatorname{K}_{*}}(X, G) \stackrel{i^{*}} \longrightarrow {\operatorname{K}_{*}}(Z,G)$$ is $0$: if we assume that $j^{*} \colon {\operatorname{K}_{*}}(X,G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(X \setminus Z, G)$ is surjective, then we have an exact sequence $$0 {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(Z,G) \stackrel{i_{*}} \longrightarrow {\operatorname{K}_{*}}(X, G) \stackrel{j^{*}} \longrightarrow {\operatorname{K}_{*}}(X \setminus Z, G) {\ifinner\to\else\longrightarrow\fi}0;$$ hence the homomorphism $i^{*}\colon {\operatorname{K}_{*}}(X,G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(Z,G)$ factors through ${\operatorname{K}_{*}}(X \setminus Z, G)$, inducing a specialization ring homomorphism $${\operatorname{Sp}}^{X}_{Z}\colon {\operatorname{K}_{*}}(X \setminus Z, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(Z,G).$$ If we restrict to ${\operatorname{K}_{0}}$, then surjectivity holds, and this is already in [@sga6 X-Appendice, 7.10]. Recall that $X_{s}$ is the regular subscheme of $X$ where the stabilizers have fixed dimension $s$, and that we have set ${\mathrm{M}}_s {\overset{\mathrm{def}} =}{\mathrm{M}}^{0}_{X_{s}}X {\ifinner\to\else\longrightarrow\fi}{\mathbb{P}}^{1}$. Consider the closed embedding ${\mathrm{N}}_{s} \subseteq {\mathrm{M}}_{s}$, whose complement is $X_{\leq s} \times {\mathbb{A}}^{1}$. Looking at the composition $$X_{\leq s} \times {\mathbb{A}}^{1} {\hookrightarrow}{\mathrm{M}}_{s} {\ifinner\to\else\longrightarrow\fi}X_{\leq s} \times {\mathbb{P}}^{1} {\ifinner\to\else\longrightarrow\fi}X_{\leq s}$$ we see that the pullback ${\operatorname{K}_{*}}({\mathrm{M}}_{s}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(X_{\leq s}, G)$ is surjective. 
Consider now the open embedding $X_{\leq t} \subseteq X_{\leq s}$ (for $s\geq t$): the pullback ${\operatorname{K}_{*}}(X_{\leq s}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(X_{\leq t}, G)$ is also surjective, by K-rigidity. From the commutative diagram $$\xymatrix{ {\operatorname{K}_{*}}({\mathrm{M}}_{s}, G) \ar@{->>}[r]\ar[d] & {\operatorname{K}_{*}}(X_{\leq s}, G)\ar@{->>}[d]\\ {\operatorname{K}_{*}}({\mathrm{M}}_{s, \leq t}, G) \ar[r] & {\operatorname{K}_{*}}(X_{\leq t}, G) }$$ we conclude that the restriction ${\operatorname{K}_{*}}({\mathrm{M}}_{s, \leq t}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(X_{\leq t}, G)$ is surjective, so we have an exact sequence $$0 {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, \leq t}, G\bigr) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}({\mathrm{M}}_{s, \leq t}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}(X_{\leq t}, G) {\ifinner\to\else\longrightarrow\fi}0.$$ This allows us to define specialization maps $${\operatorname{Sp}}^{\leq t}_{X,s}{\overset{\mathrm{def}} =}{\operatorname{Sp}}^{{\mathrm{M}}_{s, \leq t}}_{{\mathrm{N}}_{s, \leq t}} \colon {\operatorname{K}_{*}}(X_{\leq t}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, \leq t}, G\bigr).$$ To define ${\operatorname{Sp}}^{t}_{X,s}$ consider the commutative diagram with exact rows $$\xymatrix{ 0\ar[r] & {\operatorname{K}_{*}}(X_{t},G)\ar[r]\ar@{-->}[d] & {\operatorname{K}_{*}}(X_{\leq t}, G)\ar[r]\ar[d]^{{\operatorname{Sp}}^{\leq t}_{X,s}} & {\operatorname{K}_{*}}(X_{\leq t-1}, G)\ar[d]^{{\operatorname{Sp}}^{\leq t-1}_{X,s}}\ar[r] & 0 \\ 0\ar[r] & {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, t},G \bigr)\ar[r] & {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, \leq t},G \bigr) \ar[r] & {\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, \leq t-1},G \bigr)\ar[r] & 0 }$$ (the commutativity of the second square follows easily from functoriality of
pullbacks). \[def:specialization1\] The specialization homomorphism $${\operatorname{Sp}}^{t}_{X,s} \colon {\operatorname{K}_{*}}(X_{t},G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}\bigl({\mathrm{N}}_{s, t},G \bigr)$$ is the unique dotted arrow that fits in the diagram above. These coincide with the usual specialization homomorphisms for ${\operatorname{K}_{0}}$. This is clear for the ${\operatorname{Sp}}^{\leq t}_{X,s}$. For ${\operatorname{Sp}}^{t}_{X,s}$ it follows from the fact that the cartesian diagram $$\xymatrix{ {\mathrm{N}}_{s,t} \ar[r]\ar[d] & {\mathrm{M}}_{s,t} \ar[d]\\ {\mathrm{N}}_{s,\leq t} \ar[r] & {\mathrm{M}}_{s,\leq t} }$$ is Tor-independent, and from the following Lemma. \[lem:tor-independent\] Let $$\xymatrix{ X' \ar[r]^{f'} \ar[d]^{\phi} & Y' \ar[d]^{\psi}\\ X \ar[r]^{f} & Y }$$ be a Tor-independent cartesian square of regular algebraic spaces with an action of $G$, where $f$ is a closed embedding. Then the diagram $$\xymatrix{ {\operatorname{K}_{*}}(X,G) \ar[r]^{f_{*}} \ar[d]^{\phi^{*}} & {\operatorname{K}_{*}}(Y,G) \ar[d]^{\psi^{*}}\\ {\operatorname{K}_{*}}(X',G) \ar[r]^{f'_{*}} & {\operatorname{K}_{*}}(Y',G) }$$ commutes. The proof that starts at the top of page 10 is general enough. Now we have to check compatibility of specializations. Denote by $i$ the inclusion of $X_{t}$ in $X_{\leq t}$ and by $i'$ that of ${\mathrm{N}}_{s,t}$ in ${\mathrm{N}}_{s,\leq t}$. Then the diagram $$\xymatrix{ {\operatorname{K}_{*}}(X_{\leq t}, G) \ar[r]^-{i^{*}} \ar[d]^{{\operatorname{Sp}}^{\leq t}_{X, s}} & {\operatorname{K}_{*}}(X_{t}, G) \ar[d]^{{\operatorname{Sp}}^{t}_{X, s}}\\ {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G) \ar[r]^-{i'^{*}} & {\operatorname{K}_{*}}({\mathrm{N}}_{s,t}, G) }$$ commutes.
By the definition of ${\operatorname{Sp}}^{t}_{X,s}$, we need to check that $$\xymatrix{ {\operatorname{K}_{*}}(X_{\leq t}, G) \ar[r]^-{i^{*}} \ar[d]^{{\operatorname{Sp}}^{\leq t}_{X, s}} & {\operatorname{K}_{*}}(X_{t}, G) \ar[r]^-{i_{*}} \ar[d]^{{\operatorname{Sp}}^{t}_{X, s}} & {\operatorname{K}_{*}}(X_{\leq t}, G) \ar[d]^{{\operatorname{Sp}}^{\leq t}_{X, s}}\\ {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G) \ar[r]^-{i'^{*}} & {\operatorname{K}_{*}}({\mathrm{N}}_{s,t}, G) \ar[r]^-{i'_{*}}& {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G) }$$ commutes. By the projection formula (see [@vevi Proposition A.5]) we see that the group homomorphisms $i_{*}i^{*}$ and $i'_{*}i'^{*}$ are multiplications by $$[i_{*}{\mathcal{O}}_{X_{t}}] \in {\operatorname{K}_{*}}(X_{\leq t}, G) \quad\text{and}\quad [i'_{*}{\mathcal{O}}_{{\mathrm{N}}_{s,t}}] \in {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G)$$ respectively: so we have to prove that the diagram $$\xymatrix@C+15pt{ {\operatorname{K}_{*}}(X_{\leq t}, G) \ar[r]^-{\cdot [i_{*}{\mathcal{O}}_{X_{t}}]} \ar[d]^{{\operatorname{Sp}}^{\leq t}_{X, s}} & {\operatorname{K}_{*}}(X_{\leq t}, G) \ar[d]^{{\operatorname{Sp}}^{\leq t}_{X, s}}\\ {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G) \ar[r]^-{\cdot [i'_{*}{\mathcal{O}}_{{\mathrm{N}}_{s,t}}]} & {\operatorname{K}_{*}}({\mathrm{N}}_{s,\leq t}, G) }$$ commutes.
Since ${\operatorname{Sp}}^{\leq t}_{X, s}$ is a ring homomorphism, this is equivalent to saying that $${\operatorname{Sp}}^{\leq t}_{X, s}[i_{*}{\mathcal{O}}_{X_{t}}] = [i'_{*}{\mathcal{O}}_{{\mathrm{N}}_{s,t}}] \in {\operatorname{K}_{0}}({\mathrm{N}}_{s,\leq t}, G).$$ But $[i_{*}{\mathcal{O}}_{X_{t}}]$ is the restriction of $[i_{*}{\mathcal{O}}_{{\mathrm{M}}_{s,t}}] \in {\operatorname{K}_{0}}({\mathrm{M}}_{s,\leq t}, G)$, so we have to show that the restriction of $[i_{*}{\mathcal{O}}_{{\mathrm{M}}_{s,t}}]$ to ${\mathrm{N}}_{s, \leq t}$ is $[i'_{*}{\mathcal{O}}_{{\mathrm{N}}_{s,t}}]$; and this follows immediately from the fact that the square $$\xymatrix{ {\mathrm{N}}_{s,t} \ar[r]\ar[d] & {\mathrm{M}}_{s,t} \ar[d]\\ {\mathrm{N}}_{s,\leq t} \ar[r] & {\mathrm{M}}_{s,\leq t} }$$ is cartesian and Tor-independent, by Lemma \[lem:tor-independent\]. With this definition, and the compatibility property proved above, everything goes through in Sections 4, 5 and 6. For the theory of Section 7 to work, we need to define specialization maps $${\operatorname{K}_{*}}(X^{(\tau)},G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}({\mathrm{N}}_{s}^{(\tau)}, G)$$ when $\tau$ is a diagonalizable subgroup scheme of $G$ and $s$ is an integer with $s \geq \dim\tau$ (see the bottom of page 39). The cartesian diagram of embeddings $$\xymatrix{ X_s^{\tau} \ar[r]\ar[d] & X^{\tau} \ar[d] \\ X_s \ar[r] & X \\ }$$ yields an embedding of $G$-spaces ${\mathrm{N}}_{X_s^{\tau}}X^{\tau} {\hookrightarrow}{\mathrm{N}}_s$; since $\tau$ acts trivially on ${\mathrm{N}}_{X_s^{\tau}}X^{\tau}$ we get an embedding ${\mathrm{N}}_{X_s^{\tau}}X^{\tau} {\hookrightarrow}{\mathrm{N}}_s^{\tau}$. \[lem:emb-isom\] The embedding ${\mathrm{N}}_{X_s^{\tau}}X^{\tau} {\hookrightarrow}{\mathrm{N}}_s^{\tau}$ is an isomorphism.
Consider the natural embedding of deformations to the normal bundle $${\mathrm{M}}^{0}_{X_{s}^{\tau}}X^{\tau} {\hookrightarrow}({\mathrm{M}}_{s})^{\tau};$$ generically, that is, over ${\mathbb{A}}^{1}$, they coincide. On the other hand, it follows from Proposition 3.6 in the paper that the inverse image of ${\mathbb{A}}^{1}$ in $({\mathrm{M}}_{s})^{\tau}$ is scheme-theoretically dense in $({\mathrm{M}}_{s})^{\tau}$, and this shows that this embedding is an isomorphism. Since the fibers over $\infty$ of ${\mathrm{M}}^{0}_{X_{s}^{\tau}}X^{\tau}$ and $({\mathrm{M}}_{s})^{\tau}$ are ${\mathrm{N}}_{X_s^{\tau}}X^{\tau}$ and ${\mathrm{N}}_s^{\tau}$ respectively, this concludes the proof. Now set $t = \dim \tau$, so that $X^{(\tau)} {\overset{\mathrm{def}} =}X^{\tau}_{t}$. Then we get a specialization map $${\operatorname{Sp}}_{X,s}^{\tau} {\overset{\mathrm{def}} =}{\operatorname{Sp}}_{X^{\tau},s}^{t} \colon {\operatorname{K}_{*}}(X^{\tau}_{t}, G) = {\operatorname{K}_{*}}(X^{(\tau)}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}\bigl(({\mathrm{N}}_{s}^{\tau})_{t}, G\bigr) = {\operatorname{K}_{*}}({\mathrm{N}}_{s}^{(\tau)}, G)$$ that is exactly what we want. This allows us to define the specialization map $${\operatorname{Sp}}_{X, \sigma}^{\tau} \colon {\operatorname{K}_{*}}(X^{\tau}_{t}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}({\mathrm{N}}_{\sigma}^{(\tau)}, G)$$ for any pair of dual semicyclic subgroups $\sigma$ and $\tau$, with $\tau \prec \sigma$, as on page 41 of the paper, by composing ${\operatorname{Sp}}_{X^{\tau},s}^{t}$ with the restriction homomorphism ${\operatorname{K}_{*}}({\mathrm{N}}_{s}^{(\tau)}, G) {\ifinner\to\else\longrightarrow\fi}{\operatorname{K}_{*}}({\mathrm{N}}_{\sigma}^{(\tau)}, G)$.
Note that $X^{(\tau)}=X^{\tau}_{\leq t}$, ${\mathrm{N}}_{s}^{(\tau)}=({\mathrm{N}}^{\tau}_{s})_{\leq t}$, and ${\operatorname{Sp}}_{X,s}^{\tau}$ can also be identified with ${\operatorname{Sp}}_{X^{\tau},s}^{\leq t}$; therefore ${\operatorname{Sp}}_{X,s}^{\tau}$ and ${\operatorname{Sp}}_{X, \sigma}^{\tau}$ are ring homomorphisms. Further corrections {#further-corrections .unnumbered} ------------------- Here we correct a few typos that we have noticed since the publication of the article. In the statement of Proposition 1.1, “scheme” should be replaced by “algebraic space”. There are several typos in the diagrams on p. 42: 1. $\prod {\operatorname{Sp}}^{s-1}_{X,s}$ should be replaced by ${\operatorname{Sp}}^{s-1}_{X,s}$, 2. ${\operatorname{Sp}}^{\tau}_{X,s}$ by $\prod {\operatorname{Sp}}^{\tau}_{X,s}$, 3. ${\mathrm{N}}^{(\tau)}$ by ${\mathrm{N}}^{(\tau)}_{s}$ and 4. ${\operatorname{K}_{*}}({\mathrm{N}}^{(\tau)}_{\sigma},G)$ by ${\operatorname{K}_{*}}({\mathrm{N}}^{(\tau)}_{\sigma},G)_{\tau}$. Finally, in the statement of Lemma 4.9, “linearly independent elements” should read “pairwise linearly independent elements” (we owe this also to Amnon Neeman). Acknowledgments {#acknowledgments-1 .unnumbered} --------------- We are very much indebted to Amnon Neeman, who read our paper carefully and kindly pointed out the problem to us. [B-B-F-K02]{} M.F. Atiyah: *Elliptic operators and compact groups*, Lecture Notes in Mathematics **401**, Springer–Verlag, Berlin–New York (1974). G. Barthel, J.P. Brasselet, K.H. Fieseler, L. Kaup: *Combinatorial intersection cohomology for fans*, Tohoku Math. J. **54** (2002), 1–41. A. Białynicki-Birula, *Some theorems on actions of algebraic groups*, Ann. of Math. **98** (1973), 480–497. E. Bifet, C. De Concini, C. Procesi: *Cohomology of regular embeddings*, Adv. Math. **82** (1990), 1–34. G.E. Bredon, *The free part of a torus action and related numerical equalities*, Duke Math. J. **41** (1974), 843–854. M.
Brion, *Equivariant Chow groups for torus actions*, Transform. Groups **2** (1997), 225–267. , *Equivariant cohomology and equivariant intersection theory*, in *Representation theories and algebraic geometry (Montreal, PQ, 1997)*, Kluwer Acad. Publ., Dordrecht (1998), 1–37. T. Chang, T. Skjelbred, *The topological Schur lemma and related results*, Annals of Mathematics **100** (1974), 307–321. V. Danilov, *The geometry of toric varieties*, Russ. Math. Surveys **33** (1978), 97–154. M. Demazure, *Schémas en groupes*, Lecture Notes in Mathematics **151**, **152** and **153**, Springer-Verlag (1970). D. Edidin, W. Graham, *Equivariant intersection theory*, Invent. Math. **131** (1998), 595–634. S. Eilenberg, J.C. Moore: *Limits and spectral sequences*, Topology **1** (1962), 1–23. W. Fulton, *Intersection Theory*, Springer-Verlag (1993). W. Fulton, *Introduction to toric varieties*, Princeton University Press (1993). M. Goresky, R. Kottwitz, R. MacPherson, *Equivariant cohomology, Koszul duality, and the localization theorem*, Invent. Math. **131** (1998), 25–84. A. Grothendieck, J. Dieudonné, *Éléments de géométrie algébrique IV: Étude locale des schémas et des morphismes de schémas*, Publ. Math. IHES **28** (1966). W.Y. Hsiang, *Cohomology theory of Topological Transformation Groups*, Springer-Verlag (1975). F. Kirwan, *Cohomology of quotients in symplectic and algebraic geometry*, Mathematical Notes **31**, Princeton University Press, Princeton (1984). A.A. Klyachko, *Vector bundles on Demazure models*, Selecta Math. Soviet. **3** (1983/84), 41–44. G. Laumon, L. Moret-Bailly, *Champs Algébriques*, Springer–Verlag (2000). A. Merkurjev, *Comparison of the equivariant and the standard K-theory of algebraic varieties*, Algebra i Analiz **9** (1997), 175–214; translation in St. Petersburg Math. J. **9** (1998), 815–850. I. Rosu, A. Knutson, *Equivariant K-theory and equivariant cohomology*, preprint math.AT/9912088, to appear in Math. Zeit. P. Berthelot, A.
Grothendieck, L. Illusie, *Théorie des intersections et théorème de Riemann–Roch*, Lecture Notes in Mathematics **225**, Springer-Verlag, Berlin (1971). H. Sumihiro, *Equivariant completion II*, J. Math. Kyoto Univ. **15** (1975), 573–605. R.W. Thomason, *Comparison of equivariant algebraic and topological K-theory*, Duke Math. J. **68** (1986), 447–462. , *Lefschetz–Riemann–Roch theorem and coherent trace formula*, Invent. Math. **85** (1986), 515–543. , *Algebraic K-theory of group scheme actions*, in *Algebraic Topology and Algebraic K-theory*, Annals of Mathematics Studies **113**, Princeton University Press, Princeton (1987). , *Une formule de Lefschetz en K-théorie équivariante algébrique*, Duke Math. J. **68** (1992), 447–462. , *Les K-groupes d’un schéma éclaté et une formule d’intersection excédentaire*, Invent. Math. **112** (1993), 195–216. , T. Trobaugh, *Higher algebraic K-theory of schemes and of derived categories*, Grothendieck Festschrift vol. III, Birkhäuser (1990), 247–435. B. Toën, *Notes on G-theory of Deligne–Mumford stacks*, preprint math.AG/9912172. G. Vezzosi, A. Vistoli, *Higher algebraic K-theory of group actions with finite stabilizers*, Duke Math. J. **113** (2002), 1–55. [^1]: Paper published in Invent. Math. **153** (2003), 1–44.\ Partially supported by the University of Bologna, funds for selected research topics.
--- abstract: | In algorithmic graph theory, a classic open question is to determine the complexity of the <span style="font-variant:small-caps;">Maximum Independent Set</span> problem on $P_t$-free graphs, that is, on graphs not containing any induced path on $t$ vertices. So far, polynomial-time algorithms are known only for $t\le 5$ \[Lokshtanov et al., SODA 2014, 570–581\], and an algorithm for $t=6$ was announced recently \[Grzesik et al., arXiv:1707.05491, 2017\]. Here we study the existence of subexponential-time algorithms for the problem: we show that for any $t\ge 1$, there is an algorithm for <span style="font-variant:small-caps;">Maximum Independent Set</span> on $P_t$-free graphs whose running time is subexponential in the number of vertices. Even for the weighted version MWIS, the problem is solvable in $2^{{\ensuremath{\mathcal{O}}}(\sqrt {tn \log n})}$ time on $P_t$-free graphs. For approximation of MIS in broom-free graphs, a similar time bound is proved. <span style="font-variant:small-caps;">Scattered Set</span> is the generalization of <span style="font-variant:small-caps;">Maximum Independent Set</span> where the vertices of the solution are required to be at distance at least $d$ from each other. We give a complete characterization of those graphs $H$ for which <span style="font-variant:small-caps;">$d$-Scattered Set</span> on $H$-free graphs can be solved in time subexponential in the [*size of the input*]{} (that is, in the number of vertices plus the number of edges): - If every component of $H$ is a path, then <span style="font-variant:small-caps;">$d$-Scattered Set</span> on $H$-free graphs with $n$ vertices and $m$ edges can be solved in time $2^{{\ensuremath{\mathcal{O}}}(|V(H)|\sqrt{n+m}\log (n+m))}$, even if $d$ is part of the input.
- Otherwise, assuming the Exponential-Time Hypothesis (ETH), there is no $2^{o(n+m)}$-time algorithm for <span style="font-variant:small-caps;">$d$-Scattered Set</span> for any fixed $d\ge 3$ on $H$-free graphs with $n$ vertices and $m$ edges. author: - 'Gábor Bacsó[^1]' - 'Daniel Lokshtanov[^2]' - 'Dániel Marx[^3]' - 'Marcin Pilipczuk[^4]' - 'Zsolt Tuza[^5]' - 'Erik Jan van Leeuwen [^6]' bibliography: - 'references.bib' title: 'Subexponential-time Algorithms for Maximum Independent Set in $P_t$-free and Broom-free Graphs [^7]' --- Introduction ============ There are some problems in discrete optimization that can be considered fundamental. The <span style="font-variant:small-caps;">Maximum Independent Set</span> problem (MIS, for short) is one of them. It takes a graph $G$ as input, and asks for the maximum number $\alpha(G)$ of mutually nonadjacent (i.e., independent) vertices in $G$. On unrestricted input, it is not only NP-hard (its decision version “Is $\alpha(G)\ge k$?” being NP-complete), but APX-hard as well, and, in fact, not even approximable within ${\ensuremath{\mathcal{O}}}(n^{1-\eps})$ in polynomial time for any $\eps>0$ unless P=NP, as proved by Zuckerman [@DBLP:journals/toc/Zuckerman07]. For this reason, those classes of graphs on which MIS becomes tractable are of definite interest. One direction of this area is to study the complexity of MIS on *$H$-free graphs*, that is, on graphs not containing any *induced* subgraph isomorphic to a given graph $H$. For the majority of graphs $H$, the answer to the complexity question is known to be negative. It is easy to see that if $G'$ is obtained from $G$ by subdividing each edge with $2t$ new vertices, then $\alpha(G')=\alpha(G)+t|E(G)|$ holds. 
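The subdivision identity can be verified mechanically on small instances. The following brute-force sketch checks it on a triangle with $t=1$, where the subdivided graph is the $9$-cycle with independence number $4$; the helpers `mis_size` and `subdivide` are illustrative only and not algorithms from this paper.

```python
from itertools import combinations

def mis_size(n, edges):
    """Brute-force maximum independent set size (exponential; tiny graphs only)."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all((u, v) not in adj for u, v in combinations(S, 2)):
                return k
    return 0

def subdivide(n, edges, t):
    """Replace each edge by a path through 2*t new vertices."""
    new_edges, nxt = [], n
    for u, v in edges:
        path = [u] + list(range(nxt, nxt + 2 * t)) + [v]
        nxt += 2 * t
        new_edges.extend(zip(path, path[1:]))
    return nxt, new_edges

# Check alpha(G') = alpha(G) + t*|E(G)| on the triangle with t = 1.
n, edges, t = 3, [(0, 1), (1, 2), (0, 2)], 1
n2, edges2 = subdivide(n, edges, t)
assert mis_size(n2, edges2) == mis_size(n, edges) + t * len(edges)
```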
This can be used to show that MIS is NP-hard on $H$-free graphs whenever $H$ is not a forest, and also if $H$ contains a tree component with at least two vertices of degree larger than 2 (first observed in [@Alekseev82], see, e.g., [@LokshtanovVV14]). As MIS is known to be NP-hard on graphs of maximum degree at most 3, the case when $H$ contains a vertex of degree at least 4 is also NP-hard. The above observations do not cover the case when every component of $H$ is either a path, or a tree with exactly one degree-3 vertex $c$ with three paths of arbitrary lengths starting from $c$. There are no further unsolved classes, but even this collection comprises infinitely many cases. For decades, only partial results have been obtained on these graphs $H$, proving polynomial-time solvability in some cases. In 1980, a classical algorithm of Minty [@Minty80], in its corrected form by Sbihi [@Sbihi1980], solved the problem when $H$ is a claw (3 paths of length 1 in the model above). Much later, in 2004, Alekseev [@Alekseev04] generalized this result by an algorithm for $H$ isomorphic to a fork (2 paths of length 1 and one path of length 2). The seemingly easy case of $P_t$-free graphs is poorly understood (where $P_t$ is the path on $t$ vertices). MIS on $P_t$-free graphs is not known to be NP-hard for any $t$; for all we know, it could be polynomial-time solvable for every fixed $t\ge 1$. $P_4$-free graphs (also known as cographs) have a very simple structure, which can be used to solve MIS with a linear-time recursion, but this does not generalize to $P_t$-free graphs for larger $t$. A breakthrough came in 2010, when Randerath and Schiermeyer [@randerath] showed that MIS on $P_5$-free graphs is solvable in subexponential time, more precisely within ${\ensuremath{\mathcal{O}}}(C^{n^{1-\eps}})$ for any constants $C>1$ and $\eps<1/4$. Designing an algorithm based on deep results, Lokshtanov et al. 
[@LokshtanovVV14] finally proved that MIS is polynomial-time solvable on $P_5$-free graphs. More recently, a [*quasipolynomial*]{} ($n^{\log^{{\ensuremath{\mathcal{O}}}(1)} n}$-time) algorithm was found for $P_6$-free graphs [@DBLP:conf/soda/LokshtanovPL16] and finally a polynomial-time algorithm for $P_6$-free graphs was announced [@grzesik]. We explore MIS and some variants on $H$-free graphs from the viewpoint of [*subexponential-time algorithms*]{} in this work. That is, instead of aiming for algorithms with running time $n^{{\ensuremath{\mathcal{O}}}(1)}$ on $n$-vertex graphs, we ask if $2^{o(n)}$ algorithms are possible. Very recently, Brause [@brause] and independently the conference version of this paper [@BMT] observed that the subexponential algorithm of Randerath and Schiermeyer [@randerath] can be generalized to arbitrary fixed $t\ge 5$ with running time roughly $2^{{\ensuremath{\mathcal{O}}}(n^{1-1/t})}$. Our first result shows a significantly improved subexponential-time algorithm for every $t$. \[thm:mainMIS\] For every fixed $t\ge 5$, <span style="font-variant:small-caps;">MIS</span> on $n$-vertex $P_t$-free graphs can be solved in subexponential time, namely, it can be solved by a $2^{{\ensuremath{\mathcal{O}}}(\sqrt{n\log n})}$-time algorithm. The algorithm is based on the combination of two ideas. First, we generalize the observation of Randerath and Schiermeyer [@randerath] stating that in a large connected $P_5$-free graph there exists a high-degree vertex. Namely, we prove that such a vertex always exists in a large connected $P_t$-free graph for general $t\geq 5$ and it can be used for efficient branching. Next we prove the combinatorial result that a $P_t$-free graph of maximum degree $\Delta$ has treewidth ${\ensuremath{\mathcal{O}}}(t\Delta)$; the proof is inspired by Gyárfás’ proof of the $\chi$-boundedness of $P_t$-free graphs [@gyarfas]. 
Thus if the maximum degree drops below a certain threshold during the branching procedure, then we can use standard algorithmic techniques exploiting bounded treewidth. While our algorithm works for $P_t$-free graphs with arbitrarily large $t$, it does not seem to be extendable to $H$-free graphs where $H$ is the subdivision of a $K_{1,3}$. Hence, the existence of subexponential-time algorithms on such graphs remains an open question. However, we are able to give a subexponential-time constant-factor approximation algorithm for the case when $H$ is a $(d,t)$-broom. A *$(d,t)$-broom $B_{d,t}$* is a graph consisting of a path $P_t$ and $d$ additional vertices of degree one, all adjacent to one of the endpoints of the path. In other words, $B_{d,t}$ is a star $K_{1,d+1}$ with one of the edges subdivided to make it a path with $t$ vertices. For $d=2$ we obtain the *generalized forks*, and $d=2$, $t=3$ yields the traditional *fork*. We prove the following theorem; here $d$ and $t$ are considered constants, hidden in the big-${\ensuremath{\mathcal{O}}}$ notation. \[thm:Fdt-free\] Let $d,t \geq 2$ be fixed integers. One can find a $d$-approximation to [<span style="font-variant:small-caps;">Maximum Independent Set</span>]{} on an $n$-vertex $B_{d,t}$-free graph $G$ in time $2^{{\ensuremath{\mathcal{O}}}(n^{3/4} \log n)}$. Let us remark that on $K_{1,d+1}$-free graphs, a folklore linear-time (and very simple) $d$-approximation algorithm exists for [<span style="font-variant:small-caps;">Maximum Independent Set</span>]{}; better $d/2$-approximation algorithms also exist [@Bafna1996; @Berman2000; @Halldorsson1995; @Yu1996]. On fork-free graphs, <span style="font-variant:small-caps;">Independent Set</span> can be solved in polynomial time [@Alekseev04]. For general graphs, we do not expect that a constant-factor approximation can be obtained in subexponential time for the problem. Strong evidence for this was given by Chalermsook et al. 
[@DBLP:conf/focs/ChalermsookLN13], who showed that the existence of such an algorithm would violate the Exponential-Time Hypothesis (ETH) of Impagliazzo, Paturi, and Zane, which can be informally stated as follows: $n$-variable <span style="font-variant:small-caps;">3SAT</span> cannot be solved in $2^{o(n)}$ time (see [@DBLP:books/sp/CyganFKLMPPS15; @DBLP:journals/eatcs/LokshtanovMS11; @DBLP:journals/jcss/ImpagliazzoPZ01]). <span style="font-variant:small-caps;">Scattered Set</span> (also known under other names such as dispersion or distance-$d$ independent set [@DBLP:conf/esa/MarxP15; @DBLP:conf/esa/Thilikos11; @DBLP:journals/dam/AgnarssonDH03; @DBLP:journals/jco/RosenkrantzTR00; @DBLP:conf/isaac/BhattacharyaH99; @DBLP:journals/jco/EtoGM14]) is the natural generalization of <span style="font-variant:small-caps;">MIS</span> where the vertices of the solution are required to be at distance at least $d$ from each other; the size of the largest such set will be denoted by $\alpha_d(G)$. We can consider the problem with $d$ being part of the input, or assume that $d\ge 2$ is a fixed constant, in which case we call the problem <span style="font-variant:small-caps;">$d$-Scattered Set</span>. Clearly, MIS is exactly the same as <span style="font-variant:small-caps;">2-Scattered Set</span>. Despite its similarity to <span style="font-variant:small-caps;">MIS</span>, the branching algorithm of Theorem \[thm:mainMIS\] cannot be generalized: we give evidence that there is no subexponential-time algorithm for <span style="font-variant:small-caps;">3-Scattered Set</span> on $P_5$-free graphs. \[thm:nosubexpdist3\] Assuming the ETH, there is no $2^{o(n)}$-time algorithm for <span style="font-variant:small-caps;">$d$-Scattered Set</span> with $d=3$ on $P_5$-free graphs with $n$ vertices. 
In light of the negative result of Theorem \[thm:nosubexpdist3\], we slightly change our objective by aiming for an algorithm that is subexponential in the [*size of the input,*]{} that is, in the total number of vertices and edges of the graph $G$. As the number of edges of $G$ can be up to quadratic in the number of vertices, this is a weaker goal: an algorithm that is subexponential in the number of edges is not necessarily subexponential in the number of vertices. We give a complete characterization when such algorithms are possible for <span style="font-variant:small-caps;">Scattered Set</span>. \[thm:scatteredmain\] For every fixed graph $H$, the following holds. 1. If every component of $H$ is a path, then <span style="font-variant:small-caps;">$d$-Scattered Set</span> on $H$-free graphs with $n$ vertices and $m$ edges can be solved in time $2^{{\ensuremath{\mathcal{O}}}(|V(H)|\sqrt{n+m}\log(n+m))}$, even if $d$ is part of the input. 2. Otherwise, assuming the ETH, there is no $2^{o(n+m)}$-time algorithm for <span style="font-variant:small-caps;">$d$-Scattered Set</span> for any fixed $d\ge 3$ on $H$-free graphs with $n$ vertices and $m$ edges. The algorithmic side of Theorem \[thm:scatteredmain\] is based on the combinatorial observation that the treewidth of $P_t$-free graphs is sublinear in the number of edges, which means that standard algorithms on bounded-treewidth graphs can be invoked to solve the problem in time subexponential in the number of edges. It has not escaped our notice that this approach is completely generic and could be used for many other problems (e.g., <span style="font-variant:small-caps;">Hamiltonian Cycle</span>, <span style="font-variant:small-caps;">3-Coloring</span>, and so on), where $2^{{\ensuremath{\mathcal{O}}}(t)}\cdot n^{{\ensuremath{\mathcal{O}}}(1)}$ or perhaps $2^{t\cdot\log^{{\ensuremath{\mathcal{O}}}(1)} t}\cdot n^{{\ensuremath{\mathcal{O}}}(1)}$-time algorithms are known on graphs of treewidth $t$. 
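To make the bounded-treewidth subroutine concrete in its simplest instance, the following sketch solves <span style="font-variant:small-caps;">Maximum-Weight Independent Set</span> on a tree (treewidth 1) by the classic two-table dynamic program; the general algorithms cited above keep analogous take/skip tables per bag of a tree decomposition. The helper `mwis_tree` is illustrative only, not code from this paper.

```python
from collections import defaultdict

def mwis_tree(n, edges, w):
    """Maximum-weight independent set on a tree (vertices 0..n-1, rooted at 0)
    via the classic two-state DP -- the treewidth-1 special case of the
    bounded-treewidth algorithms referred to in the text."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Iterative DFS to obtain a root-to-leaf order and parent pointers.
    parent = [-1] * n
    order, stack, seen = [], [0], [False] * n
    seen[0] = True
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
    take = [0] * n   # best weight in u's subtree if u is in the set
    skip = [0] * n   # best weight in u's subtree if u is not in the set
    for u in reversed(order):
        take[u] = w[u]
        for v in adj[u]:
            if parent[v] == u:   # v is a child of u
                take[u] += skip[v]
                skip[u] += max(take[v], skip[v])
    return max(take[0], skip[0])
```

For example, on the path $P_4$ with weights $(1,10,1,10)$ the optimum picks the second and fourth vertices, total weight $20$.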
For the lower-bound part of Theorem \[thm:scatteredmain\], we need to examine only two cases: claw-free graphs and $C_t$-free graphs (where $C_t$ is the cycle on $t$ vertices); the other cases then follow immediately. The paper is organized as follows. Section \[sec:preliminaries\] introduces basic notation and contains some technical tools for bounding the running time of recursive algorithms. Section \[sec:gyarfas\] contains the combinatorial results that allow us to bound the treewidth of $P_t$-free graphs. The algorithmic results for [<span style="font-variant:small-caps;">Maximum Independent Set</span>]{} (Theorems \[thm:mainMIS\] and \[thm:Fdt-free\]) appear in Section \[sec:mis-alg\]. The upper and lower bounds for <span style="font-variant:small-caps;">$d$-Scattered Set</span>, which together prove Theorem \[thm:scatteredmain\], are proved in Section \[sec:scat\]. Preliminaries {#sec:preliminaries} ============= Throughout this paper, we investigate simple undirected graphs. The vertex set of a graph $G$ will be denoted by $V(G)$, the edge set by $E(G)$. The notation $d_G(x,y)$ for distance and $G[X]$ for the subgraph induced by the vertex set $X$ have the usual meaning, as do $N_G[X]$ and $N_G(X)$ for the closed and open neighborhood, respectively, of a vertex set $X$ in $G$. $\Delta (G)$ is the maximum degree in $G$. For a vertex set $X$ in $G$, $G-X$ denotes the induced subgraph $G[V(G)\setminus X]$. $P_t$ ($C_t$) is the chordless path (cycle) on $t$ vertices. Finally, a graph is $H$-free if it does not contain $H$ as an induced subgraph. A *distance-$d$ ($d$-scattered) set* in a graph $G$ is a vertex set $S\subseteq V(G)$ such that for every pair of vertices in $S$, the distance between them is at least $d$ in the graph. For $d=2$, we obtain the traditional notion of an independent set (stable set). For $d>c$, a distance-$d$ set is also a distance-$c$ set; for example, for $d\ge 2$, any distance-$d$ set is an independent set. 
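As a concrete reference point for these definitions, the following brute-force sketch computes $\alpha_d$ on tiny graphs from breadth-first-search distances; `alpha_d` and `all_pairs_dist` are illustrative helpers, not algorithms from this paper.

```python
from itertools import combinations
from collections import deque

def all_pairs_dist(n, edges):
    """BFS from every vertex; dist[u][v] is float('inf') if v is unreachable."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [[float('inf')] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == float('inf'):
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist

def alpha_d(n, edges, d):
    """Brute-force size of a largest d-scattered set (tiny graphs only)."""
    dist = all_pairs_dist(n, edges)
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(dist[u][v] >= d for u, v in combinations(S, 2)):
                return k
    return 0
```

On the path $P_5$, for instance, $\alpha_2 = 3$ (an ordinary maximum independent set) while $\alpha_3 = 2$.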
The algorithmic problem [<span style="font-variant:small-caps;">Maximum Weight Independent Set</span>]{} is the problem of maximizing the sum of the weights in an independent set of a graph with nonnegative vertex weights $w$. The maximum is denoted by $\alpha _w(G)$. For a weight function $w$ that has value $1$ everywhere, we obtain the usual problem [<span style="font-variant:small-caps;">Maximum Independent Set</span>]{} (MIS) with maximum $\alpha(G)$. An algorithm $A$ is *subexponential* in a parameter $p>1$ if the number of steps executed by $A$ is a subexponential function of the parameter $p$. We will use this notion for graphs, mostly in the following cases: $p$ is the number $n$ of vertices, the number $m$ of edges, or $p=n+m$ (which is generally considered to be the size of the input). Several different definitions are used in the literature under the name *subexponential function*. Each of them imposes a condition: the function (with variable $p>1$, called the parameter) may not be larger than some bound depending on $p$. Here we use two versions, where the bound is of type $\exp(o(p))$ and $\exp(p^{1-\epsilon})$ respectively, for some $\epsilon >0$. (Clearly, the second one is the stricter.) Throughout the paper, we state our results emphasizing which version we mean. A problem $\Pi $ is *subexponential* if there exists some *subexponential* algorithm solving $\Pi$. Time analysis of recursive algorithms ------------------------------------- To formally reason about time complexities, we will need the following technical lemma. \[lem:cpx\] Let $\Delta: {\mathbb{R}}_{\geq 0} \to {\mathbb{R}}_{\geq 0}$ be a concave and nondecreasing function with $\Delta(0) = 0$, $\Delta(x) \leq x$ for every $x \geq 1$, and $\Delta(x) \leq \Delta(x/2) \cdot (2-\gamma)$ for some $\gamma > 0$ and every $x \geq 2$. 
Let $S,T : {\mathbb{N}}\to {\mathbb{N}}$ be two nondecreasing functions such that we have $S(0) = T(0) = 0$, moreover, for some universal constant $c$ we have $S(1),T(1) \leq c$ and for every $n \geq 2$: $$\begin{aligned} T(n) \leq 2^{cn \log n / \Delta(n)} + \max(&S(n), T(n-1) + T(n-\lceil \Delta(n) \rceil),\nonumber\\ &\max_{1 \leq k \leq \lfloor \frac{n}{\Delta(n)} \rfloor} 2^k \cdot n \cdot T(n-\lceil k \Delta(n) \rceil)).\label{eq:cpx}\end{aligned}$$ Then, for some constant $c'$ depending only on $c$ and $\gamma$, for every $n\geq 1$ it holds that $$T(n) \leq 2^{c' n \log n / \Delta(n)} \cdot \left(S(n)+1\right).$$ We will use Lemma \[lem:cpx\] as a shortcut to argue about the time complexities of our branching algorithms; let us now briefly explain its intuition. The function $T(n)$ will be the running time bound of the discussed algorithm. The term $2^{cn\log n / \Delta(n)}$ in  corresponds to the processing time at a single step of the algorithm; note that this is at least polynomial in $n$ as $\Delta(n) \leq n$. The terms in the $\max$ in  are different branching options chosen by the algorithm. The first one, $S(n)$, is a subcall to a different procedure, such as a bounded-treewidth subroutine. The second one, $T(n-1) + T(n-\lceil \Delta(n) \rceil)$, corresponds to a two-way branching on a single vertex of degree at least $\Delta(n)$. The last one corresponds to an exhaustive branching on a set $X \subseteq V(G)$ of size $k$, such that every connected component of $G-X$ has at most $n-k\Delta(n)$ vertices. For notational convenience, it will be easier to assume that the functions $S$ and $T$ are defined on the whole half-line ${\mathbb{R}}_{\geq 0}$ with $S(x) = S(\lfloor x \rfloor)$ and $T(x) = T(\lfloor x \rfloor)$. First, let us replace $\max$ with addition in the assumed inequality. After some simplifications, this leads to the following. 
$$\label{eq:sum1} T(n) \leq T(n-1) + S(n) + 2^{cn \log n / \Delta(n)} + 2n \cdot \sum_{k=1}^{\lfloor \frac{n}{\Delta(n)} \rfloor} 2^k \cdot T(n- k \Delta(n)).$$ From the concavity of $\Delta(n)$ it follows that $$n - i - \Delta(n-i) \leq n - \Delta(n).$$ Furthermore, the assumptions on $\Delta$, namely the fact that $\Delta$ is nondecreasing, concave, with $\Delta(0) = 0$, implies that for any $0 < y < x$ we have $$\frac{y}{x} \Delta(x) \geq \Delta(x) - \Delta(x-y).$$ After simple algebraic manipulation, this is equivalent to $$\frac{x}{\Delta(x)} \geq \frac{x-y}{\Delta(x-y)}.$$ That is, $x \mapsto x/\Delta(x)$ is a nondecreasing function. Using the fact that $S(n)$ and $T(n)$ are nondecreasing and the facts above, we iteratively apply  $n$ times to the first summand, obtaining the following. $$\label{eq:sum2} T(n) \leq n \cdot \left(S(n) + 2^{cn \log n / \Delta(n)} + 2n \cdot \sum_{k=1}^{\lfloor \frac{n}{\Delta(n)} \rfloor} 2^k \cdot T(n- k \Delta(n))\right).$$ We now show the following. \[cl:Pt-rec\] Consider a sequence $n_0 = n$ and $n_{i+1} = n_i - \Delta(n_i)$. Then $n_i = {\ensuremath{\mathcal{O}}}(1)$ for $i = {\ensuremath{\mathcal{O}}}(n / \Delta(n))$. Here, the big-${\ensuremath{\mathcal{O}}}$-notation hides constants depending on $\gamma$. By the concavity of $\Delta$ we have $\Delta(n'/2) \geq \Delta(n')/2$, thus as long as $n_i > n_0/2$ we have that $n_{i+1} \leq n_i - \Delta(n)/2$. Consequently, for some $j = {\ensuremath{\mathcal{O}}}(n / \Delta(n))$ we have $n_j < n_0 / 2$. We infer that we obtain $n_i = {\ensuremath{\mathcal{O}}}(1)$ at position $$i = {\ensuremath{\mathcal{O}}}\left( \frac{n}{\Delta(n)} + \frac{n/2}{\Delta(n/2)} + \frac{n/4}{\Delta(n/4)} + \ldots \right).$$ By the assumption that $\Delta(x) \leq \Delta(x/2) \cdot (2-\gamma)$ for some constant $\gamma > 0$ and every $x \geq 2$, the sum above can be bounded by a geometric sequence, yielding $i = {\ensuremath{\mathcal{O}}}(n/\Delta(n))$. 
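The shrinking behaviour in Claim \[cl:Pt-rec\] can also be checked numerically; the sketch below (with the illustrative choice $\Delta(x)=\sqrt{x}$, which satisfies the hypotheses with $2-\gamma=\sqrt{2}$) is a sanity check under that assumption, not part of the proof.

```python
import math

def steps_to_constant(n, delta, threshold=4):
    """Iterate n_{i+1} = n_i - delta(n_i) and count steps until n_i <= threshold."""
    steps = 0
    while n > threshold:
        n -= delta(n)
        steps += 1
    return steps

# delta(x) = sqrt(x) is concave, nondecreasing, delta(0) = 0, and
# delta(x) <= delta(x/2) * sqrt(2), so the claim predicts
# O(n / delta(n)) = O(sqrt(n)) iterations.
n0 = 10 ** 6
s = steps_to_constant(n0, math.sqrt)
assert s <= 3 * math.sqrt(n0)   # roughly 2*sqrt(n0) in this instance
```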
The above claim implies that if we iteratively apply  to itself, we obtain $$T(n) \leq (2n)^{{\ensuremath{\mathcal{O}}}(n / \Delta(n))} \cdot \left(S(n) + 2^{cn \log n / \Delta(n)}\right).$$ This finishes the proof of the lemma. Gyárfás’ path-growing argument {#sec:gyarfas} ============================== The main (technical but useful) result of this section is the following adaptation of Gyárfás’ proof that $P_t$-free graphs are $\chi$-bounded [@gyarfas]. \[lem:path-argument\] Let $t \geq 2$ be an integer, $G$ be a connected graph with a distinguished vertex $v_0 \in V(G)$ and maximum degree at most $\Delta$, such that $G$ does not contain an induced path $P_t$ with one endpoint in $v_0$. Then, for every weight function $w : V(G) \to {\mathbb{Z}}_{\geq 0}$, there exists a set $X \subseteq V(G)$ of size at most $(t-1)\Delta +1$ such that every connected component $C$ of $G-X$ satisfies $w(C) \leq w(V(G))/2$. Furthermore, such a set $X$ can be found in polynomial time. In what follows, a connected component $C$ of an induced subgraph $H$ of $G$ is *big* if $w(C) > w(V(G))/2$. Note that there can be at most one big connected component in any induced subgraph of $G$. If $G-\{v_0\}$ does not contain a big component, we can set $X=\{v_0\}$. Otherwise, let $A_0 = \{v_0\}$ and $B_0$ be the big component of $G-A_0$. As $G$ is connected, every component of $G-A_0$ is adjacent to $A_0$, thus $v_0\in N(B_0)$ holds. We will inductively define vertices $v_1,v_2,v_3,\ldots$ such that $v_0,v_1,v_2,\ldots$ induce a path in $G$. Given vertices $v_0,v_1,v_2,\ldots,v_i$, we define sets $A_{i+1}$ and $B_{i+1}$ as follows. We set $A_{i+1} = N_G[v_0,v_1,\ldots,v_i]$. If $G-A_{i+1}$ does not contain a big connected component, we stop the construction. Otherwise, we set $B_{i+1}$ to be the big connected component of $G-A_{i+1}$. During the process we maintain the invariant that $B_i$ is the big component of $G-A_i$ and that $v_i \in N(B_i)$. 
Note that this is true for $i=0$ by the choice of $A_0$ and $B_0$. It remains to show how to choose $v_{i+1}$, given vertices $v_0,v_1,\ldots,v_i$ and sets $A_{i+1}$ and $B_{i+1}$. Note that $A_{i+1} = A_i \cup N_G[v_i]$ and $v_i \in N(B_i)$, so $B_{i+1}$ is the big connected component of $G[B_i \setminus N_G(v_i)]$. Consequently, we can choose some $v_{i+1} \in B_i \cap N_G(B_{i+1}) \cap N_G(v_i)$ that satisfies all the desired properties. Since $G$ does not contain an induced $P_t$ with one endpoint in $v_0$, the aforementioned process stops after defining a set $A_{i+1}$ for some $i < t-1$, when $G-A_{i+1}$ does not contain a big component. Observe that $$|A_{i+1}| \leq (\Delta+1) + i \cdot \Delta = (i+1) \Delta + 1 \leq (t-1)\Delta + 1.$$ Consequently, the set $X := A_{i+1}$ satisfies the desired properties. For the algorithmic claim, note that the entire proof can be made algorithmic in a straightforward manner. It is well known that if a graph $G$ has, for every weight function $w:V(G)\to {\mathbb{Z}}_{\geq 0}$, a set $X$ of size $k$ such that every connected component $C$ of $G-X$ satisfies $w(C)\le w(V(G))/2$, then $G$ has treewidth ${\ensuremath{\mathcal{O}}}(k)$ (see, e.g., [@FG Theorem 11.17(2)]). Thus Lemma \[lem:path-argument\] implies a treewidth bound of ${\ensuremath{\mathcal{O}}}(t\Delta)$. Algorithmically, it is also a standard consequence of Lemma \[lem:path-argument\] that a tree decomposition of width ${\ensuremath{\mathcal{O}}}(t\Delta)$ can be obtained in polynomial time. What needs to be observed is that standard 4-approximation algorithms for treewidth, which run in time exponential in treewidth, can be made to run in polynomial time if we are given a polynomial-time subroutine for finding the separator $X$ as in Lemma \[lem:path-argument\]. For completeness, we sketch the proof here. \[cor:path-argument\] A $P_t$-free graph with maximum degree $\Delta$ has treewidth ${\ensuremath{\mathcal{O}}}(t\Delta)$. 
Furthermore, a tree decomposition of this width can be computed in polynomial time. We follow the standard constant-factor approximation algorithm for treewidth, as described in [@DBLP:books/sp/CyganFKLMPPS15 Section 7.6]. This algorithm, given a graph $G$ and an integer $k$, either correctly concludes that ${\mathrm{tw}}(G) > k$ or computes a tree decomposition of $G$ of width at most $4k+4$. Let $G$ be a $P_t$-free graph with maximum degree at most $\Delta$. We may assume that $G$ is connected, otherwise we can handle the connected components separately. Let us start by setting $k := (t-1)\Delta$ so that any application of Lemma \[lem:path-argument\] gives a set of size at most $k+1$. The only step of the algorithm that runs in exponential time is the following. We are given an induced subgraph $G[W]$ of $G$ and a set $S \subseteq W$ with the following properties: 1. $|S| \leq 3k+4$ and $W \setminus S \neq \emptyset$; 2. both $G[W]$ and $G[W \setminus S]$ are connected; 3. $S = N_G(W \setminus S)$. The goal is to compute a set $S \subsetneq \widehat{S} \subseteq W$ such that $|\widehat{S}| \leq 4k+5$ and every connected component of $G[W \setminus \widehat{S}]$ is adjacent to at most $3k+4$ vertices of $\widehat{S}$. The construction of $\widehat{S}$ is trivial for $|S| < 3k+4$, as we can take $\widehat{S} = S \cup \{v\}$ for an arbitrary $v \in W \setminus S$. The crucial step happens for sets $S$ of size exactly $3k+4$. Instead of the exponential search of [@DBLP:books/sp/CyganFKLMPPS15 Section 7.6], we invoke Lemma \[lem:path-argument\] on the graph $G[W]$ and a function $w:W \to \{0,1\}$ that puts $w(v) = 1$ if and only if $v \in S$. The lemma returns a set $X \subseteq W$ of size at most $k+1$ such that every connected component $C$ of $G[W \setminus X]$ contains at most $3k/2+2$ vertices of $S$. Since $G[W \setminus S]$ is connected and $(3k/2+2) + (k+1) < 3k+4$, we cannot have $X \subseteq S$. Consequently, $\widehat{S} := S \cup X$ satisfies all the requirements. 
The algorithm of [@DBLP:books/sp/CyganFKLMPPS15 Section 7.6] returns that ${\mathrm{tw}}(G) > k$ only if at some step it encounters a pair $(W,S)$ for which it cannot construct the set $\widehat{S}$. However, our method of constructing $\widehat{S}$ works for every choice of $(W,S)$, and executes in polynomial time. Consequently, the modified algorithm of [@DBLP:books/sp/CyganFKLMPPS15 Section 7.6] always computes a tree decomposition of width at most $4k+4 = {\ensuremath{\mathcal{O}}}(t\Delta)$ in polynomial time, as desired. Subexponential algorithms based on the path-growing argument {#sec:mis-alg} ============================================================ The goal of this section is to use Corollary \[cor:path-argument\] to prove Theorems \[thm:mainMIS\] and \[thm:Fdt-free\] stated in the Introduction. <span style="font-variant:small-caps;">Independent Set</span> on graphs without long paths ------------------------------------------------------------------------------------------ We first prove the following statement, which implies Theorem \[thm:mainMIS\]. \[thm:Pt-free\] The [<span style="font-variant:small-caps;">Maximum-Weight Independent Set</span>]{} problem on an $n$-vertex $P_t$-free graph can be solved in time $2^{{\ensuremath{\mathcal{O}}}(\sqrt{tn\log n})}$. Let $G$ be an $n$-vertex $P_t$-free graph. We set a threshold $\Delta = \Delta(n) := \sqrt{\frac{n \log (n+1)}{t}}$. If the maximum degree of $G$ is at most $\Delta$, we invoke Corollary \[cor:path-argument\] to obtain a tree decomposition of $G$ of width ${\ensuremath{\mathcal{O}}}(t\Delta) = {\ensuremath{\mathcal{O}}}(\sqrt{tn\log n})$. By standard techniques on graphs of bounded treewidth (cf. [@DBLP:books/sp/CyganFKLMPPS15]), we solve [<span style="font-variant:small-caps;">Maximum-Weight Independent Set</span>]{} on $G$ in time $2^{{\ensuremath{\mathcal{O}}}(\sqrt{tn\log n})}$. Otherwise, $G$ contains a vertex of degree greater than $\Delta$. 
We choose (arbitrarily) such a vertex $v$ and we branch on $v$: either $v$ is contained in the maximum independent set or not. In the first case we delete $N_G[v]$ from $G$, in the second we delete only $v$ from $G$. This gives the following recursion for the time complexity $T(n)$ of the algorithm. $$\label{eq:Pt} T(n) \leq \max\left(T(n-1) + T(n-\lceil \Delta(n) \rceil) + {\ensuremath{\mathcal{O}}}(n^2), 2^{{\ensuremath{\mathcal{O}}}(\sqrt{tn \log n})}\right).$$ Observe that we have $T(n) = 2^{{\ensuremath{\mathcal{O}}}(\sqrt{tn\log n})}$ by Lemma \[lem:cpx\] with $S(n) = 2^{{\ensuremath{\mathcal{O}}}(\sqrt{tn \log n})}$; it is straightforward to check that $\Delta(n) = \sqrt{\frac{n \log (n+1)}{t}}$ satisfies all the prerequisites of Lemma \[lem:cpx\]. This finishes the proof of the theorem. Approximation on broom-free graphs ---------------------------------- We now extend the argumentation in Theorem \[thm:Pt-free\] to *$(d,t)$-brooms*—however, this time we are able to obtain only an approximation algorithm. Recall that a $(d,t)$-broom $B_{d,t}$ is a graph consisting of a path $P_t$ and $d$ additional vertices of degree one, all adjacent to one of the endpoints of the path. We now prove Theorem \[thm:Fdt-free\] from the introduction. Let $\Delta(n) = \frac{1}{2dt} \cdot n^{1/4}$; note that such a definition fits the prerequisites of $\Delta(n)$ for Lemma \[lem:cpx\]. In the complexity analysis, we will use Lemma \[lem:cpx\] with this $\Delta(n)$ and without any function $S(n)$; this will give the promised running time bound. In what follows, whenever we execute a branching step of the algorithm we argue that it fits into one of the subcases of the $\max$ in  of Lemma \[lem:cpx\]. 
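The degree-threshold branching shared by this proof and that of Theorem \[thm:Pt-free\] can be sketched as follows. To keep the example self-contained, the sketch substitutes plain brute force for the treewidth-based (resp. broom-specific) leaf routines, an assumption made only for illustration, so it is correct but does not achieve the stated running-time bounds; `mis_branching` and `delta` are illustrative names.

```python
from itertools import combinations

def mis_branching(vertices, adj, delta):
    """Recursive two-way branching on a vertex of degree > delta(n);
    adj maps each vertex to its (whole-graph) neighbor set."""
    n = len(vertices)
    if n == 0:
        return 0
    deg = {u: len(adj[u] & vertices) for u in vertices}
    v = max(vertices, key=deg.get)
    if deg[v] <= delta(n):
        # Low maximum degree: in the paper this is where the treewidth
        # O(t * delta) machinery applies; here we simply brute-force.
        for k in range(n, 0, -1):
            for S in combinations(vertices, k):
                if all(b not in adj[a] for a, b in combinations(S, 2)):
                    return k
        return 0
    # Branch: either v is not in the solution, or v is (excluding N(v)).
    return max(mis_branching(vertices - {v}, adj, delta),
               1 + mis_branching(vertices - ({v} | adj[v]), adj, delta))
```

On the star $K_{1,5}$ with threshold $\Delta(n)=\sqrt{n}$, for example, the algorithm branches once on the high-degree center and then handles the remaining low-degree leaves directly.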
As in the proof of Theorem \[thm:Pt-free\], as long as there exists a vertex in $G$ of degree larger than $\Delta$, we can branch on such a vertex $v$: in one subcase, we consider independent sets not containing $v$ (and thus delete $v$ from $G$), in the other subcase, we consider independent sets containing $v$ (and thus delete $N(v)$ from $G$). Such a branching step can be conducted in polynomial time, and fits in the second subcase of $\max$ in . Thus, we can assume henceforth that the maximum degree of $G$ is at most $\Delta$. We also assume that $G$ is connected and $n > (2dt)^4$, as otherwise we can consider every connected component independently and/or solve the problem by brute-force. Later, we will also need a more general branching step. If, in the course of the analysis, we identify a set $X \subseteq V(G)$ such that every connected component of $G-X$ has size at most $n - \frac{|X|n^{1/4}}{2dt}$, then we can exhaustively branch on all vertices of $X$ and independently resolve all connected components of the remaining graph. Such a branching fits into the last case of the $\max$ in , and hence it again leads to the desired time bound $2^{{\ensuremath{\mathcal{O}}}(n^{3/4} \log n)}$ by Lemma \[lem:cpx\]. We start with greedily constructing a set $A_0$ with the following properties: $G[A_0]$ is connected and $n^{1/2} \leq |N[A_0]| \leq n^{1/2} + \Delta$. We start with $A_0$ being a single arbitrary vertex and, as long as $|N[A_0]| < n^{1/2}$, we add an arbitrary vertex of $N(A_0)$ to $A_0$ and continue. Since $G$ is connected, the process ends when $|N[A_0]| \geq n^{1/2}$; since the maximum degree of $G$ is at most $\Delta$, we have $|N[A_0]| \leq n^{1/2} + \Delta < 2n^{1/2}$. Let $B$ be the vertex set of the largest connected component of $G-N[A_0]$. If $|B| < n - n^{3/4}$, we exhaustively branch on $X := N[A_0]$, as $X$ is of size at most $2n^{1/2}$, but every connected component of $G-X$ is of size at most $n - n^{3/4} \leq n- \frac{1}{2} |X| n^{1/4}$. 
Hence, we are left with the case $|B| > n - n^{3/4}$. Let $S = N(B)$. Note that $A_0$ is disjoint from $N[B]$. Let $A_1$ be the connected component of $G-S$ that contains $A_0$. Since $S \subseteq N(A_0)$, we have that $N[A_1] \supseteq N[A_0]$; in particular, $|N[A_1]| \geq n^{1/2}$ while, as $|B| > n-n^{3/4}$, we have $|N[A_1]| \leq n^{3/4}$. Furthermore, since $S \subseteq N(A_0)$ and $A_0 \subseteq A_1$, we have $N(A_1) = S$. Consider now the following case: there exists $v \in S$ such that $N(v) \cap B$ contains an independent set $L$ of size $d$. Observe that such a vertex $v$ can be found by an exhaustive search in time $n^{d+{\ensuremath{\mathcal{O}}}(1)}$. For such a vertex $v$ and independent set $L$, define $D$ to be the vertex set of the connected component of $G-(N[L] \setminus \{v\})$ that contains $A_1$. Note that as $L \subseteq B$ we have $N[L] \cap A_1 = \emptyset$, and thus such a component $D$ exists. Furthermore, as $N(A_1) = S$, $D$ contains $S \setminus (N(L) \setminus \{v\})$. In particular, $D$ contains $v$, and $$|D| \geq |(A_1 \cup S) \setminus N(L)| \geq |N[A_1]| - \Delta \cdot |L| \geq n^{1/2} - dn^{1/4} \geq \frac{1}{2}n^{1/2}.$$ If $|D| < n - n^{1/2}$, then we exhaustively branch on the set $X := N[L] \setminus \{v\}$, as $|X| \leq d\Delta \leq \frac{1}{2} n^{1/4}$ while every connected component of $G-X$ is of size at most $n-\frac{1}{2} n^{1/2}$ due to $D$ being of size at least $\frac{1}{2} n^{1/2}$ and at most $n-n^{1/2}$. Consequently we can assume $|D| \geq n - n^{1/2}$. Observe that $G[D]$ does not contain a path $P_t$ with one endpoint in $v$, as such a path, together with the set $L$, would induce a $B_{d,t}$ in $G$. 
Consequently, we can apply Lemma \[lem:path-argument\] to the graph $G[D]$ with the vertex $v_0=v$ and uniform weight $w(u) = 1$ for every $u \in D$, obtaining a set $X_D \subseteq D$ of size $|X_D| \leq (t-1)\Delta + 1 \leq \frac{1}{2} n^{1/4}$ such that every connected component of $G[D \setminus X_D]$ has size at most $n/2$. We branch exhaustively on the set $X = X_D \cup (N[L] \setminus \{v\})$: this set is of size at most $n^{1/4}$, while every connected component of $G-X$ is of size at most $n/2$ due to the properties of $X_D$ and the fact that $|D| \geq n-n^{1/2}$. This finishes the description of the algorithm in the case when there exists $v \in S$ and an independent set $L \subseteq N(v) \cap B$ of size $d$. We are left with the complementary case, where for every $v \in S$, the maximum independent set in $N(v) \cap B$ is of size less than $d$. We perform the following operation: by exhaustive search, we find a maximum independent set $I_A$ in $G-B$ and greedily take it to the solution; that is, we recurse on $G-N[I_A]$ and return the union of $I_A$ and the independent set found by the recursive call in $G-N[I_A]$. Since $|B| > n-n^{3/4}$, the exhaustive search runs in $2^{n^{3/4}} n^{{\ensuremath{\mathcal{O}}}(1)}$ time, fitting the first summand of the right hand side in . As a result, the graph reduces by at least one vertex, and hence the remaining running time of the algorithm fits into the second case of the $\max$ in . This gives the promised running time bound. It remains to argue about the approximation ratio; to this end, it suffices to show the following claim. If $I$ is a maximum independent set in $G$ and $I'$ is a maximum independent set in $G-N[I_A]$, then $|I| - |I'| \leq d|I_A|$. Let $J = I \setminus N[I_A]$. Clearly, $J$ is an independent set in $G-N[I_A]$, and thus $|J| \leq |I'|$. It suffices to show that $|I| - |J| \leq d|I_A|$, that is, $|I \cap N[I_A]| \leq d|I_A|$. The maximality of $I_A$ implies that $V(G)\setminus B \subseteq N[I_A]$.
As $I_A$ is a maximum independent set in $G-B$, we have that $|I \setminus B| \leq |I_A|$. For every $w \in I \cap N[I_A] \cap B$, pick a neighbor $f(w) \in I_A \cap N(w)$. Note that we have $f(w) \in S$. Since for every vertex $v \in S$, the size of the maximum independent set in $N(v) \cap B$ is less than $d$, we have $|f^{-1}(v)| < d$ for every $v \in S \cap I_A$. Consequently, $$|I \cap N[I_A] \cap B| \leq (d-1)|I_A \cap S| \leq (d-1)|I_A|.$$ Together with $|I \setminus B| \leq |I_A|$, we have $|I \cap N[I_A]| \leq d|I_A|$, as desired. This finishes the proof of Theorem \[thm:Fdt-free\]. Scattered Set {#sec:scat} ============= We prove Theorem \[thm:scatteredmain\] in this section. The algorithm for <span style="font-variant:small-caps;">Scattered Set</span> for $P_t$-free graphs hinges on the following combinatorial bound. \[lem:twbound\] For every $t\ge 2$ and for every $P_t$-free graph $G$ with $m$ edges, the graph $G$ has treewidth ${\ensuremath{\mathcal{O}}}(t\sqrt{m})$. Let $X$ be the set of vertices of $G$ with degree at least $\sqrt{m}$. The sum of the degrees of the vertices in $X$ is at most $2m$, hence we have $|X|\le 2m/\sqrt{m}=2\sqrt{m}$. By the definition of $X$, the graph $G-X$ has maximum degree less than $\sqrt{m}$. Thus by Corollary \[cor:path-argument\], the treewidth of $G-X$ is ${\ensuremath{\mathcal{O}}}(t\sqrt{m})$. As removing a vertex can decrease the treewidth by at most one, it follows that $G$ has treewidth at most ${\ensuremath{\mathcal{O}}}(t\sqrt{m})+|X|={\ensuremath{\mathcal{O}}}(t\sqrt{m})$. It is known that <span style="font-variant:small-caps;">Scattered Set</span> can be solved in time $d^{{\ensuremath{\mathcal{O}}}(w)}\cdot n^{{\ensuremath{\mathcal{O}}}(1)}$ on graphs of treewidth $w$ using standard dynamic programming techniques (cf. [@DBLP:conf/esa/Thilikos11; @DBLP:conf/esa/MarxP15]).
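The degree-splitting step in the proof of Lemma \[lem:twbound\] can be made concrete; a Python sketch (a toy illustration assuming at least one edge; the names are ours):

```python
import math

def high_degree_split(adj):
    """Return X = vertices of degree >= sqrt(m), as in Lemma [lem:twbound].

    By the handshake lemma the degrees of the vertices of X sum to at
    most 2m, so |X| <= 2m / sqrt(m) = 2*sqrt(m); moreover G - X has
    maximum degree < sqrt(m).  Assumes the graph has at least one edge.
    """
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    thr = math.sqrt(m)
    X = {v for v, nbrs in adj.items() if len(nbrs) >= thr}
    rest_max_deg = max((len(adj[v] - X) for v in adj if v not in X), default=0)
    assert len(X) <= 2 * thr and rest_max_deg < thr
    return X
```

On the star $K_{1,4}$ (so $m=4$, $\sqrt{m}=2$) only the center has degree $\geq 2$, so $X$ is the single center vertex.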
By Lemma \[lem:twbound\], it follows that <span style="font-variant:small-caps;">Scattered Set</span> on $P_t$-free graphs can be solved in time $d^{{\ensuremath{\mathcal{O}}}(t\sqrt{m})}\cdot n^{{\ensuremath{\mathcal{O}}}(1)}$. If $d$ is a fixed constant, then this running time can be bounded as $2^{{\ensuremath{\mathcal{O}}}(t\sqrt{m})+{\ensuremath{\mathcal{O}}}(\log n)}=2^{{\ensuremath{\mathcal{O}}}(t\sqrt{n+m})}$. If $d$ is part of the input, then (taking into account that we may assume $d\le n$) the running time is $$d^{{\ensuremath{\mathcal{O}}}(t\sqrt{m})}\cdot n^{{\ensuremath{\mathcal{O}}}(1)}=2^{{\ensuremath{\mathcal{O}}}(t\sqrt{m}\log n)+{\ensuremath{\mathcal{O}}}(\log n)}=2^{{\ensuremath{\mathcal{O}}}(t\sqrt{n+m}\log (n+m))}.$$ Observe that if every component of a fixed graph $H$ is a path, then $H$ is an induced subgraph of $P_{2|V(H)|}$, which implies that $H$-free graphs are $P_{2|V(H)|}$-free. Thus the algorithm described here for $P_t$-free graphs implies the first part of Theorem \[thm:scatteredmain\]. Lower bounds for <span style="font-variant:small-caps;">Scattered Set</span> {#sec:scat-low} ---------------------------------------------------------------------------- A standard consequence of the ETH and the so-called Sparsification Lemma is that there is no subexponential-time algorithm for MIS even on graphs of bounded degree (see, e.g., [@DBLP:books/sp/CyganFKLMPPS15]): \[thm:MIS\] Assuming the ETH, there is no $2^{o(n)}$-time algorithm for <span style="font-variant:small-caps;">MIS</span> on $n$-vertex graphs of maximum degree 3. A very simple reduction can reduce MIS to <span style="font-variant:small-caps;">3-Scattered Set</span> for $P_5$-free graphs, showing that, assuming the ETH, there is no algorithm subexponential in the number of vertices for the latter problem. This proves Theorem \[thm:nosubexpdist3\] stated in the Introduction. 
Given an $n$-vertex $m$-edge graph $G$ with maximum degree 3 and an integer $k$, we construct a $P_5$-free graph $G'$ with $n+m={\ensuremath{\mathcal{O}}}(n)$ vertices such that $\alpha(G)=\alpha_3(G')$. This reduction proves that a $2^{o(n)}$-time algorithm for <span style="font-variant:small-caps;">3-Scattered Set</span> could be used to obtain a $2^{o(n)}$-time algorithm for MIS on graphs of maximum degree 3, and this would violate the ETH by Theorem \[thm:MIS\]. We may assume that $G$ has no isolated vertices. The graph $G'$ contains one vertex for each vertex of $G$ and additionally one vertex for each edge of $G$. The $m$ vertices of $G'$ representing the edges of $G$ form a clique. Moreover, if the endpoints of an edge $e\in E(G)$ are $u,v\in V(G)$, then the vertex of $G'$ representing $e$ is connected with the vertices of $G'$ representing $u$ and $v$. This completes the construction of $G'$. It is easy to see that $G'$ is $P_5$-free: an induced path of $G'$ can contain at most two vertices of the clique corresponding to $E(G)$ and the vertices of $G'$ corresponding to the vertices of $G$ form an independent set. If $S$ is an independent set of $G$, then we claim that the corresponding vertices of $G'$ are at distance at least 3 from each other. Indeed, no two such vertices have a common neighbor: if $u,v\in S$ and the corresponding two vertices in $G'$ have a common neighbor, then this common neighbor represents an edge $e$ of $G$ whose endpoints are $u$ and $v$, violating the assumption that $S$ is independent. Conversely, suppose that $S'\subseteq V(G')$ is a set of $k$ vertices with pairwise distance at least 3 in $G'$. If $k\ge 2$, then all these vertices represent vertices of $G$: observe that for every edge $e$ of $G$, the vertex of $G'$ representing $e$ is at distance at most 2 from every other (non-isolated) vertex of $G'$. We claim that $S'$ corresponds to an independent set of $G$. 
Indeed, if $u,v\in S'$ and there is an edge $e$ in $G$ with endpoints $u$ and $v$, then the vertex of $G'$ representing $e$ is a common neighbor of $u$ and $v$, a contradiction. Next we give negative results on the existence of algorithms for <span style="font-variant:small-caps;">Scattered Set</span> that have running time subexponential in the number of edges. To rule out such algorithms, we construct instances that have bounded degree: then being subexponential in the number of vertices or in the number of edges is the same. We consider first claw-free graphs. The key insight here is that <span style="font-variant:small-caps;">Scattered Set</span> with $d=3$ in line graphs (which are claw-free) is essentially the <span style="font-variant:small-caps;">Induced Matching</span> problem, for which it is easy to prove hardness results. \[thm:nosubexpclaw\] Assuming the ETH, <span style="font-variant:small-caps;">$d$-Scattered Set</span> does not have a $2^{o(n)}$ algorithm on $n$-vertex claw-free graphs of maximum degree 6 for any fixed $d\ge 3$. Given an $n$-vertex graph $G$ with maximum degree 3, we construct a claw-free graph $G'$ with ${\ensuremath{\mathcal{O}}}(dn)$ vertices and maximum degree 6 such that $\alpha_d(G')=\alpha(G)$. Then by Theorem \[thm:MIS\], a $2^{o(n)}$-time algorithm for <span style="font-variant:small-caps;">$d$-Scattered Set</span> on $n$-vertex claw-free graphs of maximum degree 6 would violate the ETH. The construction is slightly different based on the parity of $d$; let us first consider the case when $d$ is odd. Let us construct the graph $G^+$ by attaching a path $Q_v$ of $\ell=(d-1)/2$ edges to each vertex $v\in V(G)$; let us denote by $e_{v,1}$, $\dots$, $e_{v,\ell}$ the edges of this path such that $e_{v,1}$ is incident with $v$. The graph $G'$ is defined as the line graph of $G^+$, that is, each vertex of $G'$ represents an edge of $G^+$ and two vertices of $G'$ are adjacent if the corresponding two edges share an endpoint.
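For the odd case, the construction of $G^+$ and of its line graph can be sketched in Python as follows (the tagging scheme for the path vertices and all function names are ours):

```python
from itertools import combinations

def attach_paths(vertices, edges, ell):
    """G^+: attach a path Q_v with ell edges to every vertex v of G.

    New path vertices are tagged tuples ('q', v, j), disjoint from V(G).
    Returns the edge list of G^+ and, per vertex v, the pendant edge e_{v,ell}.
    """
    plus_edges = list(edges)
    pendant = {}
    for v in vertices:
        prev = v
        for j in range(1, ell + 1):
            nxt = ('q', v, j)
            plus_edges.append((prev, nxt))
            prev = nxt
        pendant[v] = plus_edges[-1]  # the far end of Q_v
    return plus_edges, pendant

def line_graph(edges):
    """L(G): one vertex per edge; two adjacent iff the edges share an endpoint."""
    lg = {frozenset(e): set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):
            lg[frozenset(e)].add(frozenset(f))
            lg[frozenset(f)].add(frozenset(e))
    return lg
```

For $d=3$ (so $\ell=1$) and $G$ a single edge, $G^+$ has three edges and its line graph is a path on three vertices.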
It is well known that line graphs are claw-free. As $G^+$ has ${\ensuremath{\mathcal{O}}}(dn)$ edges and maximum degree 4 (recall that $G$ has maximum degree 3), the line graph $G'$ has maximum degree 6 with ${\ensuremath{\mathcal{O}}}(dn)$ vertices and edges. Thus an algorithm for <span style="font-variant:small-caps;">Scattered Set</span> with running time $2^{o(n)}$ on $n$-vertex claw-free graphs of maximum degree 6 could be used to solve MIS on $n$-vertex graphs with maximum degree 3 in time $2^{o(n)}$, contradicting the ETH. If there is an independent set $S$ of size $k$ in $G$, then we claim that the set $S'=\{e_{v,\ell}\mid v\in S\}$ is a $d$-scattered set of size $k$ in $G'$. To see this, suppose for a contradiction that there are two vertices $u,v\in S$ such that the vertices of $G'$ representing $e_{u,\ell}$ and $e_{v,\ell}$ are at distance at most $d-1$ from each other. This implies that there is a path in $G^+$ that has at most $d$ edges and whose first and last edges are $e_{u,\ell}$ and $e_{v,\ell}$, respectively. However, such a path would need to contain all the $\ell$ edges of the path $Q_u$ and all the $\ell$ edges of $Q_v$, hence it can contain at most $d-2\ell=1$ edges outside these two paths. But $u$ and $v$ are not adjacent in $G^+$ by assumption, hence more than one edge is needed to complete $Q_u$ and $Q_v$ to a path, a contradiction. Conversely, let $S'$ be a distance-$d$ scattered set in $G'$, which corresponds to a set $S^+$ of edges in $G^+$. Observe that for any $v\in V(G)$, at most one edge of $S^+$ can be incident to the vertices of $Q_v$: otherwise, the corresponding two vertices in the line graph $G'$ would have distance at most $\ell<d$. It is easy to see that if $S^+$ contains an edge incident to a vertex of $Q_v$, then we can always replace this edge with $e_{v,\ell}$, as this can only move it farther away from the other edges of $S^+$. Thus we may assume that every edge of $S^+$ is of the form $e_{v,\ell}$.
Let us construct the set $S=\{v\mid e_{v,\ell}\in S^+\}$, which has size exactly $k$. Then $S$ is independent in $G$: if $u,v\in S$ are adjacent in $G$, then there is a path of $2\ell+1=d$ edges in $G^+$ whose first and last edges are $e_{v,\ell}$ and $e_{u,\ell}$, respectively, hence the vertices of $G'$ corresponding to them have distance at most $d-1$. If $d\ge 4$ is even, then the proof is similar, but we obtain the graph $G^+$ by first subdividing each edge and attaching paths of length $\ell=d/2-1$ to each original vertex. The proof proceeds in a similar way: if $u$ and $v$ are adjacent in $G$, then $G^+$ has a path of $2\ell+2=d$ edges whose first and last edges are $e_{v,\ell}$ and $e_{u,\ell}$, respectively, hence the vertices of $G'$ corresponding to them have distance at most $d-1$. There is a well-known and easy way of proving hardness of MIS on graphs with large girth: subdividing edges increases the girth and the size of the largest independent set changes in a controlled way. \[lem:girth\] If there is a $2^{o(n)}$-time algorithm for <span style="font-variant:small-caps;">MIS</span> on $n$-vertex graphs of maximum degree 3 and girth more than $g$ for any fixed $g>0$, then the ETH fails. Let $g$ be a fixed constant and let $G$ be a simple graph with $n$ vertices, $m$ edges, and maximum degree 3 (hence $m={\ensuremath{\mathcal{O}}}(n)$). We construct a graph $G'$ by subdividing each edge with $2g$ new vertices. We have that $G'$ has $n'={\ensuremath{\mathcal{O}}}(n+gm)={\ensuremath{\mathcal{O}}}(n)$ vertices, maximum degree 3, and girth at least $3(2g+1)>g$. It is known and easy to show that subdividing the edges this way increases the size of the maximum independent set exactly by $gm$. Thus a $2^{o(n')}$-time algorithm for $n'$-vertex graphs of maximum degree 3 and girth at least $g$ could be used to give a $2^{o(n)}$-time algorithm for $n$-vertex graphs of maximum degree $3$, hence the ETH would fail by Theorem \[thm:MIS\].
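The subdivision step in the proof of Lemma \[lem:girth\] is easy to make concrete; a Python sketch (the brute-force independence number is exponential and serves only to check tiny instances; all names are ours):

```python
def subdivide(edges, g):
    """Replace every edge by a path with 2g internal vertices (2g+1 edges)."""
    new_edges = []
    for idx, (u, v) in enumerate(edges):
        chain = [u] + [('s', idx, j) for j in range(2 * g)] + [v]
        new_edges.extend(zip(chain, chain[1:]))
    return new_edges

def alpha(edges):
    """Maximum independent set size by exhaustive search (tiny graphs only)."""
    verts = list({v for e in edges for v in e})
    pos = {v: i for i, v in enumerate(verts)}
    best = 0
    for mask in range(1 << len(verts)):
        if all(not (mask >> pos[u] & 1 and mask >> pos[v] & 1) for u, v in edges):
            best = max(best, bin(mask).count("1"))
    return best
```

On the triangle ($\alpha=1$, $m=3$) with $g=1$ the subdivided graph is a $C_9$, and indeed $\alpha$ grows by exactly $gm=3$.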
We use the lower bound of Lemma \[lem:girth\] to prove lower bounds for <span style="font-variant:small-caps;">Scattered Set</span> on $C_t$-free graphs. \[thm:nosubexpcycle\] Assuming the ETH, <span style="font-variant:small-caps;">$d$-Scattered Set</span> does not have a $2^{o(n)}$ algorithm on $n$-vertex $C_t$-free graphs with maximum degree 3 for any fixed $t\ge 3$ and $d\ge 2$. Let $G$ be an $n$-vertex $m$-edge graph of maximum degree 3 and girth more than $t$. We construct a graph $G'$ the following way: we subdivide each edge of $G$ with $d-2$ new vertices to create a path of length $d-1$, and attach a path of length $d-1$ to each of the $(d-2)m={\ensuremath{\mathcal{O}}}(dn)$ new vertices created. The resulting graph has maximum degree 3, ${\ensuremath{\mathcal{O}}}(d^2n)$ vertices and edges, and girth more than $(d-1)t$ (hence it is $C_t$-free). We claim that $\alpha_d(G')=\alpha(G)+m(d-2)$ holds. This means that a $2^{o(n')}$-time algorithm for <span style="font-variant:small-caps;">Scattered Set</span> on $n'$-vertex $C_t$-free graphs with maximum degree 3 would give a $2^{o(n)}$-time algorithm for $n$-vertex graphs of maximum degree 3 and girth more than $t$, and this would violate the ETH by Lemma \[lem:girth\]. To see that $\alpha_d(G')=\alpha(G)+m(d-2)$ holds, consider first an independent set $S$ of $G$. When constructing $G'$, we attached $m(d-2)$ paths of length $d-1$. Let $S'$ contain the degree-1 endpoints of these $m(d-2)$ paths, plus the vertices of $G'$ corresponding to the vertices of $S$. It is easy to see that any two vertices of $S'$ have distance at least $d$ from each other: $S$ is an independent set in $G$, hence the corresponding vertices in $G'$ are at distance at least $2(d-1)\ge d$ from each other, while the degree-1 endpoints of the paths of length $d-1$ are at distance at least $d$ from every other vertex that can potentially be in $S'$. This shows $\alpha_d(G')\ge \alpha(G)+m(d-2)$.
Conversely, let $S'$ be a set of vertices in $G'$ that are at distance at least $d$ from each other. The set $S'$ contains two types of vertices: let $S'_1$ be the vertices that correspond to the original vertices of $G$ and let $S'_2$ be the vertices that come from the $m(d-2)d$ new vertices introduced in the construction of $G'$. Observe that $S'_2$ can be covered by $m(d-2)$ paths of length $d-1$ and each such path can contain at most one vertex of $S'$, hence at most $m(d-2)$ vertices of $S'$ can be in $S'_2$. We claim that $S'_1$ can contain at most $\alpha(G)$ vertices, as $S'\cap S'_1$ corresponds to an independent set of $G$. Indeed, if $u$ and $v$ are adjacent vertices of $G$, then the corresponding two vertices of $G'$ are at distance $d-1$, hence they cannot both be present in $S'$. This shows $\alpha_d(G')\le \alpha(G)+m(d-2)$, completing the proof of the correctness of the reduction. As the following corollary shows, putting together Theorems \[thm:nosubexpclaw\] and \[thm:nosubexpcycle\] implies Theorem \[thm:scatteredmain\](2). \[cor:nosubexp\] If $H$ is a graph having a component that is not a path, then, assuming the ETH, <span style="font-variant:small-caps;">$d$-Scattered Set</span> has no $2^{o(n+m)}$-time algorithm on $n$-vertex $m$-edge $H$-free graphs for any fixed $d\ge 3$. Suppose first that $H$ is not a forest and hence some cycle $C_t$ for $t\ge 3$ appears as an induced subgraph in $H$. Then the class of $H$-free graphs is a superset of the class of $C_t$-free graphs, which means that the statement follows from Theorem \[thm:nosubexpcycle\] (which gives a lower bound for a more restricted class of graphs). Assume therefore that $H$ is a forest. Then it must have a component that is a tree, but not a path, hence it has a vertex $v$ of degree at least 3. The neighbors of $v$ are independent in the forest $H$, which means that the claw $K_{1,3}$ appears in $H$ as an induced subgraph.
Then the class of $H$-free graphs is a superset of the class of claw-free graphs, which means that the statement follows from Theorem \[thm:nosubexpclaw\] (which gives a lower bound for a more restricted class of graphs). [^1]: Institute for Computer Science and Control, Hungarian Academy of Sciences, Hungary. [^2]: Department of Informatics, University of Bergen, Norway [^3]: Institute for Computer Science and Control, Hungarian Academy of Sciences, Hungary. [^4]: Institute of Informatics, University of Warsaw, Poland [^5]: Alfréd Rényi Institute of Mathematics, Budapest and Department of Computer Science and Systems Technology, University of Pannonia, Veszprém, Hungary [^6]: Department of Information and Computing Sciences, Utrecht University, The Netherlands [^7]: A preliminary version of the paper, with weaker results and only a subset of authors, appeared in the proceedings of IPEC 2016 [@BMT]. This research is a part of projects that have received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 714704 (Marcin Pilipczuk), 715744 (Daniel Lokshtanov), 280152 and 725978 (Gábor Bacsó and Dániel Marx). Research of Zsolt Tuza was supported by the National Research, Development and Innovation Office – NKFIH under the grant SNN 116095.
--- abstract: 'We generalise M. M. Skriganov’s notion of weak admissibility for lattices to include standard lattices occurring in Diophantine approximation and algebraic number theory, and we prove estimates for the number of lattice points in sets such as aligned boxes. Our result improves on Skriganov’s celebrated counting result if the box is sufficiently distorted, the lattice is not admissible, and, e.g., symplectic or orthogonal. We establish a criterion under which our error term is sharp, and we provide examples in dimensions $2$ and $3$ using continued fractions. We also establish a similar counting result for primitive lattice points, and apply the latter to the classical problem of Diophantine approximation with primitive points as studied by Chalk, Erdős, and others. Finally, we use o-minimality to describe large classes of sets to which our counting results apply.' address: | Department of Mathematics\ Royal Holloway, University of London\ TW20 0EX Egham\ UK author: - Martin Widmer bibliography: - 'literature.bib' title: 'Weak admissibility, primitivity, o-minimality, and Diophantine approximation' --- Introduction {#intro} ============ In this article we generalise Skriganov’s notion of (weak) admissibility for lattices to include standard lattices occurring in Diophantine approximation and algebraic number theory (e.g., ideal lattices), and we prove a sharp estimate for the number of lattice points in sets such as aligned boxes. Our result applies when the lattice is weakly admissible, whereas Skriganov’s result requires the dual lattice to be weakly admissible (in his stronger sense). If the lattice is symplectic or orthogonal[^1] and weakly admissible then both results apply, and our error term is better, provided the lattice is not admissible and the box is sufficiently distorted. 
Our error term also has a good dependence on the geometry of the lattice which allows us to apply a Möbius inversion to get a similar estimate for primitive lattice points. The motivation for this comes from a classical Diophantine approximation result [@ChalkErdos1959] due to Chalk and Erdős from 1959 for numbers; it appears that our result is the first one in higher dimensions. We also make modest progress on a conjecture of Dani, Laurent, and Nogueira [@DaniLaurentNogueira2014; @LaurentNogueira2012] on an inhomogeneous Khintchine–Groshev type result for primitive points. Finally, we use o-minimality, a notion from model theory, to describe large classes of sets to which our counting results apply. The usage of o-minimality to asymptotically count lattice points has been initiated by Barroero and the author [@BarroeroWidmer], and [@BarroeroWidmer Theorem 1.3] has already found various applications (see, e.g., [@Barroero1; @Barroero2; @Frei1; @Frei2; @Frei3; @Frei4]). Here we further develop this idea but we use o-minimality in a different way.\ Next we shall state the simplest special case of Theorem \[Thm1\], and compare it to Skriganov’s result [@Skriganov Theorem 6.1] (more precisely, Technau and the author’s generalisation [@TechnauWidmer Theorem 1] to inhomogeneously expanding boxes). Let $\Gamma\subset \IR^N$. Following Skriganov we define $\nu(\Gamma,\rho):=\inf\{|x_1\cdots x_N|^{1/N}; \vvx\in \Gamma\backslash\{\vv0\}, |\vvx|<\rho\}$, and we say a lattice $\La$ in $\IR^N$ is weakly admissible if $\nu(\La,\rho)>0$ for all $\rho>0$ and admissible if $\lim_{\rho\rightarrow \infty}\nu(\La,\rho)>0$. Let $\Z$ be a translate of the box $[-Q_1,Q_1]\times\cdots \times[-Q_N,Q_N]$, and write $\Qm$ for the maximal $Q_i$, and $\Rz$ for their geometric mean. We set $\LEZ:=\left|\#(\La\cap \Z)-\Vol\Z/\det\La\right|$. \[Thm0\] Suppose $\La$ is a weakly admissible lattice in $\IR^N$.
Then we have $$\begin{aligned} \LEZ \ll_N \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\nu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^{N-1}.\end{aligned}$$ Suppose $\La$ is unimodular. Skriganov [@Skriganov Theorem 6.1] proved error estimates for homogeneously expanding aligned boxes (and more generally certain polyhedrons), provided the dual lattice $\La^\perp$ (with respect to the standard inner product) is weakly admissible (see also [@Skriganov1994 (1.11) Theorem 1.1] for a precursor of this result for admissible lattices). As shown in [@TechnauWidmer Theorem 1] his method also leads to results for inhomogeneously expanding aligned boxes (provided $\La^\perp$ is weakly admissible) of the form[^2] $$\begin{aligned} \label{Skriganovbound} \LEZ \ll_N \frac{1}{\nu(\La^\perp, (\Rz/\Qmin)^*)^N}\inf_{\rho> \gamma_N^{1/2}} \left(\frac{\Rz^{N-1}}{\sqrt{\rho}}+\frac{r^{N-1}}{\nu(\La^\perp, 2^r\Rz/\Qmin)^N}\right),\end{aligned}$$ where $\gamma_N$ denotes the Hermite constant, $r=N^2+N\log(\rho/\nu(\La^\perp,\rho\Rz/\Qmin))$, and $(\Rz/\Qmin)^*=\max\{\Rz/\Qmin,\gamma_N\}$. If $\La$ is admissible (which implies that $\La^\perp$ is admissible) then Skriganov’s bound becomes $\ll_\La (\log \Rz)^{N-1}$ which conjecturally is sharp. Let us now suppose that $\La$ is weakly admissible but not admissible. Technau and the author [@TechnauWidmer Theorem 2] have shown that in general, even if $\La$ and $\La^\perp$ are both weakly admissible, there is no way to bound $\nu(\La,\cdot)$ in terms of $\nu(\La^\perp,\cdot)$. This indicates the complementary aspect of Theorem \[Thm0\] and (\[Skriganovbound\]). However, if $\La=A\IZ^N$ with, e.g., a symplectic or orthogonal matrix $A$ then $\nu(\La,\cdot)=\nu(\La^\perp,\cdot)$ by [@TechnauWidmer Proposition 1], and we can directly compare our result with Skriganov’s; note also that for $N=2$ every unimodular lattice is symplectic (cf. [@TechnauWidmer Remark after Proposition 1]).
Using that $\Rz/\Qmin\geq (\Qm/\Rz)^{1/(N-1)}=:\x$ and that $r\geq -N\log\nu(\La^\perp,\x)$ we find the following crude lower bound[^3] for the right-hand side of (\[Skriganovbound\]) $$\begin{aligned} \label{Skriganovbound1} \left(\nu(\La, \x)\,\nu(\La, \nu(\La,\x)^{-N\log 2}\x)\right)^{-N}.\end{aligned}$$ Choosing $B=\Qm/\Rz=\x^{N-1}$ we see that the error term in Theorem \[Thm0\] is bounded from above by $$\begin{aligned} \label{Thm1bound} \ll_N\Rz^{N-1}\nu(\La, \x^{N-1})^{-(N-1)}.\end{aligned}$$ In particular, if $N=2$ then our error term is better whenever $\nu(\La,\Qm/\Rz)^{-3}$ is larger than a certain multiple of $(\Vol\Z)^{1/2}$, so if the box is sufficiently distorted in terms of $\nu(\La, \cdot)$ and the volume of the box (note that for $\nu(\La, \Qm/\Rz)^{-1}=o(\Rz)$ as $\Rz$ tends to infinity, we still get asymptotics). Also for arbitrary $N$ our error term is better when the box is sufficiently distorted in terms of $\nu(\La, \cdot)$ and the volume of the box, and provided $\nu(\La,\rho)$ decays faster than $\rho^{-1/\log 2}$ or sufficiently slowly, e.g., like a negative power of $\log \rho$. The latter happens for almost every unimodular lattice (cf. [@Skriganov Lemma 4.5]), and with $\La=A\IZ^N$ also for almost every[^4] matrix $A\in SO_N(\IR)$ (cf. [@Skriganov Lemma 4.3]), and, as mentioned before, for these $\La$ we also have $\nu(\La,\cdot)=\nu(\La^\perp,\cdot)$.\ Another significant difference between our error term and Skriganov’s concerns the dependence on the lattice. If we replace $\Z$ by $k^{-1}\Z$ (or equivalently replace $\La$ by $k\La$ and fix $\Z$) then the lower bound (\[Skriganovbound1\]) of the error term in (\[Skriganovbound\]) remains the same. On the other hand the upper bound (\[Thm1bound\]) of the error term in Theorem \[Thm0\] decreases by a factor $k^{-N+1}$.
This improvement allows us to sieve for coprimality, and thus to prove asymptotics for the number of primitive lattice points.\ Generalisation of weak admissibility and statement of the results ================================================================= Generalised weak admissibility ------------------------------ Let $\S=(\vm,\vbeta)$, where $\vm=(\me,\ldots,\mn)\in \IN^n$, $\vbeta=(\beta_1,\ldots,\beta_n)\in (0,\infty)^n$, and $n\in \IN=\{1,2,3,\ldots\}$. We write $\vx_i$ for the elements in $\IR^{\mi}$ and $\vvx=(\vx_1,\ldots,\vx_n)$ for the elements in $\IR^{\me}\times\cdots\times\IR^{\mn}=\IR^N$, where $$\begin{aligned} \N&:=\sum_{i=1}^{n}\mi.\end{aligned}$$ We will always assume that $N>1$. We set $$\begin{aligned} \t&:=\sum_{i=1}^{n}\beta_i.\end{aligned}$$ We use $|\cdot|$ to denote the Euclidean norm, and we write $$\begin{aligned} \Nm(\vvx):=\prod_{i=1}^{n}|\vx_i|^{\beta_i}\end{aligned}$$ for the multiplicative $\vbeta$-norm on $\E$ induced by $\S$. Let $\C\subset \E$ be a coordinate-tuple subspace, i.e., $$\begin{aligned} \C=\{\vvx\in \E; \vx_i=\vNull\; (\text{for all }i\in I)\},\end{aligned}$$ where $I\subset \{1,\ldots,n\}$. We fix such a pair $(\S,\C)$, and for $\Gamma\subset \E$ and $\ro>0$ we define the quantities $$\begin{aligned} \nu(\Gamma,\ro)&:=\inf\{\Nm(\vvx)^{1/\t}; \vvx\in \Gamma\backslash \C, |\vvx|<\ro\},\\ \Nm(\Gamma)&:=\lim_{\rho\rightarrow \infty}\nu(\Gamma,\ro).\end{aligned}$$ As usual we always interpret $\inf\emptyset =\infty$ and $\infty>x$ for all $x\in \IR$. The above quantities in the special case when $\C=\{\vv0\}$ and $\mi=\beta_i=1$ $(\text{for all }1\leq i\leq n)$ were introduced by Skriganov in [@Skriganov1994; @Skriganov]. By a lattice in $\IR^N$ we always mean a lattice of rank $N$. \[defadm\] Let $\La$ be a lattice in $\IR^N$. We say $\La$ is [*weakly admissible*]{} for $(\S,\C)$ if $\nu(\La,\ro)>0$ for all $\ro>0$. We say $\La$ is [*admissible*]{} for $(\S,\C)$ if $\Nm(\La)>0$.
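To make the definition concrete: for a lattice given by a basis, $\nu(\La,\rho)$ can be approximated by brute force over a finite box of integer coefficients (a sketch only, and an approximation: a rigorous computation requires enumerating all lattice points of norm less than $\rho$; all names here are ours):

```python
import itertools
import math

def nu(basis, rho, beta, blocks, in_C, box=12):
    """Approximate nu(Lambda, rho) = inf { Nm(x)^(1/t) : x in Lambda, x not in C, |x| < rho }.

    `basis` is a list of N basis vectors, `blocks` slices a vector into the
    components x_1, ..., x_n, `beta` holds the weights beta_i, and `in_C`
    tests membership in the excluded coordinate subspace C.
    """
    t = sum(beta)
    N = len(basis)
    best = math.inf  # inf over the empty set is infinity, as in the text
    for coeffs in itertools.product(range(-box, box + 1), repeat=N):
        x = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(N)]
        if in_C(x) or math.hypot(*x) >= rho:
            continue
        nm = 1.0
        for (lo, hi), bi in zip(blocks, beta):
            nm *= math.hypot(*x[lo:hi]) ** bi
        best = min(best, nm ** (1.0 / t))
    return best
```

For instance, for the lattice $\{(p+\theta q, q)\}$ with $\theta$ the golden ratio (a badly approximable number) and $\C=\{\vvx;\vx_2=\vNull\}$, the value $\nu(\La,\rho)$ is a finite positive number for every $\rho$.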
Note that weak admissibility for a lattice in $\IR^N$ depends only on the choice of $\C$ and $\vm$ whereas admissibility depends on $\C$ and $\S=(\vm,\vbeta)$. Also notice that a lattice $\La$ in $\IR^N $ is weakly admissible (or admissible) in the sense of Skriganov [@Skriganov] if and only if $\La$ is weakly admissible (or admissible) for $(\S,\C)$ with $\C=\{\vv0\}$ and $\mi=\beta_i=1$ $(\text{for all }1\leq i\leq n)$. Let us give some examples to illustrate that our notion of weak admissibility captures new interesting cases not covered by Skriganov’s notion of weak admissibility. Let $\Theta\in \Mat_{r\times s}(\IR)$ be a matrix with $r$ rows and $s$ columns and consider[^5] $$\begin{aligned} \label{Dioappexample} \La=\begin{bmatrix} I_r & \Theta \\ \vNull & I_s \end{bmatrix} \IZ^{r+s}=\{(\vp+\Theta\vq,\vq); (\vp,\vq)\in \IZ^r\times\IZ^s\}.\end{aligned}$$ We take $n=2$, $\me=r$, $m_2=s$ and $\C=\{(\vx_1,\vx_2); \vx_2=\vNull\}.$ Then the lattice $\La$ is weakly admissible for $(\S,\C)$ (for every choice of $\vbeta$) if $\vp+\Theta\vq\neq \vNull$ for every $\vq\neq \vNull$. If $\vbeta=(1,\beta)$ then $\La$ is admissible for $(\S,\C)$ if we have $$\begin{aligned} \label{Dioapp} |\vp+\Theta\vq|{|\vq|}^{\beta}\geq c_\La\end{aligned}$$ for every $(\vp,\vq)$ with $\vq\neq \vNull$ and some fixed $c_\La>0$. The above lattice $\La$ naturally arises when considering Diophantine approximations for the matrix $\Theta$ (cf. Corollary \[Cor2\]). Recall that the matrix $\Theta$ is called badly approximable if (\[Dioapp\]) holds true with $\beta=s/r$. W. M. Schmidt [@Schmidt1969] has shown that the Hausdorff dimension of the set of badly approximable matrices is full, i.e., $rs$. Another example comes from the Minkowski-embedding of, e.g., an ideal in a number field. Suppose $K$ is a number field with $r$ real and $s$ pairs of complex conjugate embeddings. Let $\sigma:K\rightarrow \IR^r\times\IC^s$ be the Minkowski-embedding, and identify $\IC$ in the usual way with $\IR^2$.
Set $n=r+s$, $\C=\{\vv0\}$, $\mi=\beta_i=1$ for $1\leq i\leq r$, and $\mi=\beta_i=2$ for $r+1\leq i\leq r+s$. Now let $\A\subset K$ be a free $\IZ$-module of rank $N=r+2s$. Then $\La=\sigma\A$ is admissible for $(\S,\C)$. In particular, this generalises the examples of Skriganov for totally real number fields to arbitrary number fields $K$. Unlike in Skriganov’s setting we can also consider cartesian products of such modules $\A_j$ by using the embedding $\sigma:K^p\rightarrow \IR^{pr}\times\IC^{ps}$ that sends a tuple $\valpha$ to $(\sigma_1(\valpha),\ldots,\sigma_{r+s}(\valpha))$. Now $\mi$ is $p$ if $\sigma_i$ is real and $2p$ otherwise while $n$ and $\beta_i$ remain unchanged. Again we get that $\La=\sigma(\A_1\times\cdots\times \A_p)$ is an admissible lattice for $(\S,\C)$. Generalised aligned boxes ------------------------- Now we introduce the sets in which we count the lattice points. Essentially these are the sets that are distorted only in the directions of the coordinate axes. Let $(\S,\C)$ be given, and recall that $\C=\C_I$. For $\Q=(Q_1,\ldots,Q_n)\in (0,\infty)^n$ we consider the $\vbeta$-weighted geometric mean $$\begin{aligned} \Rz=\left(\prod_{i=1}^{n}\Qi^{\beta_i}\right)^{1/\t},\end{aligned}$$ and we assume throughout this note that $$\begin{aligned} \label{Qorder} \Qi\leq \Rz \;(\text{for all } i\notin I).\end{aligned}$$ We set $$\begin{aligned} \Qm:=\max_{1\leq i\leq n}\Qi,\\ \Qmin:=\min_{1\leq i\leq n}\Qi.\end{aligned}$$ For $\ka>0$ and $M\in \IN$ we introduce the family of sets $$\begin{aligned} \F_{\ka,M}:=\{S\subset \IR^N; \partial(AS)\in \text{ Lip}(N,M,\ka\cdot \diam(AS))\; \forall A\in \Aut_N(\IR)\}.\end{aligned}$$ Here $\Aut_N(\IR)$ denotes the group of invertible $N\times N$-matrices with real entries, $\diam(\cdot)$ denotes the diameter, $\partial(\cdot)$ denotes the topological boundary, and the notation Lip$(\cdot,\cdot,\cdot)$ is explained in Definition \[Lip\] in Section \[countingprinciples\].
It is an immediate consequence of [@WidmerLNCLP Theorem 2.6] that every bounded convex set in $\IR^N$ lies in $\F_{\ka,M}$ for $\ka=16N^{5/2}$ and $M=1$. We will also show (Proposition \[Propomin\]) that if $Z\subset \IR^{d+N}$ is definable in an o-minimal structure and each fiber $Z_T=\{\vvx; (T,\vvx)\in Z\}\subset \IR^N$ is bounded then each fiber $Z_T$ lies in $\F_{\ka_Z,M_Z}$ for certain constants $\ka_Z$ and $M_Z$ depending only on $Z$ but not on $T$. This result provides another rich source of interesting examples, and might be of independent interest. For $1\leq i\leq n$ let $\pi_i:\E\rightarrow \IR^{\mi}$ be the projection defined by $\pi_i(\vvx)=\vx_i$. We fix values $\ka$ and $M$, and we assume throughout this article that $\Z\subset \IR^N$ is such that for all $1\leq i\leq n$ $$\begin{aligned} (1)\;&\Z\in \F_{\ka,M},\\ (2)\;&\pi_i(\Z)\subset \Byi(\Qi) \text{ for some }\vyi \in \IR^{\mi}.\end{aligned}$$ Here $B_{\vyi}(\Qi)$ denotes the closed Euclidean ball in $\IR^{\mi}$ about $\vy_i$ of radius $\Qi$. As is well known (see, e.g., [@Spain]) $\partial(\Z)\in \text{ Lip}(N,M,L)$ implies that $\Z$ is measurable. Main results ------------ Let $(\S,\C)$ be given. For $\Gamma\subset \E$ we introduce the quantities $$\begin{aligned} \la_1(\Gamma):=\inf\{|\vvx|; \vvx\in \Gamma\backslash\{\vv0\}\},\end{aligned}$$ and $$\begin{aligned} \mu(\Gamma,\ro):=\min\{\la_1(\Gamma\cap \C), \nu(\Gamma,\ro)\}.\end{aligned}$$ If $\mu(\Gamma,\rho)=\infty$ then we interpret $1/\mu(\Gamma,\rho)$ as $0$. Finally, we introduce the error term $$\begin{aligned} \LEZ:=\left|\#(\Z\cap\La)-\frac{\Vol \Z}{\det \La}\right|.\end{aligned}$$ Our first result is a sharp upper bound for $\LEZ$. \[Thm1\] Suppose $\La$ is a weakly admissible lattice for $(\S,\C)$, and define $\cthree:=M((1+\ka)N^{2N})^N$.
Then we have $$\begin{aligned} \LEZ \leq \cthree \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^{N-1}.\end{aligned}$$ Considering suitable homogeneously expanding parallelepipeds, it is clear that the error term cannot be improved in this generality. However, the situation becomes much more interesting when we restrict the sets $\Z$ to aligned boxes. In this case Skriganov conjectured [@Skriganov1994 Remark 1.1] that his error term [@Skriganov1994 (1.11) Theorem 1.1] for admissible lattices (in his sense) is sharp. Skriganov’s conjecture would follow from the expected sharp lower bound for the extremal discrepancy of sequences in the unit cube in $\IR^N$ (see [@Skriganov1994 Remark 2.2]); however, this is a major open problem in uniform distribution theory, proved only for $N=2$ by Schmidt [@SchmidtVII]. Therefore, the sharpness of Skriganov’s error term for admissible lattices is known only for $N=2$. Here we are able to show that for weakly admissible lattices (in our sense) the error term in Theorem \[Thm1\] is sharp for $N=2$ and $N=3$. \[sharp\] Suppose $2\leq n\leq 3$, $\m_i=\beta_i=1$ ($1\leq i\leq n$) (hence $N=n$) and $\C=\{\vvx;\vx_n=\vNull\}$. Then there exists an absolute constant $c_{abs}>0$, a unimodular, weakly admissible lattice $\La$ for $(\S,\C)$, and a sequence of increasingly distorted (i.e., $\Rz/\Qm$ tends to zero), aligned boxes $\Z=[-Q_1,Q_1]\times\cdots\times[-Q_n,Q_n]$, satisfying (\[Qorder\]), whose volume $(2\Rz)^N$ tends to infinity, such that for each box $\Z$ $$\LEZ \geq c_{abs} \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^{N-1}.$$ Thanks to the good dependence on the lattice of the error term in Theorem \[Thm1\] we are also able to prove asymptotics for the number of primitive lattice points. Let $\La$ be a lattice in $\IR^N$. We say $\vvx\in \La$ is primitive if $\vvx$ is not of the form $k\vvy$ for some $\vvy\in \La$ and some integer $k>1$.
We write $$\begin{aligned} \La^*:=\{\vvx\in \La; \vvx\text{ is primitive}\}.\end{aligned}$$ To state our next result let $\T:[0,\infty)\rightarrow [1,\infty)$ be a monotonically increasing upper bound for the divisor function, i.e., $$\T(k)\geq \sum_{d\mid k}1$$ for all $k\in \IN$. Finally, $\zeta(\cdot)$ denotes the Riemann zeta function. \[Thm2\] Suppose $\La$ is a weakly admissible lattice for $(\S,\C)$. Then there exists a constant $\cone=\cone(N,\ka,M)$, depending only on $N,\ka,M$, such that $$\begin{aligned} \left|\#(\Z\cap\La^*)-\frac{\Vol \Z}{\zeta(N)\det \La}\right| \leq \cone \left(\left(\frac{\Rz}{\mu}+1\right)^{N-1}+\left(\frac{\Rz}{\mu}+1\right)\T\left(\Ad \right)\right),\end{aligned}$$ where $$\begin{aligned} \Ad =N^{2N+2}(\Rz+|\phi(\vvy)|)\left(\frac{1}{\mu}+\frac{1}{\Rz}\right),\end{aligned}$$ $\mu=\mul$, and $|\phi(\vvy)|$ is the Euclidean norm of $(\Rz\vy_1/\Qo,\ldots,\Rz\vy_n/\Qn)\in \E$. Note that for every $a>2$ there is a $b=b(a)\geq \exp(\exp(1))$ such that for $x\geq b$ we can take $\T(x)=a^{\frac{\log x}{\log\log x}}$. We use $\Rz+|\phi(\vvy)|\leq \Rz(1+|\vvy|/\Qmin)$ and ${1}/{\mu}+{1}/{\Rz}\leq 2/\mu$ to obtain the following corollary. \[Cor\] Suppose $\La$ is a weakly admissible lattice for $(\S,\C)$ and $a>2$. Then there exists a constant $\ctwo=\ctwo(a,N,\ka,M,|\vvy|)$, depending only on $a,N,\ka,M$ and $|\vvy|$, such that for all $\Rz\geq b\mu$ we have $$\begin{aligned} \left|\#(\Z\cap\La^*)-\frac{\Vol \Z}{\zeta(N)\det \La}\right| \leq \ctwo\left(\left(\frac{\Rz}{\mu}\right)^{N-1}+a^{\frac{\log(\eta\Rz/\mu)}{\log\log(\eta\Rz/\mu)}}\left(\frac{\Rz}{\mu}\right)\right),\end{aligned}$$ where $\mu=\mul$, and $\eta=1+|\vvy|/\Qmin$. Next we consider applications to Diophantine approximation.
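The leading term $\Vol\Z/(\zeta(N)\det\La)$ can be sanity-checked in the most classical situation $\La=\IZ^2$, where the primitive points are exactly the coprime pairs and the density $1/\zeta(2)=6/\pi^2$ emerges. A quick numerical sketch (illustration only; the theorem of course concerns far more general lattices and boxes, and `primitive_count` is our own helper):

```python
import math

def primitive_count(R):
    # Count primitive points of Z^2 in the square [-R, R]^2, i.e. points
    # (x, y) != (0, 0) with gcd(|x|, |y|) = 1.
    count = 0
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            if (x, y) != (0, 0) and math.gcd(abs(x), abs(y)) == 1:
                count += 1
    return count

R = 200
vol = (2 * R) ** 2
ratio = primitive_count(R) / vol   # approaches 1/zeta(2) = 6/pi^2 ~ 0.6079
```

For $R=200$ the observed ratio already agrees with $6/\pi^2$ to within about one percent, matching the main term of Theorem \[Thm2\] with $N=2$, $\det\La=1$.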
Let $\Theta\in \Mat_{r\times s}(\IR)$ be a matrix with $r$ rows and $s$ columns and suppose that $\App:[1,\infty)\rightarrow (0,1]$ is a non-increasing function such that $$\begin{aligned}\label{Diophcond} |\vp+\Theta\vq|{|\vq|}^{\beta}\geq \App(|\vq|)\end{aligned}$$ for every $(\vp,\vq)$ with $\vq\neq \vNull$. Let $\vy$ be in $\IR^r$, $Q\geq 1$, and let $0<\epsilon\leq 1$. We consider the system $$\begin{aligned} \label{hallo1} \vp+\Theta\vq-\vy\in [0,\epsilon]^r\\ \label{hallo2} \vq\in [0,Q]^s.\end{aligned}$$ Let $N^*_{\Theta,\vy}(\epsilon,Q)$ be the number of $(\vp,\vq)\in \IZ^{r+s}$ that satisfy the above system and have coprime coordinates, i.e., $\gcd(p_1,\ldots,p_r,q_1,\ldots,q_s)=1$. In the one-dimensional case $r=s=1$ Chalk and Erdős [@ChalkErdos1959] proved in 1959 that if $\Theta$ is an irrational number and $\epsilon=\epsilon(\vq)=(1/\vq)(\log \vq/\log\log \vq)^2$ then (\[hallo1\]) has infinitely many coprime solutions, i.e., $N^*_{\Theta,\vy}(\epsilon,Q)$ is unbounded as $Q$ tends to infinity. No improvements or generalisations have been obtained since. The following corollary follows straightforwardly from Corollary \[Cor\], and we leave the proof to the reader. We suppose $\epsilon=\epsilon(Q)$ is a function of $Q$, and that $\epsilon\cdot Q^\beta$ tends to infinity as $Q$ tends to infinity. \[Cor2\] Suppose $a>2$. Then, as $Q$ tends to infinity, we have $$\begin{aligned} N^*_{\Theta,\vy}(\epsilon,Q)=\frac{\epsilon^r Q^s}{\zeta(r+s)}+O(u^{r+s-1}+ua^{\frac{\log\delta}{\log\log\delta}}),\end{aligned}$$ where $u=\left(\frac{\epsilon Q^\beta}{\App(Q)}\right)^{1/(1+\beta)}$, and $\delta=\left(\frac{1}{\App(Q)}\left(\frac{Q}{\epsilon}\right)^\beta\right)^{1/(1+\beta)}$. Corollary \[Cor2\] also implies new results on how quickly $\epsilon$ can decay so that (\[hallo1\]) still has infinitely many coprime solutions.
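The main term $\epsilon^r Q^s/\zeta(r+s)$ of Corollary \[Cor2\] is easy to observe numerically in the case $r=s=1$, $\vy=\vNull$. The following sketch is purely illustrative; the sample values of $\Theta$, $\epsilon$ and $Q$ are arbitrary choices of ours.

```python
import math

# Count coprime pairs (p, q) with p + theta*q in [0, eps] and 1 <= q <= Q,
# and compare with the main term eps*Q/zeta(2) from Corollary [Cor2].
theta = math.sqrt(2)      # an irrational (in fact badly approximable) sample
eps, Q = 0.5, 4000
zeta2 = math.pi ** 2 / 6

count = 0
for q in range(1, Q + 1):
    p = math.ceil(-theta * q)            # smallest p with p + theta*q >= 0
    # Since eps < 1, at most this one p can land in the window [0, eps].
    if p + theta * q <= eps and math.gcd(abs(p), q) == 1:
        count += 1

main_term = eps * Q / zeta2
print(count, main_term)   # the two agree up to a lower-order error
```

The count tracks $\epsilon Q/\zeta(2)$ closely, with a fluctuation far smaller than the main term, which is exactly the qualitative content of the corollary.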
As an example let us suppose that $\Theta$ is a badly approximable matrix, so that in (\[Diophcond\]) we can choose $\beta=\s/\r$ and $\App(\cdot)$ to be constant. A straightforward computation shows that if $c>2^{(\r\s+\s^2)/(\r^2(\r+\s-1))}$ and $\epsilon=\epsilon(Q)=Q^{-\s/\r}c^{\log Q/\log\log Q}$ then $N^*_{\Theta,\vy}(\epsilon,Q)$ tends to infinity as $Q$ does. In particular, if $\epsilon=\epsilon(|\vq|_\infty)=|\vq|_\infty^{-\s/\r}c^{\log |\vq|_\infty/\log\log |\vq|_\infty}$ then (\[hallo1\]) has infinitely many coprime solutions[^6]. To the best of the author’s knowledge this is the first such result in arbitrary dimensions. A similar simple calculation shows that Corollary \[Cor2\] in conjunction with the classical Khintchine–Groshev theorem implies that the same holds true not only for badly approximable matrices $\Theta$ but for almost[^7] every $\Theta\in \Mat_{r\times s}(\IR)$. Finally, we mention a connection to a question of Dani, Laurent and Nogueira [@DaniLaurentNogueira2014; @LaurentNogueira2012]. Suppose $\epsilon:[1,\infty)\rightarrow (0,1]$ and $Q^{\s-1}\epsilon(Q)^{\r}$ is non-increasing. Dani, Laurent and Nogueira conjecture[^8] [@DaniLaurentNogueira2014 second paragraph after Theorem 1.1] that if $\sum_{j\in \IN}j^{\s-1}\epsilon(j)^\r=\infty$ then for almost every $\Theta\in \Mat_{r\times s}(\IR)$ there exist infinitely many coprime solutions of (\[hallo1\]), where again we interpret $\epsilon=\epsilon(|\vq|_\infty)$ as a function evaluated at $|\vq|_\infty$. We cannot prove this conjecture but, as mentioned before, our result shows at least that we have infinitely many such solutions for almost every $\Theta$ if $\epsilon(Q)\gg Q^{-\s/\r}c^{\log Q/\log\log Q}$ and $c>2^{(\r\s+\s^2)/(\r^2(\r+\s-1))}$. Basic counting principle {#countingprinciples} ======================== Let $\Da\geq 2$ be an integer. Let $\Lambda$ be a lattice of rank $\Da$ in $\IR^\Da$. Recall that $B_P(R)$ denotes the closed Euclidean ball about $P$ of radius $R$.
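Before stating the counting principle, it may help to see the phenomenon it quantifies in the most familiar case: for $\Lambda=\IZ^2$ and a disc of radius $R$, the number of lattice points deviates from the area by an amount of the order of the boundary length $R$, not the area $R^2$. A throwaway numerical check (assumptions: plain $\IZ^2$ and a Euclidean disc, chosen by us for concreteness):

```python
import math

def disc_count(R):
    # Number of integer points (x, y) with x^2 + y^2 <= R^2.
    return sum(1 for x in range(-R, R + 1)
                 for y in range(-R, R + 1)
                 if x * x + y * y <= R * R)

for R in (10, 50, 200):
    err = abs(disc_count(R) - math.pi * R * R)
    assert err < 10 * R   # error scales with the perimeter, not the area
```

The Lipschitz hypothesis on $\partial S$ in the lemma below is precisely what turns "boundary length" into a dimension-free statement about $(L/\lambda_1)^{\Da-1}$.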
We define the successive minima $\lambda_1(\Lambda),\ldots,\lambda_\Da(\Lambda)$ of $\Lambda$ as the successive minima in the sense of Minkowski with respect to the Euclidean unit ball. That is, $$\begin{aligned} \lambda_i=\inf \{\lambda; B_0(\lambda)\cap \Lambda \text{ contains $i$ linearly independent vectors}\}.\end{aligned}$$ \[Lip\] Let $M$ be a positive integer, and let $L$ be a non-negative real number. We say that a set $S$ is in Lip$(\Da,M,L)$ if $S$ is a subset of $\IR^\Da$, and if there are $M$ maps $\phi_1,\ldots,\phi_M:[0,1]^{\Da-1}\longrightarrow \IR^\Da$ satisfying a Lipschitz condition $$\begin{aligned} |\phi_i(\vx)-\phi_i(\vy)|\leq L|\vx-\vy| \text{ for } \vx,\vy \in [0,1]^{\Da-1}, i=1,\ldots,M \end{aligned}$$ such that $S$ is covered by the images of the maps $\phi_i$. For any set $S$ we write $$1^*(S)= \begin{cases} 1& \text{if $S\neq \emptyset$,} \\ 0&\text{if $S=\emptyset$.} \end{cases}$$ We will apply the following basic counting principle. \[MV\_CL\] Let $\Lambda$ be a lattice in $\IR^\Da$ with successive minima $\lambda_1,\ldots, \lambda_\Da$. Let $S$ be a set in $\IR^\Da$ such that the boundary $\partial S$ of $S$ is in Lip$(\Da,M,L)$, and suppose $S\subset B_P(L)$ for some point $P$. Then $S$ is measurable, and moreover, $$\begin{aligned} \left|\#(S\cap\Lambda)-\frac{\Vol S}{\det \Lambda}\right| \leq \ccountlatticepts(\Da) M\left(\left(\frac{L}{\lambda_1}\right)^{\Da-1}+1^*(S\cap \Lambda)\right),\end{aligned}$$ where $\ccountlatticepts(\Da)=\Da^{3\Da^2/2}$. By [@art1 Theorem 5.4] the set $S$ is measurable, and moreover, $$\begin{aligned}\label{thm5.4} \left|\#(S\cap\Lambda)-\frac{\Vol S}{\det \Lambda}\right| \leq \Da^{3\Da^2/2}M\max_{1\leq j<\Da}\left\{1,\frac{L^j}{\lambda_1\cdots \lambda_j}\right\}.\end{aligned}$$ First suppose $L\geq \lambda_1$. Then the lemma follows immediately from (\[thm5.4\]). Next we assume $L<\lambda_1$. We distinguish two subcases. First suppose $S\cap\Lambda\neq \emptyset$.
Then $$\begin{aligned} \max_{1\leq j<\Da}\left\{1,\frac{L^j}{\lambda_1\cdots \lambda_j}\right\}=1=1^*(S\cap \Lambda)\leq\left(\frac{L}{\lambda_1}\right)^{\Da-1}+1^*(S\cap \Lambda).\end{aligned}$$ Now suppose $S\cap\Lambda=\emptyset$. As $L<\lambda_1$ we get, using Minkowski’s second theorem, $$\begin{aligned} \left|\#(S\cap\Lambda)-\frac{\Vol S}{\det \Lambda}\right|=\frac{\Vol S}{\det \Lambda}\leq \frac{(2L)^\Da}{\lambda_1\cdots \lambda_\Da}\leq 2^{\Da}\left(\frac{L}{\lambda_1}\right)^{\Da-1}.\end{aligned}$$ This proves the lemma. Proof of Theorem \[Thm1\] ========================= Let $\thi=\Rz/\Qi$ ($1\leq i\leq n$), and let $\phi$ be the automorphism of $\E$ defined by $$\begin{aligned} \phi(\vvx):=(\tho\vx_1,\ldots,\thn\vx_n).\end{aligned}$$ Set $$\begin{aligned} \thm:=\min_{1\leq i\leq n}\thi=\Rz/\Qm.\end{aligned}$$ Note that by (\[Qorder\]) we have $$\begin{aligned}\label{thetaorder} \thi\geq 1\;(\text{for all } i\notin I).\end{aligned}$$ Moreover, $$\begin{aligned} \prod_{i=1}^{n}\thi^{\beta_i}=1,\end{aligned}$$ and hence, $$\begin{aligned}\label{Nminv} \Nm(\phi\vvx)=\Nm(\vvx).\end{aligned}$$ \[Lip-par\] We have $\partial \phi(\Z)\in$ Lip$(N,M,L)$ for $L=2n^{1/2}\ka\Rz$. We have $$\begin{aligned} \phi(\Z)\subset \phi(B_{\vy_1}(\Qo)\times\cdots \times B_{\vy_n}(\Qn))=B_{\tho\vy_1}(\Rz)\times\cdots \times B_{\thn\vy_n}(\Rz),\end{aligned}$$ and hence, $\phi(\Z)\subset B_{\phi\vvy}(\dS\Rz)$. As $\Z\in \F_{\ka,M}$ the claim follows. \[Prop1\] The set $\Z$ is measurable and $$\begin{aligned} \left|\#(\Z\cap\La)-\frac{\Vol \Z}{\det \La}\right| \leq \cfour\left(\left(\frac{\Rz}{\la_1(\phi\La)}\right)^{N-1}+1^*(\phi\Z\cap\phi\La)\right),\end{aligned}$$ where $\cfour=(1+2n^{1/2}\ka)^{N-1}M\ccountlatticepts(N)$. Since $\#(\Z\cap\La)=\#(\phi\Z\cap\phi\La)$ and ${\Vol \Z}/{\det \La}={\Vol \phi\Z}/{\det \phi\La}$ this follows immediately from Lemma \[MV\_CL\] and Lemma \[Lip-par\]. \[minimumestimate\] Let $\Ac>0$.
Then we have $$\begin{aligned} \la_1(\phi\La)\geq \min\{\la_1(\La\cap \C_I), \nu(\La,\Ac), \thm\Ac\}.\end{aligned}$$ By (\[thetaorder\]) we have $\theta_i\geq 1$ (for all $i\notin I$). Moreover, if $\vvx\in \La\cap \C_I$ then $\vx_i=\v0$ (for all $i\in I$), and thus $$\begin{aligned} |\phi(\vvx)|^2=\sum_{1\leq i\leq n\atop i\notin I}|\theta_i\vx_i|^2\geq \sum_{1\leq i\leq n\atop i\notin I}|\vx_i|^2=|\vvx|^2.\end{aligned}$$ Hence, if $\vvx\in \La\cap \C_I$ and $\vvx\neq 0$ then $|\phi(\vvx)|\geq\la_1(\La\cap\C_I)$. Now suppose that $\vvx\in \La\backslash \C_I$. If $\vvz$ is an arbitrary point in $\E$ then, by the weighted arithmetic–geometric mean inequality, we have $$\begin{aligned} |\vvz|^2=\sum_{i=1}^{n}|\vz_i|^2\geq \frac{1}{\max_i \beta_i}\sum_{i=1}^{n}\beta_i|\vz_i|^2 \geq \frac{\t}{\max_i \beta_i}\left(\prod_{i=1}^{n}|\vz_i|^{2\beta_i}\right)^{\frac{1}{\t}}\geq\Nm(\vvz)^{2/\t},\end{aligned}$$ and thus $$\begin{aligned}\label{normbound} |\vvz|\geq \Nm(\vvz)^{1/\t}.\end{aligned}$$ Using (\[normbound\]) and (\[Nminv\]) we conclude that $$\begin{aligned} |\phi(\vvx)|\geq \Nm(\phi\vvx)^{1/\t}=\Nm(\vvx)^{1/\t}.\end{aligned}$$ First suppose that $|\vvx|< \Ac$. Then we have by the definition of $\nu(\cdot,\cdot)$ $$\begin{aligned} \Nm(\vvx)^{1/\t}\geq \nu(\La,\Ac),\end{aligned}$$ and hence $|\phi(\vvx)|\geq \nu(\La,\Ac)$. Now suppose $|\vvx|\geq\Ac$. Then we have $$\begin{aligned} |\phi(\vvx)|=\thm|(\tho\vx_1/\thm,\ldots,\thn\vx_n/\thm)|\geq \thm|(\vx_1,\ldots,\vx_n)|=\thm |\vvx|\geq \thm \Ac.\end{aligned}$$ This proves the lemma. We can now easily finish the proof of Theorem \[Thm1\]. Since $\thm \Qm=\Rz$, we conclude $\la_1(\phi\La)\geq \min\{\mu(\La,\Ac), \Ac\Rz/\Qm\}$.
Thus, we have $$\begin{aligned}\label{errortermbound} \frac{\Rz}{\la_1(\phi\La)}\leq\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}.\end{aligned}$$ The latter in conjunction with Lemma \[Prop1\] and the fact $\cfour+1=(1+2n^{1/2}\ka)^{N-1}MN^{3N^2/2}+1\leq M((1+\ka)N^{2N})^N=\cthree$ proves the theorem. Preparations for the Möbius inversion {#1errorterm} ===================================== Recall that $\T:[0,\infty)\rightarrow [1,\infty)$ is a monotonically increasing function that is an upper bound for the divisor function, i.e., $\T(k)\geq \sum_{d|k}1$ for all $k\in \IN$. In this section $\Da$ is a positive integer. For $A\in \Aut_\Da(\IR)$ we write $\|A\|$ for the (Euclidean) operator norm. \[1\*est\] Let $\Lambda$ be a lattice in $\IR^\Da$, and let $A$ be in $\Aut_\Da(\IR)$ with $A\IZ^\Da=\Lambda$. Then $$\begin{aligned} \#\{k\in \IN; B_P(R)\backslash \{\vNull\}\cap k\Lambda\neq \emptyset\} \leq \T((R+|P|)\|A^{-1}\|)(2R\|A^{-1}\|+1).\end{aligned}$$ First assume $A=I_\Da$ so that $\Lambda=\IZ^\Da$. Suppose $v=(a_1,\ldots,a_\Da)\in \IZ^\Da$ is non-zero, $kv\in B_P(R)$ and $P=(x_1,\ldots,x_\Da)$. Then $ka_i$ lies in $[x_i-R,x_i+R]$ for $1\leq i\leq \Da$. As $v\neq \vNull$ there exists an $i$ with $a_i\neq 0$. We conclude that $k$ is a divisor of some non-zero integer in $[x_i-R,x_i+R]$. There are at most $2R+1$ integers in this interval, each of modulus at most $R+|P|$. Hence the number of possibilities for $k$ is $\leq \T(R+|P|)(2R+1)$. This proves the lemma for $A=I_\Da$. Next note that $$\#(B_P(R)\backslash \{\vNull\}\cap k\La)=\#(A^{-1}B_P(R)\backslash \{\vNull\}\cap k\IZ^\Da).$$ Hence, the general case follows from the case $A=I_\Da$ upon noticing $A^{-1}B_P(R)\subset B_{A^{-1}(P)}(R\|A^{-1}\|)$, and $|A^{-1}(P)|\leq\|A^{-1}\||P|$. Next we estimate the operator norm $\|A^{-1}\|$ for a suitable choice of $A$. \[Operatornormest\] Let $\Lambda$ be a lattice in $\IR^\Da$.
There exists $A\in \Aut_\Da(\IR)$ with $A\IZ^\Da=\Lambda$ and $$\begin{aligned} \|A^{-1}\|\leq \frac{\copnorm(\Da)}{\lambda_1},\end{aligned}$$ where $\copnorm(\Da)=\Da^{2\Da+1}$. Any lattice $\Lambda$ in $\IR^\Da$ has a basis $v_1,\ldots,v_\Da$ with $\frac{|v_1|\cdots|v_\Da|}{|\det[v_1\ldots v_\Da]|}\leq \Da^{2\Da}$, see, e.g., [@art1 Lemma 4.4]. Let $A$ be the matrix that sends the canonical basis $e_1,\ldots,e_\Da$ to $v_1,\ldots,v_\Da$. Now suppose $A^{-1}$ sends $e_i$ to $(\rho_1,\ldots,\rho_\Da)$; then by Cramer’s rule $$\begin{aligned} |\rho_j|=&\left|\frac{\det[v_1\ldots e_i\ldots v_\Da]} {\det[v_1\ldots v_j\ldots v_\Da]}\right|\leq\frac{|\det[v_1\ldots e_i\ldots v_\Da]|} {|v_1|\cdots|v_j|\cdots|v_\Da|}\Da^{2\Da},\end{aligned}$$ where $e_i$ replaces $v_j$ in the $j$-th column of the numerator. Now we apply Hadamard’s inequality to obtain $$\begin{aligned} \frac{|\det[v_1\ldots e_i\ldots v_\Da]|} {|v_1|\cdots|v_j|\cdots|v_\Da|} \leq \frac{|v_1|\cdots|e_i|\cdots|v_\Da|}{|v_1|\cdots|v_j|\cdots|v_\Da|} =\frac{1}{|v_j|}\leq \frac{1}{\lambda_1}.\end{aligned}$$ Next we use that for a $\Da\times \Da$ matrix $[a_{ij}]$ with real entries we have $\|[a_{ij}]\|\leq \Da\max_{ij}|a_{ij}|$, and this proves the lemma. We combine the previous two lemmas. \[mainlemma\] Let $\Lambda$ be a lattice in $\IR^\Da$, and let $\lambda_1=\lambda_1(\Lambda)$. Then $$\begin{aligned} \sum_{k=1}^{\infty}1^*(B_P(R)\backslash\{\vNull\}\cap k\Lambda )\leq \T\left(\copnorm(\Da)\left(\frac{R+|P|}{\lambda_1}\right)\right)\left(\frac{2\copnorm(\Da) R}{\lambda_1}+1\right).\end{aligned}$$ Note that $\sum_{k=1}^{\infty}1^*(B_P(R)\backslash\{\vNull\}\cap k\Lambda)=\#\{k\in \IN; B_P(R)\backslash\{\vNull\}\cap k\Lambda\neq \emptyset\}$. Hence, the lemma follows immediately from Lemma \[1\*est\] and Lemma \[Operatornormest\].
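The hypothesis on $\T$ is mild: the remark after Theorem \[Thm2\] says that for any $a>2$ one may take $\T(x)=a^{\log x/\log\log x}$ for $x$ large. The following is a finite spot check of that bound against the actual divisor function (a numerical sanity check over one range, not a proof; the starting point $16\geq e^e$ and the choice $a=3$ are ours):

```python
import math

LIMIT = 100_000
# Sieve the divisor function d(n) = sum_{d | n} 1 for n <= LIMIT.
d = [0] * (LIMIT + 1)
for i in range(1, LIMIT + 1):
    for m in range(i, LIMIT + 1, i):
        d[m] += 1

a = 3.0  # any a > 2 works for x large enough, per the remark after Thm2
def T(x):
    return a ** (math.log(x) / math.log(math.log(x)))

# T dominates the divisor function on the whole sampled range.
assert all(d[n] <= T(n) for n in range(16, LIMIT + 1))
```

Even at highly composite integers such as $83160$ (with $128$ divisors) the bound holds with room to spare, which is the regime Lemma \[mainlemma\] exploits.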
Proof of Theorem \[Thm2\] {#proofthm2} ========================= Set $$\begin{aligned} \Z^*:=\Z\backslash\{\vv0\},\end{aligned}$$ and $$\begin{aligned} \R:=\dS\Rz.\end{aligned}$$ \[Prop2\] We have $$\begin{aligned} \left|\#(\Z^*\cap\La)-\frac{\Vol \Z}{\det \La}\right| \leq \cfive \left(\left(\frac{\Rz}{\la_1(\phi\La)}\right)^{N-1}+1^*(B_{\phi(\vvy)}(\R)\backslash\{\vv0\}\cap\phi\La)\right),\end{aligned}$$ where $\cfive=(1+2n^{1/2}\ka)^{N-1}(M+1)\ccountlatticepts(N)$. Lemma \[Lip-par\] implies that $\partial \Z^*\in$ Lip$(N,M+1,L)$ with $L=2n^{1/2}\ka\Rz$. As noted in the proof of the latter lemma we have $\phi(\Z^*)\subset B_{\phi\vvy}(\R)\backslash\{\vv0\}$. We conclude as in Lemma \[Prop1\]. For $\vvx\in \La\backslash\{\vv0\}$ we define $\gcd(\vvx):=d$ if $\vvx=d\vvx'$ for some $\vvx'\in \La$ but $\vvx\neq k\vvx'$ for all integers $k>d$ and all $\vvx'\in\La$. (An equivalent definition is $\gcd(A\vvz):=\gcd(\vvz)$, where $\vvz\in \IZ^N$, $\gcd(\vvz):=\gcd(z_1,\ldots,z_N)$, and $\La=A\IZ^N$.) Next we define $$\begin{aligned} \Fd=\{\vvx\in \La\cap\Z^*; \gcd(\vvx)=d\}.\end{aligned}$$ In particular, $\La^*\cap\Z=\Fo$. Then for $k\in \IN$ we have the disjoint union $$\begin{aligned} \bigcup_{k\mid d}\Fd=k\La\cap\Z^*.\end{aligned}$$ If $\vvx=k\vvx'$ lies in $k\La\cap\Z^*$ then $k\phi\vvx'$ lies in $k\phi\La\cap B_{\phi(\vvy)}(\R)$, and hence $$k\leq \frac{\R+|\phi(\vvy)|}{\la_1(\phi\La)} \leq \frac{\R+|\phi(\vvy)|}{\mul} + \frac{\R+|\phi(\vvy)|}{\Rz}=:\Ab,$$ where for the second inequality we have applied Lemma \[minimumestimate\]. We use the Möbius function $\mu(\cdot)$ and the Möbius inversion formula to get $$\begin{aligned} \#(\La^*\cap\Z)=\#\Fo=\sum_{k=1}^{\infty}\mu(k)\sum_{d\atop k|d}\#\Fd=\sum_{k=1}^{[\Rt]}\mu(k)\sum_{d\atop k|d}\#\Fd=\sum_{k=1}^{[\Rt]}\mu(k)\#(k\La\cap\Z^*).\end{aligned}$$ For the rest of this section we will write $g\ll h$ to mean there exists a constant $c=c(N,M,\ka)$ such that $g\leq ch$.
Applying Lemma \[Prop2\] with $\La$ replaced by $k\La$ yields $$\begin{aligned} &\left|\#(\Z\cap\La^*)-\frac{\Vol \Z}{\zeta(N)\det \La}\right| \ll\\ &\sum_{k=1}^{[\Rt]}\left(\frac{\Rz}{k\la_1(\phi\La)}\right)^{N-1}+\sum_{k=1}^{[\Rt]} 1^*(B_{\phi(\vvy)}(\R)\backslash\{\vv0\}\cap k\phi\La)+\sum_{k>\Rt}\frac{\Vol \Z}{k^N\det \La}.\end{aligned}$$ First we note that $$\begin{aligned} \sum_{k>\Rt}k^{-N}\leq \sum_{k\geq\max\{\Rt,1\}}k^{-N}\ll \max\{\Rt,1\}^{1-N}\leq \max\{\frac{\R}{\la_1(\phi\La)},1\}^{1-N},\end{aligned}$$ and moreover, $$\begin{aligned} \frac{\Vol \Z}{\det \La}= \frac{\Vol \phi\Z}{\det \phi\La}\leq \frac{\Vol B_{\vv0}(\R)}{\det \phi\La}\ll \frac{\R^N}{\la_1(\phi\La)^N}. \end{aligned}$$ Combining both with (\[errortermbound\]) yields $$\begin{aligned} \sum_{k>\Rt}\frac{\Vol \Z}{k^N\det \La}\ll \frac{\R}{\la_1(\phi\La)}\ll \frac{\Rz}{\la_1(\phi\La)} \leq \frac{\Rz}{\mul}+1.\end{aligned}$$ Next we note that by Lemma \[mainlemma\] $$\begin{aligned} \sum_{k=1}^{[\Rt]}1^*(B_{\phi(\vvy)}(\R)\backslash\{\vv0\}\cap k\phi\La)\leq \T\left(\copnorm(N)\frac{\R+|\phi(\vvy)|}{\la_1(\phi(\La))} \right)\left(\frac{2\copnorm(N)\R}{\la_1(\phi(\La))}+1\right).\end{aligned}$$ Moreover, $$\begin{aligned} \left(\frac{2\copnorm(N)\R}{\la_1(\phi(\La))}+1\right)\ll \frac{\Rz}{\mul}+1,\end{aligned}$$ and $$\begin{aligned} \frac{\R+|\phi(\vvy)|}{\la_1(\phi(\La))} \leq \frac{\R+|\phi(\vvy)|}{\mul} + \frac{\R+|\phi(\vvy)|}{\Rz}=\Ab.\end{aligned}$$ Since $\copnorm(N)\Ab<\Ad$ we conclude that $$\begin{aligned} \sum_{k=1}^{[\Rt]}1^*(B_{\phi(\vvy)}(\R)\backslash\{\vv0\}\cap k\phi\La)\ll \T\left(\Ad\right)\left(\frac{\Rz}{\mul}+1\right).\end{aligned}$$ Finally, $$\begin{aligned} \sum_{k=1}^{[\Rt]}\left(\frac{\Rz}{k\la_1(\phi\La)}\right)^{N-1}\ll \left(\frac{\Rz}{\mul}+1\right)^{N-1}\sum_{k=1}^{[\Rt]}k^{1-N}\ll \left(\frac{\Rz}{\mul}+1\right)^{N-1}\L^*,\end{aligned}$$ where $$\L^*= \begin{cases} \max\{\log(\Ab),1\} & \text{if $N=2$,} \\ 1&\text{if $N>2$.} \end{cases}$$ If
$N>2$ then $\L^*=1$ and we are done. So suppose $N=2$. Hence $\copnorm(N)=32$. By assumption $\T(x)\geq 1$, so that $\L^*\leq \T(\copnorm(N)\Ab)$ for $\Ab\leq\exp(1)$. Now suppose $\Ab>\exp(1)$. Since $\T$ is monotonic and $2^{[\log_2[32 \Ab]]}\leq 32\Ab$ we have $\T(32\Ab)\geq [\log_2[32\Ab]]+1\geq \log_2(32\Ab-1)\geq \log\Ab$. Thus, $\L^*\leq \T(\copnorm(N)\Ab)\leq \T(\Ad)$. This finishes the proof. Lower bounds for the error term {#lowerboundserrorterm} =============================== The main goal of this section is to prove Theorem \[sharp\]. Throughout this section we assume that $\m_i=\beta_i=1$ ($1\leq i\leq n$), so that $N=n=\t\geq 2$, and that $\La$ is a unimodular lattice that is weakly admissible, but not admissible, for $(\S,\C)$. To simplify the notation we write $\Nmm(\cdot):=\Nm(\cdot)$ and $\nu(\cdot):=\nu(\La,\cdot)$. Let $\k\geq 1$ be a constant, and $\{\vvx_j\}_{j=1}^\infty=\{(x_{j1},\ldots,x_{jn})\}_{j=1}^\infty$ be a sequence of pairwise distinct elements in $\La\backslash\C$ satisfying $$\begin{aligned} \Nmm(\vvx_j)\leq \k\nu(|\vvx_j|)^n.\end{aligned}$$ We define $$\begin{aligned} N_j&:=a\nu(|\vvx_j|)^{-n},\\ \Zj&:=N_j\Cx,\\ c_j&:={\la_{n-1}(\La,\Cx)},\end{aligned}$$ where $a>0$ is a constant which will be specified later, $\Cx$ denotes the $\vv0$-centered box $$\Cx:=[-|x_{j1}|,|x_{j1}|]\times\cdots\times[-|x_{jn}|,|x_{jn}|],$$ and $\la_{i}(\La,\Cx)$ are the corresponding successive minima. For $1\leq i\leq n$ we choose the minimal eligible values $Q_i=N_j|x_{ji}|$ for the set $\Zj$, so that[^9] $$\begin{aligned}\label{Rzest} \Rz\leq (a\k)^{\frac{1}{n}} N_j^{\frac{n-1}{n}}.\end{aligned}$$ We also assume that our sets $\Zj$ satisfy the condition (\[Qorder\]), i.e., $$\begin{aligned} \Qi\leq \Rz \;(\text{for all } i\notin I).\end{aligned}$$ \[lemmasixone\] We have $$\begin{aligned} \#(\Zj\cap\La)-\Vol\Zj\geq (N_j/(c_jn))^{n-1}-2^na\k N_j^{n-1}.\end{aligned}$$ Moreover, $N_j$ tends to infinity and $\Rz/\Qm$ tends to zero.
Let $v_1,\ldots,v_{n-1}$ be linearly independent lattice points in $\la_{n-1}(\La,\Cx)\Cx$. Then the lattice points $\sum_{l=1}^{n-1}m_lv_l$ with $-N_j/(c_j n)\leq m_l \leq N_j/(c_j n)$ are all distinct and all lie in $\Zj$. Since $2[N_j/(c_jn)]+1\geq N_j/(c_jn)$ the claimed inequality follows at once. Recall that $\La$ is not admissible, and hence $N_j$ tends to infinity, and thus $\Rz/\Qm$ tends to zero. We now make the crucial assumption that the $(n-1)$-th successive minimum $c_j$ is uniformly bounded[^10] in $j$. Suppose there exists a constant $c_\La\geq 1$ such that $$\begin{aligned}\label{succminbound} c_j\leq c_\La\end{aligned}$$ for all $j$, and take $a:=1/(4\k(2c_\La n)^{n-1})$. Then we have $$\begin{aligned}\label{LEZbound} \LEZi\geq \#(\Zj\cap\La)-\Vol\Zj\geq (c_\La n)^{-n}N_j^{n-1}.\end{aligned}$$ This follows immediately from Lemma \[lemmasixone\]. Next we prove a general criterion for $\La$ under which we have $$\begin{aligned}\label{errorlowerbound} \#(\Zj\cap\La)-\Vol\Zj\geq \cet \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^{N-1}\end{aligned}$$ with a certain constant $\cet>0$. \[lowerboundcrit\] Suppose that the condition (\[succminbound\]) and $$\begin{aligned}\label{growcond} \nu\left(\frac{|\vvx_j|}{\nu(|\vvx_j|)^n}\right)\geq \con \nu(|\vvx_j|)\end{aligned}$$ for some constant $\con>0$ hold true. Then there exists $\cet=\cet(\k,c_\La,n,\con)>0$ such that (\[errorlowerbound\]) holds true for all $j$ large enough. We have $\Qm\leq N_j|\vvx_j|$, and so ignoring the first few members of the sequence $\vvx_j$, we can assume that $$\mul\geq\nu(N_j|\vvx_j|)=\nu(a|\vvx_j|/\nu(|\vvx_j|)^n)\geq \nu(|\vvx_j|/\nu(|\vvx_j|)^n)\geq \con\nu(|\vvx_j|).$$ Hence, $$\inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)\leq \left(\frac{\Rz}{\mul}+1\right)\leq\left(\frac{\Rz}{\con \nu(|\vvx_j|)}+1\right)\leq (2\k^{1/n}/\con)N_j$$ for all $j$ large enough.
This, in conjunction with (\[LEZbound\]), shows that (\[errorlowerbound\]) holds true. For the rest of this section we assume that $$\begin{aligned}\label{Cchoice} \C=\{\vvx;\vx_n=\v0\}.\end{aligned}$$ We now apply Proposition \[lowerboundcrit\] to prove the case $n=2$ in Theorem \[sharp\]. \[Thmsharptwo\] Suppose $n=2$. Then there exists a unimodular, weakly admissible lattice $\La$ for $(\S,\C)$, and a sequence of increasingly distorted (i.e., $\Rz/\Qm$ tends to zero), aligned boxes $\Z=[-Q_1,Q_1]\times[-Q_2,Q_2]$ whose volume $(2\Rz)^2$ tends to infinity such that $$\LEZ \geq c_{abs} \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right),$$ where $c_{abs}>0$ is an absolute constant. Let $\alpha$ be an irrational real number, and consider the lattice $\La$ given by the vectors $(p-q\alpha,q)$ with $p,q\in \IZ$. Then $\La$ is unimodular and weakly admissible for $(\S,\C)$. To choose an appropriate $\alpha$ we consider its continued fraction expansion $\alpha=[a_0,a_1,a_2,\ldots]$. Using the recurrence relation $q_{j+1}=a_{j+1}q_j+q_{j-1}$ for the denominator $q_j$ of the $j$-th convergent $p_j/q_j$ (in lowest terms) we can define $\alpha$ by setting $a_0=a_1=1$ (so that $q_0=q_1=1$) and $a_{j+1}=[\log q_j]+1$. Next we note that $a_{j+1}=[\log(a_jq_{j-1}+q_{j-2})]+1\leq \log((a_j+1)q_{j-1})+1\leq \log(a_j+1)+a_j+1\leq 3a_j$. Similarly we find $a_j+\log a_j-1\leq a_{j+1}$, and hence, $$a_j+\log a_j-1\leq a_{j+1}\leq 3a_j.$$ Put $\vvx_j=(p_j-q_j\alpha,q_j)\in \La\backslash\C$ so that $|\vvx_j|>|\vvx_{j-1}|$, at least for $j$ large enough. From the theory of continued fractions we know that for $\vvx\in \La\backslash\C$ the inequality $\Nmm(\vvx)<1/2$ implies that $\vvx=c\vvx_j$ for some non-zero integer $c$ and $j\in \IN$. We conclude that for all sufficiently large $\rho$ we have $\nu(\rho)^2=\Nmm(\vvx_j)$ for some $j$.
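The continued-fraction facts invoked here are easy to check numerically for a concrete $\alpha$. Below is a sketch with the sample choice $\alpha=\sqrt2=[1;2,2,2,\ldots]$, so that every partial quotient $a_{j+1}=2$ and the standard bounds read $1/4<\Nmm(\vvx_j)<1/2$; this is an illustration only, not the $\alpha$ constructed in the proof.

```python
import math

alpha = math.sqrt(2)       # continued fraction [1; 2, 2, 2, ...]
a = [1] + [2] * 12         # partial quotients a_0, a_1, ...

# Convergents p_j / q_j via the standard recurrences.
p, q = [a[0], a[0] * a[1] + 1], [1, a[1]]
for j in range(2, len(a)):
    p.append(a[j] * p[-1] + p[-2])
    q.append(a[j] * q[-1] + q[-2])

for j in range(len(a) - 1):
    # Nmm(x_j) = |p_j - q_j*alpha| * q_j for x_j = (p_j - q_j*alpha, q_j).
    Nmm = abs(p[j] - q[j] * alpha) * q[j]
    assert 1 / (a[j + 1] + 2) < Nmm < 1 / a[j + 1]
```

The products $q_j|p_j-q_j\alpha|$ hover around $1/(2\sqrt2)$, squarely inside the interval $(1/4,1/2)$ that the two classical inequalities predict.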
Also by the theory of continued fractions we know that $$1/(a_{j+1}+2)<\Nmm(\vvx_j)<1/a_{j+1}.$$ Since $a_{j}>a_{j-1}+2$ we conclude $\Nmm(\vvx_{j-1})<\Nmm(\vvx_{j-2})$ and thus $$\Nmm(\vvx_{j-1})=\nu(|\vvx_j|)^2$$ for $j$ large enough; so we can take $\k=1$. We also easily find that $|\vvx_j|/\nu(|\vvx_j|)^2\leq |\vvx_{j+1}|$ for $j$ large enough. It is now straightforward to verify (\[growcond\]). Moreover, for $j$ large enough, (\[Qorder\]) holds true, and so $\Zj$ is an eligible set. Since $n=2$ we automatically have (\[succminbound\]) with $c_\La=1$. Hence we can apply Proposition \[lowerboundcrit\]. Finally, we note that $\Vol \Zj=4N_j^2\Nmm(\vvx_j)=(2a)^2\Nmm(\vvx_{j-1})^{-2}\Nmm(\vvx_j)\geq 2^{-6}a_j^2/(a_{j+1}+2)$ which tends to infinity, and moreover, that the boxes $\Zj$ are increasingly distorted by Lemma \[lemmasixone\]. This completes the proof. Next we prove the case $n=3$ in Theorem \[sharp\]. This case does not rely on Proposition \[lowerboundcrit\]. \[lowerboundexample\] Suppose $n=3$. Then there exists a unimodular, weakly admissible lattice $\La$ for $(\S,\C)$, and a sequence of increasingly distorted, aligned boxes $\Z=[-Q_1,Q_1]\times[-Q_2,Q_2]\times[-Q_3,Q_3]$ whose volume $(2\Rz)^3$ tends to infinity such that $$\LEZ \geq c_{abs} \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^2,$$ where $c_{abs}>0$ is an absolute constant. Let $\alpha=[a_0,a_1,a_2,\ldots]$ be a badly approximable real number, so that the partial quotients $a_i$ are bounded. We set $\a=\max a_i$, and we consider the lattice $$\begin{aligned}\label{latticeexample} \La=\{(p_1-q\alpha,p_2-q\alpha,q); p_1,p_2,q\in \IZ\}.\end{aligned}$$ The lattice $\La$ is unimodular and weakly admissible for $(\S,\C)$. In this proof we write $h\ll g$ to mean $h\leq c g$ for a constant $c=c(\a)$ depending only on $\a$. First we note that $$\begin{aligned} \Nmm(\vvx)\gg |\vvx|^{-1}\end{aligned}$$ for every $\vvx\in \La\backslash \C$.
Hence, $$\begin{aligned}\label{nulower} \nu(\rho)\gg \rho^{-1/3}.\end{aligned}$$ Now suppose $p_j/q_j$ is the $j$-th convergent of $\alpha$, and put $\vvx_j=(p_j-q_j\alpha,p_j-q_j\alpha,q_j)\in \La\backslash\C$. Then, for $j$ large enough, (\[Qorder\]) holds true, and so $\Zj$ is an eligible set. Since $$\begin{aligned} \Nmm(\vvx_j)\ll |\vvx_j|^{-1},\end{aligned}$$ we also conclude that there exists $\k=\k(\a)\geq 1$ such that $$\begin{aligned} \Nmm(\vvx_j)\leq\k\nu(|\vvx_j|)^{3}.\end{aligned}$$ Since $q_{j+1}=a_{j+1}q_j+q_{j-1}$ we get $q_{j+1}\ll q_j$ and, as is well known, $|p_{j+1}-q_{j+1}\alpha|<|p_j-q_j\alpha|$. Furthermore, $(p_j,q_j)$ and $(p_{j+1},q_{j+1})$ are linearly independent, and thus $\vvx_j$ and $\vvx_{j+1}$ are linearly independent. Hence, we conclude $$c_j:=\la_2(\La,\Cx)\ll 1,$$ and thus, by virtue of (\[LEZbound\]), we get $\LEZi\gg N_j^2$. Moreover, for $j$ sufficiently large, we have $$\begin{aligned}\label{normcomp} |\vvx_{j-1}|<|\vvx_{j}|\ll |\vvx_{j-1}|,\end{aligned}$$ and thus $$\begin{aligned}\label{nuupper} \nu(|\vvx_j|)\leq \Nmm(\vvx_{j-1})^{1/3}\ll |\vvx_{j-1}|^{-1/3}\ll |\vvx_{j}|^{-1/3}.\end{aligned}$$ Combining (\[nulower\]), (\[normcomp\]) and (\[nuupper\]) implies that $$\begin{aligned} \rho^{-1/3}\ll\nu(\rho)\ll \rho^{-1/3}.\end{aligned}$$ Therefore, we have $$\begin{aligned} N_j\ll\nu(|\vvx_j|)^{-3}\ll |\vvx_j|\ll q_j\leq |\vvx_j| \ll\nu(|\vvx_j|)^{-3}\ll N_j.\end{aligned}$$ Thus, $N_j^2\ll \Qm=N_jq_j\ll N_j^2$, and due to (\[Rzest\]), $\Rz\ll N_j^{2/3}$. Hence, with $\Ac=N_j$ we have $$\begin{aligned} \frac{\Rz}{\nu(\Ac)}\ll \frac{\Qm}{\Ac},\end{aligned}$$ and thus for all $j$ large enough $$\begin{aligned} \inf_{0<\Ac\leq \Qm}\left(\frac{\Rz}{\mu(\La,\Ac)}+\frac{\Qm}{\Ac}\right)^{2}\ll\left(\frac{\Qm}{\Ac}\right)^{2}\ll N_j^{2}\ll \LEZi. \end{aligned}$$ Hence, we have shown that (\[errorlowerbound\]) holds true.
Finally, we observe that $\Vol \Zj=8N_j^3\Nmm(\vvx_j)\gg N_j^2$ which, due to Lemma \[lemmasixone\], completes the proof. $\mathcal{F}_{\ka,M}$ - Families via o-minimality {#omin} ================================================= In this section let $d\geq 1$ and $\Da\geq 2$ both be integers. For $Z\subset \IR^{d+\Da}$ and $T\in \IR^d$ we write $Z_T=\{x\in \IR^\Da; (T,x)\in Z\}$ and call this the fiber of $Z$ above $T$. For the convenience of the reader we quickly recall the definition of an o-minimal structure following [@PilaWilkie]. For more details we refer to [@Wilkie2007; @PilaWilkie] and [@vandenDries1998]. \[defomin\] A structure (over $\IR$) is a sequence $\mathcal{S}=(\mathcal{S}_n)_{n\in \IN}$ of families of subsets in $\IR^n$ such that for each $n$: 1. $\mathcal{S}_n$ is a boolean algebra of subsets of $\IR^n$ (under the usual set-theoretic operations). 2. $\mathcal{S}_n$ contains every semi-algebraic subset of $\IR^n$. 3. If $A \in \mathcal{S}_n$ and $B\in \mathcal{S}_{m}$ then $A\times B \in \mathcal{S}_{n+m}$. 4. If $\pi: \IR^{n+m}\rightarrow \IR^n$ is the projection map onto the first $n$ coordinates and $A \in \mathcal{S}_{n+m}$ then $\pi(A) \in \mathcal{S}_n$. An o-minimal structure (over $\IR$) is a structure (over $\IR$) that additionally satisfies: 5. The boundary of every set in $\mathcal{S}_1$ is finite. The archetypical example of an o-minimal structure is the family of all semi-algebraic sets. Following the usual convention, we say a set $A$ is definable (in $\mathcal{S}$) if it lies in some $\mathcal{S}_n$. A map $f:A\rightarrow B$ is called definable if its graph $\Gamma(f):=\{(x,f(x)); x\in A\}$ is a definable set. \[Propomin\] Suppose $Z\subset \IR^{d+\Da}$ is definable in an o-minimal structure over $\IR$, and assume further that all fibers $Z_T$ are bounded sets. Then there exist constants $\ka_Z$ and $M_Z$ depending only on $Z$ (but independent of $T$) such that the fibers $Z_T$ lie in $\mathcal{F}_{\ka_Z,M_Z}$ for all $T\in \IR^d$. 
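As a toy illustration of Proposition \[Propomin\] (our example, not from the text), consider the semi-algebraic, hence definable, family whose fiber above $T$ is the closed Euclidean ball of radius $|T|$:

```latex
% Hypothetical example: semi-algebraic, so definable in every o-minimal structure.
Z=\left\{(T,x)\in\IR^{d}\times\IR^{\Da}:\ x_1^2+\cdots+x_\Da^2\leq T_1^2+\cdots+T_d^2\right\},
\qquad
Z_T=\left\{x\in\IR^{\Da}:\ x_1^2+\cdots+x_\Da^2\leq T_1^2+\cdots+T_d^2\right\}.
```

Every fiber $Z_T$ is bounded, so the proposition yields constants $\ka_Z$ and $M_Z$, uniform in $T$, with $Z_T\in\mathcal{F}_{\ka_Z,M_Z}$; the point is that no uniformity is lost as $\diam(Z_T)=2|T|$ grows.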
Suppose the set $Z$ is defined by the inequalities $$\begin{aligned} \label{Zfunc} f_1(T_1,\ldots,T_d,x_1,\ldots,x_\Da)\leq 0,\ldots, f_k(T_1,\ldots,T_d,x_1,\ldots,x_\Da)\leq 0,\end{aligned}$$ where the $f_i$ are certain real valued functions on $\IR^{\Da+d}$. If all these functions $f_i$ are definable in a common o-minimal structure then we can apply Proposition \[Propomin\]. This happens for instance if the $f_i$ are restricted analytic functions[^11] or polynomials in $z_1,\ldots,z_{d+\Da}$ and each $z_i\in \{T_m,\exp(T_m),x_l,\exp(x_l); 1\leq m\leq d, 1\leq l\leq \Da\}$. For more details and examples we refer to [@Wilkie2007; @Scanlon2011; @Scanlon2016]. For the proof of Proposition \[Propomin\] we shall need the following lemma. We are grateful to Fabrizio Barroero for alerting us to Pila and Wilkie’s Reparametrization Lemma for definable families and its relevance for the lemma. \[ominlemma\] Suppose $Z\subset \IR^{d+\Da}$ is definable in an o-minimal structure over $\IR$, and assume further that all fibers $Z_T$ are bounded sets. Then there exist constants $\ka_Z$ and $M_Z$ depending only on $Z$ such that the boundary $\partial Z_T$ lies in Lip$(\Da,M_Z,\ka_Z\cdot \diam(Z_T))$ for every $T\in \IR^d$. First note that if $\#Z_T\leq 1$ then $\partial Z_T$ lies in Lip$(\Da,1,0)$. Hence, it suffices to prove the claim for those $T$ with $\#Z_T\geq 2$. By replacing $Z$ with the definable set $\{(T,x)\in Z; (\exists x,y\in Z_T)(x\neq y)\}$ we can assume that $\#Z_T\geq 2$ for all $T\in \pi(Z)$, where $\pi$ is the projection onto the first $d$ coordinates. We use the existence of definable Skolem functions. By [@vandenDries1998 Ch.6, (1.2) Proposition] there exists a definable map $f:\pi(Z)\rightarrow \IR^\Da$ whose graph $\Gamma(f)\subset Z$. The proof of said (1.2) Proposition actually shows that there is an algorithmic way to construct the Skolem function $f$.
We will use the fact that this choice of $f$ is determined by $Z$ and $\pi$ and hence can be seen as part of the data of $Z$. Now we consider the set $Z'=\{(T,y); (T,x)\in Z, y=x-f(T)\}$. This set is again definable and each non-empty fiber contains the origin, i.e., $0\in Z'_T$ for all $T\in \pi(Z)$. Next we scale the fibers and translate by the point $y_0=(-1/2)(1,\ldots,1)\in \IR^\Da$ to get a new definable set whose fibers all lie in $(0,1)^\Da$. We put $Z''=\{(T,z); (T,y)\in Z', z=(3\cdot\diam(Z'_T))^{-1}y-y_0\}$ (recall that $\diam(Z'_T)=\diam(Z_T)>0$ since $Z_T$ has at least two points). We note that the graph of the function $T\rightarrow \diam(Z_T)$ from $\pi(Z)$ to $\IR$ is given by $$\{(T,t)\in \pi(Z)\times\IR; \phi(T,t)\land \lnot((\exists u\in \IR)(\phi(T,u) \land u<t))\},$$ where $\phi(T,t)$ stands for $(\forall x,y \in Z_T)(|x-y| \leq t)$. This shows that the aforementioned map is definable and hence, so is $Z''$. Also we have $Z''_T\subset (0,1)^\Da$ for all $T$. By [@BarroeroWidmer Lemma 3.15] the set $Z'''=\{(T,w); w\in \partial Z''_T\}$ is also definable. The fibers of a definable set are again definable (cf. [@BarroeroWidmer Lemma 3.1]), and hence by [@vandenDries1998 Ch.4, (1.10) Corollary] we have $\dim(\partial Z''_T)\leq\Da-1$. From Pila and Wilkie’s Reparametrization Lemma for definable families [@PilaWilkie 5.2. Corollary] we conclude[^12] that $\partial Z''_T$ lies in Lip$(\Da,M_{Z'''},\ka_{Z'''})$ for all $T\in \IR^d$ with certain constants $\ka_{Z'''}$ and $M_{Z'''}$. Rescaling and retranslating gives $\partial Z_T\in$ Lip$(\Da,M_{Z'''},\ka_{Z'''}\cdot \diam(Z_T))$. Finally, we note that $Z'''$ depends only on $Z$ and $f$ which itself can be seen as part of the data of $Z$, so that the constants $\ka_{Z'''}$ and $M_{Z'''}$ may be chosen to depend only on $Z$. This completes the proof of the lemma. We can now prove Proposition \[Propomin\].
Consider the set $$Z'''':=\{(\varphi,T,x); \varphi \in \Aut_\Da(\IR), x\in \varphi(Z_T)\}.$$ This set is definable in the given o-minimal structure, and we have $Z''''_{(\varphi,T)}=\varphi(Z_T)$. Applying Lemma \[ominlemma\] to the fibers $Z''''_{(\varphi,T)}$ we conclude that there exist constants $\ka_{Z''''}$ and $M_{Z''''}$ such that $\partial \varphi(Z_T)$ lies in Lip$(\Da,M_{Z''''},\ka_{Z''''}\cdot \diam(\varphi (Z_T)))$ for all $(\varphi,T)\in \Aut_\Da(\IR)\times\IR^d$. Note that $Z''''$ depends only on $Z$ so that $M_{Z''''}$ and $\ka_{Z''''}$ depend only on $Z$, and this completes the proof of Proposition \[Propomin\]. Acknowledgements {#acknowledgements .unnumbered} ================ It is my pleasure to thank Fabrizio Barroero, Michel Laurent, Arnaldo Nogueira, Damien Roy, and Maxim Skriganov for helpful discussions. I completed this article during a visiting professorship at Graz University of Technology, and I thank the Institute of Analysis and Number Theory for its hospitality. [^1]: The lattice $\La=A\IZ^N$ is symplectic (or orthogonal) if $A\in \Aut_N(\IR)$ is symplectic (or orthogonal). [^2]: In the above setting our definition of $\nu(\cdot,\cdot)$ is the $N$-th root of Skriganov’s and the one in [@TechnauWidmer]. [^3]: We are only interested in “sufficiently distorted” boxes, and so we can assume $\x>\gamma_N^{1/2}$. [^4]: In the sense of the Haar measure on $SO_N(\IR)$. [^5]: Despite the row notation we treat the vectors as column vectors. [^6]: Here $|\cdot|_\infty$ denotes the maximum norm. [^7]: With respect to the Lebesgue measure. [^8]: In fact their conjecture is more general but the mentioned special case is probably the most natural case. [^9]: To simplify the notation we suppress the dependence on $j$ and we simply write $Q_i$ and $\Rz$. [^10]: Note that $\la_1(\La,\Cx)\leq 1$ by definition of the box $\Cx$.
On the other hand $\Vol\Cx$ tends to zero, so that by Minkowski’s second Theorem $\la_n(\La,\Cx)\rightarrow \infty$ as $j$ tends to infinity. [^11]: By a restricted analytic function we mean a real valued function on $\IR^n$, which is zero outside of $[-1,1]^n$, and is the restriction to $[-1,1]^n$ of a function, which is real analytic on an open neighborhood of $[-1, 1]^n$. [^12]: Using that the partial derivatives are uniformly bounded we can extend the domain of the parametrisation to $[0,1]^{\Da-1}$ without altering the Lipschitz constant.
--- author: - 'Gurjot Singh, Stephen R. Ellis, and J. Edward Swan II, [^1]' bibliography: - 'swan-ellis-singh-arXiv-v1.bib' title: 'The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching' --- Compelling applications of augmented reality (AR) require interacting with real and virtual objects at reaching distances. Some examples include image-guided medical procedures ([e.g.]{}, Kersten-Oertel [et al.]{}[@kersten-oertel:2013]), manufacturing ([e.g.]{}, Curtis [et al.]{}[@curtis:1998]), and maintenance ([e.g.]{}, Henderson and Feiner [@henderson:2009]). Among the factors that determine success is the accuracy with which observers can match the distance of a real object to an AR-presented virtual object. For example, a surgeon may need to cut to the depth indicated by an AR-presented tumor, or place a needle within the tumor. In order for AR to be useful for image-guided surgery of the brain, Edwards [et al.]{}[@edwards:2000] found that surgeons must be able to place a scalpel with a tolerance of 1 mm; and, in order for AR to be useful for a type of radiation therapy, Krempien [et al.]{} [@krempien:2008] found that a needle must be placed with a tolerance of 1 mm. In previous work motivated by this topic, Swan [et al.]{}[@swan:2015] reported initial efforts to measure the accuracy of AR depth matching. An optical see-through AR display was used, and reaching distances of 24 to 56 cm were examined. The depth judgment was *perceptual matching*, where observers adjusted a pointing object in depth, until they judged it to be the same distance from themselves as a target object. Fig. \[f:swan2015\] summarizes these results, which were collected across three experiments. The pointer was always a real object, and therefore its distance from the observer could be objectively measured in the real world. In Fig.
\[f:swan2015\], the $x$-axis is the actual depth of the target object, and the $y$-axis is the depth error of the pointer. Here, $\mbox{\it error} = 0$ indicates that observers placed the pointer at the same depth as the target object; $\mbox{\it error} > 0$ indicates *overestimated* depth matches, where observers placed the pointer farther in depth than the target object; and $\mbox{\it error} < 0$ indicates *underestimated* depth matches, where observers placed the pointer closer than the target object. As a control condition, Swan [et al.]{}[@swan:2015] examined the accuracy of matching a *real* target object, and found accuracies of 1.4 to 2.7 mm (Fig. \[f:swan2015\]a, the *real consistent* condition). However, when they examined matching a *virtual* target object, they found that observers systematically overestimated the matching distance, ranging from 0.5 cm at near distances to 4.0 cm at far distances (Fig. \[f:swan2015\]b, the *AR collimated* condition). Therefore, as illustrated in Fig. \[f:swan2015\], there was a significant difference in depth matching real and virtual targets. Swan [et al.]{}[@swan:2015] determined that the likely reason for these results was that their AR display used collimating optics, which present virtual objects focused at optical infinity. They found the results to be very well described by a model where this collimation causes the eyes’ vergence angle to rotate outwards by a constant amount. Fig. \[f:model\] illustrates this model. Let the black points labelled $\alpha$ and $\alpha'$ be two real objects, with the first located close to the observer, and the second located farther away. And, let the red points labelled $\beta$ and $\beta'$ be two virtual objects, which are rendered to be the same distance as the real targets. $\alpha$, $\alpha'$, $\beta$, and $\beta'$ also represent the angle of binocular parallax that the eyes make when the observer fixates on each object. 
Therefore, when fixating on the close real object, the angle of binocular parallax is $\alpha$, and if the fixation changes to the close virtual object, then the collimation causes the eyes to rotate outwards, reducing the angle to $\beta$. When placing the real pointer $\alpha$ at the same depth as the virtual target $\beta$, observers’ eyes rotate inwards and outwards as they fixate between the two objects, and therefore observers perceive them to be located at the same depth. The model predicts that this change in vergence angle, $\Delta v = \alpha - \beta$, is constant for reaching distances. Therefore, when fixating on the far target, $\alpha' < \alpha$, and this same change in vergence angle, $\Delta v = \alpha' - \beta'$, causes a larger depth distance between $\alpha'$ and $\beta'$ (Fig. \[f:model\]). This model explains three properties of Swan [et al.]{}’s [@swan:2015] results (Fig. \[f:swan2015\]): (1) because the collimating optics cause the eyes to rotate outwards, the depth judgments of the virtual targets are overestimated relative to the real targets, (2) the amount of overestimation increases with increasing distance, and (3) the results are very well fit with a linear model. ![The perceptual matching depth judgments from Swan [et al.]{} [@swan:2015]. 
For Experiment I, the actual distances were 34, 38, 42, 46, and 50 cm, while for Experiments II and III the actual distances were 55, 63, 71, 79, and 87% of each observer’s maximum reach.[]{data-label="f:swan2015"}](swan2015-errg){width="\FigWidth"} ![The model that explains how a constant change in vergence angle, $\Delta v$, leads to matched distances of virtual objects (red: $\beta$, $\beta'$), relative to real objects (black: $\alpha$, $\alpha'$), that are increasingly overestimated with increasing distance.[]{data-label="f:model"}](model){width="0.7\FigWidth"} This analysis suggests that, for accurate depth placement, virtual objects need to be presented with a *focal depth*—also termed *accommodative demand*—that is consistent with their intended depth. Then, the eyes’ vergence angle will not be biased, and depth matches will be more accurate. This paper reports three experiments that systematically examine this hypothesis. However, it was not possible to conduct these experiments with the same AR display as Swan [et al.]{} [@swan:2015]. That display, an NVIS Inc. nVisor ST60 model, contains unadjustable collimating optics, which always present virtual objects focused at optical infinity. This is consistent with the vast majority of commercially available AR displays, almost all of which have a focal distance that is set at the factory, and unadjustable by the end user.[^2] Therefore, an *augmented reality haploscope*—an AR display mounted on an optical workbench that allows accommodative demand and vergence angle to be independently and precisely adjusted—was developed and used for the experiments reported here.[^3] Background and Related Work =========================== Depth Perception and Depth Cues ------------------------------- The human visual system achieves a percept of perceived depth from *depth cues*—sources of perceptual information related to depth. 
At least nine depth cues have been identified (Cutting and Vishton [@cutting:1995]): *occlusion* (a closer object occludes farther objects), *binocular disparity* (an object projects to different locations on each retina), *motion perspective* (objects at different distances from a moving observer have different apparent velocities), *height in the visual field* (starting from the horizon, closer objects are lower in the visual field), *relative size* (among objects of the same size, the farther object projects to a smaller retinal angle), *accommodation* (the lens of the human eye changes shape to bring objects into focus), *vergence* (the two eyes rotate to fixate on an object (Fig. \[f:model\])), *relative density* (for a textured surface, at farther distances more objects are seen within a constant retinal angle), and *aerial perspective* (objects at great distances lose color saturation and contrast). Depth cues differ in effectiveness based on various visual characteristics, such as scene content and distance from the observer. Nagata [@nagata:1991], and later Cutting and Vishton [@cutting:1995], organized the relative effectiveness of different depth cues according to distance. Within near-field reaching distances, they find that the operative depth cues, in approximate order of decreasing salience, are *occlusion*, *binocular disparity*, *motion perspective*, *relative size*, *accommodation and vergence*, and *relative density*. Most of these depth cues can be categorized as *retinal*, because the information from the cue comes from the visual scene sensed on the retina. However, the cues of accommodation and vergence are *extra-retinal*, because the cue information comes from sensors that detect the state of the muscles that control the lenses’ shape and the eyes’ vergence angle. In principle, the extra-retinal cues could provide absolute egocentric depth information (Gillam [@gillam:1995]).
In contrast, retinal cues can only provide relative depth information between objects in the scene; these cues require an external reference to establish the scene’s overall scale. However, when combined with extra-retinal cues, and an observer’s constant interpupillary distance, retinal cues can also provide absolute depth information (Bingham and Pagano [@bingham:1998], Mon-Williams and Tresilian [@monwilliams:1999]). In general, the way the human visual system combines information from different depth cues to produce a stable percept of distance is subtle and not fully understood, although many theories have been advanced and the collected evidence favors some theories over others (Landy [et al.]{} [@landy:1995], Singh [@singh:2013]). Vergence and Accommodation -------------------------- Visual perception requires a rapid series of precise eye movements (Leigh and Zee [@leigh:2015]), including *fixation* (hold an image steady on the fovea by minimizing eye movement), *saccadic* (quick movement that projects an object of interest to the fovea), *smooth pursuit* (retain fixation on an object during smooth movement of either the object or head), *vestibular* (hold vision steady during head movements), and *vergence* (the two eyes rotate to fixate on an object of interest (Fig. \[f:model\])). When changing fixation from a far to a near object, the eyes converge, the lenses become thicker, and the pupils constrict. These three actions—vergence, accommodation, and changing pupil size—are interlinked physiologically, and the mechanism of these three simultaneous reflexes is called the *near triad*. Because of the interlinkage, changes in either accommodation or vergence drive corresponding changes in the other two components of the triad (Semmlow and Hung [@semmlow:1983]). Apart from the influence of accommodation and vergence, pupil diameter also changes according to scene illumination, becoming larger in dim settings and smaller in bright settings. 
Although these illumination-driven changes in pupil diameter affect the eye’s optical depth of field, and therefore could potentially affect accommodation, little effect of changing pupil diameter on accommodation has been observed (Ripps [et al.]{}[@ripps:1962]). Therefore, in near field viewing, vergence and accommodation are the main depth reflexes, and the link between them is known as the *vergence-accommodation reflex*. Because of this reflex, accommodation and vergence operate in unison: changes in accommodation drive changes in vergence (*accommodative vergence*), and changes in vergence drive changes in accommodation (*vergence accommodation*) (Kersten and Legge [@kersten:1983]). Therefore, the vergence reflex is driven both by binocular disparity (the eyes rotate to bring a fixated object to a level of zero binocular disparity), as well as accommodative vergence. Likewise, the accommodation reflex is driven both by focal blur (the lenses adjust to minimize blur), as well as vergence accommodation (Mon-Williams and Tresilian [@monwilliams:2000]). ![The vergence-accommodation conflict, and its effect on perceived depth. (a) In normal viewing of real world objects, the vergence distance, required for zero binocular disparity, is the same as the focal distance, required for minimal focal blur. (b) When the vergence distance is farther than the focal distance, e.g. when viewing a virtual object beyond the surface of a stereo monitor, the vergence angle is biased inwards (grey lines), and the object is seen as closer than encoded by disparity. (c) When the vergence distance is closer than the focal distance, e.g. 
when viewing a virtual object in front of the surface of a stereo monitor, the vergence angle is biased outwards (grey lines), and the object is seen as farther than encoded by disparity.[]{data-label="f:verg-acc-con"}](verg-acc-con){width="1\FigWidth"} Of course, the vergence-accommodation reflex is calibrated for viewing real world objects, which present consistent binocular disparity and focal blur cues (Fig. \[f:verg-acc-con\]a). When viewing virtual objects, the binocular disparity and focal blur cues are often inconsistent, because the focal blur cue is fixed at the screen depth, while the depth of the binocular disparity cue varies, sometimes beyond the screen depth (Fig. \[f:verg-acc-con\]b), and sometimes in front (Fig. \[f:verg-acc-con\]c). This is called the *vergence-accommodation conflict*, and it is a ubiquitous aspect of all stereo displays with a single focal plane (Kruijff [et al.]{}[@kruijff:2010]). The conflict causes visual fatigue (Gabbard [et al.]{}[@gabbard:2017], Lambooij [et al.]{}[@lambooij:2009]), hinders visual performance (Hoffman [et al.]{} [@hoffman:2008]), and biases depth perception towards the screen depth (Fig. \[f:verg-acc-con\], Swenson [@swenson:1932], Mon-Williams and Tresilian [@monwilliams:2000]). The contribution of vergence to perceived depth depends upon various properties of the scene. At near-field distances, vergence has been conclusively found to provide egocentric depth information (Brenner and Van Damme [@brenner:1998], Owens and Leibowitz [@owens:1980], Tresilian [et al.]{} [@tresilian:1999], Viguier [et al.]{} [@viguier:2001]; Foley [@foley:1980] provides a comprehensive review). Although vergence in isolation is not a very accurate depth cue, observers are very sensitive to changes in vergence, which generally allows accurately matching the depth of one object with another (Brenner and Van Damme [@brenner:1998]).
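The disparity-versus-focus mismatch just described can be put in numbers. Below is a minimal Python sketch (ours, not from the paper), assuming a 62 mm inter-pupillary distance and a hypothetical single-focal-plane stereo display fixed at 0.6 m; it reports the vergence angle demanded by each target distance and the accommodative conflict, in diopters, against the fixed screen:

```python
import math

def vergence_angle_deg(ipd_m, dist_m):
    """Full vergence angle (degrees) when both eyes fixate a point at dist_m."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * dist_m)))

def demand_D(dist_m):
    """Accommodative demand in diopters for an object at dist_m."""
    return 1.0 / dist_m

ipd = 0.062     # assumed inter-pupillary distance (m)
screen = 0.60   # hypothetical fixed focal distance of a stereo display (m)

# Disparity specifies the target distance, but focal blur stays at the screen:
for target in (0.30, 0.60, 1.20):
    conflict = abs(demand_D(target) - demand_D(screen))  # diopters of conflict
    print(f"{target:.2f} m: vergence {vergence_angle_deg(ipd, target):.1f} deg, "
          f"conflict {conflict:.2f} D")
```

Only the 0.60 m target is conflict-free; nearer and farther targets drive vergence and accommodation to different distances.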
Each individual has a different vergence resting point—their *dark vergence*—which is the vergence angle that their eyes assume when the controlling muscles are completely relaxed. In low light conditions, the egocentric depth specified by vergence is biased towards each individual’s dark vergence distance (Owens and Leibowitz [@owens:1980]). As a depth cue, vergence is most effective within near-field distances of 2 meters (Viguier [et al.]{} [@viguier:2001]), a distance range that encompasses $\sim$90% of vergence eye movements (Tresilian [et al.]{} [@tresilian:1999]). As other retinal depth cues become available, the contribution of vergence to perceived depth is reduced, but still present (Foley [@foley:1980]). As discussed above, accommodation influences perceived depth through the vergence-accommodation reflex. Although some studies have found evidence that accommodation alone can serve as a depth cue for some observers, these experiments require careful experimental setups to eliminate other depth cues, and the consensus remains that accommodation influences perceived depth through its effect on vergence (Mon-Williams and Tresilian [@monwilliams:2000]). Similar to dark vergence, each individual has a *dark focus*—the distance their eyes focus when the controlling muscles are in a relaxed state (Iavecchia [et al.]{} [@iavecchia:1988]). The dark focus biases the eye’s focal response, resulting in a number of perceptual consequences, including perceived depth (Roscoe [@roscoe:1985]). Generally, the dark focus and dark vergence distances vary independently, and for most individuals are not equal (Owens and Leibowitz [@owens:1980]). Accommodation and Age {#s:age} --------------------- Accommodative ability, the distance range within which a viewed object can be brought into clear focus, decreases with increasing age (Duane [@duane:1912]), a condition known as *presbyopia*.
It is primarily caused by hardening of the crystalline lens, although other physiological changes in the lens, connective tissue, and controlling muscles also play a role (Kasthurirangan and Glasser [@kasthurirangan:2006]). As measured by Duane [@duane:1912], presbyopia begins by the age of 12, but through the early 30’s the loss is minuscule—the closest distance of clear focus recedes from $\sim$8 to $\sim$13 cm. However, the decline then accelerates, and by the age of 50 the closest distance of clear focus often surpasses 50 cm. At some point in the 40’s, the closest distance of clear focus often surpasses standard reading distance, and reading glasses are required. By their mid-50’s, most people have lost the ability to adjust the distance of clear focus. It seems reasonable that this loss of accommodative ability would have perceptual consequences, and indeed, older people are worse than younger people at many perceptual tasks (Bian and Andersen [@bian:2013]). However, accommodative vergence does not diminish with age; even as the visual system loses the ability to adjust accommodation, the eyes still verge properly in response to accommodative stimuli (Heron [et al.]{} [@heron:2001]). Because vergence is the primary source of depth information from the vergence-accommodation reflex (Mon-Williams and Tresilian [@monwilliams:2000]), this suggests that depth perception could be unaffected by presbyopia. Indeed, Bian and Andersen [@bian:2013] found that, when making judgments of medium-field egocentric distances, older people (average 73.4 years) were *more* accurate than younger people (average 22.5 years). This is one of a series of recent studies that have found that older observers preserve their abilities in tasks related to distance perception (Bian and Andersen [@bian:2013]).
Accommodation and Scene Flatness -------------------------------- Another effect of the vergence-accommodation conflict in stereo displays is that the accommodative distance changes the perceived *flatness* of the scene (Andersen [et al.]{} [@andersen:1998], Nagata [@nagata:1991], Singh [@singh:2013]). Specifically, when medium- to far-field scenery is shown on a display, but accommodative distance is in the near field, depth distances between scene objects are compressed, and the scene is perceived as being a flat window, positioned some depth distance from the observer. However, when the same scene is shown with collimation, these depth distances are no longer compressed, and the scene objects appear to extend in depth, with some closer to the observer and others farther. This is a reason why many augmented and virtual reality displays, especially those used for flight simulation and other far-field applications, present collimated light (Watt [et al.]{} [@watt:2005]). Likewise, the NVIS nVisor ST60 used by Swan [et al.]{}[@swan:2015], which also presents collimated light, was originally marketed for military training and forward observer tasks, which primarily involve medium- to far-field distances. Depth Perception and Brightness {#s:bright} ------------------------------- Among objects of the same size and distance, the brighter appear closer than the dimmer. This principle has long been known in art, and is discussed by Leonardo Da Vinci in his *Notebooks* (McCurdy [@mccurdy:1938]). The principle has been thoroughly studied, at both near field (Ashley [@ashley:1898], Farnè [@farne:1977]) and medium field (Coules [@coules:1955]) distances, and in both monocular and binocular conditions (Coules [@coules:1955]). In addition to brightness, the contrast between an object and the background also affects perceived depth, so a dark object against a light background can appear closer than an object with less contrast (Farnè [@farne:1977]).
Among the theories that explain this effect are that brighter objects stimulate a larger area on the retina, and that brighter objects affect pupil size, which then biases other near triad reflexes. Related Work in Augmented Reality --------------------------------- To date, including Swan [et al.]{} [@swan:2015], only a small number of papers have examined near-field AR depth matching. Ellis and Menges [@ellis:1998] measured the effects of convergence, accommodation, observer age, viewing condition (monocular, biocular stereo, binocular stereo), and the presence of an occluding surface. They found that accuracy is degraded by monocular viewing and an occluding surface. Using the same experimental setup, McCandless [et al.]{} [@mccandless:2000] additionally studied motion parallax and latency in monocular viewing; they found reduced accuracy with increasing distance and latency. Singh [et al.]{} [@singh:2010] found that an occluding surface has complex accuracy effects, and Rosa [et al.]{} [@rosa:2016] found increased accuracy with redundant tactile feedback. The Augmented Reality Haploscope {#s:haplo} ================================ As motivated in Section \[s:intro\], an *augmented reality haploscope* was designed and engineered.[^4] The design was loosely based on the AR haploscopes described by Rolland [et al.]{} [@rolland:1995] and Ellis and Menges [@ellis:1998], but similar designs have a long history in the study of depth perception (e.g., Swenson [@swenson:1932]). ![The Augmented Reality (AR) Haploscope. The physical design allows independent adjustment of vergence angle and focal distance.[]{data-label="f:haplo"}](haploscope){width="\FigWidth"} Fig. \[f:haplo\] shows the AR haploscope. The physical design has the following requirements: (1) provide a range of vergence angles and accommodative demands, (2) adjust to match a wide range of inter-pupillary distances, and (3) be rigid enough to resist inevitable bumps.
To achieve these, the device is mounted on an optical breadboard. The primary structure is built on three optical rails: two 12-inch rails serve as mounting bases for left-eye and right-eye optical systems, and both 12-inch rails are mounted on a 24-inch rail using 3-inch rail carriers, which can be adjusted to match the required inter-pupillary distance. ![The optical system of the AR haploscope. Changing the accommodation lens changes the focal distance.[]{data-label="f:ray"}](haplo-ray){width="\FigWidth"} The goal of each optical system is to collimate the generated image, so the image is located at optical infinity, or 0 diopters (D). Then, the collimated image can either be left at optical infinity, or a negative power lens can reduce the focal distance. Fig. \[f:ray\] shows the optical system. The image is first generated by a monitor. Then, the image is minified by a $-$10 D concave lens; without minification, only a small part of the monitor can be seen through the optical system. As shown in Fig. \[f:ray\], when this $-$10 D lens is placed 10 cm from the monitor, it creates a minified image at $-$5 cm. This minified image is then collimated by a $+$10 D convex lens, positioned 10 cm from the image. The collimated image is then passed through an *accommodation lens*. This comes from a standard optometric trial set; either a 0 D plain glass lens, which retains the collimation, or a negative power concave lens, which reduces the focal distance. In the experiments reported here, the strongest accommodation lens used was $-$3 D, which resulted in a 33.3 cm focal distance. After generation, the images are reflected into the observers’ eyes by 15% reflective optical combiners, mounted at 45$^{\circ}$ directly in front of each eye. Fig. \[f:haplo\] shows the monitors; the minification, collimation, and accommodation lenses; and the optical combiners. 
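The optical chain just described can be checked with the thin-lens relation $1/v = P + 1/u$ (power $P$ in diopters, object distance $u<0$ for an object to the left of the lens). The numbers below come from the text; the function name and sign convention are our own sketch:

```python
import math

def image_distance(power_D, object_dist_m):
    """Image distance v from the thin-lens relation 1/v = P + 1/u.
    Returns math.inf when the output is collimated (1/v == 0)."""
    inv_v = power_D + 1.0 / object_dist_m
    return math.inf if inv_v == 0 else 1.0 / inv_v

# 1) Monitor 10 cm behind the -10 D minification lens:
v1 = image_distance(-10.0, -0.10)        # virtual image at -0.05 m, i.e. -5 cm
# 2) That image lies 10 cm in front of the +10 D collimation lens:
v2 = image_distance(+10.0, -0.10)        # inf: collimated, optical infinity
# 3) A -3 D accommodation lens applied to collimated light (u -> -inf):
v3 = image_distance(-3.0, float('-inf'))  # -1/3 m, i.e. a 33.3 cm focal distance
print(v1, v2, v3)
```

With the $-3$ D trial lens in place, the collimated image is re-imaged at $-1/3$ m, matching the 33.3 cm focal distance quoted above.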
![Rotating the optical systems to match the correct vergence angle.[]{data-label="f:rotate"}](haplo-rotate){width="0.9\FigWidth"} Fig. \[f:rotate\] illustrates how the haploscope matches different vergence angles. The rail carriers are adjusted so that the distance between the pivot points matches the observer’s inter-pupillary distance (Fig. \[f:haplo\]). The chin and forehead rest is adjusted so that these pivot points are directly below the rotational centers of the observer’s eyes. As illustrated in Fig. \[f:rotate\], when the left and right optical systems then rotate about the pivot points, for all convergence distances the view rays from the center of the two eyes stay in line with the principal axes of the optical systems. This allows presenting a virtual object at any distance, near ($n$), medium ($m$), or far ($f$), while the observer’s view rays continue to pass through the middle of the optical system, where optical distortion is minimized. To display a target object at a specific distance, the optical systems are rotated to the matching convergence angle $1/2\alpha$ (Figs. \[f:model\], \[f:rotate\]); $1/2\alpha = \arctan(i/2d)$, where $i$ is the observer’s interpupillary distance, and $d$ is the target distance. The angle of each optical system is measured by a constellation of tracking fiducials attached to each monitor (Fig. \[f:haplo\]), which allows an ART TrackPack to measure the vergence angle to an accuracy of 0.01$^{\circ}$. Experiment I: Accommodation {#s:exI} =========================== As discussed in Section \[s:intro\], Swan [et al.]{} [@swan:2015] hypothesized that the linearly increasing overestimation they found with collimated AR graphics (Fig. \[f:swan2015\]), was caused by the collimation biasing the eyes’ vergence angle to rotate outwards by a constant amount (Fig. \[f:model\]). The purposes of Experiment I were to test aspects of this hypothesis, using the same matching task and within a similar range of near-field distances. 
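As a numerical illustration of the haploscope's convergence geometry, the relation $1/2\alpha = \arctan(i/2d)$ from the previous section can be evaluated directly. The 6 cm inter-pupillary distance below is an illustrative value, not a measurement from the experiments:

```python
from math import atan, degrees

def half_vergence_deg(ipd_cm, target_cm):
    """Half the convergence angle alpha: 1/2 * alpha = arctan(i / 2d)."""
    return degrees(atan(ipd_cm / (2.0 * target_cm)))

# Illustrative 6 cm IPD across the tested 33.3-50 cm range:
near = half_vergence_deg(6.0, 33.3)   # ~5.15 degrees
far = half_vergence_deg(6.0, 50.0)    # ~3.43 degrees
```

Nearer targets demand larger convergence angles, so each optical system rotates through roughly 1.7$^{\circ}$ over this range for the assumed IPD, well within the 0.01$^{\circ}$ measurement accuracy of the tracker.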
Because Experiment I used a different display—the AR haploscope—the first purpose (1) was to replicate the *real consistent* and *AR collimated* conditions of Swan [et al.]{} [@swan:2015]. If Experiment I found similar results, that would suggest that these results generalize to AR more broadly, and are not specific to the NVIS display used by Swan [et al.]{} [@swan:2015]. The next purpose (2), the *AR consistent* condition, was to test whether presenting AR objects at a focal distance that was *consistent* with the distance specified by other depth cues, especially binocular disparity, would result in more accurate depth matches than what was seen in the AR collimated condition. If the depth matches are more accurate, that would further support the hypothesis that collimated graphics bias the eyes’ vergence angle outwards. However, for many AR applications, always presenting virtual objects at consistent focal distances is unlikely to be practical. Therefore, the final purpose (3), the *AR midpoint* condition, was to test whether presenting AR objects at a focal distance equal to the *midpoint* of the tested range would result in performance similar to the consistent condition. If the performance is similar, this would suggest that, for accurate depth matching within reaching distances, the expense of making the focal demand consistent for every virtual object is not necessary. Method ------ ### Apparatus and Task {#s:app} Fig. \[f:haplo-exp\] shows the experimental setup. The haploscope was mounted on the end of an optical breadboard, 244 cm long by 92 cm wide. The breadboard was supported by a custom-built aluminum table, with six legs. Mounted to the legs of the table were six hydraulic jacks, which could lift the entire table, so the surface could be adjusted to be between 104 and 134 cm above the ground. This adjustability allowed the table to be comfortably positioned for observers of many different heights. 
Aluminum arms extending above the table supported tracking cameras, as well as an overhead light (Fig. \[f:haplo-exp\]). Because the tracking cameras and light were attached to the table, when the table height was adjusted, their distance above the table top remained constant. Tracking was provided by a 2-camera TRACKPACK system, from A.R.T. GmbH. On both sides of the table, *depth adjusters*—plastic pipes running through collars—could easily be slid back and forth in depth (Fig. \[f:haplo-exp\]). When the real target was presented, it hung from an arm attached to the left-hand depth adjuster. The real target was a wireframe octahedron, 5 cm wide by 6 cm high, constructed of balsa wood and painted green. An electric motor rotated the target at 4 rpm. Although slow, the rotation gave a definite sense of three-dimensional structure from motion, even when viewed monocularly. The depth position of the real target was precisely measured by a tracking fiducial mounted to the arm (Fig. \[f:haplo-exp\]). When an AR target was presented, the arm supporting the real target was removed. The AR target was identical to the real target: a green octahedron that rotated at 4 rpm, rendered and viewed through the haploscope optics. Only the green channel was used, which eliminated chromatic distortion. Careful calibration ensured that the AR target matched the real target in size and position at all tested distances. In addition, because accommodation lenses of different powers change the overall magnification of the optical system (Fig. \[f:ray\]), the calibration was repeated for every lens power. The targets were located 29 cm above the tabletop, and seen against a black curtain hung 1.2 meters from the observer (Fig. \[f:haplo-exp\]). The appearance of the real and AR targets was as similar as possible: the lighting and color of the real target made it appear to glow against an otherwise dark background, and it did not cast any visible shadows or reflections. 
The table was covered with black cloth, which created a smooth and featureless surface under the target. The matching task from Swan [et al.]{} [@swan:2015] was replicated. The pointer was made of green, translucent plastic, $\sim$4 mm in diameter, with a rounded top, mounted on an arm attached to the right-hand depth adjuster (Fig. \[f:haplo-exp\]). Observers matched the target depth by sliding the depth adjuster until the pointer was directly below the bottom point of the rotating target. The distance between the bottom of the target and the top of the pointer was $\sim$1 cm. The depth position of the pointer was precisely measured by a tracking fiducial mounted on the arm (not visible in Fig. \[f:haplo-exp\]). ![The experimental setup. The AR haploscope was mounted on the end of an optical breadboard. Real and AR targets were positioned at different depths from the observer. The depth of the targets was matched by changing the position of the pointer.[]{data-label="f:haplo-exp"}](haplo-exp){width="0.8\FigWidth"} ### Experimental Design [[**Observers:**]{}]{} 40 *observers* were recruited from a population of university students and staff. The observers ranged in age from 18 to 38; the mean age was 20.9, and 18 were male and 22 female. 10 observers were paid \$12 an hour, and the rest received course credit. [[**Independent Variables:**]{}]{} Observers saw 4 different *conditions*: real consistent, AR collimated, AR consistent, and AR midpoint. The target object appeared at 5 different *distances* from the observer: 33.3, 36.4, 40, 44.4, and 50 cm, which correspond to 3, 2.75, 2.5, 2.25, and 2 D. Observers saw 6 *repetitions* of each distance. In the *real consistent* condition, observers saw the real target object (Fig. \[f:haplo-exp\]), which, by definition, was always presented at a focal distance that was consistent with its actual distance. In the remaining conditions, the AR target was seen. 
In the *AR collimated* condition, a 0 D plain glass accommodation lens was used, presenting the target at optical infinity. In the *AR consistent* condition, the accommodation lens power—3, 2.75, 2.5, 2.25, or 2 D—was always consistent with the target’s presented distance. Finally, in the *AR midpoint* condition, the 2.5 D accommodation lens was used, presenting the target at a focal distance of 40 cm. [[**Dependent Variables:**]{}]{} The primary dependent variable was *judged distance*—the measured position of the pointer (Fig. \[f:haplo-exp\]). In addition, *error* $=$ *judged distance* $-$ *actual distance* was also calculated (Fig. \[f:swan2015\]). [[**Design:**]{}]{} A mixed design was used, with condition varying between observers, and distance and repetition varying within each observer. There were 10 observers in each condition, and the presentation order of condition varied in a round-robin fashion, so each group of 4 observers covered all conditions. For each observer, distance $\times$ repetition was randomly permuted, with the restriction that the distance changed every trial. Therefore, each observer completed $5\ \mbox{(distance)} \times 6\ \mbox{(repetition)} = 30\ \mbox{trials}$, and the experiment collected a total of $40\ \mbox{(observers)} \times 30\ \mbox{(trials)} = 1200\ \mbox{data points}$. ### Procedure After receiving an explanation of the experimental procedures, an observer gave informed consent. Then, they took a stereo vision test, which measured their sensitivity to depth changes encoded by binocular disparity. Next, the observer’s inter-pupillary distance was measured, using a pupilometer set to optical infinity, and the haploscope was adjusted to match this distance. The task was then explained, using the real target and the pointer. If the observer indicated that, when working at the demonstrated distances, they would normally wear corrective optics (glasses or contacts), they were instructed to wear the optics. 
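The trial-ordering constraint described under Design (the distance changes on every trial) can be generated by simple rejection sampling. A minimal sketch; the distances and counts come from the Design section, but the rejection-sampling approach is an assumption, not the experiment's actual software:

```python
import random

DISTANCES_CM = [33.3, 36.4, 40.0, 44.4, 50.0]
REPETITIONS = 6

def trial_order(rng):
    """Shuffle the 5 x 6 distance-repetition grid until no two
    consecutive trials share a distance (rejection sampling)."""
    trials = [(d, r) for d in DISTANCES_CM for r in range(REPETITIONS)]
    while True:
        rng.shuffle(trials)
        if all(a[0] != b[0] for a, b in zip(trials, trials[1:])):
            return trials

order = trial_order(random.Random(42))   # 30 trials, no repeated distance
```

Rejection sampling is wasteful here, since most shuffles contain an adjacent repeat, but at 30 trials it completes in milliseconds.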
Observers then donned safety goggles, which easily fit over glasses. The goggles had 3.5 cm circular openings for each eye, and were otherwise covered with black gaffer tape. The size of these openings was calibrated so that, when looking through the haploscope optics, observers could see the complete field of view provided by the optical combiners, but their peripheral view of the rest of the haploscope was blocked. The chinrest and forehead rest were adjusted so that the observer’s eyes were approximately centered within the optical combiners, and the haploscope pivot points were approximately centered under the eyes’ rotational centers (Figs. \[f:haplo\], \[f:rotate\]). The table and chair heights were adjusted so the observer was sitting comfortably. The observer then completed one of the four conditions. The pointer was placed at a random position within the trackable distance of 23 to 67 cm from the observer, and the experimenter then displayed the first target distance. Using their right hand to manipulate the pointer depth adjuster (Fig. \[f:haplo-exp\]), the observer moved the pointer from this starting position to match the target’s depth. The observer then closed their eyes, and the experimenter displayed the next target distance. The observer then opened their eyes, and moved the pointer from the previously matched distance to the new distance. This pattern continued until all trials were completed. To display distances with the real target, the experimenter used the real target depth adjuster to slide the real target to the correct position. For the AR target, the experimenter adjusted the angle of each haploscope arm, and swapped out the accommodation lenses as needed. Regardless of condition, the procedures were as similar as possible, and the time required for each trial was approximately equal. During real consistent trials, observers looked through the haploscope optics, even though the monitors were switched off.
After the trials, the observer was debriefed. The overall experiment took approximately one hour. ![image](exp123-jd){width="1.5\FigWidth"} Analysis -------- Similar to Swan [et al.]{} [@swan:2015], the data was analyzed by examining the slopes and intercepts of linear equations that predict judged distance from actual distance. Multiple regression methods determine if the slopes and intercepts significantly differ (Pedhazur [@pedhazur:1982], Cohen [et al.]{} [@cohen:2003]). For data with this structure, multiple regression methods are preferable to ANOVA analysis, because multiple regression allows the prediction of a continuous dependent variable (judged distance) from a continuous independent variable (actual target distance), as well as a categorical independent variable (condition). In contrast, ANOVA analysis only examines categorical independent variables, which results in a significant loss of power when an independent variable is inherently continuous (Pedhazur [@pedhazur:1982]). In addition, multiple regression yields slopes and intercepts, which as descriptive statistics are more useful than means, because they directly describe functions that predict judged distances from actual target distances. Finally, multiple regression methods focus on effect size, as opposed to significance; an analytic approach advocated by many in the applied statistics community (Cohen [et al.]{} [@cohen:2003]).[^5] Figs. \[f:jd\]a–d and \[f:e1-err\] show the results from Experiment I, plotted as a scatterplot of judged against actual distance (Fig. \[f:jd\]), as well as mean error against distance (Fig. \[f:e1-err\]). Both figures indicate that the data is very well fit by linear regressions; note the $r^{2}$ values in Fig. \[f:jd\]. Fig. \[f:e1-MR\] shows multiple regression analysis, which compares pairs of panels from Fig. \[f:jd\] against each other; each panel in Fig. 
\[f:e1-MR\] examines two independent variables: a continuous variable (actual distance), and a categorical variable (a pair of panels from Fig. \[f:jd\]). To properly account for repeated measurements, for each observer at each distance, the responses were averaged over the 6 repetitions, reducing the size of the analyzed dataset from 1200 to 200 points—note the reduced density of points in Fig. \[f:e1-MR\] relative to Fig. \[f:jd\]a–d. Each panel in Fig. \[f:e1-MR\] compares two regression equations from Fig. \[f:jd\]. The multiple regression analysis operates in the following manner: First, the *slopes* of the equations are tested to see if they significantly differ. If they do, as in Fig. \[f:e1-MR\]a, both equations from Fig. \[f:jd\] are reported as the best overall description of the data in the panel. If the slopes of the equations do not significantly differ, then the *intercepts* of the equations are tested to see if they significantly differ. This test first sets the slopes of the equations—which do not differ—to a common value. If the intercepts significantly differ, as in Fig. \[f:e1-MR\]b, two regression equations, with slopes adjusted to a common value, are reported as the best overall description of the data in the panel. If neither the slopes nor the intercepts significantly differ, as in Fig. \[f:e1-MR\]c, then the data from both panels is combined, and a regression over the combined data is reported as the best overall description of the data in the panel. Therefore, this multiple regression analysis yields three possible outcomes, which by chance are illustrated in the first three panels of Fig. \[f:e1-MR\]: (1) the slopes significantly differ (Fig. \[f:e1-MR\]a), (2) the slopes do not differ but the intercepts significantly differ (Fig. \[f:e1-MR\]b), or (3) neither the slopes nor the intercepts significantly differ (Fig. \[f:e1-MR\]c). 
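The nested slope-then-intercept comparison just described can be sketched in pure Python using group-wise sums of squares, which is algebraically equivalent to the dummy-variable multiple regression. The demo data and function names are illustrative, not the paper's analysis code:

```python
from math import fsum

def moments(xs, ys):
    """Means and centered sums of squares / cross-products for one group."""
    n = len(xs)
    mx, my = fsum(xs) / n, fsum(ys) / n
    sxx = fsum((x - mx) ** 2 for x in xs)
    sxy = fsum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = fsum((y - my) ** 2 for y in ys)
    return mx, my, sxx, sxy, syy

def rss_separate(groups):
    """Residual SS when each group gets its own slope and intercept."""
    total = 0.0
    for xs, ys in groups:
        _, _, sxx, sxy, syy = moments(xs, ys)
        total += syy - sxy ** 2 / sxx
    return total

def rss_common_slope(groups):
    """Residual SS with one pooled slope but per-group intercepts."""
    stats = [moments(xs, ys) for xs, ys in groups]
    b = fsum(s[3] for s in stats) / fsum(s[2] for s in stats)
    return fsum(s[4] - 2 * b * s[3] + b * b * s[2] for s in stats)

def rss_single(groups):
    """Residual SS with one slope and one intercept for all data combined."""
    xs = [x for g in groups for x in g[0]]
    ys = [y for g in groups for y in g[1]]
    _, _, sxx, sxy, syy = moments(xs, ys)
    return syy - sxy ** 2 / sxx

def slope_F(g0, g1):
    """F(1, n - 4): do the two slopes differ?"""
    n = len(g0[0]) + len(g1[0])
    full, reduced = rss_separate([g0, g1]), rss_common_slope([g0, g1])
    return (reduced - full) / (full / (n - 4))

def intercept_F(g0, g1):
    """F(1, n - 3): given equal slopes, do the intercepts differ?"""
    n = len(g0[0]) + len(g1[0])
    reduced, single = rss_common_slope([g0, g1]), rss_single([g0, g1])
    return (single - reduced) / (reduced / (n - 3))

# Hypothetical demo data over the tested 33.3-50 cm range:
xs = [33.3, 36.4, 40.0, 44.4, 50.0] * 2
jitter = [0.1, -0.1, 0.05, -0.05, 0.0] * 2
real = (xs, [x + e for x, e in zip(xs, jitter)])                 # slope ~1
coll = (xs, [1.07 * x + 0.5 + e for x, e in zip(xs, jitter)])    # slope ~1.07
shift = (xs, [x + 0.5 + e for x, e in zip(xs, jitter)])          # slope ~1, offset
```

For the demo data, `slope_F(real, coll)` is large, so separate slopes would be retained, mirroring outcome (1); `slope_F(real, shift)` is near zero while `intercept_F(real, shift)` is large, mirroring outcome (2). Significance would be judged against the $F(1, \mathit{df})$ distribution.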
In each case, the panel also indicates two measures of effect size: (1) the overall $R^{2}$ value, the percentage of variation in the panel explained by the linear regressions, and (2) $dR^{2}$, the percentage of variation explained by the change in the categorical variable. If $dR^{2}$ is too small, hypothesis testing is not performed, because any statistical differences would be too small to be meaningful (Pedhazur [@pedhazur:1982]). Based on the results reported in this paper, hypothesis testing is only conducted when $dR^{2} \geq 0.08\%$. Finally, for each panel, if there is a statistical difference in either slope or intercept, then the distance, $d$, in cm, between the fitted regression lines is also reported. When there is a difference in slope, as in Fig. \[f:e1-MR\]a, $d$ is reported at the minimum and maximum $x$ values (33.3 and 50 cm). When there is a difference in intercept, as in Fig. \[f:e1-MR\]b, then the regression lines are the same distance apart for every $x$, and only one $d$ value is reported. $d$ is a signed value; $d > 0$ indicates a distance farther from the observer, and $d < 0$ closer to the observer. ![Experiment I, plotted as mean error against distance $(N = 1200)$.[]{data-label="f:e1-err"}](exp-1-errg){width="1\FigWidth"} ![Experiment I, multiple regression analysis, plotted as a scatterplot of judged against actual distance, with $N = 200$ ghosted data points. The thin dashed lines represent veridical performance. Blue lines represent fitted regression equations from Fig. \[f:jd\]. Black and red lines represent the linear regressions shown in each panel. Blue lines are not visible when overlaid by black or red lines; the degree of blue line visibility is a graphical indication of how closely the regressions in each panel agree with the regressions from Fig. \[f:jd\].[]{data-label="f:e1-MR"}](exp-1-MRg){width="1\FigWidth"} Results ------- [[*Real consistent very accurate*]{}:]{} Fig. 
\[f:e1-err\]a indicates that observers were extremely accurate in the real consistent condition. The mean error is $-0.2$ mm, and the slope of the regression for Fig. \[f:e1-err\]a, $y = -0.003x + 0.11$, does not significantly differ from 0 ($F_{1,48} = 1.63, p = 0.21$). Note that this is statistically equivalent to testing whether the slope in Fig. \[f:jd\]a differs from 1. [[*AR collimated increasingly overestimated*]{}:]{} When AR collimated is compared to real consistent (Fig. \[f:e1-MR\]a), the slopes significantly differ ($F_{1,96} = 10.7, p = 0.001$), indicating that the AR collimated targets were overestimated, from $+$0.7 to $+$1.8 cm (Fig. \[f:e1-err\]b). [[*AR consistent underestimated*]{}:]{} When AR consistent is compared to real consistent (Fig. \[f:e1-MR\]b), the slopes do not significantly differ ($F_{1,96} = 0.68, p = 0.41$), but the intercepts do ($F_{1,97} = 41.3, p < 0.001$), indicating that the AR consistent targets were underestimated by a constant $-$0.4 cm (Fig. \[f:e1-err\]c). [[*AR midpoint equivalent to real consistent*]{}:]{} When AR midpoint is compared to real consistent (Fig. \[f:e1-MR\]c), the effect size of the difference is 0.036% of the variation, which is too small for any statistical differences to be meaningful. Therefore, the joint data is best fit by a single equation, indicating that AR midpoint targets were accurately matched (Fig. \[f:e1-err\]d). [[*AR consistent and AR midpoint equivalent*]{}:]{} When AR consistent is compared to AR midpoint (Fig. \[f:e1-MR\]d), the effect size is 0.035%, also too small for any statistical differences to be meaningful. Therefore, matches of AR consistent and AR midpoint targets were equivalent (Fig. \[f:e1-err\]c, d). Discussion ---------- The first purpose (1) of Experiment I was to replicate the real consistent and AR collimated conditions of Swan [et al.]{} [@swan:2015] (Fig. \[f:swan2015\]). The pattern in Figs. \[f:e1-err\]a, b indeed matches Fig. \[f:swan2015\]. 
Given the many differences between the AR haploscope and the NVIS display used by Swan [et al.]{} [@swan:2015], this replication is consistent with the idea that this pattern of results generalizes to any collimated AR or stereo display. In addition, Swan [et al.]{} [@swan:2015] hypothesized that collimation biases the eyes’ vergence angle to rotate outwards by a constant amount (Fig. \[f:model\]). For each distance, Fig. \[f:vdist\]a shows $\Delta v$, the change in vergence angle,[^6] for the 10 AR collimated observers. For all observers $\Delta v$ changes less than 0.5$^{\circ}$, and the median observer, seen in the boxplot, changes less than 0.072$^{\circ}$. These small angular changes are consistent with the hypothesis that, within these reaching distances, the vergence angle bias is constant. The next purpose (2) was to test whether presenting AR objects at a focal distance that was *consistent* with the distance specified by other depth cues, especially binocular disparity, would result in more accurate depth matches than what was seen in the AR collimated condition. Figs. \[f:e1-err\]a, b, and c, as well as the analysis in Figs. \[f:e1-MR\]a and b, confirm this hypothesis: AR consistent is much more accurate than AR collimated, and for a consistent focal distance, real and AR targets do not differ in slope (Fig. \[f:e1-MR\]b). The final purpose (3) was to test whether presenting AR objects at a focal distance equal to the midpoint of the tested range would result in similar performance as the consistent condition. Figs. \[f:e1-err\]c and d, as well as the analysis in Figs. \[f:e1-MR\]c and d, indicate that, when the focus was set to the midpoint, matching was indeed just as accurate. ![For the *AR collimated* condition, the change in vergence angle $\Delta v = \alpha - \beta$ (Fig. \[f:model\]), when an observer has matched the depth of the virtual target $\beta$ with the real pointer $\alpha$ (Fig. \[f:haplo-exp\]). 
Each line in each panel is a different observer. For all $N = 30$ observers, $\Delta v$ is approximately constant across all tested distances. The boxplot gives the value for the median observer.[]{data-label="f:vdist"}](verg-dist){width="1\FigWidth"} Experiment II: Age {#s:exII} ================== As discussed in Section \[s:age\], increasing age leads to presbyopia, a decline in the ability of the eyes to accommodate to different focal distances. Experiment I found significant negative effects of collimation, but all of the observers were young, with a mean age of 20.9, and therefore likely not presbyopic. In addition, as discussed in Section \[s:age\], although older people are worse than younger people at many perceptual tasks, recent studies have found that older people preserve their abilities in many tasks related to distance perception. Therefore, it was unclear if older observers would replicate the effects observed in Experiment I (Fig. \[f:e1-err\]). Furthermore, this work was primarily inspired by medical AR applications, and the majority of medical professionals are old enough to suffer some degree of presbyopia. Therefore, the purpose of Experiment II was to replicate Experiment I, using presbyopic observers, aged 40 and older. Method ------ Other than the age of the observers, the methods of Experiment II were identical to Experiment I. 40 *observers* were recruited from a population of university and community members. The observers ranged in age from 41 to 80; the mean age was 55.6, and 19 were male and 21 female. 6 observers were paid \$10 an hour, 33 were paid \$12 an hour, and one was not paid. Each observer completed $5\ \mbox{(distance)} \times 6\ \mbox{(repetition)} = 30\ \mbox{trials}$, and the experiment collected a total of $40\ \mbox{(observers)} \times 30\ \mbox{(trials)} = 1200\ \mbox{data points}$. Results ------- Fig.
\[f:jd\]e–h shows the results from Experiment II as scatterplots; the $r^{2}$ values indicate that the data continues to be very well fit by regression equations. Fig. \[f:e2-err\] shows the results as error, with Experiment I’s results also shown for comparison. Figs. \[f:e12-MR\] and \[f:e2-MR\] show the results of multiple regression analysis. [[*Older and younger only differ in AR collimated*]{}:]{} Fig. \[f:e12-MR\] compares Experiment I to Experiment II condition by condition. For the AR collimated condition (Fig. \[f:e12-MR\]b), the slopes do not significantly differ ($F_{1,96} = 0.38, p = 0.54$), but the intercepts do ($F_{1,97} = 35.7, p < 0.001$); the older observers matched collimated AR targets a constant $-$1.1 cm closer to themselves than the younger observers (Fig. \[f:e2-err\]b). For the remaining conditions, the effect size of the difference between Experiments I and II, 0.013% (Fig. \[f:e12-MR\]a), 0.031% (Fig. \[f:e12-MR\]c), and 0.056% (Fig. \[f:e12-MR\]d), is too small for any statistical differences to be meaningful. Therefore, for the real consistent, AR consistent, and AR midpoint conditions, the results for the older observers and the younger observers are equivalent. [[*Real consistent very accurate*]{}:]{} Fig. \[f:e2-err\]a indicates that older observers were very accurate when matching the distance of real targets. The mean error is $+0.4$ mm, and the slope of the linear model for Fig. \[f:e2-err\]a, $y = -0.012x + 0.60$, does not significantly differ from 0 ($F_{1,48} = 2.5, p = 0.12$). Note that this is statistically equivalent to testing whether the slope in Fig. \[f:jd\]e differs from 1. [[*AR collimated increasingly overestimated*]{}:]{} For the older observers, when AR collimated is compared to real consistent (Fig. \[f:e2-MR\]a), the slopes significantly differ ($F_{1,96} = 13.8, p < 0.001$); the AR collimated errors ranged from $-$0.7 to $+$0.9 cm (Fig. \[f:e2-err\]b). 
[[*AR consistent underestimated*]{}:]{} For the older observers, when AR consistent is compared to real consistent (Fig. \[f:e2-MR\]b), the slopes do not significantly differ ($F_{1,96} = 0.56, p = 0.46$), but the intercepts do ($F_{1,97} = 16.6, p < 0.001$); the AR consistent targets were underestimated by a constant $-$0.3 cm (Fig. \[f:e2-err\]c). [[*AR midpoint equivalent to real consistent*]{}:]{} For the older observers, when AR midpoint is compared to real consistent (Fig. \[f:e2-MR\]c), the effect size is 0.0093%, which is too small for any statistical differences to be meaningful. Therefore, the AR midpoint targets were accurately matched (Fig. \[f:e2-err\]d). [[*AR consistent and AR midpoint equivalent*]{}:]{} For the older observers, when AR consistent is compared to AR midpoint (Fig. \[f:e2-MR\]d), the effect size is 0.036%, also too small for any statistical differences to be meaningful. Therefore, the matches of the AR consistent and AR midpoint targets were equivalent (Fig. \[f:e2-err\]c, d). ![Experiment II: older observers, plotted as mean error against actual distance $(N = 1200)$. For comparison, Experiment I’s results are also shown in light grey, offset along the $x$-axis for clarity.[]{data-label="f:e2-err"}](exp-2-errg){width="1\FigWidth"} ![Experiment I versus II, the effect of age, multiple regression analysis, with $N = 400$ ghosted data points. See the caption for Fig \[f:e1-MR\].[]{data-label="f:e12-MR"}](exp-12-MRg){width="1\FigWidth"} ![Experiment II, multiple regression analysis, older observers, with $N = 200$ ghosted data points. See the caption for Fig \[f:e1-MR\].[]{data-label="f:e2-MR"}](exp-2-MRg){width="1\FigWidth"} Discussion ---------- The purpose of Experiment II was to replicate Experiment I, using older, presbyopic observers. 
According to Duane [@duane:1912], the younger observers in Experiment I had an expected near focus of $\sim$8.3 cm ($\sim$11.5 D), while for these older observers the expected near focus was $\sim$68 cm ($\sim$1.5 D). Experiment II’s results only differ for the AR collimated condition. For collimated targets, older observers showed less overestimation than younger observers, with matches shifted towards the observer by a constant $-$1.1 cm (Fig. \[f:e12-MR\]b). Older observers had a mean error of $+$0.12 cm, while younger observers had a mean error of $+$1.2 cm (Fig. \[f:e2-err\]b), and therefore older observers were on average *more* accurate than younger observers. However, the slope, $b =$ 1.072, is the same for both sets of observers (Fig. \[f:e12-MR\]b), and differs significantly from the slope for the real consistent condition (Figs. \[f:e1-MR\]a and \[f:e2-MR\]a). Therefore, for both younger and older observers, matches of collimated targets were inaccurate, and increasingly overestimated with increasing distance. In addition, for each distance, Fig. \[f:vdist\]b shows $\Delta v$, the change in vergence angle for the 10 older AR collimated observers. For 9 of the 10 observers $\Delta v$ changes less than 0.6$^{\circ}$, for the outlying observer it changes 1.4$^{\circ}$, and the median observer changes less than 0.25$^{\circ}$. These small angular changes are consistent with the hypothesis that, for both younger and older observers, the vergence angle bias is constant. For the other conditions, the observers’ age—and therefore the observers’ ability to accommodate to different focal demands—made no difference. Older observers were just as accurate as younger observers in matching the distance to real targets, as well as to AR targets with both consistent and midpoint focal cues.
These results are consistent with previous work that has found that older observers preserve their abilities in many tasks related to distance perception (Bian and Andersen [@bian:2013]). Experiment III: Brightness {#s:exIII} ========================== A conflicting finding from both experiments, however, is that AR consistent was underestimated, while AR midpoint was accurate *and* AR consistent was equivalent to AR midpoint. This is true for both Experiment I (Fig. \[f:e1-MR\]) and Experiment II (Fig. \[f:e2-MR\]). These conflicting statistical results are likely due to the small effect size of AR consistent’s underestimation ($d = -$0.4 and $d = -$0.3 cm, respectively). Nevertheless, the underestimation is statistically significant, and was replicated among 20 observers with widely varying ages. As discussed in Section \[s:bright\], brighter objects appear closer than similar-sized dimmer objects. Figs. \[f:bright\]a and b show photographs, taken through the haploscope optics, of the real and AR targets used in Experiments I and II. The AR target appeared brighter than the real target.[^7] For Experiment III, the brightness was reduced until the AR and real targets appeared to have the same brightness (Fig. \[f:bright\]c). The purpose of Experiment III was to determine if the dim AR target would increase the accuracy of the AR consistent condition. ![Experiment III examined target brightness. (a) The real target. (b) The bright AR target used in Experiments I and II. (c) The dim AR target used in Experiment III.[]{data-label="f:bright"}](bright){width="0.9\FigWidth"} Method ------ Other than the brightness of the AR target, the methods of Experiment III were identical to Experiment I. Because the real target object did not change, that condition was not replicated. To facilitate comparison with Experiment I, younger observers were recruited from a population of university students and staff.
The 30 *observers* ranged in age from 17 to 24; the mean age was 19.8, and 21 were male and 9 female. 6 observers were paid \$12 an hour, and the rest received course credit. Each observer completed $5\ \mbox{(distance)} \times 6\ \mbox{(repetition)} = 30\ \mbox{trials}$, and the experiment collected a total of $30\ \mbox{(observers)} \times 30\ \mbox{(trials)} = 900\ \mbox{data points}$. Results ------- Fig. \[f:jd\]j–l shows the results from Experiment III as scatterplots; the $r^{2}$ values indicate that the data continues to be very well fit by regression equations. Fig. \[f:e3-err\] shows the same results as error, with Experiment I’s results also shown for comparison. Figs. \[f:e13-MR\] and \[f:e3-MR\] show the results of multiple regression analysis. [[*Dim targets differ in AR consistent and AR midpoint*]{}:]{} Fig. \[f:e13-MR\] compares Experiment I to Experiment III condition by condition. For the AR collimated condition (Fig. \[f:e13-MR\]a), the effect size of the difference is 0.00027%, much too small for any statistical differences to be meaningful. Therefore, the results for the dim targets and the bright targets are equivalent (Fig. \[f:e3-err\]b). For the AR consistent condition (Fig. \[f:e13-MR\]b), the slopes significantly differ ($F_{1,96} = 7.7, p = 0.007$), and therefore the dim targets were matched $+$0.2 to $+$0.9 cm farther than the bright targets (Fig. \[f:e3-err\]c). And finally, for the AR midpoint condition (Fig. \[f:e13-MR\]c), the slopes do not significantly differ ($F_{1,96} < 0.01, p = 0.96$), but the intercepts do ($F_{1,97} = 17.7, p < 0.001$), and therefore the dim targets were matched $+$0.4 cm farther than the bright targets (Fig. \[f:e3-err\]d). [[*AR collimated increasingly overestimated*]{}:]{} When dim AR collimated is compared to real consistent from Experiment I (Fig. 
\[f:e3-MR\]a), the slopes significantly differ ($F_{1,96} = 5.5, p = 0.021$), indicating that the dim AR collimated targets were overestimated from $+$0.8 to $+$1.9 cm (Fig. \[f:e3-err\]b). [[*AR consistent equivalent to real consistent*]{}:]{} When dim AR consistent is compared to real consistent from Experiment I (Fig. \[f:e3-MR\]b), the effect size of the difference is 0.031%, which is too small for any statistical differences to be meaningful. Therefore, the dim AR consistent targets were accurately matched (Fig. \[f:e3-err\]c). [[*AR midpoint equivalent to real consistent*]{}:]{} When dim AR midpoint is compared to real consistent from Experiment I (Fig. \[f:e3-MR\]c), the effect size of the difference is 0.025%, which is too small for any statistical differences to be meaningful. Therefore, the dim AR midpoint targets were accurately matched (Fig. \[f:e3-err\]d). [[*AR consistent and AR midpoint equivalent*]{}:]{} When dim AR consistent is compared to dim AR midpoint (Fig. \[f:e3-MR\]d), the effect size is 0.012%, also too small for any statistical differences to be meaningful. Therefore, the matches of dim AR consistent and dim AR midpoint targets were equivalent (Fig. \[f:e3-err\]c, d). ![Experiment III: dim targets, plotted as mean error against actual distance $(N = 900)$. For comparison, Experiment I’s results are also shown in light grey, offset along the $x$-axis for clarity.[]{data-label="f:e3-err"}](exp-3-errg){width="1\FigWidth"} ![Experiment I versus III, the effect of brightness, multiple regression analysis, with $N = 300$ ghosted data points. See the caption for Fig \[f:e1-MR\].[]{data-label="f:e13-MR"}](exp-13-MRg){width="1\FigWidth"} ![Experiment III, multiple regression analysis, dim targets, with $N = 200$ ghosted data points. See the caption for Fig \[f:e1-MR\]. 
The real consistent data is repeated from Experiment I.[]{data-label="f:e3-MR"}](exp-3-MRg){width="1\FigWidth"} Discussion ---------- The purpose of Experiment III was to determine if a dim AR target, which has the same apparent brightness as the real target (Fig. \[f:bright\]), would increase the accuracy of the AR consistent condition. When comparing Experiment III to Experiment I, there was no difference in matching AR collimated targets, but increased accuracy for both AR consistent and AR midpoint targets (Fig. \[f:e13-MR\]). In addition, when combined with the real consistent data from Experiment I, matches for the dim AR target were overestimated in the AR collimated condition, but accurate in both the AR consistent and AR midpoint conditions (Figs. \[f:e3-MR\]). Therefore, for the AR consistent and AR midpoint conditions, the dim AR targets were as accurately matched as the real targets (Fig. \[f:e3-err\]). In addition, the pattern of AR consistent being underestimated, while AR midpoint was accurate *and* AR consistent was equivalent to AR midpoint, occurred for both the younger observers of Experiment I (Fig. \[f:e1-MR\]) and the older observers of Experiment II (Fig. \[f:e2-MR\]). While Experiment III only tested younger observers, the results are consistent with the hypothesis that older observers would also accurately match the depth of dim AR consistent and dim AR midpoint targets. Finally, for the dim AR collimated targets, for each distance Fig. \[f:vdist\]c shows $\Delta v$, the change in vergence angle, for the 10 AR collimated observers. For all observers $\Delta v$ changes less than 0.4$^{\circ}$, and the median observer changes less than 0.12$^{\circ}$. These small angular changes are consistent with the hypothesis that, for the dim AR targets, the vergence angle bias is still constant. 
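Throughout these analyses, conditions are compared by testing the homogeneity of regression slopes and intercepts (e.g., $F_{1,96} = 7.7$, $p = 0.007$), using the multiple regression methods noted in footnote 5. The following is a minimal sketch of such a slope-homogeneity test via dummy-variable regression; the function name and simulated data are illustrative, not the authors' analysis code:

```python
import numpy as np
from scipy import stats

def slope_homogeneity_test(x1, y1, x2, y2):
    """F-test for equal slopes across two conditions: compare a full
    model with separate slopes (group-by-x interaction term) against
    a reduced model constrained to a single common slope."""
    x = np.concatenate([x1, x2])
    y = np.concatenate([y1, y2])
    g = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])  # group dummy

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        return resid @ resid

    n = len(y)
    full = np.column_stack([np.ones(n), x, g, x * g])  # separate slopes
    reduced = np.column_stack([np.ones(n), x, g])      # common slope
    df_full = n - 4
    F = (rss(reduced) - rss(full)) / (rss(full) / df_full)
    p = stats.f.sf(F, 1, df_full)
    return F, df_full, p
```

With 50 judged distances per condition, the test has 1 and 96 degrees of freedom, matching the $F_{1,96}$ statistics reported above; intercepts can be compared analogously by testing the group dummy in the common-slope model.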
General Discussion ================== [[**Constant Vergence Angle Bias:**]{}]{} As discussed in Section \[s:intro\], Swan [et al.]{} [@swan:2015] found that the AR collimated condition caused overestimation that increased linearly with distance, and proposed that this was caused by the collimation biasing the eyes’ vergence angle to rotate outwards by a constant amount. All of the experiments reported here replicated this result, and strongly support this hypothesis. These findings are also consistent with the prediction, by Mon-Williams and Tresilian [@monwilliams:2000], that an inconsistent accommodative cue would bias perceived depth in the same direction as the accommodative cue (Fig. \[f:verg-acc-con\]). However, Swan [et al.]{} [@swan:2015] did not measure this vergence angle change, and it was not measured here. In a future experiment, it should be directly measured. [[**Dim Targets:**]{}]{} The experiments found the most accurate matches for dim AR targets, which more closely matched the brightness of the real targets. These results are consistent with previous work that finds brighter objects appear closer than dimmer objects (Ashley [@ashley:1898], Farnè [@farne:1977], Coules [@coules:1955]). However, it is interesting that the error for matching the dim AR targets disappeared, even though the error was calculated *between-subjects*: in all of the experiments, different groups of 10 observers saw the real targets, the bright AR targets, and the dim AR targets. It would have been less surprising to have found these errors in a within-subjects design, where observers made a judgment about two targets with different brightnesses, viewed simultaneously (e.g., Ashley [@ashley:1898], Farnè [@farne:1977], Coules [@coules:1955]). The errors may be related to the fact that AR targets are drawn with impoverished depth cues, and therefore brightness could be directly biasing the vergence angle. 
If this hypothesis is true, it would be another component of accurate depth presentation that must be considered by AR practitioners. A future experiment should examine whether the brightness of an AR object directly influences vergence angle. [[**Midpoint Accommodative Stimulus:**]{}]{} While the AR consistent condition was accurate for the dim AR targets, the AR midpoint condition was accurate across all of the experiments. It is not clear why AR midpoint was accurate at both brightness levels, while AR consistent was not. Nevertheless, the practical implication is that, because the AR midpoint condition was at least as accurate as the AR consistent condition, for AR applications requiring accurate near-field depth matching, it is sufficient for the focal demand to be set to the middle of the working volume. However, the positive results for the AR midpoint condition suggest comparison with light-field displays, which can simultaneously present multiple virtual objects at different focal distances. In addition to solving the vergence-accommodation conflict, light-field displays are predicted to eventually become the dominant technology for all kinds of 3D experiences (Balram [@balram:2014]). However, although the technology is rapidly developing, AR light-field displays face many fundamental challenges and design tradeoffs, in areas such as depth range, color resolution, spatial resolution, computational demands, and data throughput requirements (Wu [et al.]{} [@wu:2014]). Therefore, the AR midpoint results suggest that the level of engineering complexity required for these kinds of displays may not be necessary, especially for AR applications where the most important perceptual task is accurate matching at near-field distances (e.g., Edwards [@edwards:2000], Krempien [et al.]{} [@krempien:2008]). [[**Future Work:**]{}]{} As discussed in this section, errors detected in these experiments are likely due to vergence angle biases. 
Therefore, useful future work would replicate these experiments while measuring vergence angle. Possible methods for making this measurement include binocular eye tracking (Wang [et al.]{} [@duchowski:2014]), or nonius line methods (Ellis and Menges [@ellis:1998]). In addition, because the AR haploscope was mounted to a tabletop, these experiments could not examine the depth cue of motion perspective. Although some AR applications, such as the operating microscope described by Edwards [et al.]{} [@edwards:2000], are also mounted and therefore lack motion perspective, it is a very salient depth cue (Nagata [@nagata:1991], Cutting and Vishton [@cutting:1995]), and should be examined in future experiments. A head-mounted AR haploscope, such as the one used by McCandless [et al.]{} [@mccandless:2000], would allow a replication of these experiments that included motion perspective. Practical Implications ====================== For accurate near-field depth matching, the experiments reported here have the following implications: [[****]{}]{} Collimated graphics should not be used. A focal distance set to the middle of the depth range is as good as a focal distance optimized for every virtual object. [[****]{}]{} The brightness of virtual objects needs to match the brightness of real objects. [[****]{}]{} Observers old enough to suffer age-related reductions in accommodative ability are just as accurate as younger observers. Acknowledgments {#acknowledgments .unnumbered} =============== This material is based upon work supported by the National Science Foundation, under awards IIS-0713609, IIS-1018413, and IIS-1320909, to J. E. Swan II. [Gurjot Singh]{} Biography text here. [Stephen R. Ellis]{} Biography text here. [J. Edward Swan II]{} Biography text here. [^1]: Manuscript received XXXX; revised XXXX.
[^2]: The authors of the current paper, who have been studying virtual and augmented reality for 8, 30, and 18 years, respectively, can only recall a single commercially-available AR display—the Microvision Nomad from the early 2000’s— which came with a focus adjustment knob (Gabbard [et al.]{} [@gabbard:2017]). [^3]: Portions of these experiments are reported in a PhD dissertation (Singh [@singh:2013]). [^4]: Additional technical details, and a history of preliminary versions and design tradeoffs, can be found in Singh [@singh:2013]. [^5]: Custom analysis software, developed by the third author, was used, which implements methods described by Pedhazur [@pedhazur:1982]. A more detailed discussion of the application of multiple regression methods to depth perception data is available in Swan [et al.]{} [@swan:2015]. [^6]: $\Delta v = \alpha - \beta$, $\alpha = 2 \arctan(i/2x)$, and $\beta = 2 \arctan(i/2y)$, where $i$ is the observer’s inter-pupillary distance, $x$ is the actual target distance, and $y$ is the judged distance (Fig. \[f:jd\]). Note that using $x$ assumes that observers would match a real object with perfect accuracy, but the very accurate and precise results for the real consistent condition suggest this assumption is reasonable. [^7]: Note that brightness is the perceptual experience of luminance, and cannot be directly measured or captured with a camera. The luminance of the targets was measured (Singh [@singh:2013]).
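The vergence geometry in footnote 6 can be evaluated directly. A minimal sketch; the 6.2 cm inter-pupillary distance in the example call is an illustrative value, not a measurement from the paper:

```python
import math

def vergence_change_deg(ipd_cm, actual_cm, judged_cm):
    """Footnote 6: Delta v = alpha - beta, in degrees, where
    alpha = 2*arctan(i/2x) is the vergence angle at the actual
    distance x, and beta = 2*arctan(i/2y) at the judged distance y."""
    alpha = 2.0 * math.atan(ipd_cm / (2.0 * actual_cm))
    beta = 2.0 * math.atan(ipd_cm / (2.0 * judged_cm))
    return math.degrees(alpha - beta)

# A match 2 cm beyond a 40 cm target (hypothetical numbers):
dv = vergence_change_deg(6.2, 40.0, 42.0)
```

Overestimation (judged distance beyond the actual distance) yields a positive $\Delta v$, consistent with the outward vergence bias attributed to collimation.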
\ Shang-Gang Shi$^{1}$, Yun-Song Piao$^{1}$  and  Cong-Feng Qiao$^{1,2}$\ **Abstract** [In this work we study the cosmological evolution of a dark energy model with two scalar fields, i.e., the tachyon and the phantom tachyon. This model enables the equation of state $w$ to change from $w>-1$ to $w<-1$ in the evolution of the universe. The phase-space analysis for such a system with inverse square potentials shows that there exists a unique stable critical point, which has power-law solutions. In this paper, we also study another form of the tachyon-quintom model with two fields, which explicitly involves interactions between the two fields.]{} Introduction ============ Recent observational data [@PR97; @S03; @R04] strongly indicate that the Universe is spatially flat and accelerating at the present time. Within the framework of general relativity, cosmic acceleration can be sourced by an energy-momentum tensor which has a large negative pressure, called dark energy (see Ref. [@CLW06] for a recent review). The simplest candidate for dark energy seems to be a small positive cosmological constant, but it suffers from difficulties associated with the fine-tuning and coincidence problems. These problems can be alleviated in models of dynamically evolving dark energy called quintessence [@Zlatev; @99], which have tracker-like properties, in which the energy density of the field tracks that of the background energy density before dominating today. The phantom, whose kinetic energy term has the reverse sign, has also been proposed as a candidate of dynamical dark energy [@Caldwell02]. An enormous variety of dark energy models has been suggested in the literature; see Ref. [@SS00] for reviews. Analyses of the properties of dark energy from recent observations mildly favor models with $\omega$ crossing $-1$ in the near past [@Alam04; @FWZ05]. However, neither quintessence nor phantom alone can realize this transition.
The quintom scenario of dark energy is designed to understand the nature of dark energy with $\omega$ crossing $-1$. The first model of the quintom scenario of dark energy was given in Ref. [@FWZ05] with two scalar fields, where one is quintessence and the other is phantom. This model has been studied in detail later on [@FLPZ06; @RGCai; @XinZhang; @AKV; @Setare; @Chimento], and recently a new type of quintom model inspired by string theory has also been proposed, which has only a single scalar field [@Yi-Fu; @Cai08]. The role of the rolling tachyon [@Sen02] in string theory has been widely studied in cosmology; see Refs. [@Gibbons02; @FT2002], and especially Refs. [@P02; @CGJP04; @AL04; @CGST04] for dark energy. Some sort of tachyon condensate may be described by an effective field theory with a Lagrangian density $\mathcal{L}=-V(\phi)\sqrt{1+g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi}$. It can act as a source of dark matter or of the inflaton field. Meanwhile, the tachyon can also act as a source of dark energy, depending upon the form of the tachyon potential. However, as compared to canonical quintessence, tachyon models require more fine-tuning to agree with observations. In Ref. [@HL03], the authors consider a Born-Infeld-type Lagrangian with a negative kinetic energy term. The Lagrangian density they choose is $\mathcal{L}=-V(\varphi)\sqrt{1-g^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi}$. It is clear that for a spatially homogeneous scalar field, the equation of state $\omega=-1-\dot{\varphi}^2$ will be less than $-1$ unless the kinetic energy term $\dot{\varphi}^2= 0$. This field is called the phantom tachyon; see, for example, Ref. [@TS04]. In principle, we can consider multi-field models including multiple tachyons and multiple phantom tachyons.
However, without loss of generality, we only consider the case of one tachyon and one phantom tachyon, since it is the simplest case in which the equation of state $w$ crosses $-1$ during the evolution of the universe, and it shows most of the central ideas of such models. Because of the quintom-like behavior it shows, we call it the tachyon-quintom for convenience. This paper is organized as follows: in section II we study in detail the tachyon-quintom model with inverse square potentials. The numerical analysis shows this model is not sensitive to the initial kinetic energy densities of the tachyon and phantom tachyon, and we give the reason in detail. Then the phase-space analysis of the spatially flat FRW models shows that there exists a unique stable critical point, and we compare it with the tachyon model; in section III we present another two-field model which includes the interaction between the two fields; section IV is a summary. The Tachyon-Quintom Model ========================= A. The tachyon-quintom model {#a.the-tachyon-quintom-model .unnumbered} ----------------------------- We assume a four-dimensional, spatially-flat Friedmann-Robertson-Walker Universe filled by a homogeneous tachyon $\phi$ with potential $V(\phi)$, a homogeneous phantom tachyon $\varphi$ with potential $V(\varphi)$, and a fluid with barotropic equation of state $p_\gamma=(\gamma-1)\rho_\gamma$, $0<\gamma\leq2$, such as radiation ($\gamma=4/3$) or dust matter ($\gamma=1$). In this section, we turn our attention to the possibility of the tachyon and phantom tachyon as a source of the dark energy.
The action for such a system is given by $$S= \int d^4x\sqrt{-g} \left( {\frac {M_p^2\mathcal{R}}{2}+\mathcal{L}_\phi +\mathcal{L}_\varphi+\mathcal{L}_m}\right)$$ where $M_p$ is the reduced Planck mass, $\mathcal{R}$ is the scalar curvature, $\mathcal{L}_m$ represents the Lagrangian density of matter fields, and $$\label{Lagrangian density1} \mathcal{L}_\phi=-V(\phi)\sqrt{1+g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi}$$ $$\label{Lagrangian density2} \mathcal{L}_\varphi=-V(\varphi)\sqrt{1-g^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi}.$$ We now restrict to spatially homogeneous, time-dependent solutions for which $\partial_i\phi =\partial_i\varphi = 0$. Thus the energy densities and pressures of the fields $\phi$ and $\varphi$ are given, respectively, by $$\begin{aligned} \label{e1}\rho_\phi=\frac{V(\phi)}{\sqrt{1-\dot{\phi}^2 }},~~~~p_\phi=-V(\phi)\sqrt{1-\dot{\phi}^2 } , \\ \label{e2}\rho_\varphi=\frac{V(\varphi)}{\sqrt{1+\dot{\varphi}^2 }}, ~~~~p_\varphi=-V(\varphi)\sqrt{1+\dot{\varphi}^2 }\end{aligned}$$ Here a dot denotes differentiation with respect to synchronous time.
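Dividing the pressures by the energy densities in Eqs. (\[e1\]) and (\[e2\]) makes the role of each field explicit: $$\omega_\phi=\frac{p_\phi}{\rho_\phi}=-1+\dot{\phi}^2\geq-1,~~~~\omega_\varphi=\frac{p_\varphi}{\rho_\varphi}=-1-\dot{\varphi}^2\leq-1,$$ so the tachyon alone cannot reach $\omega<-1$ and the phantom tachyon alone cannot reach $\omega>-1$; only their combination allows $\omega$ to cross $-1$.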
The background equations of motion are $$\label{eq1} \frac{\ddot{\phi}}{1-\dot{\phi}^2}+3H\dot{\phi}+\frac{1}{V(\phi)}\frac{dV(\phi)}{d\phi}=0$$ $$\label{eq2}\frac{\ddot{\varphi}}{1+\dot{\varphi}^2}+3H\dot{\varphi}-\frac{1}{V(\varphi)}\frac{dV(\varphi)}{d\varphi}=0$$ $$\dot{\rho}_\gamma=-3\gamma H\rho_\gamma$$ $$\begin{array}{l}\label{Hubble} \dot{H}=-\frac{1}{2M_p^2}(\rho_\phi+p_\phi+\rho_\varphi+p_\varphi+\rho_\gamma+p_\gamma)\\~~ =-\frac{1}{2M_p^2} \left( {\frac{\dot{\phi}^2V(\phi)}{\sqrt{1-\dot{\phi}^2}}-\frac{\dot{\varphi}^2V(\varphi)}{\sqrt{1+\dot{\varphi}^2}}+\gamma\rho_\gamma} \right)\end{array}\\$$ together with a constraint equation for the Hubble parameter: $$\label{Hubbleeq} H^2=\frac{1}{3M_p^2}\left( {\frac{V(\phi)}{\sqrt{1-\dot{\phi}^2}}+\frac{V(\varphi)}{\sqrt{1+\dot{\varphi}^2}}+ \rho_\gamma} \right)$$ The potentials we consider are inverse square potentials: $$V(\phi)=M^2_\phi\phi^{-2},~~~V(\varphi)=M^2_\varphi\varphi^{-2}$$ These potentials allow constructing an autonomous system  [@CLW97; @HW02] using the evolution equations, and give power-law solutions. The cosmological dynamics of the tachyon field with inverse square potential was studied in Refs. [@P02; @AL04; @CGST04]. Interestingly, the inverse square potential plays the same role for tachyon fields as the exponential potential does for standard scalar fields. We define the following dimensionless quantities: $$x_\phi\equiv\dot{\phi},~~y_\phi\equiv\frac{V(\phi)}{3H^2M_p^2}, ~~x_\varphi\equiv\dot{\varphi},~~y_\varphi\equiv\frac{V(\varphi)}{3H^2M_p^2},~~z\equiv\frac{\rho_\gamma}{3H^2M_p^2}$$ Now the Eqs.
(\[Hubbleeq\]) and (\[Hubble\]) can be rewritten as follows: $$\label{Hubble1} 1=\frac{y_\phi}{\sqrt{1-x^2_\phi}}+\frac{y_\varphi}{\sqrt{1+x^2_\varphi}}+z\equiv\Omega_{DE}+z$$ $$\label{Hubbleeq1} \frac{H'}{H}=-\frac{3}{2}\left( {-\frac{y_\phi (\gamma-x^2_\phi)}{\sqrt{1-x^2_\phi}}-\frac{y_\varphi (\gamma+x^2_\varphi)}{\sqrt{1+x^2_\varphi}}+\gamma} \right)$$ where $\Omega_{DE}$ measures the dark energy density as a fraction of the critical density, and a prime denotes a derivative with respect to the logarithm of the scale factor, $N={\rm ln}\,a$. Then the evolution Eqs. (\[eq1\]) and (\[eq2\]) can be written as an autonomous system: $$\label{yundong1} x'_\phi=-3(x_\phi-\sqrt{\beta_\phi y_\phi })(1-x_\phi^2)$$ $$\label{yundong2} y'_\phi=3 y_\phi\left( {-\frac{y_\phi (\gamma-x^2_\phi)}{\sqrt{1-x^2_\phi}}-\frac{y_\varphi (\gamma+x^2_\varphi)}{\sqrt{1+x^2_\varphi}}-\sqrt{\beta_\phi y_\phi }x_\phi+\gamma} \right)$$ $$\label{yundong3} x'_\varphi=-3(x_\varphi+\sqrt{\beta_\varphi y_\varphi })(1+x_\varphi^2)$$ $$\label{yundong4} y'_\varphi=3y_\varphi\left( {-\frac{y_\phi (\gamma-x^2_\phi)}{\sqrt{1-x^2_\phi}}-\frac{y_\varphi (\gamma+x^2_\varphi)}{\sqrt{1+x^2_\varphi}}-\sqrt{\beta_\varphi y_\varphi }x_\varphi+\gamma} \right)$$ where $$\beta_\phi=\frac{4M_p^2}{3M_\phi^2},~~~\beta_\varphi=\frac{4M_p^2}{3M_\varphi^2}$$ The equation of state of the dark energy is $$\label{w} \omega=\frac{-V(\phi)\sqrt{1-\dot{\phi}^2}-V(\varphi)\sqrt{1+\dot{\varphi}^2}}{\frac{V(\phi)}{\sqrt{1-\dot{\phi}^2}}+\frac{V(\varphi)}{\sqrt{1+\dot{\varphi}^2}}} =\frac{-y_\phi\sqrt{1-x_\phi^2}-y_\varphi\sqrt{1+x_\varphi^2}}{\frac{y_\phi}{\sqrt{1-x^2_\phi}}+\frac{y_\varphi}{\sqrt{1+x^2_\varphi}}}$$ B. Numerical analysis {#b.numerical-analysis .unnumbered} --------------------- Mapping between the number of $e$-foldings and the redshift, $z\equiv a_0/a-1=1/a-1$, we note that at big bang nucleosynthesis (BBN) $N_{\rm BBN}\approx-20$ ($z\approx 10^9$), and at matter-radiation equality $N_{\rm eq}\approx -8$ ($z\approx 3200$).
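With $a_0=1$ the mapping is simply $z=e^{-N}-1$; a quick numerical check of the quoted epochs (a sketch for orientation, not part of the analysis):

```python
import math

def redshift(N):
    """z = a0/a - 1 = exp(-N) - 1, with a0 = 1 and N = ln(a)."""
    return math.exp(-N) - 1.0

z_bbn = redshift(-20.0)  # ~ 5e8, of order 10^9, as quoted for BBN
z_eq = redshift(-8.0)    # ~ 3e3, close to the quoted z ~ 3200
```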
We choose $N=-8$ as the initial number of e-folds, so choosing $\gamma=1$ in Eqs. (\[yundong2\]) and (\[yundong4\]) is a good approximation. The evolutions of $\omega$ and $\Omega_{DE}$ are shown in Fig.\[figure1\]. In Ref. [@BHM01], the authors use standard Big Bang Nucleosynthesis and the observed abundances of primordial nuclides to give a constraint on scalar matter: $\Omega_{DE}<0.045$ at temperatures near 1 MeV. The initial values of $x_\phi,y_\phi,x_\varphi$ and $y_\varphi$ given below safely satisfy this requirement, since $\Omega_{DE}\leq 6\times10^{-7}$ at $N=-8$, and the energy densities of $\phi$ and $\varphi$ decrease more slowly than that of the fluid $(\omega_\phi=\frac{p_\phi}{\rho_\phi}=-1+\dot{\phi}^2<0, \omega_\varphi=\frac{p_\varphi}{\rho_\varphi}=-1-\dot{\varphi}^2\leq-1 )$. So at temperatures near 1 MeV, $\Omega_{DE}$ is even smaller. ![\[figure1\] Evolution of the equation of state ($\omega$)and density parameters($\Omega_{DE}$) as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5 \times10^{-11}$.](fig1.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure1\] Evolution of the equation of state ($\omega$)and density parameters($\Omega_{DE}$) as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b.
dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5 \times10^{-11}$.](fig2.eps "fig:"){height="2.5in" width="3.2in"} From these figures we can see that this model is not sensitive to the initial kinetic energy densities of the tachyon and phantom tachyon ($x_\phi=\dot{\phi},x_\varphi=\dot{\varphi}$). When $x_{\phi i}=0.9999999,~y_{\phi i}=6\times10^{-11},~x_{\varphi i}=2.0,~y_{\varphi i}=6.5\times10^{-11}$, the current $\omega$ and $\Omega_{DE}$ are $-1.025896$ and $0.722306$, respectively. When $x_{\phi i}=1\times10^{-12},~y_{\phi i}=6\times10^{-11},~x_{\varphi i}=1\times10^{-12},~y_{\varphi i}=6.5\times10^{-11}$, the current $\omega$ and $\Omega_{DE}$ are $-1.025841$ and $0.722367$, respectively. From Eq. (\[e1\]), we see that varying the initial energy density of the tachyon by nearly four orders of magnitude is still consistent with current observational constraints. But the initial potential energy densities of the tachyon and phantom tachyon require fine-tuning to agree with observations. We now explain roughly how solutions converge to a common solution for the different initial conditions given in Fig.\[figure1\]. (In the next subsection, we will see that there is only one stable critical point, so it is not surprising that the solutions converge to a common cosmic evolutionary track.) ![\[figure2\] Evolution of the  $x_\phi,~y_\phi,~x_\varphi$ and  $y_\varphi$  as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c.
dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig3.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure2\] Evolution of the  $x_\phi,~y_\phi,~x_\varphi$ and  $y_\varphi$  as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig4.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure2\] Evolution of the  $x_\phi,~y_\phi,~x_\varphi$ and  $y_\varphi$  as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig5.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure2\] Evolution of the  $x_\phi,~y_\phi,~x_\varphi$ and  $y_\varphi$  as a function of N for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$) :  a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$;  b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$;  c. 
dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig6.eps "fig:"){height="2.5in" width="3.2in"} From Fig.\[figure2\], we can see the evolution of $x_\phi,~y_\phi,~x_\varphi$ and $y_\varphi$ as functions of N. We observe that the $y_\phi$ and $y_\varphi$ evolutionary tracks are not sensitive to the initial conditions of $x_\phi$ and $x_\varphi$ given in Fig.\[figure1\]. This can be seen from Eqs. (\[yundong2\]) and (\[yundong4\]). The initial values of $x_\phi,~y_\phi,~x_\varphi$ and $y_\varphi$ are so small that we can safely neglect the first three terms on the right-hand sides of Eqs. (\[yundong2\]) and (\[yundong4\]). So when $-8<N<-2$, we get $y'_\phi\approx3 \gamma y_\phi,~y'_\varphi\approx3 \gamma y_\varphi$. When $N\geq-2$, the different initial values of $x_\phi$ and $x_\varphi$ have converged to a common evolutionary track, so from then on, even though the values of $x_\phi,~y_\phi,~x_\varphi$ and $y_\varphi$ become large, the convergence of $y_\phi$ and $y_\varphi$ for different initial values of $x_\phi$ and $x_\varphi$ is unaffected. Now, consider case (a), in which the initial value $x_{\phi a}$ is large, such as the solid line in Fig.\[figure2\]. Since $x_{\phi a}>\sqrt{\beta_\phi y_{\phi a}}$, $x_{\phi a}$ will decrease until $x_{\phi a}=\sqrt{\beta_\phi y_{\phi a}}\equiv x_A$ (see Eq. (\[yundong1\])). Next, consider case (b), in which the initial value $x_{\phi b}$ is small, such as the dashed line in Fig.\[figure2\]. Since $x_{\phi b}<\sqrt{\beta_\phi y_{\phi b}}$, $x_{\phi b}$ will increase until $x_{\phi b}=\sqrt{\beta_\phi y_{\phi b}}\equiv x_B$. We have observed above that the $y_\phi$ evolutionary track is not sensitive to the initial value of $x_\phi$ in these two cases. But this does not mean that $x_A= x_B$, since the corresponding N might be different, and $y_\phi$ varies with N.
In order to show that these two cases converge, we suppose that $x_{\phi a}=x_{\phi b}+\delta$ at a certain point N (e.g. $N=-4$). Since $x_{\phi a},~x_{\phi b}$ are very small, and the $y_\phi$ evolutionary track is in some sense independent, Eq. (\[yundong1\]) can be expanded to lowest order in $\delta$: $\delta'=-3\delta$. The solution of this equation is $\delta\propto e^{-3N}$. This means that $\delta$ decays exponentially, and the evolutionary tracks for the different initial values of $x_\phi$ given in Fig.\[figure2\] converge. A similar analysis applies to $x_\varphi$. So we have shown that the evolutionary tracks of $x_\phi,~y_\phi,~x_\varphi$ and $y_\varphi$ for the different initial values given in Fig.\[figure1\] converge. C. The future of the universe {#c.the-future-of-the-universe .unnumbered} ----------------------------- The critical points correspond to the fixed points where $x_{\phi}'=0$, $y_{\phi}'=0$, $x_{\varphi}'=0$, $y_{\varphi}'=0$, which have been calculated and given in Table I, and there are self-similar solutions with $$\label{Hubbleeq2} \frac{\dot{H}}{H^2}=-\frac{3}{2}\left( {-\frac{y_\phi (\gamma-x^2_\phi)}{\sqrt{1-x^2_\phi}}-\frac{y_\varphi (\gamma+x^2_\varphi)}{\sqrt{1+x^2_\varphi}}+\gamma} \right)$$ This corresponds to an expanding universe with a scale factor $a(t)$ given by $a\propto t^p$, where $$\label{Hubbleeq3} p=\frac{2}{3\left( {-\frac{y_\phi (\gamma-x^2_\phi)}{\sqrt{1-x^2_\phi}}-\frac{y_\varphi (\gamma+x^2_\varphi)}{\sqrt{1+x^2_\varphi}}+\gamma} \right)}$$ We now study the stability around the critical points given in Table I.
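The entries of Table I can be checked numerically. A sketch (assuming $\beta_\phi=\beta_\varphi=1/3$ and $\gamma=1$, the values used in the figures) that verifies fixed point B zeroes the right-hand sides of the phantom-tachyon evolution equations and evaluates the equation of state there:

```python
import math

beta_vphi = 1.0 / 3.0  # beta_varphi, as in the figures
gamma = 1.0

# Fixed point B from Table I: the tachyon vanishes (x_phi = y_phi = 0)
# and the phantom tachyon dominates.
y_vphi = (math.sqrt(beta_vphi**2 + 4.0) + beta_vphi) / 2.0
x_vphi = -math.sqrt(beta_vphi * y_vphi)

# Right-hand sides of the x_varphi and y_varphi evolution equations,
# with x_phi = y_phi = 0; both should vanish at the fixed point.
common = -y_vphi * (gamma + x_vphi**2) / math.sqrt(1.0 + x_vphi**2) + gamma
dx_vphi = -3.0 * (x_vphi + math.sqrt(beta_vphi * y_vphi)) * (1.0 + x_vphi**2)
dy_vphi = 3.0 * y_vphi * (common - math.sqrt(beta_vphi * y_vphi) * x_vphi)

w_B = -(1.0 + x_vphi**2)  # equation of state at B
```

For $\beta_\varphi=1/3$ this gives $x_{\varphi c}\approx-0.627285$, $y_{\varphi c}\approx1.180460$ and $\omega\approx-1.393487$, the values quoted in the comparison with the figures.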
Consider small perturbations $\delta x_\phi$, $\delta y_\phi$, $\delta x_\varphi$, and $\delta y_\varphi$ about the critical points $(x_{\phi c}, y_{\phi c}, x_{\varphi c}, y_{\varphi c})$: $x_{\phi c}\rightarrow x_{\phi c}+\delta x_\phi$, $y_{\phi c}\rightarrow y_{\phi c}+\delta y_\phi$, $x_{\varphi c}\rightarrow x_{\varphi c}+\delta x_\varphi$, $y_{\varphi c}\rightarrow y_{\varphi c}+\delta y_\varphi$. [c@ c@ c @c @c @c ]{} Label & $x_{\phi c}$ & $y_{\phi c}$ & $x_{\varphi c}$ & $y_{\varphi c}$ & $Existence$\ $A.$ & $0$ & 0 & 0& 0 & all $\gamma$\ $B.$ & $0$ & 0 & $-\sqrt{\beta_{\varphi }y_{\varphi c}}$ &$ \frac{\sqrt{\beta_\varphi^2+4}+\beta_\varphi}{2}$ &all $\gamma$\ $C.$ & $\pm1$ & 0 & 0 & 0 & all $\gamma$\ $D.$ & $\pm1$ & 0 &$-\sqrt{\beta_{\varphi }y_{\varphi c}}$ &$ \frac{\sqrt{\beta_\varphi^2+4}+\beta_\varphi}{2}$ & all $\gamma$\ $E.$ & $1$ & $\frac{1}{\beta_\phi}$ & $0$ & $0$ & $\gamma=1$\ $F.$ & $-1$ & $\frac{\beta^2_\varphi y^2_{\varphi c}}{\beta_\phi}$ & $-\sqrt{\beta_{\varphi }y_{\varphi c}}$ &$ \frac{\sqrt{\beta_\varphi^2+4}+\beta_\varphi}{2}$ & $\gamma=1$\ $G.$&$\sqrt{\gamma}$&$ \frac{\gamma}{\beta_\phi}$&0&0&$\gamma<\frac{1}{2}(\beta_\phi\sqrt{\beta_\phi^2+4}-\beta_\phi^2)$\ $H.$&$\sqrt{\beta_\phi y_{\phi c}} $&$\frac{\sqrt{\beta^2_\phi+4}-\beta_\phi}{2}$&0&0&all $\gamma$\ \ Substituting into Eqs. (\[yundong1\])$-$(\[yundong4\]) leads to the first-order differential equations: $$\begin{aligned} \left( \begin{array}{c} \delta x_\phi' \\ \delta y_\phi'\\ \delta x_\varphi'\\ \delta y_\varphi'\\ \end{array} \right) = {\cal M} \left( \begin{array}{c} \delta x_\phi \\ \delta y_\phi\\ \delta x_\varphi\\ \delta y_\varphi\\ \end{array} \right) \ , \label{uvdif}\end{aligned}$$ where ${\cal M}$ is a matrix that depends upon $x_{\phi c}, y_{\phi c}, x_{\varphi c}$ and $ y_{\varphi c}$.
The general solution for the evolution of linear perturbations can be written as $$\begin{aligned} \label{perturbation} \begin{array}{c} \delta x_\phi= u_{11}~exp~(m_1N)+u_{12}~exp~(m_2N)+u_{13}~exp~(m_3N)+u_{14}~exp~(m_4N)\\ \delta y_\phi=u_{21}~exp~(m_1N)+u_{22}~exp~(m_2N)+u_{23}~exp~(m_3N)+u_{24}~exp~(m_4N)\\ \delta x_\varphi=u_{31}~exp~(m_1N)+u_{32}~exp~(m_2N)+u_{33}~exp~(m_3N)+u_{34}~exp~(m_4N) \\ \delta y_\varphi=u_{41}~exp~(m_1N)+u_{42}~exp~(m_2N)+u_{43}~exp~(m_3N)+u_{44}~exp~(m_4N)\\ \end{array}\end{aligned}$$ where $m_1$, $m_2$, $m_3$, and $m_4$ are the eigenvalues of the matrix ${\cal M}$. ![\[figure3\] Evolution of the equation of state ($\omega$) and density parameters ($\Omega_{DE}$) as a function of $N$ for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$): a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$; b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$; c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig7.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure3\] Evolution of the equation of state ($\omega$) and density parameters ($\Omega_{DE}$) as a function of $N$ for the dark energy model with $\beta_\phi=\beta_\varphi=1/3,~\gamma=1$. Initial conditions (at $N=-8$): a. solid line: $x_{\phi i}=0.9999999$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=2.0$, $y_{\varphi i}=6.5\times10^{-11}$; b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=6.5\times10^{-11}$; c. dotted line: $x_{\phi i}=0.5$, $y_{\phi i}=6\times10^{-11}$, $x_{\varphi i}=1.0$, $y_{\varphi i}=6.5\times10^{-11}$.](fig8.eps "fig:"){height="2.5in" width="3.2in"}
Thus stability requires the real parts of all eigenvalues to be negative [@CLW97; @GPZ02]. We obtain the eigenvalues and stability for the fixed points in Table II. The system has a fixed point A, which is a fluid-dominated solution; a fixed point B, which is a phantom tachyon-dominated solution; a fixed point C, which is a tachyon kinetic-energy-dominated solution; a fixed point D, which is a two-field dominated solution; two fixed points E and F, which exist only for $\gamma=1$; a fixed point G, in which the energy densities $\rho_\phi$ and $\rho_\gamma$ decrease at the same rate; and a fixed point H, which is a tachyon-dominated solution. In Fig.\[figure3\], we plot the evolution of the equation of state in the future. We find that the cosmic evolutionary track is towards the stable fixed point B in this model. This can be seen by comparing Table I with Fig.\[figure2\]: substituting $\beta_\varphi=1/3$ into the stable fixed point B ($x_{\phi c}=0$, $y_{\phi c}=0$, $x_{\varphi c}=-\sqrt{\beta_{\varphi }y_{\varphi c}}$, $y_{\varphi c}= (\sqrt{\beta_\varphi^2+4}+\beta_\varphi)/2$), we get $x_{\phi c}=0$, $y_{\phi c}=0$, $x_{\varphi c}=-0.627285$, $ y_{\varphi c}=1.180460$, which are consistent with Fig.\[figure2\]. And from Eq. (\[w\]), we know $\omega=-(1+x_{\varphi c}^2)=-1.393487$ at the fixed point B, which is consistent with Fig.\[figure3\]. Generally speaking, if the initial values of $x_\phi,~y_\phi,~x_\varphi$ and $y_\varphi$ are not the values of an unstable critical point given in Table I, they will evolve towards the stable critical point B (provided the physical constraint $1-x_{\phi }^2>0$ is not violated). This point can be seen from Eq. (\[perturbation\]). (If one evolutionary track heads towards an unstable critical point, then once the values of $x_\phi,y_\phi,x_\varphi$ and $y_\varphi$ differ from $x_{\phi c}, y_{\phi c}, x_{\varphi c}, y_{\varphi c}$ by an amount $\vec{\delta}$, then from Eq.
(\[perturbation\]), we know that $\vec{\delta}$ will grow larger instead of becoming smaller.) D. Discussions {#d.discussions .unnumbered} --------------- - In a tachyon dark energy model, the tachyon is the only source of the dark energy, and there are three kinds of stable critical points [@AL04], whose existence depends on the value of $\gamma$. But in our model there is only one stable critical point, whose existence is independent of the value of $\gamma$. In a tachyon dark energy model, the values of $x_\phi$ and $y_\phi$ can be non-zero at the stable critical points. But in our model the values of $x_\phi$ and $y_\phi$ must be zero, as we have shown in Table I. This is not accidental, as can be seen as follows. At a critical point, the values of $x_{\phi c}$, $y_{\phi c}$, $x_{\varphi c}$ and $y_{\varphi c}$ are fixed, and the value of $x_{\varphi c}$ is non-zero: otherwise the value of $y_{\varphi c}$ would be zero (see Eq. (\[yundong3\])), which would mean $\rho_{\varphi c}=0$; so $x_{\varphi c}=0$ is impossible. $$\label{pw1} \rho_{\phi c}=\frac{V(\phi_c)}{\sqrt{1-\dot{\phi}_c^2}}=\frac{3M_p^2y_{\phi c}}{\sqrt{1-x^2_{\phi c}}}H^2~~~~~~~\omega_{\phi c}=\frac{p_{\phi c}}{\rho_{\phi c}}=-1+\dot{\phi}_c^2=-1+x_{\phi c}^2\geq-1$$ $$\label{pw2} \rho_{\varphi c}=\frac{V(\varphi_c)}{\sqrt{1+\dot{\varphi}_c^2}}=\frac{3M_p^2y_{\varphi c}}{\sqrt{1+x^2_{\varphi c}}}H^2~~~~~~~\omega_{\varphi c}=\frac{p_{\varphi c}}{\rho_{\varphi c}}=-1-\dot{\varphi}_c^2=-1-x_{\varphi c}^2<-1$$ If $y_{\phi c}\neq0$, then from Eq. (\[pw1\]) we know $H^2$ is nonincreasing at the fixed points, since $\rho_{\phi c}$ is nonincreasing. If $y_{\varphi c}\neq 0$, then from Eq. (\[pw2\]) we know $H^2$ is increasing at the fixed points, since $\rho_{\varphi c}$ is increasing. So either $y_{\phi c}$ or $y_{\varphi c}$ must be zero. Since $\rho_{\varphi c}$ is increasing and $\rho_{\phi c}$ is nonincreasing, we must take $y_{\phi c}=0$. And from Eq. (\[yundong1\]), we know $x_{\phi c}=0$.
(This is because $y_{\phi c}=0$ and the physical constraint $1-x_{\phi c}^2>0$ is not violated.) - In a tachyon dark energy model with the inverse square potential $V(\phi)=M_\phi^2\phi^{-2}$, in order to have significant acceleration at late times ($a\propto t^p$, $p\equiv\frac{1}{2}\left(\frac{M_\phi}{M_p}\right)^2\gg1$), we clearly require $M_\phi$ to be much larger than the Planck mass [@CGST04]. Such a large mass is problematic, as we expect general relativity itself to break down in such a regime. This problem is fortunately alleviated for the inverse power-law potential $V=M_\phi^{4-n}\phi^{-n}$ with $0<n<2$. In our model, since there is another field $\varphi$ with the equation of state $\omega_\varphi<-1$, a significant acceleration at late times can be obtained much more easily. But the value of $M_\phi$ still cannot be very small, since if $\beta_\phi=\frac{4M_p^2}{3M_\phi^2}$ is larger, the risk of $1-\dot{\phi}^2$ becoming nonpositive increases, as can be seen from Eqs. (\[yundong1\]) and (\[yundong2\]). - The speed of sound describes the evolution of small perturbations. In a tachyon dark energy model the sound speed is $$c^2_s=\frac{p_{\phi X}}{\rho_{\phi X}}=1-\dot{\phi}^2$$ where the subscript $X$ denotes the partial derivative with respect to $X=\frac{1}{2}(\partial_\mu\phi)^2$. Since the value of $1-\dot{\phi}^2$ is necessarily nonnegative because of the square root in the Lagrangian density Eq. (\[Lagrangian density1\]), the energy and pressure are real, and inhomogeneous perturbations have a positive sound speed, so the theory is stable. In our model there are two fields. Physically, we can use the independent sound speed of each component to describe the whole system. However, the present constraints on the sound speed of dark energy are so weak that studying the two sound speeds independently is not justified at present, so one can use an effective sound speed, as some authors do [@KS84; @XCQZZ07].
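The stability property of the tachyon component can be made concrete in a tiny numerical illustration (not from the paper; the sample velocities are arbitrary): for any velocity respecting the constraint $1-\dot{\phi}^2>0$, the sound speed squared lies in $(0,1]$ and the tachyon equation of state stays at or above $-1$.

```python
# For the tachyon field: c_s^2 = 1 - phidot^2 and w_phi = -1 + phidot^2.
# Under the physical constraint 1 - phidot^2 > 0 we always have
# 0 < c_s^2 <= 1 (stable, subluminal perturbations) and -1 <= w_phi < 0.
def tachyon_cs2(phidot):
    return 1.0 - phidot**2

def tachyon_w(phidot):
    return -1.0 + phidot**2

for phidot in (0.0, 0.3, 0.7, 0.999):   # sample velocities with |phidot| < 1
    cs2 = tachyon_cs2(phidot)
    assert 0.0 < cs2 <= 1.0
    assert -1.0 <= tachyon_w(phidot) < 0.0
```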
As we have shown above, when $N$ is large enough, the fractional energy density of the phantom tachyon $\Omega_\varphi\rightarrow 1$. So ultimately the effective Lagrangian density is Eq. (\[Lagrangian density2\]), and the effective sound speed is $$c^2_s=\frac{p_{\varphi X}}{\rho_{\varphi X}}=1+\dot{\varphi}^2>1$$ This means that perturbations of the background scalar field can travel faster than light as measured in the preferred frame where the background field is homogeneous. But there is no violation of causality: k-essence-like scalar fields with a Lorentz-invariant action cannot create closed time-like curves in the Friedmann universe, and hence we cannot send signals to our own past using superluminal signals built out of the "superluminal" scalar field perturbations [@ECSPM01]. ANOTHER TACHYON-QUINTOM MODEL INCLUDING THE INTERACTION BETWEEN TWO FIELDS =========================================================================== In order to show some impact of interactions between the two scalars on the evolution of the universe, we consider another system, which includes a fluid with barotropic equation of state $p_\gamma=(\gamma-1)\rho_\gamma$, $0<\gamma\leq2$, and two scalars with an interaction between them. There may be many other possible interactions between the two scalars, but for simplicity we only consider the Lagrangian density below, to see whether it yields interesting results. The Lagrangian density of the scalars we choose is: $$\mathcal{L}=-V(\phi,\varphi)\sqrt{1+g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-g^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi}$$ In this section, we turn our attention to the possibility of the scalars as a source of the dark energy. We restrict ourselves to spatially homogeneous, time-dependent solutions for which $\partial_i\phi =\partial_i\varphi = 0$.
Thus the energy densities and the pressure of the fields are $$\label{eee1}\rho=\frac{V(\phi,\varphi)}{\sqrt{1-\dot{\phi}^2+\dot{\varphi}^2 }},~~~~p=-V(\phi,\varphi)\sqrt{1-\dot{\phi}^2 +\dot{\varphi}^2}$$ Here a dot denotes differentiation with respect to synchronous time. The background equations of motion are $$\label{eq11} \frac{\ddot{\phi}+\ddot{\phi}\dot{\varphi}^2-\ddot{\varphi}\dot{\phi}\dot{\varphi}}{1-\dot{\phi}^2+\dot{\varphi}^2}+3H\dot{\phi}+(1+\dot{\varphi}^2)\frac{dV(\phi)}{Vd\phi}=0$$ $$\label{eq22}\frac{\ddot{\varphi}-\ddot{\varphi}\dot{\phi}^2+\ddot{\phi}\dot{\phi}\dot{\varphi}}{1-\dot{\phi}^2+\dot{\varphi}^2}+3H\dot{\varphi}-(1-\dot{\phi}^2)\frac{dV(\varphi)}{Vd\varphi}=0$$ $$\dot{\rho}_\gamma=-3\gamma H\rho_\gamma$$ $$\begin{aligned} \label{Hubble1} \dot{H}=-\frac{1}{2M_p^2} \left( {\frac{\dot{\phi}^2V(\phi,\varphi)}{\sqrt{1-\dot{\phi}^2+\dot{\varphi}^2}}-\frac{\dot{\varphi}^2V(\phi,\varphi)}{\sqrt{1-\dot{\phi}^2+\dot{\varphi}^2}}+\gamma\rho_\gamma} \right)\end{aligned}$$ together with a constraint equation for the Hubble parameter: $$\label{Hubbleeq1} H^2=\frac{1}{3M_p^2}\left( {\frac{V(\phi,\varphi)}{\sqrt{1-\dot{\phi}^2+\dot{\varphi}^2}}+ \rho_\gamma} \right)$$ The potentials we consider are still inverse square potentials: $$\label{V1} V(\phi,\varphi)=\lambda_1M_p^2\phi^{-1}\varphi^{-1}+\lambda_2M^2_p\phi^{-2}+\lambda_3M_p^2\varphi^{-2}$$ We define the following dimensionless quantities: $$\label{df2} x_\phi=\dot{\phi},~y_\phi=\frac{\phi^{-1}}{\sqrt{3}H},~x_\varphi=\dot{\varphi},~y_\varphi=\frac{\varphi^{-1}}{\sqrt{3}H},~z=\frac{\rho_\gamma}{3H^2M_p^2}$$ Now Eqs.
(\[Hubbleeq1\]) and (\[Hubble1\]) can be rewritten as follows: $$\label{HH2} 1=\frac{\lambda_1y_\phi y_\varphi+\lambda_2y_\phi^2 +\lambda_3 y_\varphi^2}{\sqrt{1-x_\phi^2+x_\varphi^2}}+z= \Omega_{DE} +z$$ $$\frac{H'}{H}=-\frac{3}{2}\left[\frac{-\lambda_1y_\phi y_\varphi(\gamma-x_\phi^2+x_\varphi^2)-\lambda_2y^2_\phi(\gamma-x_\phi^2+x_\varphi^2) -\lambda_3y^2_\varphi(\gamma-x_\phi^2+x_\varphi^2)}{\sqrt{1-x_\phi^2+x_\varphi^2}}+\gamma\right]$$ where $\Omega_{DE}$ measures the dark energy density as a fraction of the critical density, and a prime denotes a derivative with respect to the logarithm of the scale factor, $N={\rm ln}\,a$. Then the evolution Eqs. (\[eq11\]) and (\[eq22\]) can be written as an autonomous system: $$\begin{aligned} \begin{array}{c} \label{em1} x'_\phi=-3(1-x_\phi^2)\left[x_\phi+\left(\frac{-\sqrt{3}\lambda_1y_\phi-2\sqrt{3}\lambda_2y_\phi^2/y_\varphi}{3\lambda_1+3\lambda_2y_\phi/y_\varphi+3\lambda_3y_\varphi/y_\phi}\right)(1+x_\varphi^2)\right] \\-3x_\phi x_\varphi\left[x_\varphi-\left(\frac{-\sqrt{3}\lambda_1y_\varphi-2\sqrt{3}\lambda_3y_\varphi^2/y_\phi}{3\lambda_1+3\lambda_2y_\phi/y_\varphi+3\lambda_3y_\varphi/y_\phi}\right)(1-x_\phi^2)\right] \end{array}\end{aligned}$$ $$\begin{aligned} \label{em2} \begin{array}{l} y'_\phi=y_\phi \left[\frac{-3\lambda_1y_\phi y_\varphi(\gamma-x_\phi^2+x_\varphi^2)-3\lambda_2y^2_\phi(\gamma-x_\phi^2+x_\varphi^2) -3\lambda_3y^2_\varphi(\gamma-x_\phi^2+x_\varphi^2)}{2\sqrt{1-x_\phi^2+x_\varphi^2}}\right]-\sqrt{3}x_\phi y^2_\phi+\frac{3}{2}\gamma y_\phi \\ \end{array}\end{aligned}$$ $$\begin{aligned} \label{em3} \begin{array}{c} x'_\varphi=-3(1+x_\varphi^2)\left[x_\varphi-\left(\frac{-\sqrt{3}\lambda_1y_\varphi-2\sqrt{3}\lambda_3y_\varphi^2/y_\phi}{3\lambda_1+3\lambda_2y_\phi/y_\varphi+3\lambda_3y_\varphi/y_\phi}\right)(1-x_\phi^2)\right] \\~+3x_\phi
x_\varphi\left[x_\phi+\left(\frac{-\sqrt{3}\lambda_1y_\phi-2\sqrt{3}\lambda_2y_\phi^2/y_\varphi}{3\lambda_1+3\lambda_2y_\phi/y_\varphi+3\lambda_3y_\varphi/y_\phi}\right)(1+x_\varphi^2)\right] \end{array}\end{aligned}$$ $$\begin{aligned} \label{em4} \begin{array}{l} y'_\varphi=y_\varphi \left[\frac{-3\lambda_1y_\phi y_\varphi(\gamma-x_\phi^2+x_\varphi^2)-3\lambda_2y^2_\phi(\gamma-x_\phi^2+x_\varphi^2) -3\lambda_3y^2_\varphi(\gamma-x_\phi^2+x_\varphi^2)}{2\sqrt{1-x_\phi^2+x_\varphi^2}}\right]-\sqrt{3}x_\varphi y^2_\varphi+\frac{3}{2}\gamma y_\varphi \\ \end{array}\end{aligned}$$ The equation of state of the dark energy is $$\label{wwww} \omega=\frac{p}{\rho}=-1+\dot{\phi}^2-\dot{\varphi}^2=-1+x_\phi^2-x_\varphi^2$$ For simplicity, we confine ourselves to the case $\lambda_1\neq0,~\lambda_2=\lambda_3=0$; then Eqs. (\[em1\])$-$(\[em4\]) reduce to $$\label{em5} x'_\phi=-3(1-x_\phi^2)(x_\phi-\frac{y_\phi}{\sqrt{3}}(1+x_\varphi^2)) -3x_\phi x_\varphi(x_\varphi+\frac{y_\varphi}{\sqrt{3}}(1-x_\phi^2))$$ $$\label{em6} y'_\phi=\frac{-3\lambda_1y^2_\phi y_\varphi(\gamma-x_\phi^2+x_\varphi^2)}{2\sqrt{1-x_\phi^2+x_\varphi^2}}-\sqrt{3}x_\phi y^2_\phi+\frac{3}{2}\gamma y_\phi$$ $$\label{em7} x'_\varphi=-3(1+x_\varphi^2)(x_\varphi+\frac{y_\varphi}{\sqrt{3}}(1-x_\phi^2)) ~+3x_\phi x_\varphi(x_\phi-\frac{y_\phi}{\sqrt{3}}(1+x_\varphi^2))$$ $$\label{em8} y'_\varphi=\frac{-3\lambda_1y_\phi y^2_\varphi(\gamma-x_\phi^2+x_\varphi^2)}{2\sqrt{1-x_\phi^2+x_\varphi^2}} -\sqrt{3}x_\varphi y^2_\varphi+\frac{3}{2}\gamma y_\varphi$$ ![\[figure8\] Evolution of the equation of state ($\omega$) and density parameters ($\Omega_{DE}$) as a function of $N$ for the dark energy model with $\lambda_1=1,~\gamma=1$. Initial conditions (at $N=-8$): a. solid line: $x_{\phi i}=1000.0$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1000.0$, $y_{\varphi i}=1\times10^{-5}$; b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=1\times10^{-5}$; c. dotted line: $x_{\phi i}=0.9999999$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=1\times10^{-5}$.](fig9.eps "fig:"){height="2.5in" width="3.2in"} ![\[figure8\] Evolution of the equation of state ($\omega$) and density parameters ($\Omega_{DE}$) as a function of $N$ for the dark energy model with $\lambda_1=1,~\gamma=1$. Initial conditions (at $N=-8$): a. solid line: $x_{\phi i}=1000.0$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1000.0$, $y_{\varphi i}=1\times10^{-5}$; b. dashed line: $x_{\phi i}=1\times10^{-12}$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=1\times10^{-5}$; c. dotted line: $x_{\phi i}=0.9999999$, $y_{\phi i}=1\times10^{-5}$, $x_{\varphi i}=1\times10^{-12}$, $y_{\varphi i}=1\times10^{-5}$.](fig10.eps "fig:"){height="2.5in" width="3.2in"} The evolution of the equation of state $\omega$ and the density parameters are shown in Fig.\[figure8\] for $\lambda_1=1,~\gamma=1$. We choose $N=-8$ as the initial number of e-folds, so choosing $\gamma=1$ in Eqs. (\[em6\]) and (\[em8\]) is a good approximation. From Fig.\[figure8\] we can see that this model is not sensitive to the initial kinetic energy density of the two fields ($x_\phi=\dot{\phi},~x_\varphi=\dot{\varphi}$). From Eq. (\[eee1\]) and Fig.\[figure8\], we know that varying the initial energy density of $\phi$ by about four orders of magnitude is still consistent with current observational constraints. But the initial potential energy density of the two fields requires fine-tuning to agree with observations.
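The equation of state (\[wwww\]) can be spot-checked numerically from the densities in Eq. (\[eee1\]): since $\rho=V/\sqrt{A}$ and $p=-V\sqrt{A}$ with $A=1-\dot{\phi}^2+\dot{\varphi}^2$, their ratio is $-A=-1+x_\phi^2-x_\varphi^2$. A minimal sketch (illustrative sample values only, not from the paper):

```python
import math

# Spot-check w = p/rho = -1 + xphi^2 - xvarphi^2 (Eq. wwww) against the
# energy density and pressure of Eq. (eee1), with A = 1 - xphi^2 + xvarphi^2.
def eos(V, xphi, xvarphi):
    A = 1.0 - xphi**2 + xvarphi**2
    assert A > 0                      # physical constraint
    rho = V / math.sqrt(A)
    p = -V * math.sqrt(A)
    return p / rho

for xphi in (0.0, 0.3, 0.8):
    for xvarphi in (0.0, 0.5, 1.5):
        w = eos(2.7, xphi, xvarphi)   # w is independent of V
        assert abs(w - (-1.0 + xphi**2 - xvarphi**2)) < 1e-12
```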
[c@ c@ c @c @c @c ]{} Label & $x_\phi$ & $y_\phi$ & $x_\varphi$ & $y_\varphi$ & $Existence$\ $A.$ & $0$ & 0 & 0& 0 & all $\gamma$\ $B.$ & $\sqrt{\frac{\gamma}{2}}$ & $ \sqrt{\frac{3\gamma}{2}}$& 0 &0 &all $\gamma$\ [c@ c@ c @c @c @c ]{} Label & $m_{1}$ & $m_{2}$ & $m_{3}$ & $m_{4}$ & Stability\ $A.$ & $-3$ & $\frac{3\gamma}{2}$ & $-3$ & $\frac{3\gamma}{2}$ & unstable\ $B.$ & $\frac{1}{2}(-3+3\sqrt{1+2\gamma^2-4\gamma})$ & $\frac{1}{2}(-3-3\sqrt{1+2\gamma^2-4\gamma})$ & $-3$ & $\frac{3\gamma}{2}$ & unstable\ The critical points correspond to the fixed points where $x_{\phi}'=0$, $y_{\phi}'=0$, $x_{\varphi}'=0$, $y_{\varphi}'=0$, which have been calculated and are given in Table III. To study the stability of the critical points, we substitute linear perturbations about the critical points into Eqs. (\[em5\])$-$(\[em8\]) and keep terms to first order in the perturbations. The four perturbation equations give rise to four eigenvalues, and stability requires the real parts of all eigenvalues to be negative (see Table IV for the eigenvalues of the perturbation equations and the stability of the critical points). So there are no stable critical points. Both kinds of critical points have $y_\varphi=0$, which means that $z=1$ (see Eq. (\[HH2\])). The physical constraint $1-\dot{\phi}^2+\dot{\varphi}^2>0$ sets a limit on the equation of state $\omega$ (see Eq. (\[wwww\])): $\omega<0$. Comparing this with $\gamma$ ($\gamma=4/3$ for radiation, $\gamma=1$ for dust matter), we see that the energy density of the fields $\rho$ decreases more slowly than $\rho_\gamma$, so $z=1$ and $y_\varphi=0$ are impossible. SUMMARY ======= In this paper we have studied tachyon-quintom dark energy models, in which during the evolution of the universe the equation of state $w$ changes from $w>-1$ to $w<-1$.
Firstly, the model we studied is made up of two fields: one is a tachyon, the other a phantom tachyon. In order to construct an autonomous system, the potentials we chose are inverse square potentials. We find that the model is not sensitive to the initial kinetic energy densities of the tachyon and the phantom tachyon, and we analyze the reason in detail. The initial energy density (at $N=-8$) of the tachyon can be varied by nearly four orders of magnitude and still be consistent with current observational constraints. The phase-space analysis of the spatially flat FRW models shows that there exists a unique stable critical point, and we have compared it with the tachyon model. Then we considered another form of two-field model, which includes an interaction between the two fields. For the case of $\lambda_1\neq0$, $ \lambda_2=0$, $\lambda_3=0$, the phase-space analysis shows that there is no stable critical point. In some sense, this work shows that multiple k-essence-like fields can implement the quintom scenario, which extends the ways in which the quintom can be realized and is worth further study. [**Acknowledgements:**]{} This work is supported in part by NSFC under Grant No: 10491306, 10521003, 10775179, 10405029, 10775180, in part by the Scientific Research Fund of GUCAS (No. 055101BM03), in part by CAS under Grant No: KJCX3-SYW-N2. [99]{} S. Perlmutter et al., Astrophys. J. 483, 565 (1997); A. G. Riess et al., Astron. J. 116, 1009 (1998); Astron. J. 117, 707 (1999). D. N. Spergel et al., Astrophys. J. Suppl. 148, 175 (2003). A. G. Riess et al., Astrophys. J. 607, 665 (2004). E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D15, 1753 (2006). I. Zlatev, L. M. Wang and P. J. Steinhardt, Phys. Rev. Lett. 82, 896 (1999); P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D 59, 123504 (1999); L. Amendola, Phys. Rev. D 62, 043511 (2000). R. R. Caldwell, Phys. Lett. B 545, 23 (2002); S. Nojiri and S. D. Odintsov, Phys. Lett. B 562, 147 (2003). V. Sahni and A. A. Starobinsky, Int. J. Mod.
Phys. D 9, 373 (2000); P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003); T. Padmanabhan, Phys. Rep. 380, 235 (2003); V. Sahni, Lect. Notes Phys. 653, 141 (2004); arXiv:astro-ph/0502032; E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006). U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Ast. Soc. 354 275 (2004); U. Alam, V. Sahni and A. A. Starobinsky, JCAP 0406 008 (2004). B. Feng, X. L. Wang and X. M. Zhang, Phys. Lett. B 607, 35 (2005). B. Feng, M. Li, Y. S. Piao and X. Zhang, Phys. Lett. B 634, 101 (2006); Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Phys. Lett. B 608, 177 (2005); X. F. Zhang, H. Li, Y. S. Piao and X. M. Zhang, Mod. Phys. Lett. A 21, 231 (2006); M. Z. Li, B. Feng and X. M. Zhang, JCAP 0512, 002 (2005); Y.F. Cai, H. Li, Y.S. Piao, X.M. Zhang, Phys.Lett.B646, 141, (2007); Y.F. Cai, J. Wang, Class.Quant.Grav.25, 165014, (2008). H. Wei, R.G. Cai, D.F. Zeng, Class.Quant.Grav. 22, 3189 (2005); H. Wei, R.G. Cai, Phys.Lett. B634, 9 (2006). X. Zhang, Commun.Theor.Phys.44, 762 (2005); Phys.Rev.D74, 103505, (2006). I. Y. Aref'eva, A. S. Koshelev and S. Y. Vernov, Phys. Rev. D 72, 064017 (2005); S. Y. Vernov, arXiv:astro-ph/0612487; A. S. Koshelev, JHEP 0704, 029 (2007). H. Mohseni Sadjadi, M. Alimohammadi, Phys. Rev. D 74, 043506 (2006); J. Sadeghi, M. R. Setare, A. Banijamali, F. Milani, Phys. Lett. B 662, 92 (2008); M. R. Setare, J. Sadeghi, A. R. Amani, Phys. Lett. B 660, 299 (2008); M. R. Setare, E. N. Saridakis, arXiv:0810.4775; M. R. Setare, E. N. Saridakis, arXiv:0809.0114. L.P. Chimento, M. Forte, R. Lazkoz, M.G. Richarte, arXiv:0811.3643; G. Leon, R. Cardenas, J.L. Morales, arXiv:0812.0830. Y.F. Cai, M.Z. Li, J.X. Lu, Y.S. Piao, T.T. Qiu, X.M. Zhang, Phys. Lett. B651, 1 (2007). A. Sen, JHEP 0204, 048 (2002); JHEP 0207, 065 (2002). G. W. Gibbons, Phys. Lett. B 537, 1 (2002); G. Shiu and I. Wasserman, Phys. Lett. B 541, 6 (2002); T. Padmanabhan and T. R. Choudhury, Phys. Rev.
D 66, 081301 (2002); A.Frolov, L.Kofman, A. Starobinsky Phys.Lett.B545,8, (2002). M. Fairbairn and M. H. G. Tytgat, Phys. Lett. B 546, 1 (2002); L. Kofman and A. Linde, JHEP 0207, 004 (2002); M. Sami, Mod. Phys. Lett. A 18, 691 (2003); Y.S. Piao, R.G. Cai, X.M. Zhang, Y.Z. Zhang, Phys.Rev. D66 (2002) 121301 ; J. M. Cline, H. Firouzjahi and P. Martineau, JHEP 0211, 041 (2002); M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D 67, 063511 (2003); Y.S. Piao, Q.G. Huang, X.M. Zhang, Y.Z. Zhang, Phys.Lett. B570,1 (2003) ; Z.K. Guo, Y.S. Piao, R.G. Cai, Y.Z. Zhang, Phys.Rev. D68, 043508 (2003) ; S. Nojiri and S. D. Odintsov, Phys. Lett. B 571, 1 (2003);D. A. Steer and F. Vernizzi, Phys. Rev. D 70, 043527 (2004); V. Gorini, A. Y. Kamenshchik, U. Moschella and V. Pasquier, Phys. Rev. D 69, 123512 (2004); B. C. Paul and M. Sami, Phys. Rev. D 70, 027301 (2004); J. M. Aguirregabiria and R. Lazkoz, Mod. Phys. Lett. A 19, 927 (2004). T. Padmanabhan, Phys. Rev. D 66, 021301 (2002);A. Feinstein, Phys. Rev. D 66, 063511 (2002); J.S. Bagla, H. K. Jassal, and T. Padmanabhan, Phys. Rev. D 67, 063504 (2003);Gianluca Calcagni and Andrew R. Liddle, Phys.Rev.D74, 043528 (2006). D.Choudhury, D. Ghoshal, D. P. Jatkar, and S. Panda, Phys. Lett. B 544, 231 (2002); X.Z Li, J.G. Hao, D.J. Liu, Chin.Phys.Lett. 19 1584, (2002); J.G. Hao and X.Z. Li, Phys. Rev. D 66, 087301 (2002); M. R. Garousi, M. Sami, and S. Tsujikawa, Phys. Rev. D 70, 043536 (2004); V. H. Cardenas, Phys. Rev. D 73, 103512 (2006). J. M. Aguirregabiria and R. Lazkoz, Phys. Rev. D 69, 123502 (2004). E. J. Copeland, M. R. Garousi, M. Sami, and S. Tsujikawa, Phys. Rev. D 71, 043003 (2005). J.G. Hao, X.Z. Li, Phys. Rev. D 68, 043501 (2003). S. Tsujikawa , M. Sami Phys. Lett. B 603, 113 (2004). E.J. Copeland, A.R. Liddle, and D. Wands, Phys. Rev. D 57, 4686 (1998). Imogen P. C. Heard and David Wands, Class. Quant. Grav. 19, 5435 (2002). R. Bean, S.H. Hansen, and A. Melchiorri, Phys. Rev. D 64, 103508 (2001). Z.K. Guo, Y.S. 
Piao, and Y.Z. Zhang, Phys. Lett. B 568, 1 (2003). H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. 78, 1 (1984). J.Q.Xia, Y.F.Cai, T.T.Qiu, G.B.Zhao, and X.M.Zhang, Int.J.Mod.Phys. D17,1229 (2008). J.K. Erickson, R.R. Caldwell, P.J. Steinhardt, C. Armendariz-Picon, and V. Mukhanov, Phys. Rev. Lett. 88,121301(2002);C. Armendariz-Picon, E. A. Lim, JCAP 0508:007, (2005); J.-P. Bruneton, Phys. Rev. D75:085013, (2007); Jin U Kang, Vitaly Vanchurin, Sergei Winitzki Phys.Rev.D76:083511,(2007); E. Babichev, V. Mukhanov, A. Vikman, JHEP 0802:101,(2008).
--- abstract: | We study an alternative model of infinitary term rewriting. Instead of a metric on terms, a partial order on partial terms is employed to formalise convergence of reductions. We consider both a weak and a strong notion of convergence and show that the metric model of convergence coincides with the partial order model restricted to total terms. Hence, partial order convergence constitutes a conservative extension of metric convergence, which additionally offers a fine-grained distinction between different levels of divergence. In the second part, we focus our investigation on strong convergence of orthogonal systems. The main result is that the gap between the metric model and the partial order model can be bridged by extending the term rewriting system by additional rules. These extensions are the well-known Böhm extensions. Based on this result, we are able to establish that – contrary to the metric setting – orthogonal systems are both infinitarily confluent and infinitarily normalising in the partial order setting. The unique infinitary normal forms that the partial order model admits are Böhm trees. address: ' Department of Computer Science, University of Copenhagen Universitetsparken 5, 2100 Copenhagen, Denmark' author: - Patrick Bahr bibliography: - 'po-inf-rew.bib' title: Partial Order Infinitary Term Rewriting --- Introduction {#sec:introduction .unnumbered} ============ Infinitary term rewriting [@kennaway03book] extends the theory of term rewriting by giving a meaning to transfinite rewriting sequences. Its formalisation [@dershowitz91tcs] is chiefly based on the metric space of terms as studied by Arnold and Nivat [@arnold80fi]. Other models for transfinite reductions, using for example general topological spaces [@rodenburg98jsyml] or partial orders [@corradini93tapsoft; @blom04rta], were mainly considered to pursue quite specific purposes and have not seen nearly as much attention as the metric model. 
In this paper we introduce a novel foundation of infinitary term rewriting based on the partially ordered set of partial terms [@goguen77jacm]. We show that this model of infinitary term rewriting is superior to the metric model. This assessment is supported by two findings: First, the partial order model of infinitary term rewriting conservatively extends the metric model. That is, anything that can be done in the metric model can be achieved in the partial order model as well by simply restricting it to the set of total terms. Secondly, unlike the metric model, the partial order model provides a fine-grained distinction between different levels of divergence and exhibits nice properties like infinitary confluence and normalisation of orthogonal systems. The defining core of a theory of infinitary term rewriting is its notion of convergence for transfinite reductions: which transfinite reductions are “admissible” and what is their final outcome. In this paper we study both variants of convergence that are usually considered in the established theory of metric infinitary term rewriting: weak convergence [@dershowitz91tcs] and strong convergence [@kennaway95ic]. For both variants we introduce a corresponding notion of convergence based on the partially ordered set of partial terms. The first part of this paper is concerned with comparing the metric model and the partial order model both in their respective weak and strong variants. In both cases, the partial order approach constitutes a conservative extension of the metric approach: a reduction in the metric model is converging iff it is converging in the partial order model and only contains total terms. In the second part we focus on strong convergence in orthogonal systems. To this end we reconsider the theory of meaningless terms of Kennaway et al. [@kennaway99jflp]. In particular, we consider Böhm extensions. 
The Böhm extension of a term rewriting system adds rewrite rules which allow meaningless terms to be contracted to $\bot$. The central result of the second part of this paper is that the additional rules in Böhm extensions close the gap between partial order convergence and metric convergence. More precisely, we show that reachability w.r.t. partial order convergence in a term rewriting system coincides with reachability w.r.t. metric convergence in the corresponding Böhm extension. From this result we can easily derive a number of properties for strong partial order convergence in orthogonal systems: - Infinitary confluence, - infinitary normalisation, and - compression, i.e. each reduction can be compressed to length at most $\omega$. The first two properties exhibit another improvement over the metric model, which has neither of these properties. Moreover, it means that each term has a unique infinitary normal form – its Böhm tree. The most important tool for establishing these results is provided by a notion of complete developments that we have transferred from the metric approach to infinitary rewriting [@kennaway95ic]. We show that the final outcome of a complete development is unique and that, in contrast to the metric model, the partial order model admits complete developments for any set of redex occurrences. To this end, we use a technique similar to paths and finite jumps known from metric infinitary term rewriting [@kennaway03book; @ketema11ic]. ### Outline {#outline .unnumbered} After providing the basic preliminaries for this paper in Section \[sec:preliminaries\], we will briefly recapitulate the metric model of infinitary term rewriting including meaningless terms and Böhm extensions in Section \[sec:metr-infin-term\]. In Section \[sec:part-order-infin\], we introduce our novel approach to infinitary term rewriting based on the partial order on terms.
In Section \[sec:comp-mrs-conv\], we compare both models and establish that the partial order model provides a conservative extension of the metric model. In the remaining part of this paper, we focus on the strong notion of convergence. In Section \[sec:compl-devel\], we establish a theory of complete developments in the setting of partial order convergence. This is then used in Section \[sec:relation-bohm-trees\] to prove the equality of reachability w.r.t. partial order convergence and reachability w.r.t. metric convergence in the Böhm extension. Finally, we evaluate our results and point to interesting open questions in Section \[sec:conclusions\]. Preliminaries {#sec:preliminaries} ============= We assume the reader to be familiar with the basic theory of ordinal numbers, orders and topological spaces [@kelley55book], as well as term rewriting [@terese03book]. In the following, we briefly recall the most important notions. Transfinite Sequences --------------------- We use $\alpha, \beta, \gamma, \lambda, \iota$ to denote ordinal numbers. A *transfinite sequence* (or simply called *sequence*) $S$ of length $\alpha$ in a set $A$, written $(a_\iota)_{\iota < \alpha}$, is a function from $\alpha$ to $A$ with $\iota \mapsto a_\iota$ for all $\iota \in \alpha$. We use $\len{S}$ to denote the length $\alpha$ of $S$. If $\alpha$ is a limit ordinal, then $S$ is called *open*. Otherwise, it is called *closed*. If $\alpha$ is a finite ordinal, then $S$ is called *finite*. Otherwise, it is called *infinite*. For a finite sequence $(a_i)_{i < n}$ or a sequence $(a_i)_{i < \omega}$ of length $\omega$, we also use the notation $\seq{a_0,a_1,\dots,a_{n-1}}$ respectively $\seq{a_0,a_1,\dots}$. In particular, $\emptyseq$ denotes an empty sequence. 
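For finite lengths, the definition above has a direct reading that may help fix intuition (an illustrative Python sketch only; transfinite sequences obviously do not reduce to lists): a sequence of length $n$ in $A$ is a function from $\{0,\dots,n-1\}$ to $A$.

```python
# View a finite sequence over A as a function from {0,...,n-1} to A,
# mirroring the definition "a sequence of length alpha is a function
# from alpha to A" in the finite case.
def sequence(elems):
    n = len(elems)
    def S(i):
        assert 0 <= i < n, "index must lie below the sequence's length"
        return elems[i]
    return S, n

S, length = sequence(['a', 'b', 'c'])   # the sequence <a, b, c>
assert length == 3
assert S(0) == 'a' and S(2) == 'c'
```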
The *concatenation* $(a_\iota)_{\iota<\alpha}\concat (b_\iota)_{\iota<\beta}$ of two sequences is the sequence $(c_\iota)_{\iota<\alpha+\beta}$ with $c_\iota = a_\iota$ for $\iota < \alpha$ and $c_{\alpha+\iota} = b_\iota$ for $\iota < \beta$. A sequence $S$ is a (proper) *prefix* of a sequence $T$, denoted $S \le T$ (resp. $S < T$), if there is a (non-empty) sequence $S'$ with $S\concat S' = T$. The prefix of $T$ of length $\beta$ is denoted $\prefix{T}{\beta}$. The set of sequences in a given set, partially ordered by the prefix order $\le$, forms a complete semilattice (see Section \[sec:partial-orders\] below). Similarly, a sequence $S$ is a (proper) *suffix* of a sequence $T$ if there is a (non-empty) sequence $S'$ with $S'\concat S = T$. Let $S = (a_\iota)_{\iota < \alpha}$ be a sequence. A sequence $T = (b_{\iota})_{\iota < \beta}$ is called a *subsequence* of $S$ if there is a monotone function $f\fcolon \beta \to \alpha$ such that $b_\iota = a_{f(\iota)}$ for all $\iota < \beta$. To indicate this, we write $\subseq{S}{f}$ for the subsequence $T$. If $f(\iota) = f(0) + \iota$ for all $\iota < \beta$, then $\subseq{S}{f}$ is called a *segment* of $S$. That is, $T$ is a segment of $S$ iff there are two sequences $T_1, T_2$ such that $S = T_1\concat T \concat T_2$. We write $\segm{S}{\beta}{\gamma}$ for the segment $\subseq{S}{f}$, where $f\fcolon \alpha' \to \alpha$ is the mapping defined by $f(\iota) = \beta + \iota$ for all $\iota < \alpha'$, with $\alpha'$ the unique ordinal with $\gamma = \beta + \alpha'$. Note that in particular $\segm{S}{0}{\alpha} = \prefix{S}{\alpha}$ for each sequence $S$ and ordinal $\alpha \le \len{S}$.

Metric Spaces {#sec:metric-spaces}
-------------

A pair $(M,\dd)$ is called a *metric space* if $\dd \fcolon M \times M \to \realnn$ is a function satisfying $\dd(x,y) = 0$ iff $x=y$ (identity), $\dd(x, y) = \dd(y, x)$ (symmetry), and $\dd(x, z) \le \dd(x, y) + \dd(y, z)$ (triangle inequality), for all $x,y,z\in M$.
If $\dd$, instead of the triangle inequality, satisfies the stronger property $\dd(x, z) \le \max \set{ \dd(x, y),\dd(y, z)}$ (strong triangle inequality), then $(M,\dd)$ is called an *ultrametric space*. Let $(a_\iota)_{\iota<\alpha}$ be a sequence in a metric space $(M,\dd)$. The sequence $(a_\iota)_{\iota<\alpha}$ *converges* to an element $a\in M$, written $\lim_{\iota\limto\alpha} a_\iota$, if, for each $\varepsilon \in \realp$, there is a $\beta < \alpha$ such that $\dd(a,a_\iota) < \varepsilon$ for every $\beta < \iota < \alpha$; $(a_\iota)_{\iota<\alpha}$ is *continuous* if $\lim_{\iota\limto\lambda} a_\iota = a_\lambda$ for each limit ordinal $\lambda < \alpha$. The sequence $(a_\iota)_{\iota<\alpha}$ is called *Cauchy* if, for any $\varepsilon \in \realp$, there is a $\beta<\alpha$ such that, for all $\beta < \iota < \iota' < \alpha$, we have that $\dd(a_\iota,a_{\iota'}) < \varepsilon$. A metric space is called *complete* if each of its non-empty Cauchy sequences converges.

Partial Orders {#sec:partial-orders}
--------------

A *partial order* $\le$ on a set $A$ is a binary relation on $A$ that is *transitive*, *reflexive*, and *antisymmetric*. The pair $(A,\le)$ is then called a *partially ordered set*. We use $<$ to denote the strict part of $\le$, i.e. $a < b$ iff $a \le b$ and $b \not\le a$. A sequence $(a_\iota)_{\iota<\alpha}$ in $(A,\le)$ is called a (*strict*) *chain* if $a_\iota \le a_\gamma$ (resp. $a_\iota < a_\gamma$) for all $\iota < \gamma < \alpha$. A subset $D$ of the underlying set $A$ is called *directed* if it is non-empty and each pair of elements in $D$ has an upper bound in $D$. A partially ordered set $(A, \le)$ is called a *complete semilattice* if it has a *least element*, every *directed subset* $D$ of $A$ has a *least upper bound* (*lub*) $\Lub D$, and every subset of $A$ having an upper bound also has a least upper bound.
Hence, complete semilattices also admit a *greatest lower bound* (*glb*) $\Glb B$ for every *non-empty* subset $B$ of $A$. In particular, this means that for any non-empty sequence $(a_\iota)_{\iota<\alpha}$ in a complete semilattice, its *limit inferior*, defined by $\liminf_{\iota \limto \alpha}a_\iota = \Lub_{\beta<\alpha} \left(\Glb_{\beta \le \iota < \alpha} a_\iota\right)$, exists. It is easy to see that the limit inferior of closed sequences is simply the last element of the sequence. This is, however, only a special case of the following more general proposition: \[prop:liminfSuffix\] Let $(a_\iota)_{\iota < \alpha}$ be a sequence in a complete semilattice and $(b_\iota)_{\iota< \beta}$ a non-empty suffix of $(a_\iota)_{\iota < \alpha}$. Then $\liminf_{\iota \limto \alpha} a_\iota = \liminf_{\iota \limto \beta} b_\iota$. Let $a = \liminf_{\iota \limto \alpha} a_\iota$ and $b = \liminf_{\iota \limto \beta} b_\iota$. Since $(b_\iota)_{\iota< \beta}$ is a suffix of $(a_\iota)_{\iota < \alpha}$, there is some $\delta < \alpha$ such that $b_\iota = a_{\delta+\iota}$ for all $\iota < \beta$. Hence, we know that $a = \Lub_{\gamma<\alpha} \Glb_{\gamma \le \iota < \alpha} a_\iota$ and $b = \Lub_{\delta \le \gamma<\alpha} \Glb_{\gamma \le \iota < \alpha} a_\iota$. Let $c_\gamma = \Glb_{\gamma \le \iota < \alpha} a_\iota$ for each $\gamma < \alpha$, $A = \setcom{c_\gamma}{\gamma < \alpha}$ and $B = \setcom{c_\gamma}{\delta \le \gamma < \alpha}$. Note that $a = \Lub A$ and $b = \Lub B$. Because $B \subseteq A$, we have that $b \le a$. On the other hand, since $c_\gamma \le c_{\gamma'}$ for $\gamma \le \gamma'$, we find, for each $c_\gamma \in A$, some $c_{\gamma'} \in B$ with $c_\gamma \le c_{\gamma'}$. Hence, $a \le b$. Therefore, due to the antisymmetry of $\le$, we can conclude that $a = b$. Note that the limit in a metric space has the same behaviour as the one for the limit inferior described by the proposition above. 
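To see the limit inferior at work in a concrete complete semilattice, consider the powerset lattice of a finite set, where the glb is intersection and the lub is union. The following Python sketch (a mere illustration, not part of the formal development; the encoding by `frozenset`s is our own) computes $\liminf$ for a finite sequence and checks the prefix-invariance stated in Proposition \[prop:liminfSuffix\]:

```python
from functools import reduce

def glb(xs):
    # greatest lower bound in the powerset lattice: intersection
    return reduce(lambda x, y: x & y, xs)

def lub(xs):
    # least upper bound in the powerset lattice: union
    return reduce(lambda x, y: x | y, xs)

def liminf(seq):
    # liminf = lub over beta of the glb of the tail starting at beta
    return lub([glb(seq[beta:]) for beta in range(len(seq))])

a = [frozenset(s) for s in ({1, 2}, {2, 3}, {2}, {2, 4}, {2, 4, 5})]
print(liminf(a))                   # frozenset({2, 4, 5}): the last element
print(liminf(a) == liminf(a[2:]))  # True: dropping a prefix leaves it unchanged
```

Since a finite sequence is closed, its limit inferior is simply its last element, in line with the observation above.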
However, one has to keep in mind that – unlike the limit – the limit inferior is not invariant under taking cofinal subsequences! With the prefix order $\le$ on sequences we can generalise concatenation to arbitrary sequences of sequences: Let $(S_\iota)_{\iota < \alpha}$ be a sequence of sequences in a common set. The concatenation of $(S_\iota)_{\iota < \alpha}$, written $\Concat_{\iota < \alpha} S_\iota$, is recursively defined as the empty sequence $\emptyseq$ if $\alpha = 0$, $\left(\Concat_{\iota < \alpha'} S_\iota\right) \concat S_{\alpha'}$ if $\alpha = \alpha' + 1$, and $\Lub_{\gamma < \alpha} \Concat_{\iota < \gamma} S_\iota$ if $\alpha$ is a limit ordinal. For instance, the concatenation $\Concat_{i<\omega} \seq{i,i+1}$ yields the sequence $\seq{0,1,1,2,2,\dots}$ of length $\omega$, and the concatenation $\Concat_{\iota<\alpha} \seq{\iota}$, for any ordinal $\alpha$, yields the sequence $(\iota)_{\iota<\alpha}$. Terms {#sec:terms} ----- Unlike in the traditional – i.e. finitary – framework of term rewriting, we consider the set $\iterms$ of *infinitary terms* (or simply *terms*) over some *signature* $\Sigma$ and a countably infinite set $\calV$ of variables. A *signature* $\Sigma$ is a countable set of symbols. Each symbol $f$ is associated with its arity $\srank{f}\in \nat$, and we write $\Sigma^{(n)}$ for the set of symbols in $\Sigma$ which have arity $n$. The set $\iterms$ is defined as the *greatest* set $T$ such that, for each element $t \in T$, we either have $t \in \calV$ or $t = f(t_0,\dots, t_{k-1})$, where $f \in \Sigma^{(k)}$, and $t_0,\dots,t_{k-1}\in T$. A symbol $c \in \Sigma^{(0)}$ of arity $0$ is also called a *constant symbol*, and we use the shorthand $c$ to denote a term $c()$. We consider $\iterms$ as a superset of the set $\terms$ of *finite terms*. 
For each term $t \in \iterms$, we define the *set of positions* in $t$, denoted $\pos{t}$, as the smallest set of finite sequences in $\nat$ such that $\emptyseq \in \pos t$, and $\seq i\concat \pi \in \pos t$ whenever $t = f(t_0,\dots,t_{k-1})$, $i<k$, and $\pi \in \pos{t_i}$. Given a position $\pi \in \pos t$, we define the *subterm* of $t$ at $\pi$, denoted $\atPos{t}{\pi}$, by recursion on $\pi$ as follows: $\atPos t \emptyseq = t$, and $\atPos{f(t_0,\dots,t_{k-1})}{\seq i \concat \pi} = \atPos{t_i}{\pi}$. Moreover, we write $t(\pi)$ for the symbol in $t$ at $\pi$, i.e. $t(\pi) = f$ if $\atPos t \pi = f(t_0,\dots,t_{k-1})$ and $t(\pi) = v$ if $\atPos t \pi = v \in \calV$. For terms $s,t \in \iterms$ and a position $\pi \in \pos{t}$, we write $\substAtPos{t}{\pi}{s}$ for the term $t$ with the subterm at $\pi$ replaced by $s$, i.e. $$\substAtPos{t}{\emptyseq}{s} = s,\quad\text{ and }\quad \substAtPos{f(t_0,\dots,t_{k-1})}{\seq i\concat \pi}{s} = f(t_0,\dots,t_{i-1},\substAtPos{t_i}{\pi}{s},t_{i+1},\dots,t_{k-1}).$$ Note that while the set of terms $\iterms$ is defined coinductively, the set of positions of a term is defined inductively. Consequently, the subterm at a position and substitution at a position are defined by recursion. Two terms $s$ and $t$ are said to *coincide* in a set of positions $P \subseteq \pos{s} \cap \pos{t}$ if $s(\pi) = t(\pi)$ for all $\pi \in P$. A position is also called an *occurrence* if the focus lies on the subterm at that position rather than the position itself. Two positions $\pi_1, \pi_2$ are called *disjoint* if neither $\pi_1 \le \pi_2$ nor $\pi_2 \le \pi_1$. A *context* is a “term with holes”, which are represented by a distinguished variable $\Box$. We write $\Cxt{b}{,\dots,}$ for a context with at least one occurrence of $\Box$, and $\Cxt{a}{,\dots,}$ for a context with zero or more occurrences of $\Box$.
$\Cxt{b}{t_1, \dots, t_n}$ denotes the result of replacing the occurrences of $\Box$ in $C$ (from left to right) by $t_1,\dots,t_n$. $\Cxt{a}{t_1, \dots, t_n}$ is defined accordingly. A *substitution* $\sigma$ is a mapping from $\calV$ to $\iterms$. Its *domain*, denoted $\dom{\sigma}$, is the set $\setcom{x \in \calV}{\sigma(x) \neq x}$ of variables not mapped to themselves by $\sigma$. Substitutions are uniquely extended to functions from $\iterms$ to $\iterms$: $\sigma(f (t_1, \dots, t_n )) = f (\sigma(t_1 ), \dots ,\sigma(t_n ))$ for $f \in \Sigma^{(n)}$ and $t_1, \dots, t_n \in \iterms$. Instead of $\sigma(s)$, we shall also write $s\sigma$. On $\iterms$ a similarity measure $\similar{\cdot}{\cdot} \in \nat \cup \set{\infty}$ can be defined by setting $$\similar{s}{t} = \min\left(\setcom{\len{\pi}}{\pi \in \pos{s}\cap\pos{t}, s(\pi) \neq t(\pi)} \cup \set{\infty}\right) \qquad \text{for } s,t\in \iterms$$ That is, $\similar{s}{t}$ is the minimal depth at which $s$ and $t$ differ, or $\infty$ if $s = t$. Based on this, a distance function $\dd$ can be defined by $\dd(s,t) = 2^{-\similar{s}{t}}$, where we interpret $2^{-\infty}$ as $0$. Note that $0\le\dd(s,t)\le 1$. In particular, $\dd(s,t) = 0$ iff $s = t$, and $\dd(s,t) = 1$ iff $s$ and $t$ differ at the root. The pair $(\iterms, \dd)$ is known to form a complete ultrametric space [@arnold80fi]. *Partial terms*, i.e. terms over signature $\Sigma_\bot = \Sigma \uplus \set{\bot}$ with $\bot$ a fresh constant symbol, can be endowed with a binary relation $\lebot$ by defining $s \lebot t$ iff $s$ can be obtained from $t$ by replacing some subterm occurrences in $t$ by $\bot$. Interpreting the term $\bot$ as denoting “undefined”, $\lebot$ can be read as “is less defined than”. The pair $(\ipterms,\lebot)$ is known to form a complete semilattice [@goguen77jacm].
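For finite terms, the similarity measure and the induced distance are directly computable. The following Python sketch (an illustration under an assumed encoding: terms as nested tuples `('f', t0, ...)`, variables as plain strings) implements $\similar{s}{t}$ and $\dd(s,t) = 2^{-\similar{s}{t}}$:

```python
# Finite terms encoded as nested tuples ('f', t0, ..., t_{k-1}); variables as strings.
def sim(s, t):
    # minimal depth at which s and t differ; infinity if s = t
    if s == t:
        return float('inf')
    if isinstance(s, str) or isinstance(t, str) or s[0] != t[0] or len(s) != len(t):
        return 0                       # the terms already differ at the root
    return 1 + min(sim(a, b) for a, b in zip(s[1:], t[1:]))

def d(s, t):
    # d(s, t) = 2^(-sim(s, t)), reading 2^(-infinity) as 0
    return 0.0 if s == t else 2.0 ** -sim(s, t)

s = ('f', ('g', ('a',)), ('b',))       # f(g(a), b)
t = ('f', ('g', ('b',)), ('b',))       # f(g(b), b)
print(sim(s, t), d(s, t))              # 2 0.25: the terms differ first at depth 2
```

On such samples one can also check the strong triangle inequality $\dd(s,u) \le \max \set{\dd(s,t),\dd(t,u)}$ that makes $\dd$ an ultrametric.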
For a partial term $t\in \ipterms$ we use the notation $\posNonBot{t}$ and $\posFun{t}$ for the set $\setcom{\pi\in \pos{t}}{t(\pi) \neq \bot}$ of non-$\bot$ positions resp. the set $\setcom{\pi\in\pos{t}}{t(\pi) \in \Sigma}$ of positions of function symbols. With this, $\lebot$ can be characterised alternatively by $s \lebot t$ iff $s(\pi) = t(\pi)$ for all $\pi \in \posNonBot{s}$. To explicitly distinguish them from partial terms, we call terms in $\iterms$ *total*. Term Rewriting Systems {#sec:abstr-reduct-syst} ---------------------- A *term rewriting system* (TRS) $\calR$ is a pair $(\Sigma, R)$ consisting of a signature $\Sigma$ and a set $R$ of *term rewrite rules* of the form $l \to r$ with $l \in \iterms \setminus \calV$ and $r \in \iterms$ such that all variables occurring in $r$ also occur in $l$. Note that this notion of a TRS deviates slightly from the standard notion of TRSs in the literature on infinitary rewriting [@kennaway03book] in that it allows infinite terms on the left-hand side of rewrite rules! This generalisation will be necessary to accommodate Böhm extensions, which are introduced later in Section \[sec:mean-terms-bohm\]. TRSs having only finite left-hand sides are called *left-finite*. As in the finitary setting, every TRS $\calR$ defines a rewrite relation $\to[\calR]$: $$s \to[\calR] t \iff \exists \pi \in \pos{s}, l\to r \in R, \sigma\colon\; \atPos{s}{\pi} = l\sigma, t = \substAtPos{s}{\pi}{r\sigma}$$ Instead of $s \to[\calR] t$, we sometimes write $s \to[\pi,\rho] t$ in order to indicate the applied rule $\rho$ and the position $\pi$, or simply $s \to t$. The subterm $\atPos{s}{\pi}$ is called a *$\rho$-redex* or simply *redex*, $r\sigma$ its *contractum*, and $\atPos{s}{\pi}$ is said to be *contracted* to $r\sigma$. Let $\rho\fcolon l \to r$ be a term rewrite rule. The *pattern* of $\rho$ is the context $l\sigma$, where $\sigma$ is the substitution $\setcom{x \mapsto \Box}{x \in \calV}$ that maps all variables to $\Box$. 
If $t$ is a $\rho$-redex, then the pattern $P$ of $\rho$ is also called the *redex pattern* of $t$ w.r.t. $\rho$. When referring to the occurrences in a pattern, occurrences of the symbol $\Box$ are neglected. Let $\rho_1\fcolon l_1 \to r_1$, $\rho_2\fcolon l_2 \to r_2$ be rules in a TRS $\calR$. The rules $\rho_1,\rho_2$ are said to *overlap* if there is a non-variable position $\pi$ in $l_1$ such that $\atPos{l_1}{\pi}$ and $l_2$ are unifiable and $\pi$ is not the root position $\emptyseq$ in case $\rho_1,\rho_2$ are renamed copies of the same rule. A TRS is called *non-overlapping* if none of its rules overlap. A term $t$ is called *linear* if each variable occurs at most once in $t$. The TRS $\calR$ is called *left-linear* if the left-hand side of every rule in $\calR$ is linear. It is called *orthogonal* if it is left-linear and non-overlapping. Metric Infinitary Term Rewriting {#sec:metr-infin-term} ================================ In this section we briefly recall the metric model of infinitary term rewriting [@kennaway95ic] and some of its properties. We will use the metric model in two ways: Firstly, it will serve as a yardstick to compare the partial order model to. But most importantly, we will use known results for metric infinitary rewriting and transfer them to the partial order model. In order to accomplish the latter, we shall develop correspondence theorems (Theorem \[thr:weakExt\] and Theorem \[thr:strongExt\]) that relate convergence in the metric model and convergence in the partial order model. Specifically, these correspondence results show that the two notions of convergence coincide if we restrict ourselves to total terms. At first we have to make clear what a *reduction* in our setting of infinitary rewriting is: \[def:red\] Let $\calR$ be a TRS. A *reduction step* $\phi$ in $\calR$ is a tuple $(s,\pi,\rho,t)$ such that $s \to[\pi,\rho] t$; we also write $\phi\fcolon s \to[\pi,\rho] t$. 
A *reduction* $S$ in $\calR$ is a sequence $(\phi_\iota)_{\iota < \alpha}$ of reduction steps in $\calR$ such that there is a sequence $(t_\iota)_{\iota < \wsuc\alpha}$ of terms, with $\wsuc\alpha = \alpha$ if $S$ is open and $\wsuc\alpha = \alpha + 1$ if $S$ is closed, such that $\phi_\iota\fcolon t_\iota \to t_{\iota+1}$. If $S$ is finite, we write $S\fcolon t_0 \fto{*} t_\alpha$. This definition of reductions is a straightforward generalisation of finite reductions. As an example consider the TRS with the single rule $a \to f(a)$. In this system we get a reduction $S\fcolon a \fto{*} f(f(f(a)))$ of length $3$: $$S = \seq{\phi_0\fcolon a \to f(a),\phi_1\fcolon f(a) \to f(f(a)),\phi_2\fcolon f(f(a)) \to f(f(f(a)))}$$ In a more concise notation we write $$S\fcolon a \to f(a) \to f(f(a)) \to f(f(f(a)))$$ Clearly, we can extend this reduction arbitrarily often which results in the following infinite reduction of length $\omega$: $$T\fcolon a \to f(a) \to f^2(a) \to f^3(a) \to f^4(a) \to \dots$$ However, this is as far as we can go with this simple definition of reductions. As soon as we go beyond $\omega$, we get reductions which do not make sense. For example, consider the following reduction: $$T\concat S\fcolon a \to f(a) \to f^2(a) \to f^3(a) \to f^4(a) \to \dots\; a \to f(a) \to f(f(a)) \to f(f(f(a)))$$ The reduction $T$ of length $\omega$ can be extended by an arbitrary reduction, e.g. by the reduction $S$. The notion of reductions according to Definition \[def:red\] is only meaningful if restricted to reductions of length at most $\omega$. The problem is that the $\omega$-th step in the reduction, viz. the second step of the form $a \to f(a)$ in the example above, is completely independent of all previous steps since it does not have an immediate predecessor. This issue occurs at each limit ordinal number. An appropriate definition of a reduction of length beyond $\omega$ requires a notion of continuity to bridge the gaps that arise at limit ordinals.
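Finite reductions such as $S$ can be replayed mechanically. The following Python sketch (an illustration under an assumed tuple encoding of terms; the function names are our own) implements matching and single-step contraction and reproduces the reduction $S\fcolon a \to f(a) \to f(f(a)) \to f(f(f(a)))$:

```python
def subterm(t, pi):                    # t|_pi
    for i in pi:
        t = t[i + 1]
    return t

def replace(t, pi, s):                 # t[s]_pi: replace the subterm of t at pi by s
    if pi == ():
        return s
    i = pi[0]
    return t[:i + 1] + (replace(t[i + 1], pi[1:], s),) + t[i + 2:]

def match(l, t, sub=None):
    # syntactic matching of a left-linear pattern l against t; returns a
    # substitution (variable -> term) on success and None on failure
    sub = {} if sub is None else sub
    if isinstance(l, str):             # a variable matches anything
        sub[l] = t
        return sub
    if isinstance(t, str) or t[0] != l[0] or len(t) != len(l):
        return None
    for lc, tc in zip(l[1:], t[1:]):
        if match(lc, tc, sub) is None:
            return None
    return sub

def apply_sub(t, sub):
    if isinstance(t, str):
        return sub.get(t, t)
    return (t[0],) + tuple(apply_sub(c, sub) for c in t[1:])

def step(t, pi, rule):
    # contract the redex at position pi using rule (l, r), if it matches
    l, r = rule
    sub = match(l, subterm(t, pi))
    return None if sub is None else replace(t, pi, apply_sub(r, sub))

rule = (('a',), ('f', ('a',)))         # the rule a -> f(a)
t, pi = ('a',), ()
for _ in range(3):
    t = step(t, pi, rule)
    pi += (0,)                         # the a-redex moves one level deeper each step
    print(t)
```

The last line printed is `('f', ('f', ('f', ('a',))))`, the encoding of $f(f(f(a)))$.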
In the next section we will present the well-known metric approach to this. Later in Section \[sec:part-order-infin\], we will introduce a novel approach using partial orders.

Metric Convergence {#sec:metric-convergence}
------------------

In this section we consider two notions of *convergence* based on the metric on terms as defined in Section \[sec:terms\]. We consider both the weak [@dershowitz91tcs] and the strong [@kennaway95ic] variant known from the literature. Related to this notion of convergence is a corresponding notion of *continuity*. In order to distinguish both from the partial order model that we will introduce in Section \[sec:part-order-infin\] we will use the names *weak* resp. *strong $\mrs$-convergence* and *weak* resp. *strong $\mrs$-continuity*. It is important to understand that a reduction is a *sequence of reduction steps* rather than just a sequence of terms. This is crucial for a proper definition of strong convergence resp. continuity, which depends not only on the sequence of terms that are derived within the reduction but also on the positions where the contractions take place: Let $\calR$ be a TRS and $S = (\phi_\iota\fcolon t_\iota \to[\pi_\iota] t_{\iota+1})_{\iota < \alpha}$ a non-empty reduction in $\calR$. The reduction $S$ is called

1. *weakly $\mrs$-continuous* in $\calR$, written $S\fcolon t_0 \mwacont[\calR]$, if $\lim_{\iota \limto \lambda} t_\iota = t_\lambda$ for each limit ordinal $\lambda < \alpha$.

2. *strongly $\mrs$-continuous* in $\calR$, written $S\fcolon t_0 \macont[\calR]$, if it is weakly $\mrs$-continuous and for each limit ordinal $\lambda < \alpha$, the sequence $(\len{\pi_\iota})_{\iota < \lambda}$ of contraction depths tends to infinity.

3. *weakly $\mrs$-converging* to $t$ in $\calR$, written $S\fcolon t_0 \mwato[\calR] t$, if it is weakly $\mrs$-continuous and $t = \lim_{\iota \limto \wsuc\alpha} t_\iota$.

4.
*strongly $\mrs$-converging* to $t$ in $\calR$, written $S\fcolon t_0 \mato[\calR] t$, if it is strongly $\mrs$-continuous, weakly $\mrs$-converges to $t$ and, in case that $S$ is open, $(\len{\pi_\iota})_{\iota<\alpha}$ tends to infinity.

Whenever $S\fcolon t_0 \mwato[\calR] t$ or $S\fcolon t_0 \mato[\calR] t$, we say that $t$ is weakly resp. strongly *$\mrs$-reachable* from $t_0$ in $\calR$. By abuse of notation we use $\mwato[\calR]$ and $\mato[\calR]$ as binary relations to indicate weak resp. strong $\mrs$-reachability. In order to indicate the length of $S$ and the TRS $\calR$, we write $S\fcolon t_0 \mwto{\alpha}[\calR] t$ resp. $S\fcolon t_0 \mto{\alpha}[\calR] t$. The empty reduction $\emptyseq$ is considered weakly/strongly $\mrs$-continuous and $\mrs$-convergent for any identical start and end term, i.e. $\emptyseq\fcolon t\mato[\calR] t$ for all $t \in \iterms$. From the above definition it is clear that strong $\mrs$-convergence implies both weak $\mrs$-convergence and strong $\mrs$-continuity, and that each of these in turn implies weak $\mrs$-continuity. This is indicated in Figure \[fig:contConvRel\].

[Figure \[fig:contConvRel\]: strong $\mrs$-convergence implies both weak $\mrs$-convergence and strong $\mrs$-continuity, each of which implies weak $\mrs$-continuity.]

It is important to recognise that $\mrs$-convergence implies $\mrs$-continuity. Hence, only meaningful, i.e. $\mrs$-continuous, reductions can be $\mrs$-convergent. For a reduction to be weakly $\mrs$-continuous, each open *proper* prefix of the underlying sequence $(t_\iota)_{\iota<\wsuc\alpha}$ of terms must converge to the next term in the sequence – or, equivalently, $(t_\iota)_{\iota<\wsuc\alpha}$ must be continuous.
For strong $\mrs$-continuity, additionally, the depth at which contractions take place has to tend to infinity for each of the reduction’s open proper prefixes. The convergence properties differ from the continuity properties only in that they require the above conditions to hold for *all* open prefixes, i.e. including the whole reduction itself unless it is closed. For example, considering the rule $a \to f(a)$, the reduction $g(a) \to g(f(a)) \to g(f(f(a))) \to \dots$ strongly $\mrs$-converges to the infinite term $g(f^\omega)$. The first step takes place at depth $1$, the next step at depth $2$ and so on. Having the rule $g(x) \to g(f(x))$ instead, the reduction $g(a) \to g(f(a)) \to g(f(f(a))) \to \dots$ is trivially strongly $\mrs$-continuous but no longer strongly $\mrs$-convergent since every step in this reduction takes place at depth $0$, i.e. the sequence of reduction depths does not tend to infinity. However, the reduction still weakly $\mrs$-converges to $g(f^\omega)$. In contrast to the strong notions of continuity and convergence, the corresponding weak variants are independent of the rules that are applied during the reduction. What makes strong $\mrs$-convergence (and -continuity) strong is the fact that it employs a conservative overapproximation of the differences between consecutive terms in the reduction. For weak $\mrs$-convergence the distance $\dd(t_\iota,t_{\iota+1})$ between consecutive terms in a reduction $(t_\iota \to_{\pi_\iota} t_{\iota+1})_{\iota<\lambda}$ has to tend to $0$. For strong $\mrs$-convergence the depth $\len{\pi_\iota}$ of the reduction steps has to tend to infinity. In other words, $2^{-\len{\pi_\iota}}$ has to tend to $0$. Note that $2^{-\len{\pi_\iota}}$ is a conservative overapproximation of $\dd(t_\iota,t_{\iota+1})$, i.e. $2^{-\len{\pi_\iota}} \ge \dd(t_\iota,t_{\iota+1})$. So strong $\mrs$-convergence is simply weak $\mrs$-convergence w.r.t. this overapproximation of $\dd$ [@bahr10rta].
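The difference between the overapproximation $2^{-\len{\pi_\iota}}$ and the actual distance can be observed on the example above: the step from $g(a)$ to $g(f(a))$ arises both from the rule $a \to f(a)$ at position $\seq{0}$ and from the rule $g(x) \to g(f(x))$ at the root, yet the bound is precise only in the first case. A small Python check (an illustration only, with terms encoded as nested tuples):

```python
def sim(s, t):
    # minimal depth at which the finite terms s and t (nested tuples) differ
    if s == t:
        return float('inf')
    if isinstance(s, str) or isinstance(t, str) or s[0] != t[0] or len(s) != len(t):
        return 0
    return 1 + min(sim(a, b) for a, b in zip(s[1:], t[1:]))

def d(s, t):
    return 0.0 if s == t else 2.0 ** -sim(s, t)

t, t_next = ('g', ('a',)), ('g', ('f', ('a',)))
print(d(t, t_next))       # 0.5: the actual distance between g(a) and g(f(a))
print(2.0 ** -len((0,)))  # 0.5: the bound is precise for a -> f(a) at position (0,)
print(2.0 ** -len(()))    # 1.0: a strict overapproximation for g(x) -> g(f(x)) at the root
```

The same pair of terms thus yields different depth bounds depending on the contracted position, which is exactly why strong $\mrs$-convergence is a property of reductions, not of term sequences.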
If this approximation is actually precise, i.e. coincides with the actual value, both notions of $\mrs$-convergence coincide. \[rem:mcont\] The notion of $\mrs$-continuity can be defined solely in terms of $\mrs$-convergence [@bahr10rta]. More precisely, we have for each reduction $S = (t_\iota \to t_{\iota+1})_{\iota < \alpha}$ that $S$ is weakly $\mrs$-continuous iff every (open) proper prefix $\prefix{S}{\beta}$ of $S$ weakly $\mrs$-converges to $t_\beta$. Analogously, strong $\mrs$-continuity can be characterised in terms of strong $\mrs$-convergence. An easy consequence of this is that $\mrs$-converging reductions are closed under concatenation, i.e. $S\fcolon s \mwato t$, $T\fcolon t \mwato u$ implies $S\concat T \fcolon s \mwato u$ and likewise for strong $\mrs$-convergence. For the most part our focus in this paper is set on strong $\mrs$-convergence and its partial order correspondent that we will introduce in Section \[sec:part-order-infin\]. Weak $\mrs$-convergence is well-known to be rather unruly [@simonsen04ipl]. Strong convergence is much better behaved [@kennaway95ic]. Most prominently, we have the following Compression Lemma [@kennaway95ic], which in general does not hold for weak $\mrs$-convergence: \[thr:mrsCompr\] For each left-linear, left-finite TRS, $s \mato t$ implies $s \mto{\le \omega} t$. As an easy corollary we obtain that the final term of a strongly $\mrs$-converging reduction can be approximated arbitrarily accurately by a finite reduction: \[cor:mrsFinApprox\] Let $\calR$ be a left-linear, left-finite TRS and $s \mato t$. Then, for each depth $d \in \nat$, there is a finite reduction $s \fto{*} t'$ such that $t$ and $t'$ coincide up to depth $d$, i.e. $\dd(t,t') < 2^{-d}$. Assume $s \mato t$. By Theorem \[thr:mrsCompr\], there is a reduction $S\fcolon s \mto{\le \omega} t$. If $S$ is of finite length, then we are done.
If $S\fcolon s \mto{\omega} t$, then, by strong $\mrs$-convergence, there is some $n < \omega$ such that all reduction steps in $S$ after the first $n$ take place at a depth greater than $d$. Consider $\prefix{S}{n}\fcolon s \fto{*} t'$. It is clear that $t$ and $t'$ coincide up to depth $d$. As a special case of the above corollary, we obtain that $s \mato t$ implies $s \fto{*} t$ whenever $t$ is a finite term. An important difference between $\mrs$-converging reductions and finite reductions is the confluence of orthogonal systems. In contrast to finite reachability, $\mrs$-reachability in orthogonal TRSs – even in its strong variant – does not necessarily have the diamond property, i.e. orthogonal systems are confluent but not infinitarily confluent [@kennaway95ic]: \[ex:mconfl\] Consider the orthogonal TRS consisting of the *collapsing* rules $\rho_1\fcolon f(x) \to x$ and $\rho_2\fcolon g(x) \to x$ and the infinite term $t = g(f(g(f(\dots))))$. We then obtain the reductions $S\fcolon t \mato g^\omega$ and $T\fcolon t \mato f^\omega$ by successively contracting all $\rho_1$- resp. $\rho_2$-redexes. However, there is no term $s$ such that $g^\omega \mato s \mafrom f^\omega$ (or $g^\omega \mwato s \mwafrom f^\omega$) as both $g^\omega$ and $f^\omega$ can only be rewritten to themselves, respectively. In the following section we discuss a method for obtaining an appropriate notion of transfinite reachability based on strong $\mrs$-reachability which actually has the diamond property.

Meaningless Terms and Böhm Trees {#sec:mean-terms-bohm}
--------------------------------

At the end of the previous section we have seen that orthogonal TRSs are in general not infinitarily confluent. However, as Kennaway et al.
[@kennaway95ic] have shown, orthogonal TRSs are infinitarily confluent modulo so-called *hyper-collapsing* terms – in the sense that two forking strongly $\mrs$-converging reductions $t \mato t_1, t \mato t_2$ can always be extended by two strongly $\mrs$-converging reductions $t_1 \mato t_3, t_2 \mato t_3'$ such that the resulting terms $t_3, t_3'$ only differ in the hyper-collapsing subterms they contain. This result was later generalised by Kennaway et al. [@kennaway99jflp] to develop an axiomatic theory of *meaningless terms*. Intuitively, a set of meaningless terms in this setting consists of terms that are deemed meaningless since, from a term rewriting perspective, they cannot be distinguished from one another and do not contribute any information to any computation. Kennaway et al. capture this by a set of axioms that characterise a set of meaningless terms. For orthogonal TRSs, one such set of terms, in fact the least such set, is the set of *root-active* terms [@kennaway99jflp]: Let $\calR$ be a TRS and $t \in \iterms$. Then $t$ is called *root-active* if for each reduction $t \fto{*} t'$, there is a reduction $t' \fto{*} s$ to a redex $s$. The set of all root-active terms of $\calR$ is denoted $\rAct[\calR]$ or simply $\rAct$ if $\calR$ is clear from the context. Intuitively speaking, as the name already suggests, root-active terms are terms that can be contracted at the root arbitrarily often, e.g. the terms $f^\omega$ and $g^\omega$ from Example \[ex:mconfl\]. In this paper we are only interested in this particular set of meaningless terms. So for the sake of brevity we restrict our discussion in this section to the set $\rAct$ instead of the original more general axiomatic treatment by Kennaway et al. [@kennaway99jflp]. Since, denotationally, root-active terms cannot be distinguished from each other it is appropriate to equate them [@kennaway99jflp].
This can be achieved by introducing a new constant symbol $\bot$ and making each root-active term equal to $\bot$. By adding rules which enable rewriting root-active terms to $\bot$, this can be encoded into an existing TRS [@kennaway99jflp]: Let $\calR = (\Sigma, R)$ be a TRS, and $\calU \subseteq \iterms$.

1. A term $t \in \iterms$ is called a *$\bot,\calU$-instance* of a term $s \in \ipterms$ if $t$ can be obtained from $s$ by replacing each occurrence of $\bot$ in $s$ with some term in $\calU$.

2. $\calU_\bot$ is the set of terms in $\ipterms$ that have a $\bot,\calU$-instance in $\calU$.

3. The *Böhm extension* of $\calR$ w.r.t. $\calU$ is the TRS $\calB_{\calR,\calU} = (\Sigma_\bot,R \cup B)$, where $$B = \setcom{t \to \bot}{t \in \calU_\bot\setminus\set{\bot}}$$ We write $s \to[\calU,\bot] t$ for a reduction step using a rule in $B$.

If $\calR$ and $\calU$ are clear from the context, we simply write $\calB$ and $\to[\bot]$ instead of $\calB_{\calR,\calU}$ and $\to[\calU,\bot]$, respectively. A reduction that is strongly $\mrs$-converging in the Böhm extension $\calB$ is called *Böhm-converging*. A term $t$ is called *Böhm-reachable* from $s$ if there is a Böhm-converging reduction from $s$ to $t$. The definition of $\calU_\bot$ is quite subtle and deserves further attention before we move on. According to the definition, a term $t$ is in $\calU_\bot$ iff some term obtained from $t$ by replacing each occurrence of $\bot$ in $t$ with a term from $\calU$ is in $\calU$. More illuminating, however, is the converse view, i.e. how to construct a term in $\calU_\bot$ from a term in $\calU$. First of all, any term in $\calU$ is also in $\calU_\bot$. Secondly, we may obtain a term in $\calU_\bot$ by taking a term $t \in \calU$ and replacing any number of subterms of $t$ that are in $\calU$ by $\bot$. For Böhm extensions, this means that we may contract any term $t \in \calU$ to $\bot$, even if we already contracted some proper subterms of $t$ to $\bot$ before.
It is at this point that we, in fact, need the generality of allowing infinite terms on the left-hand side of rewrite rules: The additional rules of a Böhm extension allow possibly infinite terms $t \in \calU_\bot\setminus\set{\bot}$ on the left-hand side. \[rem:cloSub\] Note that, for orthogonal TRSs, $\rAct$ is closed under substitutions and, hence, so is $\rAct_\bot$ [@kennaway99jflp]. Therefore, whenever $\Cxt{b}{t} \to[\rAct,\bot] \Cxt{b}{\bot}$, we can assume that $t \in \rAct_\bot$. With the additional rules provided by the Böhm extension, we gain infinitary confluence of orthogonal systems: \[thr:bohmCR\] Let $\calR$ be an orthogonal, left-finite TRS. Then the Böhm extension $\calB$ of $\calR$ w.r.t. $\rAct$ is infinitarily confluent, i.e. $s_1 \mafrom[\calB] t \mato[\calB] s_2$ implies $s_1 \mato[\calB] t' \mafrom[\calB] s_2$. The lack of confluence for strongly $\mrs$-converging reductions is resolved in Böhm extensions by allowing (sub-)terms, which were previously not joinable, to be contracted to $\bot$. Returning to Example \[ex:mconfl\], we can see that $g^\omega$ and $f^\omega$ can be rewritten to $\bot$ as both terms are root-active. In fact, w.r.t. Böhm-convergence, every term of an orthogonal TRS has a normal form: \[thr:bohmWn\] Let $\calR$ be an orthogonal, left-finite TRS. Then the Böhm extension $\calB$ of $\calR$ w.r.t. $\rAct$ is infinitarily normalising, i.e. for each term $t$ there is a $\calB$-normal form Böhm-reachable from $t$. This means that each term $t$ of an orthogonal, left-finite TRS $\calR$ has a unique normal form in $\calB_{\calR, \rAct}$. This normal form is called the *Böhm tree* of $t$ (w.r.t. $\rAct$) [@kennaway99jflp]. The rest of this paper is concerned with establishing an alternative to the metric notion of convergence based on the partial order on terms that is equivalent to the Böhm extension approach.
Partial Order Infinitary Rewriting {#sec:part-order-infin} ================================== In this section we introduce an alternative model of infinitary term rewriting which uses the partial order on terms to formalise convergence of transfinite reductions. To this end we will turn to partial terms which, as in the setting of Böhm extensions, have an additional constant symbol $\bot$. The result will be a more fine-grained notion of convergence in which, intuitively speaking, a reduction can be diverging in some positions but at the same time converging in other positions. The “diverging parts” are then indicated by a $\bot$-occurrence in the final term of the reduction: \[ex:prsConv\] Consider the TRS consisting of the rules $h(x) \to h(g(x)), b \to g(b)$ and the term $t = f(h(a),b)$. In this system, we have the reduction $$S\fcolon f(h(a),b) \to f(h(g(a)),b) \to f(h(g(a)),g(b)) \to f(h(g(g(a))),g(b)) \to \dots$$ which alternately contracts the redex in the left and in the right argument of $f$. The reduction $S$ weakly $\mrs$-converges to the term $f(h(g^\omega),g^\omega)$. But it does not *strongly* $\mrs$-converge as the depth at which contractions are performed does not tend to infinity. However, this only happens in the left argument of $f$, not in the right one. Within the partial order model we will still be able to obtain that $S$ weakly converges to $f(h(g^\omega),g^\omega)$ but we will also obtain that it strongly converges to the term $f(\bot,g^\omega)$. That is, we will be able to identify that the reduction $S$ strongly converges except at position $\seq{0}$, the first argument of $f$.
Partial Order Convergence {#sec:part-order-conv} ------------------------- In order to formalise continuity and convergence in terms of the complete semilattice $(\ipterms,\lebot)$ instead of the complete metric space $(\iterms, \dd)$, we move from the limit of the metric space to the limit inferior of the complete semilattice: Let $\calR = (\Sigma,R)$ be a TRS and $S = (\phi_\iota\fcolon t_\iota \to[\pi_\iota] t_{\iota+1})_{\iota<\alpha}$ a non-empty reduction in $\calR_\bot = (\Sigma_\bot,R)$. The reduction $S$ is called 1. *weakly $\prs$-continuous* in $\calR$, written $S\fcolon t_0 \pwacont[\calR]$, if $\liminf_{\iota \limto \lambda} t_\iota = t_\lambda$ for each limit ordinal $\lambda < \alpha$. 2. *strongly $\prs$-continuous* in $\calR$, written $S\fcolon t_0 \pacont[\calR]$, if $\liminf_{\iota \limto \lambda} c_\iota = t_\lambda$ for each limit ordinal $\lambda < \alpha$, where $c_\iota = \substAtPos{t_\iota}{\pi_\iota}{\bot}$. Each $c_\iota$ is called the *context* of the reduction step $\phi_\iota$, which we indicate by writing $\phi_\iota\fcolon t_\iota \to[c_\iota] t_{\iota+1}$. 3. *weakly $\prs$-converging* to $t$ in $\calR$, written $S\fcolon t_0 \pwato[\calR] t$, if it is weakly $\prs$-continuous and $t = \liminf_{\iota \limto \wsuc\alpha} t_\iota$. 4. *strongly $\prs$-converging* to $t$ in $\calR$, written $S\fcolon t_0 \pato[\calR] t$, if it is strongly $\prs$-continuous and either $S$ is closed with $t=t_\alpha$ or $S$ is open with $t = \liminf_{\iota \limto \alpha} c_\iota$. Whenever $S\fcolon t_0 \pwato[\calR] t$ or $S\fcolon t_0 \pato[\calR] t$, we say that $t$ is weakly resp. strongly *$\prs$-reachable* from $t_0$ in $\calR$. By abuse of notation we use $\pwato[\calR]$ and $\pato[\calR]$ as binary relations to indicate weak resp. strong $\prs$-reachability. In order to indicate the length of $S$ and the TRS $\calR$, we write $S\fcolon t_0 \pwto{\alpha}[\calR] t$ resp. $S\fcolon t_0 \pto{\alpha}[\calR] t$.
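As a sanity check of this definition, we can compute the contexts for the reduction $S$ of Example \[ex:prsConv\]. The steps alternate between position $\seq 0$ (the $h$-redex) and positions $\seq 1, \seq{1,0}, \dots$ (the descendants of $b$), so $$c_0 = f(\bot,b),\quad c_1 = f(h(g(a)),\bot),\quad c_2 = f(\bot,g(b)),\quad c_3 = f(h(g(g(a))),g(\bot)),\quad \dots$$ Every tail of this sequence contains contexts with $\bot$ at position $\seq 0$, whereas an ever-growing prefix $g^k$ of the right argument becomes stable. Hence, up to the exact indexing, $$\liminf_{\iota \limto \omega} c_\iota = \Lubbot_{\beta < \omega}\Glbbot_{\beta \le \iota < \omega} c_\iota = \Lubbot_{k < \omega} f(\bot, g^k(\bot)) = f(\bot,g^\omega)$$ which confirms that $S$ strongly $\prs$-converges to $f(\bot,g^\omega)$, as announced in Example \[ex:prsConv\].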
The empty reduction $\emptyseq$ is considered weakly/strongly $\prs$-continuous and $\prs$-convergent for any start and end term, i.e. $\emptyseq\fcolon t\pato[\calR] t$ for all $t \in \terms$. The definitions of weak $\prs$-continuity and weak $\prs$-convergence are straightforward “translations” from the metric setting to the partial order setting replacing the limit $\lim_{\iota \limto \alpha}$ by the limit inferior $\liminf_{\iota \limto \alpha}$. On the other hand, the definitions of the strong counterparts seem a bit different compared to the metric model: Whereas strong $\mrs$-convergence simply adds a side condition regarding the depth $\len{\pi_\iota}$ of the reduction steps, strong $\prs$-convergence is defined in a different way compared to the weak variant. Instead of the terms $t_\iota$ of the reduction, it considers the contexts $c_\iota = \substAtPos{t_\iota}{\pi_\iota}{\bot}$. However, one can surmise some similarity due to the fact that the partial order model of strong convergence indirectly takes into account the position $\pi_\iota$ of each reduction step as well. Moreover, for the sake of understanding the intuition of strong $\prs$-convergence it is better to compare the contexts $c_\iota$ with the glb of two consecutive terms $t_\iota \glbbot t_{\iota +1}$ rather than with the term $t_\iota$ itself. The following proposition allows precisely that. \[prop:liminfOpen\] Let $(a_\iota)_{\iota < \lambda}$ be an open sequence in a complete semilattice. Then it holds that $\liminf_{\iota \limto \lambda} a_\iota = \liminf_{\iota \limto \lambda} (a_\iota \glb a_{\iota+1})$. Let $\ol a = \liminf_{\iota \limto \lambda} a_\iota$ and $\oh a = \liminf_{\iota \limto \lambda} (a_\iota \glb a_{\iota+1})$. Since $a_\iota \glb a_{\iota+1} \le a_\iota$ for each $\iota < \lambda$, we have $\oh a \le \ol a$.
On the other hand, consider the sets $\ol A_\alpha = \setcom{a_\iota}{\alpha \le \iota < \lambda}$ and $\oh A_\alpha = \setcom{a_\iota\glb a_{\iota+1}}{\alpha \le \iota < \lambda}$ for each $\alpha < \lambda$. Of course, we then have $\Glb\ol A_\alpha \le a_\iota$ for all $\alpha\le\iota<\lambda$, and thus also $\Glb\ol A_\alpha \le a_\iota \glb a_{\iota+1}$ for all $\alpha\le\iota<\lambda$. Hence, $\Glb \ol A_\alpha$ is a lower bound of $\oh A_\alpha$ which implies that $\Glb \ol A_\alpha \le \Glb \oh A_\alpha$. Consequently, $\ol a \le \oh a$ and, due to the antisymmetry of $\le$, we can conclude that $\ol a = \oh a$. With this in mind we can replace $\liminf_{\iota \limto \lambda} t_\iota$ in the definition of weak $\prs$-convergence resp. $\prs$-continuity with $\liminf_{\iota \limto \lambda} (t_\iota \glbbot t_{\iota+1})$. From there it is easier to see the intention of moving from $t_\iota \glbbot t_{\iota+1}$ to the context $\substAtPos{t_\iota}{\pi_\iota}{\bot}$ in order to model strong convergence: What makes the notion of strong $\prs$-convergence (and $\prs$-continuity) *strong*, similar to the notion of strong $\mrs$-convergence (resp. $\mrs$-continuity), is the choice of taking the contexts $\substAtPos{t_\iota}{\pi_\iota}{\bot}$ for defining the limit behaviour of reductions instead of the whole terms $t_\iota$. The context $\substAtPos{t_\iota}{\pi_\iota}{\bot}$ provides a conservative underapproximation of the shared structure $t_\iota \glbbot t_{\iota + 1}$ of two consecutive terms $t_\iota$ and $t_{\iota+1}$ in a reduction step $\phi_\iota\fcolon t_\iota \to[\pi_\iota] t_{\iota+1}$. More specifically, we have that $\substAtPos{t_\iota}{\pi_\iota}{\bot} \lebot t_\iota \glbbot t_{\iota + 1}$. That is, as in the metric model of strong convergence, the difference between two consecutive terms is overapproximated by using the position of the reduction step as an indicator. Likewise, strong $\prs$-convergence is simply weak $\prs$-convergence w.r.t.
this underapproximation of $t_\iota \glbbot t_{\iota+1}$ [@bahr10rta]. If this approximation is actually precise, i.e. coincides with the actual value, both notions of $\prs$-convergence coincide. \[rem:pcont\] As for the metric model, also in the partial order model, continuity can be defined solely in terms of convergence [@bahr10rta]. More precisely, we have for each reduction $S = (t_\iota \to t_{\iota+1})_{\iota < \alpha}$ that $S$ is weakly $\prs$-continuous iff every (open) proper prefix $\prefix{S}{\beta}$ of $S$ weakly $\prs$-converges to $t_\beta$. Analogously, strong $\prs$-continuity can be characterised in terms of strong $\prs$-convergence. An easy consequence of this is that $\prs$-converging reductions are closed under concatenation, i.e. $S\fcolon s \pwato t$, $T\fcolon t \pwato u$ implies $S\concat T \fcolon s \pwato u$ and likewise for strong $\prs$-convergence. In order to understand the difference between weak and strong $\prs$-convergence let us look at a simple example: \[ex:weakVsStrong\] Consider the TRS with the single rule $f(x,y) \to f(y,x)$. This rule induces the following reduction: $$S\fcolon f(a,f(g(a),g(b))) \to f(a,f(g(b),g(a))) \to f(a,f(g(a),g(b))) \to\;\dots$$ $S$ simply alternates between the terms $f(a,f(g(a),g(b)))$ and $f(a,f(g(b),g(a)))$ by swapping the arguments of the inner $f$ occurrence. The reduction is depicted in Figure \[fig:pconv\]. The picture illustrates the parts of the terms that remain *unchanged* and those that remain completely *untouched* by the corresponding reduction step by using a lighter resp. a darker shade of grey. The unchanged part corresponds to the glb of the two terms of a reduction step, viz. for the first step $$f(a,f(g(a),g(b))) \glbbot f(a,f(g(b),g(a))) = f(a,f(g(\bot),g(\bot)))$$ By symmetry, the glb of the terms of the second step is the same one. It is depicted in Figure \[fig:pconvWeak\]. Let $(t_i)_{i < \omega}$ be the sequence of terms of the reduction $S$.
By definition, $S$ weakly $\prs$-converges to $\liminf_{i\limto\omega} t_i$. According to Proposition \[prop:liminfOpen\], this is equal to $\liminf_{i\limto\omega} (t_i \glbbot t_{i+1})$. Since $t_i \glbbot t_{i+1}$ is constantly $f(a,f(g(\bot),g(\bot)))$, the reduction sequence weakly $\prs$-converges to $f(a,f(g(\bot),g(\bot)))$. Similarly, the part of the term that remains untouched by the reduction step corresponds to the context. For the first step, this is $f(a,\bot)$. It is depicted in Figure \[fig:pconvStrong\]. By definition, $S$ strongly $\prs$-converges to $\liminf_{i\limto\omega} c_i$ for $(c_i)_{i<\omega}$ the sequence of contexts of $S$. As one can see in Figure \[fig:pconv\], the context constantly remains $f(a,\bot)$. Hence, $S$ strongly $\prs$-converges to $f(a,\bot)$. The example sequence is a particularly simple one as both the glbs $t_i \glbbot t_{i+1}$ and the contexts $c_i$ remain stable. In general, this need not be the case, of course. One can clearly see from the definition that, as for their metric counterparts, weak resp. strong $\prs$-convergence implies weak resp. strong $\prs$-continuity. In contrast to the metric model, however, the converse implication also holds! Since the partial order $\lebot$ on partial terms forms a complete semilattice, the limit inferior is defined for any non-empty sequence of partial terms. Hence, any weakly resp. strongly $\prs$-continuous reduction is also weakly resp. strongly $\prs$-convergent. This is a major difference from $\mrs$-convergence/-continuity. Nevertheless, $\prs$-convergence constitutes a meaningful notion of convergence: The final term of a $\prs$-convergent reduction contains a $\bot$ subterm at each position at which the reduction is “locally diverging” as we have seen in Example \[ex:prsConv\] and Example \[ex:weakVsStrong\]. In fact, as we will show in Section \[sec:comp-mrs-conv\], whenever there are no ’$\bot$’s involved, i.e.
if there is no “local divergence”, $\mrs$-convergence and $\prs$-convergence coincide – both in the weak and the strong variant. Recall that strong $\mrs$-continuity resp. $\mrs$-convergence implies weak $\mrs$-continuity resp. $\mrs$-convergence. This is not the case in the partial order setting. The reason for this is that strong $\prs$-convergence resp. $\prs$-continuity is defined differently compared to its weak variant. It uses the contexts instead of the terms in the reduction, whereas in the metric setting the strong notion of convergence is a mere restriction of the weak counterpart as we have observed earlier. \[ex:strWeakPconv\] Consider the TRS consisting of the rules $\rho_1\fcolon h(x) \to h(g(x)), \rho_2\fcolon f(x) \to g(x)$ and the reductions $$S\fcolon f(h(a)) \to[\rho_1] f(h(g(a))) \to[\rho_1] f(h(g(g(a)))) \to[\rho_1] \dots \quad \text{ and } \quad T\fcolon f(\bot) \to[\rho_2] g(\bot)$$ Then the reduction $$S\concat T\fcolon f(h(a)) \to[\rho_1] f(h(g(a))) \to[\rho_1] f(h(g(g(a)))) \to[\rho_1] \dots \; f(\bot) \to[\rho_2] g(\bot)$$ is clearly both strongly $\prs$-continuous and -convergent. On the other hand it is neither weakly $\prs$-continuous nor -convergent for the simple fact that $S$ does not weakly $\prs$-converge to $f(\bot)$ but to $f(h(g^\omega))$. Nevertheless, by observing that $\liminf_{\iota \limto \alpha} c_\iota \lebot \liminf_{\iota \limto \alpha} t_\iota$ since $c_\iota \lebot t_\iota$ for each $\iota < \alpha$, we obtain the following weaker relation between weak and strong $\prs$-convergence: Let $\calR$ be a left-linear TRS with $s \pato[\calR] t$. Then there is a term $t' \gebot t$ with $s \pwato[\calR] t'$. Let $S = (\phi_\iota\fcolon t_\iota \to[\rho_\iota] t_{\iota+1})_{\iota<\alpha}$ be a reduction strongly $\prs$-converging to $t_\alpha$. 
By induction we construct for each prefix $\prefix{S}{\beta}$ of $S$ a reduction $S'_\beta = (\phi'_\iota\fcolon t'_\iota \to[\rho_\iota] t'_{\iota+1})_{\iota<\beta}$ weakly $\prs$-converging to a term $t'_\beta$ such that $t_\iota \lebot t'_\iota$ for each $\iota \le \beta$. The proposition then follows from the case where $\beta = \alpha$. The case $\beta = 0$ is trivial. If $\beta = \gamma + 1$, then by induction hypothesis we have a reduction $S'_\gamma\fcolon t'_0 \pwato[\calR] t'_\gamma$. Since $t_\gamma \lebot t'_\gamma$ and $t_\gamma$ is a $\rho_\gamma$-redex, also $t'_\gamma$ is a $\rho_\gamma$-redex due to the left-linearity of $\calR$. Hence, there is a reduction step $\phi'_\gamma\fcolon t'_\gamma \to t'_\beta$. One can easily see that then $t_\beta \lebot t'_\beta$. Hence, $S'_\beta = S'_\gamma \concat \seq{\phi'_\gamma}$ satisfies the desired conditions. If $\beta$ is a limit ordinal, we can apply the induction hypothesis to obtain for each $\gamma < \beta$ a reduction $S'_\gamma = (\phi'_\iota\fcolon t'_\iota \to[\rho_\iota] t'_{\iota+1})_{\iota<\gamma}$ that weakly $\prs$-converges to $t'_\gamma \gebot t_\gamma$. Hence, according to Remark \[rem:pcont\], $S'_\beta = (\phi'_\iota\fcolon t'_\iota \to[\rho_\iota] t'_{\iota+1})_{\iota<\beta}$ is weakly $\prs$-continuous. Therefore, we obtain that $S'_\beta$ weakly $\prs$-converges to $t'_\beta = \liminf_{\iota \limto \beta} t'_\iota$. Moreover, since $c_\iota \lebot t_\iota$ and $t_\iota \lebot t'_\iota$ for each $\iota < \beta$, we can conclude that $$t_\beta = \liminf_{\iota \limto \beta} c_\iota \lebot \liminf_{\iota \limto \beta} t_\iota \lebot \liminf_{\iota \limto \beta} t'_\iota = t'_\beta.
\eqno{\qEd}$$ And indeed, returning to Example \[ex:strWeakPconv\], we can see that there is a reduction $$f(h(a)) \to[\rho_1] f(h(g(a))) \to[\rho_1] f(h(g(g(a)))) \to[\rho_1] \dots \; f(h(g^\omega)) \to[\rho_2] g(h(g^\omega))$$ that, starting from $f(h(a))$, weakly $\prs$-converges to $g(h(g^\omega))$ which is strictly larger than $g(\bot)$. A simple example shows that left-linearity is crucial for the above proposition: Let $\calR$ be a TRS consisting of the rules $$\rho_1\fcolon a\to a,\quad \rho_2\fcolon b\to b, \quad \rho_3\fcolon f(x,x)\to c.$$ We then get the strongly $\prs$-converging reduction $$f(a,b) \to[\rho_1] f(a,b) \to[\rho_2] f(a,b) \to[\rho_1] f(a,b) \to[\rho_2]\dots\; f(\bot,\bot) \to[\rho_3] c$$ Yet, there is no reduction in $\calR$ that, starting from $f(a,b)$, weakly $\prs$-converges to $c$. Strong p-Convergence {#sec:strong-prs-conv} -------------------- In this paper we are mainly focused on the strong notion of convergence. To this end, the rest of this section will be concerned exclusively with strong $\prs$-convergence. We will, however, revisit weak $\prs$-convergence in Section \[sec:comp-mrs-conv\] when comparing it to weak $\mrs$-convergence. Note that in the partial order model we have to consider reductions over the extended signature $\Sigma_\bot$, i.e. reductions containing partial terms. Thus, from now on, we assume reductions in a TRS over $\Sigma$ to be implicitly over $\Sigma_\bot$. When we want to make it explicit that a reduction $S$ contains only total terms, we say that $S$ is *total*. When we say that a strongly $\prs$-convergent reduction $S\fcolon s \pato t$ is total, we mean that both the reduction $S$ and the final term $t$ are total.[^1] In order to understand the behaviour of strong $\prs$-convergence, we need to look at what the lub and glb of a set of terms look like. The following two lemmas provide some insight. \[lem:lubbot\] For each $T \subseteq \ipterms$ and $t = \Lubbot T$, the following holds 1.
$\pos{t} = \bigcup_{s\in T} \pos{s}$ \[item:lubbotI\] 2. $t(\pi) = f$ iff there is some $s \in T$ with $s(\pi) = f$ for each $f \in \Sigma\cup\calV$ and each position $\pi$. \[item:lubbotII\] Clause (\[item:lubbotI\]) follows straightforwardly from clause (\[item:lubbotII\]). The “if” direction of (\[item:lubbotII\]) follows from the fact that if $s \in T$, then $s\lebot t$ and, therefore, $s(\pi) = f$ implies $t(\pi) = f$. For the “only if” direction assume that no $s \in T$ satisfies $s(\pi) = f$. Since $s\lebot t$ for each $s\in T$, we have $\pi \nin \posNonBot{s}$ for each $s \in T$. But then $t' = \substAtPos{t}{\pi}{\bot}$ is an upper bound of $T$ with $t' \lbot t$. This contradicts the assumption that $t$ is the least upper bound of $T$. \[lem:glbbot\] Let $T \subseteq \ipterms$ and $P$ a set of positions closed under prefixes such that all terms in $T$ coincide in all positions in $P$, i.e. $s(\pi) = t(\pi)$ for all $\pi \in P$ and $s,t \in T$. Then the glb $\Glbbot T$ also coincides with all terms in $T$ in all positions in $P$. Construct a term $s$ such that it coincides with all terms in $T$ in all positions in $P$ and has $\bot$ at all other positions. That is, given an arbitrary term $t \in T$, we define $s$ as the unique term with $s(\pi) = t(\pi)$ for all $\pi \in P$, and $s(\pi\concat\seq i) = \bot$ for all $\pi \in P$ with $\pi\concat \seq i \in \pos t \setminus P$. Then $s$ is a lower bound of $T$. By construction, $s$ coincides with all terms in $T$ in all positions in $P$. Since $s \lebot \Glbbot T$, this property carries over to $\Glbbot T$. Following the two lemmas above, we can observe that – intuitively speaking – the limit inferior $\liminf_{\iota \limto \alpha} t_\iota$ of a sequence of terms is the term that contains those parts that become *eventually stable* in the sequence. Remaining holes in the term structure are filled with ’$\bot$’s.
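A small illustration of the two lemmas (with an ad-hoc signature containing a binary $f$, a unary $g$ and a constant $a$): $$\Lubbot\set{f(a,\bot),\; f(\bot,g(\bot))} = f(a,g(\bot)), \qquad \Glbbot\set{f(a,\bot),\; f(\bot,g(\bot))} = f(\bot,\bot)$$ The positions of the lub are the union of the positions of the two terms, and each of its non-$\bot$ symbols stems from one of them, as stated in Lemma \[lem:lubbot\]; the glb, on the other hand, retains only the positions on which both terms agree on a non-$\bot$ symbol, here only the root symbol $f$.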
Let us see what this means for strongly $\prs$-converging reductions: \[lem:nonBotLimRed\] Let $\calR = (\Sigma,R)$ be a TRS and $S\fcolon s \pto{\lambda}[\calR] t$ be an open reduction with $S = (t_\iota \to[\pi_\iota,c_\iota] t_{\iota + 1})_{\iota<\lambda}$. Then the following statements are equivalent for all positions $\pi$: 1. $t(\pi)\neq \bot$. \[item:nonBotLimRed1\] 2. there is some $\alpha < \lambda$ such that $c_\iota(\pi) = t(\pi) \neq \bot$ for all $\alpha \le \iota < \lambda$. \[item:nonBotLimRed2\] 3. there is some $\alpha < \lambda$ such that $t_\alpha(\pi) = t(\pi) \neq \bot$ and $\pi_\iota \not\le \pi$ for all $\alpha \le \iota < \lambda$. \[item:nonBotLimRed3\] 4. there is some $\alpha < \lambda$ such that $t_\alpha(\pi)\neq \bot$ and $\pi_\iota \not\le \pi$ for all $\alpha \le \iota < \lambda$. \[item:nonBotLimRed4\] At first consider the implication from (\[item:nonBotLimRed1\]) to (\[item:nonBotLimRed2\]). To this end, let $t(\pi)\neq \bot$ and $s_\gamma = \Glbbot_{\gamma \le \iota < \lambda} c_\iota$ for each $\gamma < \lambda$. Note that then $t = \Lubbot_{\gamma < \lambda} s_\gamma$. Applying Lemma \[lem:lubbot\] yields that there is some $\alpha < \lambda$ such that $s_{\alpha}(\pi) = t(\pi)$. Moreover, for each $\alpha \le \iota < \lambda$, we have $s_\alpha \lebot c_\iota$ and, therefore, $s_\alpha(\pi) = c_\iota(\pi)$. Consequently, we obtain $c_\iota(\pi) = t(\pi)$ for all $\alpha \le \iota < \lambda$. Next consider the implication from (\[item:nonBotLimRed2\]) to (\[item:nonBotLimRed3\]). Let $\alpha < \lambda$ be such that $c_\iota(\pi) = t(\pi) \neq \bot$ for all $\alpha \le \iota < \lambda$. Recall that $c_\iota = \substAtPos{t_\iota}{\pi_\iota}{\bot}$ for all $\iota < \lambda$. Hence, the fact that $c_\iota(\pi)\neq\bot$ for all $\alpha \le \iota < \lambda$ implies that $t_\alpha(\pi) = c_\alpha(\pi)$ and that $\pi_\iota \not\le \pi$ for all $\alpha \le \iota < \lambda$. Since $c_\alpha(\pi) = t(\pi) \neq \bot$, we also have $t_\alpha(\pi) = t(\pi) \neq \bot$. The implication from (\[item:nonBotLimRed3\]) to (\[item:nonBotLimRed4\]) is trivial.
Finally, consider the implication from (\[item:nonBotLimRed4\]) to (\[item:nonBotLimRed1\]). For this purpose, let $\alpha < \lambda$ be such that (1) $\pi \in \posNonBot{t_\alpha}$ and (2) $\pi_\iota \not\le \pi$ for all $\alpha \le \iota < \lambda$. Consider the set $P$ consisting of all positions in $t_\alpha$ that are prefixes of $\pi$. $P$ is obviously closed under prefixes and, because of (2), all terms in the set $T = \setcom{c_\iota}{\alpha\le \iota <\lambda}$ coincide in all positions in $P$. According to Lemma \[lem:glbbot\], also $s_\alpha = \Glbbot T$ coincides with all terms in $T$ in all positions in $P$. Since $\pi \in P$ and $c_\alpha \in T$, we thereby obtain that $c_\alpha(\pi) = s_\alpha(\pi)$. As we also have $t_\alpha(\pi) = c_\alpha(\pi)$, due to (2), and $\pi \in \posNonBot{t_\alpha}$ we can infer that $\pi \in \posNonBot{s_\alpha}$. Since $s_\alpha \lebot t$, we can then conclude $\pi \in \posNonBot{t}$. The above lemma is central for dealing with strongly $\prs$-convergent reductions. It also reveals how the final term of a strongly $\prs$-convergent reduction is constructed. According to the equivalence of (\[item:nonBotLimRed1\]) and (\[item:nonBotLimRed3\]), the final term has the non-$\bot$ symbol $f$ at some position $\pi$ iff some term $t_\alpha$ in the reduction also had this symbol $f$ at this position $\pi$ and no reduction after that term occurred at $\pi$ or above. In this way, the final outcome of a strongly $\prs$-convergent reduction consists of precisely those parts of the intermediate terms which become *eventually persistent* during the reduction, i.e. are from some point on not subjected to contraction any more. Now we turn to a characterisation of the parts that are not included in the final outcome of a strongly $\prs$-convergent reduction, i.e. those that do not become persistent. These parts are either omitted or filled by the placeholder $\bot$.
We will call these positions *volatile*: \[def:volatile\] Let $\calR$ be a TRS and $S = (t_\iota \to[\pi_\iota] t_{\iota + 1})_{\iota < \lambda}$ an open $\prs$-converging reduction in $\calR$. A position $\pi$ is said to be *volatile* in $S$ if, for each ordinal $\beta < \lambda$, there is some $\beta \le \gamma < \lambda$ such that $\pi_\gamma = \pi$. If $\pi$ is volatile in $S$ and no proper prefix of $\pi$ is volatile in $S$, then $\pi$ is called *outermost-volatile*. In Example \[ex:prsConv\] the position $\seq{0}$ is outermost-volatile in the reduction $S$. \[ex:volPos\] Consider the TRS $\calR$ consisting of the rules $$\rho_1\fcolon h(x) \to g(x),\qquad \rho_2\fcolon s(g(x)) \to s(h(s(x)))$$ $\calR$ admits the following reduction $S$ of length $\omega$: $$\begin{aligned} S\fcolon f(s(0),s(h(0))) &\to[\rho_1] f(s(0),s(g(0))) \to[\rho_2] f(s(0),s(h(s(0)))) \\&\to[\rho_1] f(s(0),s(g(s(0)))) \to[\rho_2] f(s(0),s(h(s(s(0))))) \to[\rho_1] \dots \end{aligned}$$ The reduction $S$ $\prs$-converges to $f(s(0),\bot)$, i.e. we have $S\fcolon f(s(0),s(h(0))) \pto\omega[\calR]f(s(0),\bot)$. Figure \[fig:volPos\] illustrates the reduction indicating the position of each reduction step by two circles and a reduction arrow in between. One can clearly see that both $\pi_1 = \seq{1}$ and $\pi_2 = \seq{1,0}$ are volatile in $S$. Again and again reductions take place at $\pi_1$ and $\pi_2$. Since these are the only volatile positions and $\pi_1$ is a prefix of $\pi_2$, we have that $\pi_1$ is an outermost-volatile position in $S$. As we shall see later in Section \[sec:relation-bohm-trees\], volatility is closely related to root-active terms: if a reduction has a volatile position $\pi$, then we find a term in the reduction with a root-active subterm at $\pi$. Conversely, from each root-active term there starts a reduction with volatile position $\emptyseq$ (cf. Proposition \[prop:rootAct\]).
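The simplest instance of the latter direction is given by a TRS with the single rule $a \to a$ (an illustrative system, not one of the examples above): the term $a$ is trivially root-active, every step of the induced reduction takes place at position $\emptyseq$, and every context equals $\bot$, so $$S\fcolon a \to a \to a \to \dots \qquad\text{with } c_\iota = \bot \text{ for all } \iota < \omega, \text{ and hence } S\fcolon a \pto{\omega} \bot$$ Here $\emptyseq$ is outermost-volatile in $S$, and the final term of the reduction is $\bot$.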
This connection between volatility and root-activeness is the cornerstone of the correspondence between $\prs$-convergence and Böhm-convergence that we prove in Section \[sec:relation-bohm-trees\]. The following lemma shows that $\bot$ symbols are produced precisely at outermost-volatile positions in open reductions. \[lem:botLimRed\] Let $S = (t_\iota \to[\pi_\iota] t_{\iota + 1})_{\iota < \alpha}$ be an open reduction $\prs$-converging to $t_\alpha$ in some TRS. Then, for every position $\pi$, we have the following: 1. If $\pi$ is volatile in $S$, then $\pi \nin \posNonBot{t_\alpha}$. \[item:botLimRed1\] 2. $t_\alpha(\pi) = \bot$ iff 1. $\pi$ is outermost-volatile in $S$, or \[item:botLimRed2a\] 2. there is some $\beta < \alpha$ such that $t_\beta(\pi) = \bot$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. \[item:botLimRed2b\] \[item:botLimRed2\] 3. Let $t_\iota$ be total for all $\iota < \alpha$. Then $t_\alpha(\pi) = \bot$ iff $\pi$ is outermost-volatile in $S$. \[item:botLimRed3\] Clause (\[item:botLimRed1\]) follows from Lemma \[lem:nonBotLimRed\], in particular the equivalence of (\[item:nonBotLimRed1\]) and (\[item:nonBotLimRed3\]). At first consider the “only if” direction. To this end, suppose that $t_\alpha(\pi) = \bot$. In order to show that then (\[item:botLimRed2a\]) or (\[item:botLimRed2b\]) holds, we will prove that (\[item:botLimRed2b\]) must hold true whenever (\[item:botLimRed2a\]) does not hold. For this purpose, we assume that $\pi$ is not outermost-volatile in $S$. Note that no proper prefix $\pi'$ of $\pi$ can be volatile in $S$ as this would imply, according to clause (\[item:botLimRed1\]), that $\pi' \nin \posNonBot{t_\alpha}$ and, therefore, $\pi \nin \pos{t_\alpha}$. Hence, $\pi$ is also not volatile in $S$. In sum, no prefix of $\pi$ is volatile in $S$. Consequently, there is an upper bound $\beta < \alpha$ on the indices of reduction steps taking place at $\pi$ or above. But then $t_\beta(\pi) = \bot$ since otherwise Lemma \[lem:nonBotLimRed\] would imply that $t_\alpha(\pi) \neq \bot$. This shows that (\[item:botLimRed2b\]) holds.
For the converse direction, we will show that both (\[item:botLimRed2a\]) and (\[item:botLimRed2b\]) independently imply that $t_\alpha(\pi) = \bot$: Let $\pi$ be outermost-volatile in $S$. By clause (\[item:botLimRed1\]), this implies $\pi\nin\posNonBot{t_\alpha}$. Hence, it remains to be shown that $\pi \in \pos{t_\alpha}$. If $\pi = \emptyseq$, then this is trivial. Otherwise, $\pi$ is of the form $\pi'\concat i$. Since all proper prefixes of $\pi$ are not volatile, there is some $\beta < \alpha$ such that $\pi_\beta = \pi$ and $\pi_\iota \not\le \pi'$ for all $\beta \le \iota < \alpha$. This implies that $\pi \in \pos{t_\beta}$. Hence, $t_\beta(\pi') = f$ is a symbol having an arity of at least $i+1$. Consequently, according to Lemma \[lem:nonBotLimRed\], also $t_\alpha(\pi') = f$. Since $f$’s arity is at least $i+1$, also $\pi = \pi'\concat i \in \pos{t_\alpha}$. Now let $\beta < \alpha$ be such that $t_\beta(\pi) = \bot$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. According to Proposition \[prop:liminfSuffix\], we have that $t_\alpha = \Lubbot_{\beta\le \gamma < \alpha} \Glbbot_{\gamma \le \iota < \alpha} c_\iota$. Define $s_\gamma = \Glbbot_{\gamma \le \iota < \alpha} c_\iota$ for each $\gamma < \alpha$. Since from $\beta$ onwards no reduction takes place at $\pi$ or above, it holds that all $c_\iota$, for $\beta \le \iota < \alpha$, coincide in all prefixes of $\pi$. By Lemma \[lem:glbbot\], this also holds for all $s_\iota$ and $c_\iota$ with $\beta \le \iota < \alpha$. Since $c_\beta(\pi) = t_\beta(\pi) = \bot$, this means that $s_\iota(\pi) = \bot$ for all $\beta \le \iota < \alpha$. Recall that $t_\alpha = \Lubbot_{\beta\le \gamma < \alpha} s_\gamma$. Hence, according to Lemma \[lem:lubbot\], we can conclude that $t_\alpha(\pi) = \bot$. Clause (\[item:botLimRed3\]) is a special case of clause (\[item:botLimRed2\]): If each $t_\iota$, $\iota < \alpha$, is total, then (\[item:botLimRed2b\]) cannot be true.
Clause (\[item:botLimRed2\]) shows that a $\bot$ subterm in the final term can only have its origin either in a preceding term that already contains this $\bot$, which then becomes stable, or in an outermost-volatile position. That is, it is exactly the outermost-volatile positions that generate ’$\bot$’s. We can apply this lemma to Example \[ex:volPos\]: As we have seen, the position $\pi_1 = \seq{1}$ is outermost-volatile in the reduction $S$ mentioned in the example. Hence, $S$ strongly $\prs$-converges to a term that has, according to Lemma \[lem:botLimRed\], the symbol $\bot$ at position $\pi_1$. That is, $S$ strongly $\prs$-converges to $f(s(0),\bot)$. This characterisation of the final outcome of a $\prs$-converging reduction clearly shows that the partial order model captures the intuition of strong convergence in transfinite reductions even though it allows that every continuous reduction is also convergent: The final outcome only represents the parts of the reduction that *are* converging. Locally diverging parts are cut off and replaced by $\bot$. In fact, the absence of such local divergence, or volatility, as we call it here, is equivalent to the absence of $\bot$: \[lem:totalRed\] Let $\calR$ be a TRS, $s$ a total term in $\calR$, and $S\fcolon s \pato[\calR] t$. Then $S$ is total iff no prefix of $S$ has a volatile position. The “only if” direction follows straightforwardly from Lemma \[lem:botLimRed\]. We prove the “if” direction by induction on the length of $S$. If $\len{S} = 0$, then the totality of $S$ follows from the assumption of $s$ being total. If $\len{S}$ is a successor ordinal, then the totality of $S$ follows from the induction hypothesis since single reduction steps preserve totality. If $\len{S}$ is a limit ordinal, then the totality of $S$ follows from the induction hypothesis using Lemma \[lem:botLimRed\]. Moreover, as we shall show in the next section, if local divergences are excluded, i.e.
if total reductions are considered, both the metric model and the partial order model coincide. Comparing m-Convergence and p-Convergence {#sec:comp-mrs-conv} ========================================= In this section we want to compare the metric and the partial order model of convergence. In particular, we shall show that the partial order model is only a conservative extension of the metric model: If we only consider total reductions, i.e. reductions over terms in $\iterms$, then $\mrs$-convergence and $\prs$-convergence coincide both in their weak and strong variant. The first and rather trivial observation to this effect is that already on the level of single reduction steps the partial order model conservatively extends the metric model: \[fact:step\] Let $\calR = (\Sigma,R)$ be a TRS, $\calR_\bot = (\Sigma_\bot, R)$, and $s, t \in \ipterms$. Then we have $$s \to[\calR,\pi] t \quad \text{ iff } \quad s \to[\calR_\bot,\pi] t \text{ and } s \text{ is total}.$$ The next step is to establish that the underlying structures that are used to formalise convergence exhibit this behaviour as well. That is, the limit inferior in the complete semilattice $(\ipterms,\lebot)$ is a conservative extension of the limit in the complete metric space $(\iterms,\dd)$. More precisely, we want to have that for a sequence $(t_\iota)_{\iota<\alpha}$ in $\iterms$ $$\begin{gathered} \liminf_{\iota \limto \alpha} t_\iota = \lim_{\iota \limto \alpha} t_\iota \qquad \text{ whenever}\quad \begin{aligned} &\lim_{\iota \limto \alpha} t_\iota \text{ is defined, or}\\ &\liminf_{\iota \limto \alpha} t_\iota \text{ is a total term.} \end{aligned}\end{gathered}$$ Note that, as a corollary, the above property implies that $\lim_{\iota \limto \alpha} t_\iota$ is defined iff $\liminf_{\iota \limto \alpha} t_\iota$ is a total term. In Section \[sec:limit-infer-cons\] we shall establish the above property.
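Both directions of this property can be observed on two simple sequences, chosen here only for illustration (with $g$ unary and $a, b$ distinct constants): the sequence $(g^n(a))_{n<\omega}$ converges in the metric space and its limit inferior agrees with the metric limit, whereas the alternating sequence $a, b, a, b, \dots$ has no metric limit and its limit inferior is the non-total term $\bot$: $$\lim_{n \limto \omega} g^n(a) = \liminf_{n \limto \omega} g^n(a) = g^\omega, \qquad \liminf_{n \limto \omega} t_n = a \glbbot b = \bot \quad\text{for } t_{2n} = a,\; t_{2n+1} = b.$$ In the first case each tail has greatest lower bound $g^\beta(\bot)$, whose least upper bound over all $\beta$ is $g^\omega$; in the second case each tail contains both $a$ and $b$, so every tail has greatest lower bound $\bot$.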
This result is then used in Section \[sec:prs-conv-cons\] in order to show the desired property that $\prs$-convergence is a conservative extension of $\mrs$-convergence in both their respective weak and strong variant. Complete Semilattice vs. Complete Metric Space {#sec:limit-infer-cons} ---------------------------------------------- In order to compare the complete semilattice of partial terms with the complete metric space of total terms, it is convenient to have an alternative characterisation of the similarity $\similar{s}{t}$ of two terms $s,t$, which in turn provides an alternative characterisation of the metric $\dd$ on terms. To this end we use the *truncation* of a term at a certain depth. This notion was originally used by Arnold and Nivat [@arnold80fi] to show that $\dd$ is a complete ultrametric on terms: \[def:trunc\] Let $d \in \nat \cup \set{\infty}$ and $t \in \ipterms$. The *truncation* $\trunc{t}{d}$ of $t$ at depth $d$ is defined inductively on $d$ as follows $$\begin{aligned} \trunc{t}{0} &= \bot \hspace{40pt} \trunc{t}{\infty} = t \\ \trunc{t}{d + 1} &= \begin{cases} t &\text{ if } t \in \calV\\ f(\trunc{t_1}{d},\dots,\trunc{t_k}{d}) &\text{ if } t = f(t_1,\dots, t_k) \end{cases} \end{aligned}$$ More concisely we can say that the truncation of a term $t$ at depth $d$ replaces all subterms at depth $d$ with $\bot$. From this we can easily establish the following properties of the truncation: \[prop:trunc\] For all $s,t \in \ipterms$ we have 1. $\trunc{t}{d} \lebot t$ for all $d\in \nat\cup \set\infty$. 2. $\trunc{s}{d} \lebot t$ implies $\trunc{s}{d} = \trunc{t}{d}$ for all $d\in \nat\cup \set\infty$ given $s$ is total. 3. $\trunc{s}{d} = \trunc{t}{d}$ for all $d\in \nat$ iff $s = t$. Straightforward. Recall that the similarity of two terms is the minimal depth at which they differ resp. $\infty$ if they are equal.
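To make these notions concrete, the following sketch implements truncation, similarity, and the induced metric on finite terms. The encoding is our own and not part of the formal development: terms are nested Python tuples `('f', t1, ..., tk)`, variables are bare strings, `None` plays the role of $\bot$, `float('inf')` stands in for $\infty$, and the metric is the standard choice $2^{-\similar{s}{t}}$.

```python
# Finite (partial) terms as nested tuples ('f', t1, ..., tk);
# variables are bare strings, and None plays the role of the symbol ⊥.
BOT = None

def truncate(t, d):
    """Truncation t|d: replace every subterm at depth d by ⊥."""
    if d == 0:
        return BOT
    if t is BOT or isinstance(t, str):       # ⊥ or a variable
        return t
    return (t[0],) + tuple(truncate(a, d - 1) for a in t[1:])

def similarity(s, t):
    """sim(s, t): the minimal depth at which s and t differ (inf if s = t)."""
    if s == t:
        return float('inf')
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return 1 + min(similarity(a, b) for a, b in zip(s[1:], t[1:]))
    return 0                                 # the root symbols already differ

def dist(s, t):
    """The term metric: 0 if s = t, and 2^(-sim(s, t)) otherwise."""
    return 0.0 if s == t else 2.0 ** -similarity(s, t)

# f(s(0), 0) and f(s(0), s(0)) first differ at depth 1: sim = 1, distance 1/2.
s = ('f', ('s', ('0',)), ('0',))
t = ('f', ('s', ('0',)), ('s', ('0',)))
```

On this example the truncations of `s` and `t` at depth $1$ coincide while those at depth $2$ differ, which is exactly the truncation-based reading of similarity developed next.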
However, saying that two terms differ at a certain minimal depth $d$ is the same as saying that the truncations of the two terms at that depth $d$ coincide. This provides an alternative characterisation of similarity: \[prop:simTrunc\] For each pair $s,t \in \iterms$ we have $$\similar{s}{t} = \max \setcom{d\in \nat\cup\set\infty}{\trunc{s}{d} = \trunc{t}{d}}$$ Straightforward. We can use this characterisation to show the first part of the compatibility of the metric and the partial order: \[lem:limLiminf\] Let $(t_\iota)_{\iota < \alpha}$ be a convergent sequence in $(\iterms,\dd)$. Then $\lim_{\iota \limto \alpha} t_\iota = \liminf_{\iota \limto \alpha} t_\iota$. If $\alpha$ is a successor ordinal, this is trivial. Let $\alpha$ be a limit ordinal, $\oh t = \lim_{\iota \limto \alpha} t_\iota$, and $\ol t = \liminf_{\iota \limto \alpha} t_\iota$. Then for each $\epsilon \in \realp$ there is a $\beta < \alpha$ such that $\dd(\oh t, t_\iota) < \epsilon$ for all $\beta \le \iota < \alpha$. Hence, for each $d \in \nat$ there is a $\beta < \alpha$ such that $\similar{\oh t}{t_\iota} > d$ for all $\beta \le \iota < \alpha$. According to Proposition \[prop:simTrunc\], $\similar{\oh t}{t_\iota} > d$ implies $\trunc{\oh t}{d} = \trunc{t_\iota}{d}$, which, according to Proposition \[prop:trunc\], implies $\trunc{\oh t}{d} \lebot t_\iota$. Therefore, $\trunc{\oh t}{d}$ is a lower bound of $T_\beta = \setcom{t_\iota}{\beta \le \iota < \alpha}$, i.e. $\trunc{\oh t}{d} \lebot \Glbbot T_\beta$. Since $\ol t = \Lubbot_{\beta<\alpha} \Glbbot T_\beta$, we also have that $\Glbbot T_\beta \lebot \ol t$. By transitivity, we obtain $\trunc{\oh t}{d} \lebot \ol t$ for each $d \in \nat$. Since $\oh t$ is total, we can thus conclude, according to Proposition \[prop:trunc\], that $\oh t = \ol t$. Before we continue, we want to introduce another characterisation of similarity which bridges the gap to the partial order $\lebot$.
In order to follow this approach, we need to define the *$\bot$-depth* of a term $t \in \ipterms$. It is the minimal depth of an occurrence of the subterm $\bot$ in $t$: $$\sdepth{t}{\bot} = \min \setcom{\len\pi}{t(\pi) = \bot}\cup \set\infty$$ Intuitively, the glb $s \glbbot t$ of two terms $s,t$ represents the common structure that both terms share. The similarity $\similar{s}{t}$ is a much more condensed measure. It only provides the depth up to which the terms share a common structure. Using the $\bot$-depth we can directly condense the glb $s \glbbot t$ to the similarity $\similar{s}{t}$: \[prop:simDepth\] For each pair $s,t \in \iterms$ we have $$\similar{s}{t} = \sdepth{s \glbbot t}{\bot}$$ Follows from Lemma \[lem:glbbot\]. We can employ this alternative characterisation of similarity to show the second part of the compatibility of the metric and the partial order: \[lem:liminfCauchy\] Let $(t_\iota)_{\iota<\alpha}$ be a sequence in $\iterms$ such that $\liminf_{\iota\limto\alpha} t_\iota$ is total. Then $(t_\iota)_{\iota<\alpha}$ is Cauchy. For $\alpha$ a successor ordinal this is trivial. For the case that $\alpha$ is a limit ordinal, suppose that $(t_\iota)_{\iota<\alpha}$ is not Cauchy. That is, there is an $\epsilon \in \realp$ such that for all $\beta < \alpha$ there is a pair $\beta < \iota,\iota' < \alpha$ with $\dd(t_\iota,t_{\iota'}) \ge \epsilon$. Hence, there is a $d \in \nat$ such that for all $\beta < \alpha$ there is a pair $\beta < \iota,\iota' < \alpha$ with $\similar{t_\iota}{t_{\iota'}} \le d$, which, according to Proposition \[prop:simDepth\], is equivalent to $\sdepth{t_\iota \glbbot t_{\iota'}}{\bot} \le d$. That is, $$\begin{gathered} \label{eq:liminfCauchy} \text{for each } \beta < \alpha \text{ there are } \beta < \iota , \iota' < \alpha \text{ with } \sdepth{t_\iota \glbbot t_{\iota'}}{\bot} \le d \tag{1} \end{gathered}$$ Let $s_\beta = \Glbbot_{\beta \le \iota < \alpha} t_\iota$.
Then $s_\beta \lebot t_\iota \glbbot t_{\iota'}$ for all $\beta \le \iota, \iota' < \alpha$, which implies $\sdepth{s_\beta}{\bot} \le \sdepth{t_\iota \glbbot t_{\iota'}}{\bot}$. By combining this with (1), we obtain $\sdepth{s_\beta}{\bot} \le d$. More precisely, we have that $$\begin{gathered} \label{eq:liminfCauchyI} \text{for each } \beta < \alpha \text{ there is a } \pi \in \pos{s_\beta} \text{ with } \len\pi \le d \text{ and } s_\beta(\pi) = \bot. \tag{2} \end{gathered}$$ Let $\ol t = \liminf_{\iota \limto \alpha} t_\iota$. Note that $\ol t = \Lubbot_{\beta < \alpha} s_\beta$. Since, according to Lemma \[lem:lubbot\], $\pos{\ol t} = \bigcup_{\beta < \alpha}\pos{s_\beta}$ we can reformulate (2) as follows: $$\begin{gathered} \label{eq:liminfCauchyIp} \text{for each } \beta < \alpha \text{ there is a } \pi \in \pos{\ol t} \text{ with } \len\pi \le d \text{ and } s_\beta(\pi) = \bot. \tag{2'} \end{gathered}$$ Since there are only finitely many positions in $\ol t$ of length at most $d$, there is some $\pi^* \in \pos{\ol t}$ such that $$\begin{gathered} \label{eq:liminfCauchyII} \text{for each } \beta < \alpha \text{ there is a } \beta \le \gamma < \alpha \text{ with } s_\gamma(\pi^*) = \bot. \tag{3} \end{gathered}$$ Since $s_\beta \lebot s_\gamma$, whenever $\beta \le \gamma$, we can rewrite (3) as follows: $$\begin{gathered} \label{eq:liminfCauchyIIp} s_\beta(\pi^*) = \bot \text{ for all } \beta < \alpha \text{ with } \pi^* \in \pos{s_\beta}. \tag{3'} \end{gathered}$$ Since $\pi^* \in \pos{\ol t}$, we can employ Lemma \[lem:lubbot\] to obtain from (3') that $\ol t(\pi^*) = \bot$. This contradicts the assumption that $\ol t = \liminf_{\iota\limto\alpha} t_\iota$ is total.
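The characterisation $\similar{s}{t} = \sdepth{s \glbbot t}{\bot}$ of Proposition \[prop:simDepth\] can be spot-checked on small examples. The sketch below is an illustration under our own ad-hoc encoding (nested tuples for terms, strings for variables, `None` for $\bot$, `float('inf')` for $\infty$) and not part of the formal development:

```python
BOT = None  # stands in for the symbol ⊥

def glb(s, t):
    """s ⊓ t: keep the structure on which s and t agree, put ⊥ everywhere else."""
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(glb(a, b) for a, b in zip(s[1:], t[1:]))
    return BOT

def bot_depth(t):
    """⊥-depth: the minimal depth of an occurrence of ⊥, inf if t is ⊥-free."""
    if t is BOT:
        return 0
    if isinstance(t, str):               # a variable
        return float('inf')
    return 1 + min((bot_depth(a) for a in t[1:]), default=float('inf'))

# f(s(0), 0) and f(s(0), s(0)) share the structure f(s(0), ⊥):
s = ('f', ('s', ('0',)), ('0',))
t = ('f', ('s', ('0',)), ('s', ('0',)))
```

For these two total terms the $\bot$-depth of the glb is $1$, which agrees with their similarity: they first differ at depth $1$, in the second argument.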
The following proposition combines Lemma \[lem:limLiminf\] and Lemma \[lem:liminfCauchy\] in order to obtain the desired property that the metric and the partial order are compatible: \[prop:poMetric\] For every sequence $(t_\iota)_{\iota<\alpha}$ in $\iterms$ the following holds: $$\begin{gathered} \liminf_{\iota \limto \alpha} t_\iota = \lim_{\iota \limto \alpha} t_\iota \qquad \text{ whenever}\quad \begin{aligned} &\lim_{\iota \limto \alpha} t_\iota \text{ is defined, or}\\ &\liminf_{\iota \limto \alpha} t_\iota \text{ is a total term.} \end{aligned} \end{gathered}$$ If $\lim_{\iota \limto \alpha} t_\iota$ is defined, the equality follows from Lemma \[lem:limLiminf\]. If $\liminf_{\iota \limto \alpha} t_\iota$ is total, the sequence $(t_\iota)_{\iota<\alpha}$ is Cauchy by Lemma \[lem:liminfCauchy\]. Then, as the metric space $(\iterms,\dd)$ is complete, $(t_\iota)_{\iota<\alpha}$ converges and we can apply Lemma \[lem:limLiminf\] to conclude the equality. p-Convergence vs. m-Convergence {#sec:prs-conv-cons} -------------------------------- In the previous section we have established that the metric and the partial order on (partial) terms are compatible in the sense that the corresponding notions of limit and limit inferior coincide whenever the limit is defined or the limit inferior is a total term. As weak $\mrs$-convergence and weak $\prs$-convergence are solely based on the limit in the metric space resp. the limit inferior in the partially ordered set, we can directly apply this result to show that both notions of convergence coincide on total reductions: \[thr:weakExt\] For every reduction $S$ in a TRS the following equivalences hold: 1. $S\fcolon s \pwacont$ is total iff $S\fcolon s \mwacont$, and \[item:weakExtI\] 2. $S\fcolon s \pwato t$ is total iff $S\fcolon s \mwato t$.
\[item:weakExtII\] Both equivalences follow directly from Proposition \[prop:poMetric\] and Fact \[fact:step\], both of which are applicable as we presuppose that each term in the reduction is total. In order to replicate Theorem \[thr:weakExt\] for the strong notions of convergence, we first need the following two lemmas that link the property of increasing contraction depth to volatile positions and the limit inferior, respectively: \[lem:strongConvPos\] Let $S = (t_\iota \to[\pi_\iota] t_{\iota+1})_{\iota < \lambda}$ be an open reduction. Then $(\len{\pi_\iota})_{\iota < \lambda}$ tends to infinity iff, for each position $\pi$, there is an ordinal $\alpha < \lambda$ such that $\pi_\iota \neq \pi$ for all $\alpha \le \iota < \lambda$. The “only if” direction is trivial. For the converse direction, suppose that $\len{\pi_\iota}$ does not tend to infinity as $\iota$ approaches $\lambda$. That is, there is some depth $d \in \nat$ such that there is no upper bound on the indices of reduction steps taking place at depth $d$. Let $d^*$ be the minimal such depth. That is, there is some $\alpha < \lambda$ such that all reduction steps in $\segm{S}{\alpha}{\lambda}$ are at depth at least $d^*$, i.e. $\len{\pi_\iota} \ge d^*$ holds for all $\alpha \le \iota < \lambda$. Of course, also in $\segm{S}{\alpha}{\lambda}$ the indices of steps at depth $d^*$ are not bounded from above. As all reduction steps in $\segm{S}{\alpha}{\lambda}$ take place at depth $d^*$ or below, $\trunc{t_\iota}{d^*} = \trunc{t_{\iota'}}{d^*}$ holds for all $\alpha \le \iota,\iota' < \lambda$. That is, all terms in $\segm{S}{\alpha}{\lambda}$ have the same set of positions of length $d^*$. Let $P^* = \setcom{\pi \in \pos{t_\alpha}}{\len{\pi} = d^*}$ be this set. Since there is no upper bound on the indices of steps in $\segm{S}{\alpha}{\lambda}$ taking place at a position in $P^*$, yet $P^*$ is finite, there has to be some position $\pi^*\in P^*$ for which there is also no such upper bound.
This contradicts the assumption that there is always such an upper bound. \[lem:limInfTrunc\] Let $(t_\iota)_{\iota<\lambda}$ be a sequence in $\ipterms$ and $(d_\iota)_{\iota <\lambda}$ a sequence in $\nat$ such that $\lambda$ is a limit ordinal and $(d_\iota)_{\iota<\lambda}$ tends to infinity. Then $\liminf_{\iota \limto \lambda} t_\iota = \liminf_{\iota \limto \lambda} \trunc{t_\iota}{d_\iota}$. Let $\ol t = \liminf_{\iota \limto \lambda} \trunc{t_\iota}{d_\iota}$ and $\oh t = \liminf_{\iota \limto \lambda} t_\iota$. Since, according to Proposition \[prop:trunc\], $\trunc{t_\iota}{d_\iota} \lebot t_\iota$ for each $\iota < \lambda$, we have that $\ol t \lebot \oh t$. Thus, it remains to be shown that also $\oh t \lebot \ol t$ holds. That is, we have to show that $\oh t(\pi) = \ol t(\pi)$ holds for all $\pi \in \posNonBot{\oh t}$. Let $\pi \in \posNonBot{\oh t}$. That is, $\oh t(\pi) = f \neq \bot$. Hence, by Lemma \[lem:lubbot\], there is some $\alpha < \lambda$ with $(\Glbbot_{\alpha\le\iota<\lambda} t_\iota)(\pi) = f$. Let $P = \setcom{\pi'}{\pi' \le \pi}$ be the set of all prefixes of $\pi$. Note that $\Glbbot_{\alpha\le\iota<\lambda} t_\iota \lebot t_\gamma$ for all $\alpha \le \gamma < \lambda$. Hence, $\Glbbot_{\alpha\le\iota<\lambda} t_\iota$ and $t_\gamma$ coincide in all occurrences in $P$ for all $\alpha \le \gamma < \lambda$. Because $(d_\iota)_{\iota < \lambda}$ tends to infinity, there is some $\alpha \le \beta < \lambda$ such that $d_\gamma > \len{\pi}$ for all $\beta \le \gamma < \lambda$. Consequently, since $\trunc{t_\gamma}{d_\gamma}$ and $t_\gamma$ coincide in all occurrences of length smaller than $d_\gamma$ for all $\gamma < \lambda$, we have that $\trunc{t_\gamma}{d_\gamma}$ and $t_\gamma$ coincide in all occurrences in $P$ for all $\beta \le \gamma < \lambda$. Hence, $\trunc{t_\gamma}{d_\gamma}$ and $\Glbbot_{\alpha\le\iota<\lambda} t_\iota$ coincide in all occurrences in $P$ for all $\beta \le \gamma < \lambda$. 
Hence, according to Lemma \[lem:glbbot\], $\Glbbot_{\alpha\le\iota<\lambda} t_\iota$ and $\Glbbot_{\beta\le\iota<\lambda} \trunc{t_\iota}{d_\iota}$ coincide in all occurrences in $P$. Particularly, it holds that $(\Glbbot_{\beta\le\iota<\lambda} \trunc{t_\iota}{d_\iota})(\pi) = f$ which in turn implies by Lemma \[lem:lubbot\] that $\ol t(\pi) = f$. We now can prove the counterpart of Theorem \[thr:weakExt\] for strong convergences: \[thr:strongExt\] For every reduction $S$ in a TRS the following equivalences hold: 1. $S\fcolon s \pacont$ is total iff $S\fcolon s \macont$, and \[item:strongExtI\] 2. $S\fcolon s \pato t$ is total iff $S\fcolon s \mato t$. \[item:strongExtII\] It suffices to only prove (\[item:strongExtII\]) since (\[item:strongExtI\]) follows from (\[item:strongExtII\]) according to Remark \[rem:pcont\] resp. Remark \[rem:mcont\]. Let $S = (\phi_\iota\fcolon t_\iota \to[\pi_\iota,c_\iota] t_{\iota+1})_{\iota<\alpha}$ be a reduction in a TRS $\calR_\bot$. We continue the proof by induction on $\alpha$. The case $\alpha = 0$ is trivial. If $\alpha$ is a successor ordinal $\beta + 1$, we can reason as follows $$\begin{aligned} S\fcolon t_0 \pato t_\alpha \text{ total } &\text{ iff }\; \prefix{S}{\beta}\fcolon t_0 \pato t_\beta \text{ and } t_\beta\to[\calR] t_\alpha \tag{Remark~\ref{rem:pcont}, Fact~\ref{fact:step}}\\% &\text{ iff }\; \prefix{S}{\beta}\fcolon t_0 \mato t_\beta \text{ and } t_\beta \to[\calR] t_\alpha \tag{ind.\ hyp.}\\% &\text{ iff }\; S\fcolon t_0 \mato t_\alpha \tag{Remark~\ref{rem:mcont}} \end{aligned}$$ Let $\alpha$ be a limit ordinal. At first consider the “only if” direction. That is, we assume that $S\fcolon t_0 \pato t_\alpha$ is total. According to Remark \[rem:pcont\], we have that $\prefix{S}{\beta}\fcolon t_0 \pato t_\beta$ for each $\beta < \alpha$. Applying the induction hypothesis yields $\prefix{S}{\beta}\fcolon t_0 \mato t_\beta$ for each $\beta < \alpha$. 
That is, following Remark \[rem:mcont\], we have $S\fcolon t_0 \macont$. Since $c_\iota \lebot t_\iota$ for all $\iota < \alpha$, we have that $t_ \alpha = \liminf_{\iota \limto \alpha} c_\iota \lebot \liminf_{\iota \limto \alpha} t_\iota$. Because $t_\alpha$ is total and, therefore, maximal w.r.t. $\lebot$, we can conclude that $t_\alpha = \liminf_{\iota \limto \alpha} t_\iota$. According to Proposition \[prop:poMetric\], this also means that $t_\alpha = \lim_{\iota \limto \alpha} t_\iota$. For strong $\mrs$-convergence it remains to be shown that $(\len{\pi_\iota})_{\iota<\alpha}$ tends to infinity. So let us assume that this is not the case. By Lemma \[lem:strongConvPos\], this means that there is a position $\pi$ such that, for each $\beta < \alpha$, there is some $\beta \le \gamma < \alpha$ such that the step $\phi_\gamma$ takes place at position $\pi$. By Lemma \[lem:botLimRed\], this contradicts the fact that $t_\alpha$ is a total term. Now consider the converse direction and assume that $S \fcolon t_0 \mato t_\alpha$. Following Remark \[rem:mcont\] we obtain $\prefix{S}{\beta} \fcolon t_0 \mato t_\beta$ for all $\beta < \alpha$, to which we can apply the induction hypothesis in order to get $\prefix{S}{\beta} \fcolon t_0 \pato t_\beta$ for all $\beta < \alpha$ so that we have $S \fcolon t_0 \pacont$, according to Remark \[rem:pcont\]. It remains to be shown that $t_\alpha= \liminf_{\iota \limto \alpha} c_\iota$. Since $S$ strongly $\mrs$-converges to $t_\alpha$, we have that 1. $t_\alpha= \lim_{\iota \limto \alpha} t_\iota$, and that \[item:strongExtA\] 2. the sequence of depths $(d_\iota = \len{\pi_\iota})_{\iota<\alpha}$ tends to infinity. \[item:strongExtB\] Using Proposition \[prop:poMetric\] we can deduce from (\[item:strongExtA\]) that $t_\alpha= \liminf_{\iota \limto \alpha} t_\iota$. 
Due to (\[item:strongExtB\]), we can apply Lemma \[lem:limInfTrunc\] to obtain $$\liminf_{\iota \limto \alpha} t_\iota = \liminf_{\iota \limto \alpha} \trunc{t_\iota}{d_\iota} \quad\text{ and }\quad \liminf_{\iota \limto \alpha} c_\iota = \liminf_{\iota \limto \alpha} \trunc{c_\iota}{d_\iota}.$$ Since $\trunc{t_\iota}{d_\iota} = \trunc{c_\iota}{d_\iota}$ for all $\iota < \alpha$, we can conclude that $$t_\alpha = \liminf_{\iota \limto \alpha} t_\iota = \liminf_{\iota \limto \alpha} \trunc{t_\iota}{d_\iota} = \liminf_{\iota \limto \alpha} \trunc{c_\iota}{d_\iota} = \liminf_{\iota \limto \alpha} c_\iota. \eqno{\qEd}$$ The main result of this section is that we do not lose anything when switching from the metric model to the partial order model of infinitary term rewriting. Restricted to the domain of the metric model, i.e. total terms, both models coincide in the strongest possible sense as Theorem \[thr:weakExt\] and Theorem \[thr:strongExt\] confirm. At the same time, however, the partial order model provides more structure. Whenever the metric model can only conclude divergence, the partial order model can qualify the degree of divergence. If a reduction $\prs$-converges to $\bot$, it can be considered completely divergent. If it $\prs$-converges to a term that only contains $\bot$ as proper subterms, it can be recognised as being only partially divergent with the diverging parts of the reduction indicated by $\bot$'s, whereas complete absence of $\bot$'s then indicates complete convergence. In the rest of this paper we will put our focus on strong convergence. Theorem \[thr:strongExt\] will be one of the central tools in Section \[sec:relation-bohm-trees\] where we shall discover that Böhm-reachability coincides with strong $\prs$-reachability in orthogonal systems. The other crucial tool that we will leverage is the existence and uniqueness of complete developments. This is the subject of the subsequent section.
Strongly p-Converging Complete Developments {#sec:compl-devel} =========================================== The purpose of this section is to establish a theory of residuals and complete developments in the setting of strongly $\prs$-convergent reductions. Intuitively speaking, the residuals of a set of redexes are the remains of this set of redexes after a reduction, and a complete development of a set of redexes is a reduction which only contracts residuals of these redexes and ends in a term with no residuals. Complete developments are a well-known tool for proving (finitary) confluence of orthogonal systems [@terese03book]. The technique has also been lifted to the setting of strongly $\mrs$-convergent reductions in order to establish (restricted forms of) infinitary confluence of orthogonal systems [@kennaway95ic]. As we have seen in Example \[ex:mconfl\], $\mrs$-convergence in general does not have this property. After introducing residuals and complete developments in Section \[sec:residuals\], we will show in Section \[sec:complete-development\] resp. Section \[sec:uniqueness\] that complete developments do always exist and that their final terms are uniquely determined. We then use this in Section \[sec:results\] to show the Infinitary Strip Lemma for strongly $\prs$-converging reductions which is a crucial tool for proving our main result in Section \[sec:relation-bohm-trees\]. Residuals {#sec:residuals} --------- At first we need to formalise the notion of residuals. It is virtually equivalent to the definition for strongly $\mrs$-convergent reductions by Kennaway et al. [@kennaway95ic]: \[def:desc\] Let $\calR$ be a TRS, $S\fcolon t_0 \pto{\alpha}[\calR] t_\alpha$, and $U \subseteq \posNonBot{t_0}$. The *descendants* of $U$ by $S$, denoted $\dEsc{U}{S}$, is the set of positions in $t_\alpha$ inductively defined as follows: 1. If $\alpha = 0$, then $\dEsc{U}{S} = U$. \[item:descA\] 2. If $\alpha = 1$, i.e.
$S\fcolon t_0 \to[\pi,\rho] t_1$ for some $\rho\fcolon l \to r$, take any $u\in U$ and define the set $R_u$ as follows: If $\pi \not\le u$, then $R_u = \set{u}$. If $u$ is in the pattern of the $\rho$-redex, i.e. $u = \pi\concat\pi'$ with $\pi' \in \posFun{l}$, then $R_u = \emptyset$. Otherwise, i.e. if $u = \pi \concat w \concat x$, with $\atPos{l}{w} \in \calV$, then $R_u = \setcom{\pi \concat w' \concat x}{\atPos{r}{w'} = \atPos{l}{w}}$. Define $\dEsc{U}{S} = \bigcup_{u \in U} R_u$. \[item:descB\] 3. If $\alpha = \beta + 1$, then $\dEsc{U}{S} = \dEsc{(\dEsc{U}{\prefix{S}{\beta}})}{\segm{S}{\beta}{\alpha}}$ \[item:descC\] 4. If $\alpha$ is a limit ordinal, then $\dEsc{U}{S} = \posNonBot{t_\alpha} \cap \liminf_{\iota \limto \alpha} \dEsc{U}{\prefix{S}{\iota}}$\ That is, $u \in \dEsc{U}{S} \quad \text{ iff } \quad u \in \posNonBot{t_\alpha} \text{ and } \exists \beta < \alpha \forall \beta \le \iota < \alpha\fcolon u \in \dEsc{U}{\prefix{S}{\iota}}$ \[item:descD\] If, in particular, $U$ is a set of redex occurrences, then $\dEsc{U}{S}$ is also called the set of *residuals* of $U$ by $S$. Moreover, by abuse of notation, we write $\dEsc{u}{S}$ instead of $\dEsc{\set{u}}{S}$. Clauses (\[item:descA\]), (\[item:descB\]) and (\[item:descC\]) are as in the finitary setting. Clause (\[item:descD\]) lifts the definition to the infinitary setting. However, the only difference to the definition of Kennaway et al. is that we consider partial terms here. Yet, for technical reasons, the notion of descendants has to be restricted to non-$\bot$ occurrences. Since $\bot$ cannot be a redex, this is not a restriction for residuals, though. \[rem:desc\] One can easily see that the descendants of a set of non-$\bot$-occurrences again form a set of non-$\bot$-occurrences. The restriction to non-$\bot$-occurrences has to be made explicit for the case of open reductions.
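The single-step clause (\[item:descB\]) is directly executable on finite terms. The following sketch is an illustration under our own encoding assumptions (terms as nested tuples, variables as bare strings, positions as tuples of child indices); the helper names are ours, not the paper's.

```python
# Terms: nested tuples ('f', t1, ..., tk); variables are bare strings.
# Positions: tuples of child indices, e.g. (0, 1) = second child of first child.

def subterm(t, pos):
    """t|pos, assuming pos is a valid position in t."""
    for i in pos:
        t = t[i + 1]
    return t

def positions(t):
    """All positions of t."""
    if isinstance(t, str):
        return [()]
    return [()] + [(i,) + p for i, a in enumerate(t[1:]) for p in positions(a)]

def descendants(u, pi, l, r):
    """R_u: descendants of occurrence u across one step contracting an l -> r redex at pi."""
    k = len(pi)
    if u[:k] != pi:                      # pi is not a prefix of u: u survives unchanged
        return {u}
    rest = u[k:]
    # walk down l along `rest`; if we hit a variable x = l|w, then u = pi·w·x'
    for j in range(len(rest) + 1):
        sub = subterm(l, rest[:j])
        if isinstance(sub, str):
            w, x = rest[:j], rest[j:]
            return {pi + wp + x for wp in positions(r) if subterm(r, wp) == sub}
    return set()                         # u lies inside the redex pattern

# Duplicating rule f(x) -> g(x, x): argument occurrences are copied,
# pattern occurrences have no descendants.
l, r = ('f', 'x'), ('g', 'x', 'x')
```

For the collapsing rule $f(x) \to x$ the same function moves occurrences up one level, e.g. the descendant of $\seq{0,0}$ by a root step is $\seq{0}$, in line with the reduction discussed in Remark \[rem:desc\] below.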
In fact, without this explicit restriction the definition would yield descendants which might not even be occurrences in the final term $t_\alpha$ of the reduction. For example, consider the system with the single rule $f(x) \to x$ and the strongly $\prs$-convergent reduction $$S\fcolon f^\omega \to f^\omega \to\; \dots \;\bot$$ in which each reduction step contracts the redex at the root of $f^\omega$. Consider the set $U =\set{\emptyseq, \seq{0},\seq{0,0},\seq{0,0,0},\dots}$ of all positions in $f^\omega$. Without the abovementioned restriction, the descendants of $U$ by $S$ would be $U$ itself as the descendants of $U$ by each proper prefix of $S$ are also $U$. However, none of the positions $\seq{0},\seq{0,0},\seq{0,0,0},\dots \in U$ is even a position in the final term $\bot$. The position $\emptyseq \in U$ occurs in $\bot$, but only as a $\bot$-occurrence. With the restriction to non-$\bot$-occurrences we indeed get the expected result $\dEsc{U}{S} = \emptyset$. The definition of descendants of open reductions is quite subtle, which makes it fairly cumbersome to use in proofs. The lemma below establishes an alternative characterisation which will turn out to be useful in later proofs: \[lem:descLimRed\] Let $\calR$ be a TRS, $S\fcolon s \pto{\lambda}[\calR] t$ and $U \subseteq \posNonBot{s}$, with $\lambda$ a limit ordinal and $S = ({t_\iota\to[\pi_\iota,c_\iota]t_{\iota + 1}})_{\iota < \lambda}$. Then it holds that for each position $\pi$ $$\pi \in \dEsc{U}{S} \quad \text{iff} \quad \text{there is some } \beta < \lambda \text{ with } \pi \in \dEsc{U}{\prefix{S}{\beta}} \text{ and }\forall \beta \le \iota < \lambda\;\; \pi_\iota \not\le \pi.$$ We first prove the “only if” direction. To this end, assume that $\pi \in \dEsc{U}{S}$.
Hence, it holds that $$\begin{gathered} \pi \in \posNonBot{t} \text{ and there is some } \gamma_1 < \lambda \text{ such that } \pi \in \dEsc{U}{\prefix{S}{\iota}} \text{ for all } \gamma_1 \le \iota < \lambda \tag{1} \label{eq:descLimRed1} \end{gathered}$$ Particularly, we have that $t(\pi) \neq \bot$. Applying Lemma \[lem:nonBotLimRed\] then yields that $$\begin{gathered} \text{there is some } \gamma_2 < \lambda \text{ such that } \pi_\iota \not\le \pi \text{ for all } \gamma_2 \le \iota < \lambda \tag{2} \label{eq:descLimRed2} \end{gathered}$$ Now take $\beta = \max \set{\gamma_1,\gamma_2}$. Then it holds that $\pi \in \dEsc{U}{\prefix{S}{\beta}}$ and that $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \lambda$ due to (1) and (2), respectively. Next, consider the converse direction of the statement: Let $\beta < \lambda$ be such that $\pi \in \dEsc{U}{\prefix{S}{\beta}}$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \lambda$. We will show that $\pi \in \dEsc{U}{S}$ by proving the stronger statement that $\pi \in \dEsc{U}{\prefix{S}{\gamma}}$ for all $\beta \le \gamma \le \lambda$. We do this by induction on $\gamma$. For $\gamma = \beta$, this is trivial. Let $\gamma = \gamma' + 1 > \beta$. Note that, by definition, $\dEsc{U}{\prefix{S}{\gamma}} = \dEsc{\left(\dEsc{U}{\prefix{S}{\gamma'}}\right)} {\segm{S}{\gamma'}{\gamma}}$. Hence, since for the $\gamma'$-th step we have, by assumption, $\pi_{\gamma'} \not\le \pi$ and for the preceding reduction we have, by induction hypothesis, that $\pi \in \dEsc{U}{\prefix{S}{\gamma'}}$, we can conclude that $\pi \in \dEsc{U}{\prefix{S}{\gamma}}$. Let $\gamma > \beta$ be a limit ordinal. By induction hypothesis, we have that $\pi \in \dEsc{U}{\prefix{S}{\iota}}$ for each $\beta \le \iota < \gamma$. Particularly, this implies that $\pi \in \posNonBot{t_\beta}$.
Together with the assumption that $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \gamma$, this yields that $\pi \in \posNonBot{t_\gamma}$ according to Lemma \[lem:nonBotLimRed\]. Hence, $\pi \in \dEsc{U}{\prefix{S}{\gamma}}$. The following lemma confirms the expected monotonicity of descendants: \[lem:descMon\] Let $\calR$ be a TRS, $S\fcolon s \pato[\calR] t$ and $U,V \subseteq \posNonBot{s}$. If $U\subseteq V$, then $\dEsc{U}{S} \subseteq \dEsc{V}{S}$. Straightforward induction on the length of $S$. This lemma can be generalised such that we can see that descendants are defined “pointwise”: \[prop:descPoint\] Let $\calR$ be a TRS, $S\fcolon s \pato[\calR] t$ and $U \subseteq \posNonBot{s}$. Then it holds that $\dEsc{U}{S} = \bigcup_{u \in U} \dEsc{u}{S}$. Let $S = (t_\iota \to[\pi_\iota,c_\iota] t_{\iota + 1})_{\iota < \alpha}$. For $\alpha = 0$ and $\alpha = 1$, the statement is trivially true. If $\alpha = \alpha' + 1 > 1$, then abbreviate $\prefix{S}{\alpha'}$ and $\segm{S}{\alpha'}{\alpha}$ by $S_1$ and $S_2$, respectively, and reason as follows: $$\begin{aligned} \dEsc{U}{S} & =\dEsc{(\dEsc{U}{S_1})}{S_2} % \stackrel{IH}{=} \dEsc{\underbrace{(\bigcup_{u \in U} \overbrace{\dEsc{u}{S_1}}^{V_u})}_V}{S_2} % \stackrel{IH}= \bigcup_{u\in V} \dEsc{u}{S_2} \\ &= \bigcup_{u \in U} \bigcup_{v \in V_u} \dEsc{v}{S_2} % \stackrel{IH}= \bigcup_{u\in U} \dEsc{V_u}{S_2} % = \bigcup_{u\in U} \dEsc{(\dEsc{u}{S_1})}{S_2} % = \bigcup_{u \in U} \dEsc{u}{S} \end{aligned}$$ Let $\alpha$ be a limit ordinal. The “$\supseteq$” direction of the equation follows from Lemma \[lem:descMon\]. For the converse direction, assume that $\pi \in \dEsc{U}{S}$. By Lemma \[lem:descLimRed\], there is some $\beta < \alpha$ such that $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$ and $\pi \in \dEsc{U}{\prefix{S}{\beta}}$. Applying the induction hypothesis yields that $\pi \in \bigcup_{u \in U} \dEsc{u}{\prefix{S}{\beta}}$, i.e. 
there is some $u^* \in U$ such that $\pi \in \dEsc{u^*}{\prefix{S}{\beta}}$. By employing Lemma \[lem:descLimRed\] again, we can conclude that $\pi \in \dEsc{u^*}{S}$ and, therefore, that $\pi \in \bigcup_{u \in U} \dEsc{u}{S}$. Note that the above proposition fails if we include $\bot$-occurrences in our definition of descendants: Reconsider the example in Remark \[rem:desc\] and assume we drop the restriction to non-$\bot$-occurrences. Then the residuals $\dEsc{u}{S}$ of each occurrence $u\in U$ would be empty, whereas the residuals $\dEsc{U}{S}$ of all occurrences would be the root occurrence $\seq{}$. \[prop:descUnique\] Let $\calR$ be a TRS, $S\fcolon s \pato[\calR] t$ and $U,V \subseteq \posNonBot{s}$. If $U \cap V = \emptyset$, then $\dEsc{U}{S}\cap\dEsc{V}{S} = \emptyset$. We will prove the contraposition of the statement. To this end, suppose that there is some occurrence $w \in \dEsc{U}{S} \cap \dEsc{V}{S}$. By Proposition \[prop:descPoint\], there are occurrences $u \in U$ and $v \in V$ such that $w \in \dEsc{u}{S}\cap\dEsc{v}{S}$. We will show by induction on the length of $S$ that then $u = v$ and, therefore, $U \cap V \neq\emptyset$. If $S$ is empty, then this is trivial. If $S$ is of successor ordinal length or open, then $u=v$ follows from the induction hypothesis. \[rem:prsAncestor\] The two propositions above imply that each descendant $u' \in \dEsc{U}{S}$ of a set $U$ of occurrences is the descendant of a uniquely determined occurrence $u \in U$, i.e. $u' \in \dEsc{u}{S}$ for exactly one $u\in U$. This occurrence $u$ is also called the *ancestor* of $u'$ by $S$. The following proposition confirms a property of descendants that one expects intuitively: The descendants of descendants are again descendants. That is, the concept of descendants is composable. \[prop:descSeqRed\] Let $\calR$ be a TRS, $S\fcolon t_0 \pato[\calR] t_1$, $T\fcolon t_1 \pato[\calR] t_2$, and $U \subseteq \posNonBot{t_0}$.
Then $\dEsc{U}{S\concat T} = \dEsc{(\dEsc{U}{S})}{T}$. Straightforward proof by induction on the length of $T$. The following proposition confirms that the disjointness of occurrences is propagated through their descendants: \[prop:disjDesc\] The descendants of a set of pairwise disjoint occurrences are pairwise disjoint as well. Let $S\fcolon s \pto{\alpha} t$ and let $U$ be a set of pairwise disjoint occurrences in $s$. We show that $\dEsc{U}{S}$ is also a set of pairwise disjoint occurrences by induction on $\alpha$. For $\alpha = 0$, the statement is trivial, and, for $\alpha$ a successor ordinal, the statement follows straightforwardly from the induction hypothesis. Let $\alpha$ be a limit ordinal and suppose that there are two occurrences $u,v \in \dEsc{U}{S}$ which are not disjoint. By definition, there are ordinals $\beta_1,\beta_2 < \alpha$ such that $u \in \dEsc{U}{\prefix{S}{\iota}}$ for all $\beta_1\le\iota<\alpha$, and $v \in \dEsc{U}{\prefix{S}{\iota}}$ for all $\beta_2\le\iota<\alpha$. Let $\beta = \max\set{\beta_1,\beta_2}$. Then we have that $u,v \in \dEsc{U}{\prefix{S}{\beta}}$. This, however, contradicts the induction hypothesis which, in particular, states that $\dEsc{U}{\prefix{S}{\beta}}$ is a set of pairwise disjoint occurrences. For the definition of complete developments it is important that the descendants of redex occurrences are again redex occurrences: \[prop:residual\] Let $\calR$ be an orthogonal TRS, $S\fcolon s \pato[\calR] t$ and $U$ a set of redex occurrences in $s$. Then $\dEsc{U}{S}$ is a set of redex occurrences in $t$. Let $S = (t_\iota \to[\pi_\iota,c_\iota] t_{\iota + 1})_{\iota < \alpha}$. We proceed by induction on $\alpha$. For $\alpha = 0$, the statement is trivial, and, for $\alpha$ a successor ordinal, the statement follows straightforwardly from the induction hypothesis. So assume that $\alpha$ is a limit ordinal and that $\pi \in \dEsc{U}{S}$. We will show that $\atPos{t}{\pi}$ is a redex.
From Lemma \[lem:descLimRed\] we obtain that $$\begin{gathered} \text{there is some } \beta < \alpha \text{ with } \pi \in \dEsc{U}{\prefix{S}{\beta}} \text{ and } \pi_\iota \not\le \pi \text{ for all } \beta \le \iota < \alpha. \tag{1} \label{eq:prsDisjRedex1} \end{gathered}$$ By applying the induction hypothesis, we get that $\pi$ is a redex occurrence in $t_\beta$. Hence, there is some rule $l\to r \in R$ such that $\atPos{t_\beta}{\pi}$ is an instance of $l$. We continue this proof by showing the following stronger claim: $$\begin{aligned} \text{for all } \beta \le \gamma \le \alpha &&&\atPos{t_\gamma}{\pi} \text{ is an instance of } l, \text{ and} \tag{2} \label{eq:prsDisjRedex2} \\ &&&\atPos{c_\iota}{\pi} \text{ is an instance of } l \text{ for all } \beta \le \iota < \gamma \tag{3} \label{eq:prsDisjRedex3} \end{aligned}$$ For the special case $\gamma = \alpha$ the above claim implies that $\atPos{t}{\pi}$ is a redex. We proceed by an induction on $\gamma$. For $\gamma = \beta$, part (2) of the claim has already been shown and part (3) is vacuously true. Let $\gamma = \gamma' + 1 > \beta$. According to the induction hypothesis, (2) and (3) hold for $\gamma'$. Hence, it remains to be shown that both $\atPos{t_\gamma}{\pi}$ and $\atPos{c_{\gamma'}}{\pi}$ are instances of $l$. At first consider $\atPos{c_{\gamma'}}{\pi}$. Recall that $c_{\gamma'} = \substAtPos{t_{\gamma'}}{\pi_{\gamma'}}{\bot}$. At first consider the case where $\pi$ and $\pi_{\gamma'}$ are disjoint. Then $\atPos{c_{\gamma'}}{\pi} = \atPos{t_{\gamma'}}{\pi}$. Since, by induction hypothesis, $\atPos{t_{\gamma'}}{\pi}$ is an instance of $l$, so is $\atPos{c_{\gamma'}}{\pi}$. Next, consider the case where $\pi$ and $\pi_{\gamma'}$ are not disjoint. Because of (1), we then have that $\pi < \pi_{\gamma'}$, i.e. there is some non-empty $\pi'$ with $\pi_{\gamma'} = \pi \concat \pi'$. Since $\calR$ is non-overlapping, $\pi'$ cannot be a position in the pattern of the redex $\atPos{t_{\gamma'}}{\pi}$ w.r.t. $l$.
Therefore, also $\atPos{c_{\gamma'}}{\pi}$ is an instance of $l$. So in either case $\atPos{c_{\gamma'}}{\pi}$ is an instance of $l$. Since $c_{\gamma'} \lebot t_\gamma$, also $\atPos{t_\gamma}{\pi}$ is an instance of $l$. Let $\gamma > \beta$ be a limit ordinal. Part (3) of the claim follows immediately from the induction hypothesis. Hence, $\atPos{c_\iota}{\pi}$ is an instance of $l$ for all $\beta \le \iota < \gamma$. This and (1) imply that all terms in the set $T = \setcom{c_\iota}{\beta \le \iota < \gamma}$ coincide in all occurrences in the set $$P = \setcom{\pi'}{\pi'\le \pi} \cup \setcom{\pi\concat\pi'}{\pi' \in \posFun{l}}$$ $P$ is obviously closed under prefixes. Therefore, we can apply Lemma \[lem:glbbot\] in order to obtain that $\Glbbot T$ coincides with all terms in $T$ in all occurrences in $P$. Since $\Glbbot T \lebot t_\gamma$, this property carries over to $t_\gamma$. Consequently, also $\atPos{t_\gamma}{\pi}$ is an instance of $l$. Next we want to establish an alternative characterisation of descendants based on labellings. This is a well-known technique [@terese03book] that keeps track of descendants by labelling the symbols at the relevant positions in the initial term. In order to formalise this idea, we need to extend a given TRS such that it can also deal with terms that contain labelled symbols: Let $\calR = (\Sigma,R)$ be a TRS. 1. The *labelled signature* $\Sigma^\lab$ is defined as $\Sigma \cup \setcom{f^\lab}{f \in \Sigma}$. The arity of the function symbol $f^\lab$ is the same as that of $f$. The symbols $f^\lab$ are called *labelled*; the symbols $f \in \Sigma$ are called *unlabelled*. Terms over $\Sigma^\lab$ are called *labelled terms*. Note that the symbol $\bot \in \Sigma_\bot$ has no corresponding labelled symbol $\bot^\lab$ in the labelled signature $\Sigma^\lab_\bot$. Likewise, there are no labelled variables. 2.
Labelled terms can be projected back to the original unlabelled ones by removing the labels via the projection function $\unlab{\cdot}\fcolon\iterms[\Sigma^\lab_\bot] \funto \ipterms$: $$\begin{aligned} \unlab{\bot} &= \bot \qquad \qquad \unlab{x} = x &&\text{for all } x \in \calV, \text{ and} \\ \unlab{f^\lab(t_1,\dots,t_k)} &= \unlab{f(t_1,\dots,t_k)} = f(\unlab{t_1},\dots,\unlab{t_k}) &&\text{for all } f \in \Sigma^{(k)} \end{aligned}$$ 3. The *labelled TRS* $\calR^\lab$ is defined as $(\Sigma^\lab,R^\lab)$, where $R^\lab = \setcom{l \to r}{\unlab{l} \to r \in R}$. 4. For each rule $l \to r \in R^\lab$, we define its unlabelled original $\unlab{l \to r} = \unlab{l} \to r$ in $R$. 5. Let $t \in \ipterms$ and $U \subseteq \posFun{t}$. The term $t^{(U)} \in \iterms[\Sigma_\bot^\lab]$ is defined by $$\begin{gathered} t^{(U)}(\pi) = \begin{cases} t(\pi) &\text{if } \pi \nin U\\ t(\pi)^\lab &\text{if } \pi \in U \end{cases} \end{gathered}$$ That is, $\unlab{t^{(U)}} = t$ and the labelled symbols in $t^{(U)}$ are exactly those at positions in $U$. The key property which is needed in order to make the labelling approach work is that any reduction in a left-linear TRS that starts in some term $t$ can be lifted for any labelling $t'$ of $t$ to a unique equivalent reduction in the corresponding labelled TRS that starts in $t'$: \[prop:liftLabelled\] Let $\calR = (\Sigma,R)$ be a left-linear TRS, $S = (s_\iota \to[\rho_\iota,\pi_\iota] s_{\iota + 1})_{\iota<\alpha}$ a reduction strongly $\prs$-converging to $s_\alpha$ in $\calR$, and $t_0 \in \iterms[\Sigma_\bot^\lab]$ a labelled term with $\unlab{t_0} = s_0$. Then there is a unique reduction $T = (t_\iota \to[\rho'_\iota,\pi_\iota] t_{\iota + 1})_{\iota< \alpha }$ strongly $\prs$-converging to $t_\alpha$ in $\calR^\lab$ such that 1. $\unlab{t_\iota} = s_\iota$, $\unlab{\rho'_\iota} = \rho_\iota$, for all $\iota < \alpha$, and \[item:liftLabelled1\] 2. $\unlab{t_\alpha} = s_\alpha$.
\[item:liftLabelled2\] We prove this by an induction on $\alpha$. For the case of $\alpha$ being zero, the statement is trivially true. For the case of $\alpha$ being a successor ordinal, the statement follows straightforwardly from the induction hypothesis (the argument is the same as for finite reductions; e.g. consult [@terese03book]). Let $\alpha$ be a limit ordinal. By induction hypothesis, for each proper prefix $\prefix{S}{\gamma}$ of $S$ there is a uniquely defined strongly $\prs$-convergent reduction $T_\gamma$ in $\calR^\lab$ satisfying parts 1 and 2. Since the sequence $(\prefix{S}{\iota})_{\iota < \alpha}$ forms a chain w.r.t. the prefix order $\le$, so does the corresponding sequence $(T_\iota)_{\iota < \alpha}$. Hence the sequence $T = \Lub_{\iota < \alpha} T_\iota$ is well-defined. By construction, $T_\gamma \le T$ holds for each $\gamma < \alpha$, and we can use the induction hypothesis to obtain part 1 of the proposition. In order to show $s_\alpha = \unlab{t_\alpha}$, we prove the two inequalities $s_\alpha \lebot \unlab{t_\alpha}$ and $s_\alpha \gebot \unlab{t_\alpha}$: To show $\unlab{t_\alpha} \lebot s_\alpha$, we take some $\pi \in \posNonBot{\unlab{t_\alpha}}$ and show that $\unlab{t_\alpha}(\pi) = s_\alpha(\pi)$. Let $f = \unlab{t_\alpha}(\pi)$. That is, either $t_\alpha(\pi) = f$ or $t_\alpha(\pi) = f^\lab$. In either case, we can employ Lemma \[lem:nonBotLimRed\] to obtain some $\beta < \alpha$ such that $t_\beta(\pi) = f$ resp. $t_\beta(\pi) = f^\lab$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. Since, by part 1, $s_\beta = \unlab{t_\beta}$, we have in both cases that $s_\beta(\pi) = f$. By applying Lemma \[lem:nonBotLimRed\] again, we get that $s_\alpha(\pi) = f$, too. Lastly, we show the converse inequality $s_\alpha \lebot \unlab{t_\alpha}$. For this purpose, let $\pi \in \posNonBot{s_\alpha}$ and $f = s_\alpha(\pi)$.
By Lemma \[lem:nonBotLimRed\], there is some $\beta < \alpha$ such that $s_\beta(\pi) = f$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. Since, by part 1, $s_\beta = \unlab{t_\beta}$, we have that $t_\beta(\pi) \in \set{f,f^\lab}$. Applying Lemma \[lem:nonBotLimRed\] again then yields that $t_\alpha(\pi) \in \set{f,f^\lab}$ and, therefore, $\unlab{t_\alpha}(\pi) = f$. Having this, we can establish an alternative characterisation of descendants using labellings: \[prop:chaDesc\] Let $\calR$ be a left-linear TRS, $S\fcolon s_0 \pato[\calR] s_\alpha$, and $U \subseteq \posNonBot{s_0}$. Following Proposition \[prop:liftLabelled\], let $T\fcolon t_0 \pato[\calR] t_\alpha$ be the unique lifting of $S$ to $\calR^\lab$ starting with the term $t_0 = s_0^{(U)}$. Then it holds that $t_\alpha = s_\alpha^{(\dEsc{U}{S})}$. That is, for all $\pi \in \posNonBot{s_\alpha}$, it holds that $t_\alpha(\pi)$ is labelled iff $\pi \in \dEsc{U}{S}$. Let $S = (s_\iota \to[\pi_\iota] s_{\iota + 1})_{\iota<\alpha}$ and $T = (t_\iota \to[\pi_\iota] t_{\iota + 1})_{\iota< \alpha }$. We prove the statement by an induction on the length $\alpha$ of $S$. If $\alpha = 0$, then the statement is trivially true. If $\alpha$ is a successor ordinal, then a straightforward argument shows that the statement follows from the induction hypothesis. Here the restriction to left-linear systems is vital. Let $\alpha$ be a limit ordinal and let $\pi \in \posNonBot{s_\alpha}$.
We can then reason as follows: $$\begin{aligned} t_\alpha(\pi) \text{ is labelled} \quad &\text{iff} \quad \exists \beta < \alpha\fcolon\; t_\beta(\pi) \text{ is labelled and } \forall \beta \le \iota < \alpha\colon \;\; \pi_\iota \not\le \pi \tag{Lem.~\ref{lem:nonBotLimRed}} \\ &\text{iff} \quad \exists \beta < \alpha\fcolon\; \pi \in \dEsc{U}{\prefix{S}{\beta}} \text{ and } \forall \beta \le \iota < \alpha\colon \;\; \pi_\iota \not\le \pi \tag{ind.\ hyp.} \\ &\text{iff} \quad \pi \in \dEsc{U}{S} \tag{Lem.~\ref{lem:descLimRed}} \end{aligned}$$ Constructing Complete Developments {#sec:complete-development} ---------------------------------- Complete developments are usually defined for (almost) orthogonal systems. This ensures that the residuals of redexes are again redexes. Since we are going to use complete developments for potentially overlapping systems as well, we need to make restrictions on the set of redex occurrences instead: Two distinct redex occurrences $u,v$ in a term $t$ are called *conflicting* if there is a position $\pi$ such that $v = u\concat \pi$ and $\pi$ is a pattern position of the redex at $u$, or, vice versa, $u = v\concat \pi$ and $\pi$ is a pattern position of the redex at $v$. If this is not the case, then $u$ and $v$ are called *non-conflicting*. One can easily see that in an orthogonal TRS any pair of redex occurrences is non-conflicting. \[def:devel\] Let $\calR$ be a left-linear TRS, $s$ a partial term in $\calR$, and $U$ a set of pairwise non-conflicting redex occurrences in $s$. 1. A *development* of $U$ in $s$ is a strongly $\prs$-converging reduction $S\fcolon s \pto{\alpha} t$ in which each reduction step $\phi_\iota\fcolon t_\iota \to[\pi_\iota] t_{\iota + 1}$ contracts a redex at $\pi_\iota \in \dEsc{U}{\prefix{S}{\iota}}$. 2. A development $S\fcolon s \pato t$ of $U$ in $s$ is called *complete*, denoted $S\fcolon s \pato[U] t$, if $\dEsc{U}{S} = \emptyset$.
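To make the definition concrete, consider the following small worked example over a hypothetical orthogonal TRS (not taken from the preceding text) with the rules $\rho_1\fcolon f(x) \to g(x,x)$ and $\rho_2\fcolon a \to b$. Take the term $s = f(a)$ and the set $U = \set{\seq{}, \seq{0}}$ of redex occurrences. The reduction $$S\fcolon\quad f(a) \to g(a,a) \to g(b,a) \to g(b,b)$$ first contracts the redex at $\seq{} \in U$, which leaves $\set{\seq{0},\seq{1}}$ as the residuals of $\seq{0}$; the two remaining steps contract these residuals. Hence, $S$ is a development of $U$ and, since $\dEsc{U}{S} = \emptyset$, even a complete development $S\fcolon s \pato[U] g(b,b)$. Truncating $S$ after its second step yields a development that is not complete, as the residual $\seq{1}$ remains.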
This is a straightforward generalisation of complete developments known from the finitary setting and coincides with the corresponding formalisation for metric infinitary rewriting [@kennaway95ic] if restricted to total terms. The restriction to non-conflicting redex occurrences is essential in order to guarantee that the redex occurrences are independent of each other: Let $\calR$ be a left-linear TRS, $s$ a partial term in $\calR$, $U$ a set of pairwise non-conflicting redex occurrences in $s$, and $S\fcolon s \sato[U] t$ a development of $U$ in $s$. Then also $\dEsc{U}{S}$ is a set of pairwise non-conflicting redex occurrences. This can be proved by induction on the length of $S$. The part showing that the descendants are again redex occurrences can be copied almost verbatim from Proposition \[prop:residual\]. Instead of referring to the non-overlappingness of the system one can refer to the non-conflictingness of the preceding residuals, which can be assumed by the induction hypothesis. The part of the induction proof that shows non-conflictingness is analogous to Proposition \[prop:disjDesc\]. It is relatively easy to show that complete developments of sets of non-conflicting redex occurrences always exist in the partial order setting. The reason for this is that strongly $\prs$-continuous reductions always strongly $\prs$-converge as well. This means that as long as there are (residuals of) redex occurrences left after an incomplete development, one can extend this development arbitrarily by contracting some of the remaining redex occurrences. The only thing that remains to be shown is that one can devise a reduction strategy which eventually contracts (all residuals of) all redexes. The proposition below shows that a parallel-outermost reduction strategy will always yield a complete development in a left-linear system.
\[prop:exComplDev\] Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, and $U$ a set of pairwise non-conflicting redex occurrences in $t$. Then $U$ has a complete development in $t$. Let $t_0 = t$, $U_0 = U$ and $V_0$ the set of outermost occurrences in $U_0$. Furthermore, let $S_0\fcolon t_0 \pato[V_0] t_1$ be some complete development of $V_0$ in $t_0$. $S_0$ can be constructed by contracting the redex occurrences in $V_0$ in a left-to-right order. This step can be continued for each $i < \omega$ by taking $U_{i+1} = \dEsc{U_i}{S_i}$, where $S_{i}\fcolon t_{i} \pato[V_{i}] t_{i+1}$ is some complete development of $V_{i}$ in $t_{i}$ with $V_{i}$ the set of outermost redex occurrences in $U_{i}$. Note that then, by iterating Proposition \[prop:descSeqRed\], it holds that $$\begin{gathered} \dEsc{U}{S_0\concat \dots \concat S_{n-1}} = U_n \quad \text{ for all } n < \omega \tag{1} \label{eq:compDev1} \end{gathered}$$ If there is some $n < \omega$ for which $U_n = \emptyset$, then $S_0 \concat \dots \concat S_{n-1}$ is a complete development of $U$ according to (1). If this is not the case, consider the reduction $S = \Concat_{i < \omega} S_i$, i.e. the concatenation of all $S_i$'s. We claim that $S$ is a complete development of $U$. Suppose that this is not the case, i.e. $\dEsc{U}{S} \neq \emptyset$. Hence, there is some $u \in \dEsc{U}{S}$. Since all $U_i$'s are non-empty, so are the $V_i$'s. Consequently, all $S_i$'s are non-empty reductions, which implies that $S$ is a reduction of limit ordinal length, say $\lambda$. Therefore, we can apply Lemma \[lem:descLimRed\] to infer from $u \in \dEsc{U}{S}$ that there is some $\alpha < \lambda$ such that $u \in \dEsc{U}{\prefix{S}{\alpha}}$ and all reduction steps beyond $\alpha$ do not take place at $u$ or above. This is not possible due to the parallel-outermost reduction strategy that $S$ adheres to.
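The parallel-outermost construction in the proof above can be sketched executably. Everything below is a simplifying assumption of this sketch, not the paper's formalism: finite terms are encoded as nested tuples such as `('f', ('a',))`, positions as tuples of child indices, and the two hypothetical orthogonal rules $f(x) \to g(x,x)$ and $a \to b$ are hard-wired into `contract` and `descendants_step`.

```python
# Sketch of a parallel-outermost complete development on finite terms.
# Hypothetical TRS:  rho1: f(x) -> g(x, x)   and   rho2: a -> b.

def subterm(t, p):
    for i in p:
        t = t[1 + i]
    return t

def replace(t, p, s):
    if not p:
        return s
    i = p[0]
    return t[:1 + i] + (replace(t[1 + i], p[1:], s),) + t[2 + i:]

def contract(t, p):
    """Contract the redex at position p."""
    s = subterm(t, p)
    if s[0] == 'f':                       # rho1: f(x) -> g(x, x)
        return replace(t, p, ('g', s[1], s[1]))
    if s[0] == 'a':                       # rho2: a -> b
        return replace(t, p, ('b',))
    raise ValueError('no redex at %r' % (p,))

def redexes(t, p=()):
    """All redex occurrences in t."""
    out = [p] if t[0] in ('f', 'a') else []
    for i, c in enumerate(t[1:]):
        out += redexes(c, p + (i,))
    return out

def descendants_step(u, p):
    """Residuals of occurrence u after contracting the redex at p
    (the duplication pattern is that of rho1; rho2 has no arguments,
    so nothing below an a-redex ever needs to be traced)."""
    if u[:len(p)] != p:
        return {u}                        # u disjoint from, or above, p
    rest = u[len(p):]
    if rest == ():
        return set()                      # the contracted redex itself
    return {p + (0,) + rest[1:], p + (1,) + rest[1:]}

def complete_development(t, U):
    """Repeatedly contract an outermost residual of U until none is
    left; on finite terms this terminates with dEsc(U, S) empty."""
    U = set(U)
    while U:
        outermost = [u for u in U
                     if not any(v != u and u[:len(v)] == v for v in U)]
        p = min(outermost)
        t = contract(t, p)
        U = set().union(*(descendants_step(u, p) for u in U if u != p))
    return t
```

Running `complete_development` on $f(f(a))$ with all three of its redex occurrences contracts the root redex first and then all residuals this creates, ending in $g(g(b,b),g(b,b))$.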
This shows that complete developments of any set of redex occurrences do always exist in any (almost) orthogonal system. This is already an improvement over strongly $\mrs$-converging reductions, which only allow this if no collapsing rules are present or the considered set of redex occurrences does not contain an infinite set of nested collapsing redexes – also known as an *infinite collapsing tower*. We shall discuss the issue of collapsing rules as well as infinite collapsing towers in more detail in the subsequent section, where we will show that complete developments are also unique in the sense that the final outcome is uniquely determined by the initial set of redex occurrences. Uniqueness of Complete Developments {#sec:uniqueness} ----------------------------------- The goal of this section is to show that the final term of a complete development is uniquely determined by the initial set of redex occurrences $U$. There are several techniques to show this in the metric model. One of these approaches, introduced by Kennaway and de Vries [@kennaway03book] and detailed by Ketema and Simonsen [@ketema10lmcs; @ketema05lpar] for infinitary combinatory reduction systems, uses so-called *paths*. Paths are constructed such that they, starting from the root, run through the initial term $t$ of the complete development, and whenever a redex occurrence of the development is encountered, the path jumps to the root of the right-hand side of the corresponding rule and jumps back to the term $t$ when it reaches a variable in the right-hand side. Figure \[fig:path1\] illustrates this idea. It shows a path in a term $t$ that encounters two redex occurrences of the complete development. As soon as such a redex occurrence is encountered, the path jumps to the right-hand side of the corresponding rule as indicated by the dashed arrows. Then the path runs through the right-hand side.
When a variable is encountered, the path jumps back to the position of the term $t$ that matches the variable. This jump is again indicated by a dashed arrow. The path that is obtained by this construction is shown in Figure \[fig:path2\]. With the collection of the thus obtained paths one can then construct the final term of the complete development. This technique – slightly modified – can also be applied in the present setting. A path consists of nodes, which are connected by edges. We have two kinds of nodes: a node $(\top,\pi)$ represents a location in the term $t$ and a node $(r,\pi,u)$ represents a location in the right-hand side $r$ of a rule. These nodes of the form $(\top,\pi)$ and $(r,\pi,u)$ encode that the path is currently at position $\pi$ in the term $t$ resp. $r$. The additional component $u$ provides the information that the path jumped to the right-hand side $r$ from the redex $\atPos{t}{u}$. Both nodes and the edges between them are labelled. Each node is labelled with the symbol at the current location of the path, unless it is a redex occurrence in $t$ or a variable occurrence in a right-hand side. The labellings of the edges provide information on how the path moves through the terms: a labelling $i$ represents a move along the $i$-th edge in the term tree from the current location whereas an empty labelling indicates a jump from or to a right-hand side of a rule. \[def:redexPath\] Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, and $U$ a set of pairwise non-conflicting redex occurrences in $t$. A $U,\calR$-*path* (or simply *path*) in $t$ is a sequence of length at most $\omega$ containing so-called *nodes* and *edges* in an alternating manner like this: $$\seq{n_0, e_0, n_1, e_1, n_2, e_2, \dots}$$ where the $n_i$'s are nodes and the $e_i$'s are edges.
A node is either a pair of the form $(\top,\pi)$ with $\pi \in \pos{t}$ or a triple of the form $(r,\pi,u)$ with $r$ the right-hand side of a rule in $\calR$, $\pi \in \pos{r}$, and $u \in U$. Edges are denoted by arrows $\edge$. Both nodes and edges might be labelled, by elements in $\Sigma_\bot \cup \calV$ and $\nat$, respectively. We write paths such as the one sketched above as $$\node{n_0} \edge \node{n_1} \edge \node{n_2} \edge \cdots$$ or, when explicitly indicating labels, as $$\node{n_0}[l_0] \edge[l_1] \node{n_1}[l_2] \edge[l_3] \node{n_2}[l_4] \edge[l_5] \cdots$$ where empty labels are explicitly given by the symbol $\emptylab$. If a path has a segment of the form $n \edge n'$, then we say there is an edge from $n$ to $n'$ or that $n$ has an outgoing edge to $n'$. Every path starts with the node $(\top,\emptyseq)$ and is either infinitely long or ends with a node. For each node $n$ having an outgoing edge to a node $n'$, the following must hold: 1. If $n$ is of the form $(\top,\pi)$, then \[item:redexPath1\] 1. $n' = (\top,\pi \concat i)$ and the edge is labelled by $i$, with $\pi\concat i \in \pos{t}$ and $\pi \nin U$, or \[item:redexPath1a\] 2. $n' = (r,\emptyseq,u)$ and the edge is unlabelled, with $\atPos{t}{u}$ a $\rho$-redex for $\rho\fcolon l \to r \in R$ and $u \in U$. \[item:redexPath1b\] 2. If $n$ is of the form $(r,\pi,u)$, then \[item:redexPath2\] 1. $n' = (r,\pi \concat i,u)$ and the edge is labelled by $i$, with $\pi \concat i \in \pos{r}$, or \[item:redexPath2a\] 2. $n' = (\top,u \concat \pi')$ and the edge is unlabelled, with $\atPos{t}{u}$ a $\rho$-redex for $\rho\fcolon l \to r \in R$, $\atPos{r}{\pi}$ a variable, and $\pi'$ the unique occurrence of $\atPos{r}{\pi}$ in $l$. \[item:redexPath2b\] Additionally, the nodes of a path are supposed to be labelled in the following way: 1. A node of the form $(\top,\pi)$ is unlabelled if $\pi \in U$ and is labelled by $t(\pi)$ otherwise. \[item:redexPath3\] 2.
A node of the form $(r,\pi, u)$ is unlabelled if $\atPos{r}{\pi}$ is a variable and labelled by $r(\pi)$ otherwise. \[item:redexPath4\] The above definition is actually a coinductive one. This is necessary to also define paths of infinite length. Also in [@kennaway03book] paths are considered to be possibly infinite, although they are defined inductively and are, therefore, finite. \[rem:paths\] Our definition of paths deviates slightly from the usual definition found in the literature [@kennaway95ic; @ketema10lmcs; @ketema11ic]: In our setting, term nodes are of the form $(\top,\pi)$. The symbol $\top$ is used to indicate that we are in the host term $t$. In the definitions found in the literature, the term $t$ itself is used for that, i.e. term nodes are of the form $(t,\pi)$. Our definition of paths makes them less dependent on the term $t$ they are constructed in. This makes it easier to construct a path in a host term from other paths in different host terms. This will become necessary in the proof of Lemma \[lem:presPath\]. However, we have to keep in mind that the node labels in a path are dependent on the host term under consideration. Thus, the labelling of a path might be different depending on which host term it is considered to be in. Returning to the schematic example illustrated in Figure \[fig:path\], we can observe how the construction of a path is carried out: The path starts with a segment in the term $t$. This segment is entirely regulated by rule 1a; all its edges and nodes are labelled according to rule 1a and labelling condition 1. The jump to the right-hand side $r_1$ following that initial segment is justified by rule 1b. This jump consists of a node $(\top,u_1)$, unlabelled according to labelling condition 1, corresponding to the redex occurrence $u_1$, and an unlabelled edge to the node $(r_1,\emptyseq,u_1)$, corresponding to the root of the right-hand side $r_1$.
The segment of the path that runs through the right-hand side $r_1$ is subject to rule 2a; again all its nodes and edges are labelled, now according to rule 2a and labelling condition 2. As soon as a variable is reached in the right-hand side term (in the schematic example it is the variable $x$), a jump to the main term $t$ is performed as required by rule 2b. This jump consists of a node $(r_1,\pi,u_1)$, unlabelled according to labelling condition 2, where $\pi$ is the current position in $r_1$, i.e. the variable occurrence, and an unlabelled edge to the node $(\top,u_1\concat \pi')$. The position $\pi'$ is the occurrence of the variable $x$ in the left-hand side. As we only consider left-linear systems, this occurrence is unique. Afterwards, the same behaviour is repeated: A segment in $t$ is followed by a jump to a segment in the right-hand side $r_2$ which is in turn followed by a jump back to a final segment in $t$. Note that paths do not need to be maximal. As indicated in the schematic example, the path ends somewhere within the main term, i.e. not necessarily at a constant symbol or a variable. What the example does not show, but which is obvious from the definition, is that paths can also terminate within a right-hand side. A jump back to the main term is only required if a variable is encountered. The purpose of the concept of paths is to simulate the contraction of all redexes of the complete development in a locally restricted manner, i.e. only along some branch of the term tree. This locality keeps the proofs more concise and makes them easier to understand once we have grasped the idea behind paths. The strategy to prove our conjecture of uniquely determined final terms is to show that paths can be used to define a term and that a contraction of a redex of the complete development preserves a property of the collection of all paths which ensures that the induced term remains invariant. Then we only have to observe that the induced term of paths in a term with no redexes (in $U$) is the term itself.
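The jump mechanism of paths can also be illustrated with a small executable model. All encodings below are assumptions of this sketch, not the paper's formalism: terms are nested tuples, the unique variable of each rule is the string `'x'`, positions are tuples of child indices, and the TRS is the one used in the worked example considered further below, with rules $f(x) \to h(x)$ and $h(x) \to x$. For simplicity the sketch follows only leftmost edges, which suffices for terms whose symbols are at most unary.

```python
# Trace of the leftmost maximal U-path through a term: walk through the
# term, jump into a right-hand side at every redex occurrence in U, and
# jump back when the rhs variable is reached.

BOT = '⊥'
RULES = {'f': (('f', 'x'), ('h', 'x')),   # r1: f(x) -> h(x)
         'h': (('h', 'x'), 'x')}          # r2: h(x) -> x  (collapsing)

def subterm(t, p):
    for i in p:
        t = t[1 + i]
    return t

def var_pos(l):
    """Position of the unique variable 'x' in a left-linear pattern."""
    if l == 'x':
        return ()
    for i, c in enumerate(l[1:]):
        q = var_pos(c)
        if q is not None:
            return (i,) + q
    return None

def trace(t, U):
    """Sequence of node and edge labels along the leftmost maximal
    U-path, with empty labels and the node label ⊥ omitted."""
    out, node = [], ('top', ())
    while True:
        if node[0] == 'top':
            pi = node[1]
            if pi in U:                    # jump into the right-hand side
                node = ('rhs', RULES[subterm(t, pi)[0]][1], (), pi)
                continue
            s = subterm(t, pi)
            if s[0] != BOT:
                out.append(s[0])           # node label
            if len(s) == 1:
                return tuple(out)          # constant or ⊥: path ends
            out.append(0)                  # edge label: leftmost edge
            node = ('top', pi + (0,))
        else:                              # inside a right-hand side
            _, r, pr, u = node
            sub = subterm(r, pr)
            if sub == 'x':                 # variable: jump back into t
                l = RULES[subterm(t, u)[0]][0]
                node = ('top', u + var_pos(l))
                continue
            out.append(sub[0])
            if len(sub) == 1:
                return tuple(out)
            out.append(0)
            node = ('rhs', r, pr + (0,), u)
```

On $t = g(f(g(h(\bot))))$ with both redex occurrences in $U$, the sketch produces the trace $\seq{g, 0, h, 0, g, 0}$; with $U = \emptyset$ it simply spells out the branch of $t$ itself.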
The following fact is obvious from the definition of a path. \[fact:emptyEdge\] Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, and $U$ a set of redex occurrences in $t$. 1. An edge in a $U,\calR$-path in $t$ is unlabelled iff the preceding node is unlabelled. 2. Any prefix of a $U,\calR$-path in $t$ that ends in a node is also a $U,\calR$-path in $t$. As we have already mentioned, collapsing rules and in particular so-called infinite collapsing towers play a significant role in $\mrs$-convergent reductions as they obstruct complete developments. Also in our setting of $\prs$-convergent reductions they are important as they are responsible for volatile positions: Let $\calR$ be a TRS. 1. A rule $l \to r$ in $\calR$ is called *collapsing* if $r$ is a variable. The unique position of the variable $r$ in $l$ is called the *collapsing position* of the rule. 2. A $\rho$-redex is called *collapsing* if $\rho$ is a collapsing rule. 3. A *collapsing tower* is a non-empty sequence $(u_i)_{i < \alpha}$ of collapsing redex occurrences in a term $t$ such that $u_{i+1} = u_i \concat \pi_i$ for each $i<\alpha$, where $\pi_i$ is a collapsing position of the redex at $u_i$. It is called *maximal* if it is not a proper prefix of another collapsing tower. One can easily see that, in orthogonal TRSs, maximal collapsing towers in the same term are uniquely determined by their topmost redex occurrence. That is, two maximal collapsing towers $(u_i)_{i<\alpha}, (v_i)_{i<\alpha}$ in the same term are equal iff $u_0 = v_0$. As mentioned, we shall use the $U,\calR$-paths in a term $t$ in order to define the final term of a complete development of $U$ in $t$. However, in order to do that, we only need the information that is available from the labellings. The inner structure of nodes is only used for the bookkeeping that is necessary for defining paths. 
The following notion of traces defines projections to the labels of paths: Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, and $U$ a set of pairwise non-conflicting redex occurrences in $t$. 1. Let $\Pi$ be a $U,\calR$-path in $t$. The *trace* of $\Pi$, denoted $\trace{t}{\Pi}$, is the projection of $\Pi$ to the labelling of its nodes and edges ignoring empty labels and the node label $\bot$. 2. $\paths{t}{U}{\calR}$ is used to denote the set of all $U,\calR$-paths in $t$ that end in a labelled node, or are infinite but have a finite trace. The set of traces of paths in $\paths{t}{U}{\calR}$ is denoted by $\traces{t}{U}{\calR}$. By Fact \[fact:emptyEdge\], the trace of a path is a sequence alternating between elements in $\Sigma \cup \calV$ and $\nat$, which, if non-empty, starts with an element in $\Sigma \cup \calV$. Moreover, by definition, $\traces{t}{U}{\calR}$ is a set of finite traces of $U,\calR$-paths in $t$. As we have mentioned in Remark \[rem:paths\], the labelling of a path depends on the host term under consideration. Hence, also the trace of a path is dependent on the host term. That is why we need to index the trace mapping $\trace{t}{\cdot}$ with the corresponding host term $t$. Consider the term $t = g(f(g(h(\bot))))$ and the TRS $\calR$ consisting of the two rules $$f(x) \to h(x), \qquad h(x) \to x.$$ Furthermore, let $U$ be the set of all redex occurrences in $t$, viz. $U = \set{\seq{0},\seq{0}^3}$. The following path $\Pi$ is a $U,\calR$-path in $t$: $$\begin{aligned} \node{(\top,\emptyseq)}[g] &\edge[0] \node{(\top,\seq{0})}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,\seq{0})}[h] \edge[0] \node{(r_1,\seq{0},\seq{0})}[\emptylab] \edge[\emptylab] \node{(\top,\seq{0}^2)}[g] \\ &\edge[0] \node{(\top,\seq{0}^3)}[\emptylab] \edge[\emptylab] \node{(r_2,\emptyseq,\seq{0}^3)}[\emptylab] \edge[\emptylab] \node{(\top,\seq{0}^4)}[\bot] \end{aligned}$$ As a matter of fact, $\Pi$ is the greatest path of $t$.
Hence, according to Fact \[fact:emptyEdge\], the set of all prefixes of $\Pi$ ending in a node is the set of all $U,\calR$-paths in $t$. Note that since $\Pi$ itself ends in a labelled node, it is contained in $\paths{t}{U}{\calR}$. The trace $\trace{t}{\Pi}$ of $\Pi$ is the sequence $$\seq{g, 0, h, 0, g, 0}$$ Now consider the term $t' = g(f(g(h^\omega)))$ and the set $U'$ of all its redexes, viz. $U' = \set{\seq{0}}\cup\set{\seq{0}^3,\seq{0}^4,\dots}$. Then the following path $\Pi'$ is a $U',\calR$-path in $t'$: $$\begin{aligned} \node{(\top,\emptyseq)}[g] &\edge[0] \node{(\top,\seq{0})}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,\seq{0})}[h] \edge[0] \node{(r_1,\seq{0},\seq{0})}[\emptylab] \edge[\emptylab] \node{(\top,\seq{0}^2)}[g] \edge[0] \node{(\top,\seq{0}^3)}[\emptylab] \\ &\edge[\emptylab] \node{(r_2,\emptyseq,\seq{0}^3)}[\emptylab] \edge[\emptylab] \node{(\top,\seq{0}^4)}[\emptylab] \edge[\emptylab] \node{(r_2,\emptyseq,\seq{0}^4)}[\emptylab] \edge[\emptylab] \node{(\top,\seq{0}^5)}[\emptylab] \edge[\emptylab] \dots \end{aligned}$$ $\Pi'$ is the greatest path of $t'$. The trace $\trace{t'}{\Pi'}$ of $\Pi'$ is the sequence $$\seq{g, 0, h, 0, g, 0}$$ Since $\Pi'$ is infinitely long but has a finite trace, it is contained in $\paths{t'}{U'}{\calR}$. The lemma below shows that there is a one-to-one correspondence between paths in $\paths{t}{U}{\calR}$ and their traces in $\traces{t}{U}{\calR}$. \[lem:traceBij\] Let $\calR$ be an orthogonal TRS, $t$ a partial term in $\calR$, and $U$ a set of redex occurrences in $t$. $\trace{t}{\cdot}$ is a bijection from $\paths{t}{U}{\calR}$ to $\traces{t}{U}{\calR}$. By definition, $\trace{t}{\cdot}$ is surjective. Let $\Pi_1, \Pi_2$ be two paths having the same trace. We will show that then $\Pi_1 = \Pi_2$ by an induction on the length of the common trace. Let $\trace{t}{\Pi_1} = \emptyseq$.
Following Fact \[fact:emptyEdge\], there are two different cases: The first case is that $\Pi_1 = \Pi\concat \node{(\top, \pi)}[\bot]$, where the prefix $\Pi$ corresponds to a finite maximal collapsing tower $(u_i)_{i \le \alpha}$ starting at the root of $t$ or $\Pi$ is empty if such a collapsing tower does not exist. If the collapsing tower exists, then $$\Pi = \node{(\top,u_0)}[\emptylab] \edge[\emptylab] \node{(r_0,\emptyseq,u_0)}[\emptylab] \edge[\emptylab] \node{(\top, u_1)}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,u_1)}[\emptylab] \edge[\emptylab] \dots \edge[\emptylab] \node{(\top, u_\alpha)}[\emptylab] \edge[\emptylab]$$ But then also $\Pi_2$ starts with the prefix $\Pi \concat (\top,\pi)$ due to the uniqueness of the collapsing tower and the involved rules. In both cases, $\Pi_1 = \Pi_2$ follows immediately. The second case is that $\Pi_1$ is infinite. Then there is an infinite collapsing tower $(u_i)_{i < \omega}$ starting at the root of $t$. Hence, $$\Pi_1 = \node{(\top,u_0)}[\emptylab] \edge[\emptylab] \node{(r_0,\emptyseq,u_0)}[\emptylab] \edge[\emptylab] \node{(\top, u_1)}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,u_1)}[\emptylab] \edge[\emptylab] \dots$$ $\Pi_1 = \Pi_2$ follows from the uniqueness of the infinite collapsing tower. At first glance one might additionally find a third case where $\Pi_1 = \Pi \concat \node{(\top,\pi)}[\emptylab] \edge[\emptylab] \node{(r,\emptyseq,\pi)}[\bot]$ with $\Pi$ a prefix corresponding to a collapsing tower as in the first case. However, this is not possible as it would require the occurrence of $\bot$ in a rule. Let $\trace{t}{\Pi_1} = \seq{f}$.
Then there are two cases: Either $\Pi_1 = \Pi\concat \node{(\top, \pi)}[f]$ or $\Pi_1 = \Pi \concat \node{(\top,\pi)}[\emptylab] \edge[\emptylab] \node{(r,\emptyseq,\pi)}[f]$, where the prefix $\Pi$ corresponds to a finite maximal collapsing tower $(u_i)_{i \le \alpha}$ starting at the root of $t$ or $\Pi$ is empty if such a collapsing tower does not exist. The argument is analogous to the argument employed for the first case of the induction base above. Finally, we consider the induction step. Hence, there are two cases: Either $\trace{t}{\Pi_1} = T \concat \seq{i}$ or $\trace{t}{\Pi_1} = T \concat \seq{i, f}$. For both cases, the induction hypothesis can be invoked by taking two prefixes $\Pi'_1$ and $\Pi'_2$ of $\Pi_1$ and $\Pi_2$, respectively, which both have the trace $T$ and, therefore, are equal according to the induction hypothesis. The argument that the remaining suffixes of $\Pi_1$ and $\Pi_2$ are equal is then analogous to the argument for the two base cases. As mentioned above, the traces of paths contain all information necessary to define a term which we will later identify to be the final term of the corresponding complete development. The following definition explains how such a term, called a *matching term*, is determined: Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, and $U$ a set of pairwise non-conflicting redex occurrences in $t$. 1. The *position* of a trace $T \in \traces{t}{U}{\calR}$, denoted $\postrace{T}$, is the subsequence of $T$ containing only the edge labels. The set of all positions of traces in $\traces{t}{U}{\calR}$ is denoted $\postraces{t}{U}{\calR}$. 2. The *symbol* of a trace $T \in \traces{t}{U}{\calR}$, denoted $\symtrace{t}{T}$, is $f$ if $T$ ends in a node label $f$, and is $\bot$ otherwise, i.e. whenever $T$ is empty or ends in an edge label. 3.
A term $t'$ is said to *match* $\traces{t}{U}{\calR}$ if $\pos{t'} = \postraces{t}{U}{\calR}$ and $t'(\postrace{T}) = \symtrace{t}{T}$ for all $T \in \traces{t}{U}{\calR}$. Returning to the definition of paths, one can see that the label of a node is the symbol of the “current” position in a term. Similarly, the label of an edge says which edge in the term tree was taken at that point in the construction of the path. Hence, by projecting to the edge labels, we obtain the “history” of the path, i.e. the position. In the same way we obtain the symbol of that node by taking the label of the last node of the path, provided the corresponding path ends in a non-$\bot$-labelled node. In the other case that the trace does not end in a node label, the corresponding path either ends in a node labelled $\bot$ or is infinite. As we will see, infinite paths with finite traces correspond to infinite collapsing towers, which in turn yield volatile positions within the complete development. Eventually, these volatile positions will also give rise to $\bot$ subterms. The following lemma shows that there is also a one-to-one correspondence between the traces in $\traces{t}{U}{\calR}$ and their positions in $\postraces{t}{U}{\calR}$: \[lem:postraceBij\] Let $\calR$ be an orthogonal TRS, $t$ a partial term in $\calR$ and $U$ a set of redex occurrences in $t$. $\postrace{\cdot}$ is a bijection from $\traces{t}{U}{\calR}$ to $\postraces{t}{U}{\calR}$. An argument similar to the one for Lemma \[lem:traceBij\] can be given in order to show that the composition $\postrace{\cdot}\circ\trace{t}{\cdot}$ is a bijection. Together with the bijectivity of $\trace{s}{\cdot}$, according to Lemma \[lem:traceBij\], this yields the bijectivity of $\postrace{\cdot}$. Having this lemma, the following proposition is an easy consequence of the definition of matching terms. 
It shows that matching terms always exist and are uniquely determined: Let $\calR$ be an orthogonal TRS, $t$ a partial term in $\calR$, and $U$ a set of redex occurrences in $t$. Then there is a unique term, denoted $\devTerm{t}{U}{\calR}$, that matches $\traces{t}{U}{\calR}$. Define the mapping $\phi\fcolon \postraces{t}{U}{\calR} \funto \Sigma_\bot \cup \calV$ by setting $\phi(\postrace{T}) = \symtrace{t}{T}$ for each trace $T \in \traces{t}{U}{\calR}$. By Lemma \[lem:postraceBij\], $\phi$ is well-defined. Moreover, it is easy to see from the definition of paths that $\postraces{t}{U}{\calR}$ is closed under prefixes and that $\phi$ respects the arity of the symbols, i.e. $\pi\concat i \in \postraces{t}{U}{\calR}$ iff $0 \le i < \srank{\phi(\pi)}$. Hence, $\phi$ uniquely determines a term $s$ with $s(\pi) = \phi(\pi)$ for all $\pi \in \postraces{t}{U}{\calR}$. By construction, $s$ matches $\traces{t}{U}{\calR}$. Moreover, any other term $s'$ matching $\traces{t}{U}{\calR}$ must satisfy $s'(\pi) = \phi(\pi)$ for all $\pi \in \postraces{t}{U}{\calR}$ and is therefore equal to $s$. It is also obvious that the matching term of a term $t$ w.r.t. an empty set of redex occurrences is the term $t$ itself. \[lem:matchEmpty\] For any TRS $\calR$ and any partial term $t$ in $\calR$, it holds that $\devTerm{t}{\emptyset}{\calR} = t$. Straightforward. \[rem:invDevTerm\] Now it only remains to be shown that the matching term stays invariant during a development, i.e. that, for each development $S\fcolon t \pato t'$ of $U$, the matching terms $\devTerm{t}{U}{\calR}$ and $\devTerm{t'}{\dEsc{U}{S}}{\calR}$ coincide. Since the matching term $\devTerm{t}{U}{\calR}$ only depends on the set $\traces{t}{U}{\calR}$ of traces, it is sufficient to show that $\traces{t}{U}{\calR}$ and $\traces{t'}{\dEsc{U}{S}}{\calR}$ coincide. The key observation is that in each step $s \to s'$ in a development the paths in $s'$ differ from the paths in $s$ only in that they might omit some jumps.
This can be seen in Figure \[fig:path1\]: In a step $s \to s'$ of a development, (some residual of) some redex occurrence in $U$ is contracted. In the picture this corresponds to removing the pattern, say $l_1$, of the redex and replacing it by the corresponding right-hand side $r_1$ of the rule. One can see that, except for the jump to and from the right-hand side $r_1$, the path remains the same. In order to establish the above observation formally, we need a means to simulate reduction steps in a development directly as an operation on paths. The following definition provides a tool for this. Let $\calR$ be a left-linear TRS, $t$ a partial term in $\calR$, $U$ a set of pairwise non-conflicting redex occurrences in $t$, and $\Pi \in \paths{t}{U}{\calR}$. 1. $\Pi$ is said to *contain* a position $\pi \in \pos{t}$ if it contains the node $(\top,\pi)$. 2. For each $u \in U$, the *prefix* of $\Pi$ by $u$, denoted $\Pi^{(u)}$, is defined as $\Pi$ whenever $\Pi$ does not contain $u$ and otherwise as the unique prefix of $\Pi$ that ends in $(\top,u)$. It is obvious from the definition that each prefix $\Pi^{(u)}$ of a path $\Pi\in \paths{t}{U}{\calR}$ by an occurrence $u$ is the maximal prefix of $\Pi$ that does not contain positions that are proper extensions of $u$. Hence, if $\Pi$ contains $u$, then $\Pi^{(u)}$ is the maximal prefix of $\Pi$ that only contains prefixes of $u$ (including $u$ itself). The following lemma is the key step towards proving the invariance of matching terms in developments. It formalises the observation described in Remark \[rem:invDevTerm\]. \[lem:presPath\] Let $\calR$ be an orthogonal TRS, $t$ a partial term in $\calR$, $U$ a set of redex occurrences in $t$, and $S\fcolon t \pato t'$ a development of $U$ in $t$. There is a surjective mapping $\theta_S\fcolon \paths{t}{U}{\calR} \funto \paths{t'}{\dEsc{U}{S}}{\calR}$ such that $\trace{t}{\Pi} = \trace{t'}{\theta_S(\Pi)}$ for all $\Pi \in \paths{t}{U}{\calR}$.
Let $S = (t_\iota \to[\pi_\iota,c_\iota] t_{\iota + 1})_{\iota < \alpha}$. We prove the statement by an induction on $\alpha$. If $\alpha = 0$, then the statement is trivially true. Suppose that $\alpha$ is a successor ordinal $\beta + 1$. Let $T\fcolon t_0 \pto{\beta} t_\beta$ be the prefix of $S$ of length $\beta$ and $\phi_\beta\fcolon t_\beta \to[\pi_\beta] t_\alpha$ the last step of $S$, i.e. $S = T \concat \seq{\phi_\beta}$. By the induction hypothesis, there is a surjective mapping $\theta_T\fcolon \paths{t}{U}{\calR} \funto \paths{t_\beta}{U'}{\calR}$, with $U' = \dEsc{U}{T}$ and $\trace{t}{\Pi} = \trace{t_\beta}{\theta_T(\Pi)}$ for all $\Pi \in \paths{t}{U}{\calR}$. By a careful case analysis (as done in [@ketema11ic]), one can show that there is a surjective mapping $\theta\fcolon \paths{t_\beta}{U'}{\calR} \funto \paths{t_\alpha}{U''}{\calR}$, with $U'' = \dEsc{U'}{\seq{\phi_\beta}} = \dEsc{U}{S}$ and $\trace{t_\beta}{\Pi} = \trace{t_\alpha}{\theta(\Pi)}$ for all $\Pi \in \paths{t_\beta}{U'}{\calR}$. Hence, the composition $\theta_S = \theta \circ \theta_T$ is a surjective mapping from $\paths{t}{U}{\calR}$ to $\paths{t'}{\dEsc{U}{S}}{\calR}$ and satisfies $\trace{t}{\Pi} = \trace{t'}{\theta_S(\Pi)}$ for all $\Pi \in \paths{t}{U}{\calR}$. Let $\alpha$ be a limit ordinal. By induction hypothesis, there is a surjective mapping $\theta_{\prefix{S}{\iota}}$ for each proper prefix $\prefix{S}{\iota}$ of $S$ satisfying $\trace{t_0}{\Pi} = \trace{t_\iota}{\theta_{\prefix{S}{\iota}}(\Pi)}$ for all $\Pi \in \paths{t}{U}{\calR}$. Let $\Pi \in \paths{t}{U}{\calR}$ and $\Pi_\iota = \theta_{\prefix{S}{\iota}}(\Pi)$ for each $\iota < \alpha$. We define $\theta_S(\Pi)$ as follows: $$\theta_S(\Pi) = \liminf_{\iota \limto \alpha} \Pi_\iota^{(\pi_\iota)}$$ First, we have to show that $\theta_S$ is well-defined, i.e. that $\liminf_{\iota \limto \alpha} \Pi_\iota^{(\pi_\iota)}$ is indeed a path in $\paths{t'}{\dEsc{U}{S}}{\calR}$, and that it preserves traces.
There are two cases to be considered: If there is an outermost-volatile position $\pi$ in $S$ that is contained in $\Pi_\iota$ whenever $\pi_\iota = \pi$, then there is some $\beta < \alpha$ with $\pi_\iota \not< \pi$ for all $\beta \le \iota < \alpha$. Hence, $\theta_S(\Pi) = \Pi_\beta^{(\pi)}$. By Lemma \[lem:nonBotLimRed\] and Lemma \[lem:botLimRed\], we have that $\Pi_\beta^{(\pi)} \in \paths{t'}{\dEsc{U}{S}}{\calR}$, in particular because $t'(\pi) = \bot$. Since the suffix $\Pi'$ with $\Pi_\beta = \Pi_\beta^{(\pi)} \concat \Pi'$ follows an infinite collapsing tower and is therefore entirely unlabelled, it cannot contribute to the trace of $\Pi_\beta$. Consequently, $$\trace{t}{\Pi} \stackrel{IH}{=} \trace{t_\beta}{\Pi_\beta} = \trace{t'}{\Pi_\beta^{(\pi)}} = \trace{t'}{\theta_{S}(\Pi)}.$$ If, on the other hand, there is no such outermost-volatile position, then either the sequence $(\Pi_\iota^{(\pi_\iota)})_{\iota<\alpha}$ becomes stable at some point or the sequence $(\Glb_{\iota<\gamma} \Pi_\iota^{(\pi_\iota)})_{\gamma<\alpha}$ grows monotonically towards the infinite path $\theta_S(\Pi)$. In both cases well-definedness and preservation of traces follow easily from the induction hypothesis. Lastly, we show the surjectivity of $\theta_S$. To this end, assume some $\Pi \in \paths{t'}{\dEsc{U}{S}}{\calR}$. We show the existence of a path $\ol \Pi \in \paths{t}{U}{\calR}$ with $\theta_S(\ol \Pi) = \Pi$ by distinguishing three cases: 1. \[item:presPathA\] $\Pi$ ends in a redex node $(r,\pi,u)$. Hence, $u \in \dEsc{U}{S}$. According to Lemma \[lem:descLimRed\], this means that there is some $\beta < \alpha$ such that $$\begin{gathered} \label{eq:presPathI} \text{$\pi_\iota \not\le u$ for all $\beta \le \iota < \alpha$.} \tag{1} \end{gathered}$$ Consequently, all terms in $\setcom{t_\iota}{\beta \le \iota < \alpha}$ coincide in all prefixes of $u$, and each $v \in \dEsc{U}{S}$ with $v \le u$ is in $\dEsc{U}{\prefix{S}{\iota}}$ for all $\beta \le \iota < \alpha$.
Hence, for all $\beta \le \gamma < \alpha$ we have $\Pi \in \paths{t_\gamma}{\dEsc{U}{\prefix{S}{\gamma}}}{\calR}$ with $\trace{t'}{\Pi}=\trace{t_\gamma}{\Pi}$. By induction hypothesis there is for each $\beta \le \gamma < \alpha$ some $\Pi_\gamma \in \paths{t}{U}{\calR}$ that is mapped to $\Pi \in \paths{t_\gamma}{\dEsc{U}{\prefix{S}{\gamma}}}{\calR}$ by $\theta_{\prefix{S}{\gamma}}$ with $\trace{t}{\Pi_\gamma} = \trace{t_\gamma}{\Pi}$. Hence, $\trace{t}{\Pi_\gamma} = \trace{t'}{\Pi}$, which means that all paths $\Pi_\gamma$, with $\beta \le \gamma < \alpha$, have the same trace in $t$ and are therefore equal according to Lemma \[lem:traceBij\]. Let us call this path $\ol \Pi$. That is, $\theta_{\prefix{S}{\gamma}}(\ol \Pi) = \Pi$ for all $\beta \le \gamma < \alpha$. Since $\pi_\gamma \not\le u$, we also have $(\theta_{\prefix{S}{\gamma}} (\ol\Pi))^{(\pi_\gamma)} = \Pi$. Consequently, $\theta_S(\ol\Pi) = \Pi$. 2. $\Pi$ ends in a term node $(\top, \pi)$. Let $f = t'(\pi)$. If $f \neq \bot$, then we can apply Lemma \[lem:nonBotLimRed\] to obtain some $\beta < \alpha$ such that $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. Then we can reason as in case (\[item:presPathA\]) starting from (1). If $f = \bot$, then we have to distinguish two cases according to Lemma \[lem:botLimRed\]: If there is some $\beta < \alpha$ with $t_\beta(\pi) = \bot$ and $\pi_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$, then we can again employ the same argument as for case (\[item:presPathA\]) starting from (1). Otherwise, i.e.
if $\pi$ is an outermost-volatile position in $S$, then we have some $\beta < \alpha$ such that $\pi_\iota \not< \pi$ for all $\beta \le \iota < \alpha$ and such that $$\begin{gathered} \label{eq:presPathII} \text{for each $\beta \le \gamma < \alpha$ there is some $\gamma \le \gamma' < \alpha$ with $\pi_{\gamma'} = \pi$.} \tag{2} \end{gathered}$$ Hence, we have for each $\beta \le \gamma < \alpha$ some $\Pi_\gamma \in \paths{t_\gamma}{\dEsc{U}{\prefix{S}{\gamma}}}{\calR}$ and an infinite collapsing tower $(u_i)_{i < \omega}$ in $\dEsc{U}{\prefix{S}{\gamma}}$ with $u_0 = \pi$ such that $\Pi_\gamma$ is of the form $$\Pi \concat \edge[\emptylab] \node{(r_0,\emptyseq,u_0)}[\emptylab] \edge[\emptylab] \node{(\top, u_1)}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,u_1)}[\emptylab] \edge[\emptylab] \dots$$ Therefore, $\trace{t_\gamma}{\Pi_\gamma} = \trace{t'}{\Pi}$. By induction hypothesis there is some $\ol \Pi_\gamma \in \paths{t}{U}{\calR}$ with $\theta_{\prefix{S}{\gamma}}(\ol\Pi_\gamma)=\Pi_\gamma$ and $\trace{t}{\ol\Pi_\gamma} = \trace{t_\gamma}{\Pi_\gamma}$. Hence, $\trace{t}{\ol\Pi_\gamma} = \trace{t'}{\Pi}$, i.e. all $\ol\Pi_\gamma$ have the same trace in $t$ and are therefore equal according to Lemma \[lem:traceBij\]. Let us call this path $\ol \Pi$. Since $(\theta_{\prefix{S}{\gamma}}(\ol\Pi))^{(\pi)}=\Pi_\gamma^{(\pi)}= \Pi$, we can use (2) to obtain that $\theta_S(\ol\Pi) = \Pi$. 3. $\Pi$ is infinite. Hence, $\Pi$ is of the form $$\Pi' \concat \node{(\top, u_0)}[\emptylab] \edge[\emptylab] \node{(r_0,\emptyseq,u_0)}[\emptylab] \edge[\emptylab] \node{(\top, u_1)}[\emptylab] \edge[\emptylab] \node{(r_1,\emptyseq,u_1)}[\emptylab] \edge[\emptylab] \dots$$ with $(u_i)_{i<\omega}$ an infinite collapsing tower in $\dEsc{U}{S}$.
Consequently, by Lemma \[lem:descLimRed\], for each $u_i \in \dEsc{U}{S}$ there is some $\beta_i < \alpha$ such that $$\begin{gathered} \label{eq:presPathIII} \text{$u_i \in \dEsc{U}{\prefix{S}{\gamma}}$ and $\pi_\gamma \not\le u_i$ for all $\beta_i \le \gamma < \alpha$.} \tag{3} \end{gathered}$$ Since $(u_i)_{i<\omega}$ is a chain (w.r.t. the prefix order), we can assume w.l.o.g. that $(\beta_i)_{i<\omega}$ is a chain as well. Following Remark \[rem:prsAncestor\], we obtain for each $u_i \in \dEsc{U}{S}$ its ancestor $v_i \in U$ with $\dEsc{v_i}{S} = u_i$. Let $\ol \Pi$ be the unique path in $\paths{t}{U}{\calR}$ that contains each $v_i$ and for each $j < \omega$ let $\Pi_j$ be the unique path in $\paths{t_{\beta_j}}{\dEsc{U}{\prefix{S}{\beta_j}}}{\calR}$ containing each $\dEsc{v_i}{\prefix{S}{\beta_j}}$. Clearly, $\theta_{\prefix{S}{\beta_j}}(\ol\Pi) = \Pi_j$. Note that we have for each $j < \omega$ that all paths $\theta_{\prefix{S}{\iota}}(\ol\Pi)$ with $\beta_j \le \iota < \alpha$ coincide in their prefix by $u_j$, which is a prefix of $\Pi$. Since additionally $(u_i)_{i<\omega}$ is a strict chain and because of (3), we can conclude that $\theta_S(\ol\Pi) = \Pi$. The above lemma effectively establishes the invariance of matching terms during a development. Together with Lemma \[lem:matchEmpty\] this implies the uniqueness of final terms of complete developments of the same redex occurrences. As a corollary from this, we obtain that descendants are also unique among all complete developments: \[prop:finalCompDev\] Let $\calR$ be an orthogonal TRS, $t$ a partial term in $\calR$, and $U$ a set of redex occurrences in $t$. Then the following holds: 1. Each complete development of $U$ in $t$ strongly $\prs$-converges to $\devTerm{t}{U}{\calR}$. \[item:finalCompDev1\] 2. For each set $V \subseteq \posNonBot{t}$ and two complete developments $S$ and $T$ of $U$ in $t$, it holds that $\dEsc{V}{S} = \dEsc{V}{T}$.
\[item:finalCompDev2\] Let $S\fcolon t \pato[U] t'$ be a complete development of $U$ in $t$ strongly $\prs$-converging to $t'$. By Lemma \[lem:presPath\], there is a surjective mapping $\theta\fcolon \paths{t}{U}{\calR} \funto \paths{t'}{U'}{\calR}$ with $\trace{t}{\Pi} = \trace{t'}{\theta(\Pi)}$ for all $\Pi \in \paths{t}{U}{\calR}$, where $U' = \dEsc{U}{S}$. Hence, it holds that $\traces{t}{U}{\calR} = \traces{t'}{U'}{\calR}$ and, consequently, $\devTerm{t}{U}{\calR} = \devTerm{t'}{U'}{\calR}$. Since $S$ is a complete development of $U$ in $t$, we have that $U' = \emptyset$ which implies, according to Lemma \[lem:matchEmpty\], that $\devTerm{t'}{U'}{\calR} = t'$. Therefore, $\devTerm{t}{U}{\calR} = t'$. Let $t' = t^{(V)}$. By Proposition \[prop:chaDesc\], both reductions $S$ and $T$ can be uniquely lifted to reductions $S'$ and $T'$ in $\calR^\lab$, respectively, such that $\dEsc{V}{S}$ and $\dEsc{V}{T}$ are determined by the final term of $S'$ and $T'$, respectively. It is easy to see that also $\calR^\lab$ is an orthogonal TRS and that $S'$ and $T'$ are complete developments of $U$ in $t'$. Hence, we can invoke clause (\[item:finalCompDev1\]) of this proposition to conclude that the final terms of $S'$ and $T'$ coincide and that, therefore, also $\dEsc{V}{S}$ and $\dEsc{V}{T}$ coincide. By the above proposition, the descendants of a complete development of a particular set of redex occurrences are unique. Therefore, we adopt the notation $\dEsc{U}{V}$ for the descendants $\dEsc{U}{S}$ of $U$ by some complete development $S$ of $V$. According to Proposition \[prop:exComplDev\] and Proposition \[prop:finalCompDev\], $\dEsc{U}{V}$ is well-defined for any orthogonal TRS. Furthermore, Proposition \[prop:finalCompDev\] yields the following corollary establishing the diamond property of complete developments: \[cor:prsCRCompDev\] Let $\calR$ be an orthogonal TRS and $t \pato[U] t_1$ and $t \pato[V] t_2$ be two complete developments of $U$ and $V$, respectively, in $t$.
Then $t_1$ and $t_2$ are joinable by complete developments $t_1 \pato[\dEsc{V}{U}] t'$ and $t_2 \pato[\dEsc{U}{V}] t'$. By Proposition \[prop:descPoint\], it holds that $$\dEsc{(U\cup V)}{U} = \dEsc{U}{U} \cup \dEsc{V}{U} = \dEsc{V}{U}.$$ Let $S\fcolon t \pato[U] t_1$, $T\fcolon t \pato[V] t_2$, $S'\fcolon t_1 \pato[\dEsc{V}{U}] t'$ and $T'\fcolon t_2 \pato[\dEsc{U}{V}] t''$. By the equation above and Proposition \[prop:descSeqRed\], we have that $S\concat S'\fcolon t \pato[U] t_1 \pato[\dEsc{V}{U}] t'$ is a complete development of $U \cup V$. Analogously, we obtain that $T\concat T'\fcolon t \pato[V] t_2 \pato[\dEsc{U}{V}] t''$ is a complete development of $U\cup V$, too. According to Proposition \[prop:finalCompDev\], this implies that both $S\concat S'$ and $T\concat T'$ strongly $\prs$-converge to the same term, i.e. $t' = t''$. In the next section we shall make use of complete developments in order to obtain the Infinitary Strip Lemma for $\prs$-converging reductions and a limited form of infinitary confluence for orthogonal systems.
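The diamond property above can be checked mechanically on a small finitary instance. The following sketch is a toy illustration only, not the paper's construction: the tuple encoding of terms, the position convention, and the rules $h(x) \to x$ and $a \to b$ are our own assumptions. It contracts a set of pairwise disjoint redex occurrences in every possible order and confirms that all complete developments end in the same term:

```python
from itertools import permutations

# Terms as nested tuples: ('f', l, r), ('h', x); constants as strings.
def subterm(t, pos):
    for i in pos:
        t = t[i + 1]  # children start at tuple index 1
    return t

def replace(t, pos, s):
    if not pos:
        return s
    i = pos[0]
    return t[:i + 1] + (replace(t[i + 1], pos[1:], s),) + t[i + 2:]

# Hypothetical orthogonal rules: h(x) -> x (collapsing) and a -> b.
def contract(t, pos):
    r = subterm(t, pos)
    if isinstance(r, tuple) and r[0] == 'h':
        return replace(t, pos, r[1])   # h(x) -> x
    if r == 'a':
        return replace(t, pos, 'b')    # a -> b
    raise ValueError('no redex at %r' % (pos,))

t = ('f', ('h', 'a'), 'a')      # f(h(a), a)
U = [(0,), (1,)]                # pairwise disjoint redex occurrences

# A complete development contracts (the residual of) every redex in U.
# Disjointness makes residuals trivial, so any order may be used.
results = set()
for order in permutations(U):
    s = t
    for pos in order:
        s = contract(s, pos)
    results.add(s)

assert results == {('f', 'a', 'b')}   # unique final term f(a, b)
```

Disjointness keeps the residual bookkeeping trivial here; the general case covered by Proposition \[prop:finalCompDev\] also handles nested redex occurrences and infinite developments, which this finitary sketch cannot exhibit.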
The Infinitary Strip Lemma {#sec:results}
--------------------------

(Figure \[fig:stripLem\]: the reduction $S\fcolon t_0 \to t_1 \to \dots \to t_\beta \to t_{\beta+1} \to \dots \to t_\alpha$ runs along the top, the $\iota$-th step contracting the redex at $v_\iota$; the vertical arrows are the complete developments of $U_0, U_1, \dots, U_\alpha$ ending in $s_0, s_1, \dots, s_\alpha$; the bottom edges are the complete developments of $\dEsc{v_0}{U_0}, \dots, \dEsc{v_\beta}{U_\beta}, \dots$)

In this section we use the results we have obtained for complete developments in the previous two sections in order to establish that a complete development of a set of pairwise disjoint redex occurrences commutes with any strongly $\prs$-convergent reduction: \[prop:prsStripLem\] Let $\calR$ be an orthogonal TRS, $S\fcolon t_0 \pto{\alpha} t_\alpha$ a strongly $\prs$-convergent reduction, and $t_0 \pato[U] s_0$ a complete development of a set $U$ of pairwise disjoint redex occurrences in $t_0$. Then $t_\alpha$ and $s_0$ are joinable by a reduction $\proj{S}{T}\fcolon s_0 \pato s_\alpha$ and a complete development $\proj{T}{S}\fcolon t_\alpha \pato[\dEsc{U}{S}] s_\alpha$. We prove this statement by constructing the diagram shown in Figure \[fig:stripLem\].
The $U_\iota$'s in the diagram are sets of redex occurrences: $U_\iota = \dEsc{U}{\prefix{S}{\iota}}$ for all $0 \le \iota \le \alpha$. In particular, $U_0 = U$. All arrows in the diagram represent complete developments of the indicated sets of redex occurrences. Specifically, the $\iota$-th step of $S$ contracts the redex at $v_\iota$. We will construct the diagram by an induction on $\alpha$. If $\alpha=0$, then the diagram is trivial. If $\alpha$ is a successor ordinal $\beta + 1$, then we can take the diagram for the prefix $\prefix{S}{\beta}$, which exists by induction hypothesis, and extend it to a diagram for $S$. The existence of the additional square that completes the diagram for $S$ is affirmed by Corollary \[cor:prsCRCompDev\] since $U_{\beta + 1} = \dEsc{U_\beta}{v_\beta}$. Let $\alpha$ be a limit ordinal. Moreover, let $s_\alpha'$ be the uniquely determined final term of a complete development of $U_\alpha$ in $t_\alpha$. By induction hypothesis, the diagram exists for each proper prefix of $S$. Let $T_\iota\fcolon s_0 \pato s_\iota$ denote the reduction at the bottom of the diagram for the reduction $\prefix{S}{\iota}$ for each $\iota < \alpha$. The set of all $T_\iota$ is directed. Hence, $T = \Lub_{\iota < \alpha} T_\iota$ exists. Since $T_\iota < T$ for each $\iota < \alpha$, the diagram for $S$ with $T\fcolon s_0 \pato s_\alpha$ at the bottom satisfies almost all required properties. Only the equality of $s_\alpha$ and $s_\alpha'$ remains to be shown. Note that, by Proposition \[prop:disjDesc\], the redex occurrences in $U_\alpha$ are pairwise disjoint. Let $\pi \in U_\alpha$. By Lemma \[lem:descLimRed\] and the definition of descendants, there is some $\beta < \alpha$ such that $\pi \in U_\iota$ and $v_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. Hence, for all $\pi' \in \dEsc{v_\iota}{U_\iota}$ with $\beta \le \iota <\alpha$, we also have $\pi' \not\le \pi$.
That is, in the remaining reductions $t_\beta \pato t_\alpha$ and $t_\beta \pato[U_\beta] s_\beta \pato s_\alpha$, no reduction takes place at a proper prefix of $\pi$. Hence, by Lemma \[lem:nonBotLimRed\], $t_\beta$ coincides with $t_\alpha$ and $s_\alpha$ in all proper prefixes of $\pi$. Since in the reduction $t_\alpha \pato[U_\alpha] s_\alpha'$ also no reduction takes place at a proper prefix of $\pi$, we obtain that $t_\alpha$ and $s_\alpha'$ and, thus, also $s_\alpha$ and $s_\alpha'$ coincide in all proper prefixes of $\pi$. Let $\rho\fcolon l \to r$ be the rule for the redex $\atPos{t_\beta}{\pi}$ and $\Cxt{a}{,\dots,}, \Cxt[D]{a}{,\dots,}$ ground contexts such that $l = \Cxt{a}{x_1,\dots,x_k}$ and $r = \Cxt[D]{a}{x_{p(1)},\dots,x_{p(m)}}$ for some pairwise distinct variables $x_1,\dots,x_k$ and an appropriate mapping $p\fcolon \set{1,\dots,m} \funto \set{1,\dots, k}$. Moreover, let $t^\iota_1,\dots,t^\iota_k$ be terms such that $t_\iota = \substAtPos{t_\iota}{\pi}{\Cxt[C]{a}{t^\iota_1,\dots,t^\iota_k}}$ and $s_\iota = \substAtPos{s_\iota}{\pi}{\Cxt[D]{a}{t^\iota_{p(1)},\dots,t^\iota_{p(m)}}}$ for all $\beta \le \iota \le \alpha$. The argument in the previous paragraph justifies the assumption of these elements. From $\beta$ onward, all horizontal reduction steps in the diagram take place within the contexts $\substAtPos{t_\iota}{\pi}{\cdot}$ and $\substAtPos{s_\iota}{\pi}{\cdot}$, respectively, or inside the terms $t^\iota_i$, and all vertical reductions take place within the contexts $\substAtPos{t_\iota}{\pi}{\Cxt[C]{a}{,\dots,}}$ and $\substAtPos{s_\iota}{\pi}{\Cxt[D]{a}{,\dots,}}$, respectively. In particular, we have $t_\alpha = \substAtPos{t_\alpha}{\pi}{\Cxt[C]{a}{t^\alpha_1,\dots,t^\alpha_k}}$ and $s_\alpha = \substAtPos{s_\alpha}{\pi}{\Cxt[D]{a}{t^\alpha_{p(1)},\dots,t^\alpha_{p(m)}}}$. Let $t_\alpha \to[\pi] t'_\alpha$. 
This reduction contracts the redex $\Cxt[C]{a}{t^\alpha_1,\dots,t^\alpha_k}$ to the term $\Cxt[D]{a}{t^\alpha_{p(1)},\dots,t^\alpha_{p(m)}}$ using rule $\rho$. Note that a complete development $t_\alpha \pato[U_\alpha] s_\alpha'$ contracts, besides $\pi$, only redex occurrences disjoint with $\pi$. Hence, $t'_\alpha$ and $s'_\alpha$ coincide in all extensions of $\pi$. Since $t'_\alpha = \substAtPos{t_\alpha}{\pi}{\Cxt[D]{a}{t^\alpha_{p(1)},\dots,t^\alpha_{p(m)}}}$ (and $s_\alpha = \substAtPos{s_\alpha}{\pi}{\Cxt[D]{a}{t^\alpha_{p(1)},\dots,t^\alpha_{p(m)}}}$), we can conclude that $s_\alpha$ and $s_\alpha'$ coincide in all extensions of $\pi$. Since the residual $\pi \in U_\alpha$ was chosen arbitrarily, the above holds for all elements in $U_\alpha$. That is, $s_\alpha$ and $s'_\alpha$ coincide in all prefixes and all extensions of elements in $U_\alpha$. It remains to be shown that they also coincide in positions that are disjoint to all positions in $U_\alpha$. To this end, we only need to show that $t_\alpha$ and $s_\alpha$ coincide in these positions since the complete development $t_\alpha \pato[U_\alpha] s_\alpha'$ keeps positions disjoint with all positions in $U_\alpha$ unchanged. Let $\pi$ be such a position. Suppose $t_\alpha(\pi) = f \neq \bot$. By Lemma \[lem:nonBotLimRed\], there is some $\beta < \alpha$ such that $t_\beta(\pi) = f$ and $v_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. Note that no prefix $\pi'$ of $\pi$ is in $U_\beta$ since otherwise $\pi' \in U_\alpha$, by Lemma \[lem:descLimRed\], which contradicts the assumption that $\pi$ is disjoint to all positions in $U_\alpha$. Hence, $s_\beta(\pi) = f$ and $\pi' \not\le \pi$ for all $\pi' \in \dEsc{v_\iota}{U_\iota}$ and $\beta \le \iota < \alpha$, which means that no reduction step in $s_\beta \pato s_\alpha$ takes place at some prefix of $\pi$. Thus, we can conclude, according to Lemma \[lem:nonBotLimRed\], that $s_\alpha(\pi) = f$.
Similarly, one can show that $s_\alpha(\pi) = f \neq \bot$ implies $t_\alpha(\pi) = f$. Suppose $t_\alpha(\pi) = \bot$. Hence, according to Lemma \[lem:botLimRed\], $\pi$ is outermost-volatile in $S$ or there is some $\beta < \alpha$ such that $t_\beta(\pi) = \bot$ and $v_\iota \not\le \pi$ for all $\beta \le \iota < \alpha$. For the latter case, we can argue as in the case for $t_\alpha(\pi) \neq \bot$ above. In the former case, $\pi$ is outermost-volatile in $T$ as well. Thus, by applying Lemma \[lem:botLimRed\], we obtain that $s_\alpha(\pi) = \bot$. A similar argument can be employed for the reverse direction. The reduction $\proj{S}{T}$ constructed in the proof above is called the *projection* of $S$ by $T$. Likewise, the reduction $\proj{T}{S}$ is called the *projection* of $T$ by $S$. As a corollary we obtain the following semi-infinitary confluence result: \[cor:prsSemiConf\] In every orthogonal TRS, two reductions $t \pato t_2$ and $t \fto* t_1$ can be joined by two reductions $t_2 \pato t_3$ and $t_1 \pato t_3$. This can be shown by an induction on the length of the reduction $t \fto* t_1$. If it is empty, the statement trivially holds. The induction step follows from Proposition \[prop:prsStripLem\]. In the next section we shall, based on the Infinitary Strip Lemma, show that strong $\prs$-reachability coincides with Böhm-reachability, which then yields, amongst other things, full infinitary confluence of orthogonal systems. Comparing Strong p-Convergence and Böhm-Convergence {#sec:relation-bohm-trees} =================================================== In this section we shall show the core result of this paper: For orthogonal, left-finite TRSs, strong $\prs$-reachability and Böhm-reachability w.r.t. the set $\rAct$ of root-active terms coincide. As corollaries of that, leveraging the properties of Böhm-convergence, we obtain both infinitary normalisation and infinitary confluence of orthogonal systems in the partial order model. 
Moreover, we will show that strong $\prs$-convergence also satisfies the compression property. The central step of the proof of the equivalence of both models of infinitary rewriting is an alternative characterisation of root-active terms which is captured by the following definition: Let $\calR$ be a TRS. 1. A reduction $S\fcolon t \pato s$ is called *destructive* if $\emptyseq$ is a volatile position in $S$. 2. A partial term $t$ in $\calR$ is called *fragile* if a destructive reduction starts in $t$. Looking at the definition, fragility seems to be a more general concept than root-activeness: A term is fragile iff it admits a reduction in which a redex at the root is contracted infinitely often. For orthogonal TRSs, root-active terms are characterised in almost the same way. The difference is that only total terms are considered and that the stipulated reduction contracting infinitely many root redexes has to be of length $\omega$. However, we shall show the set of total fragile terms to be equal to the set of root-active terms by establishing a compression lemma for destructive reductions. Using Lemma \[lem:botLimRed\] we can immediately derive the following alternative characterisations: Let $\calR$ be a TRS. 1. A reduction $S\fcolon s \pato t$ is destructive iff $S$ is open and $t = \bot$. 2. A partial term $t$ in $\calR$ is fragile iff there is an open strongly $\prs$-convergent reduction $t \pato \bot$. One has to keep in mind, however, that a closed reduction to $\bot$ is not destructive. Such a notion of destructiveness would include the empty reduction from $\bot$ to $\bot$, and reductions that end with the contraction of a collapsing redex as, for example, in the single step reduction $f(\bot) \to \bot$ induced by the rule $f(x) \to x$. Such reductions do not “produce” the term $\bot$. They are merely capable of “moving” an already existent subterm $\bot$ by a collapsing rule.
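A minimal concrete instance of a destructive reduction can be given with the collapsing rule $f(x) \to x$ already used above; the notation $f^\omega$ for the infinite term $f(f(f(\dots)))$ is ours:

```latex
% With the rule f(x) -> x, the root of f^\omega is a redex, and
% contracting it yields f^\omega again. Hence there is a reduction of
% length \omega in which every step takes place at the root:
$$
  f^\omega \;\to_{\emptyseq}\; f^\omega \;\to_{\emptyseq}\;
  f^\omega \;\to_{\emptyseq}\; \dots
$$
% The root position \emptyseq is volatile in this reduction, so the
% reduction is destructive and strongly p-converges to \bot; the term
% f^\omega is therefore fragile (and, being total, also root-active).
```

Note the contrast with the closed one-step reduction $f(\bot) \to \bot$ discussed above, which merely moves an existing $\bot$ and is not destructive.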
In this sense, fragile terms are, according to Lemma \[lem:totalRed\], the only terms which can produce the term $\bot$. This is the key observation for studying the relation between strong $\prs$-convergence and Böhm-convergence. In order to show that strong $\prs$-reachability and Böhm-reachability w.r.t. $\rAct$ coincide we will proceed as follows: At first we will show that strong $\prs$-reachability implies Böhm-reachability w.r.t. the set of total fragile terms, i.e. the fragile terms in $\iterms$. From this we will derive a compression lemma for destructive reductions. We will then use this to show that the set $\rAct$ of root-active terms coincides with the set of total fragile terms. From this we conclude that strong $\prs$-reachability implies Böhm-reachability w.r.t. $\rAct$. Finally, we show the converse direction. From Strong p-Convergence to Böhm-Convergence {#sec:from-strong-prs} --------------------------------------------- For the first step we have to transform a strongly $\prs$-converging reduction into a Böhm-converging reduction w.r.t. the set of total fragile terms, i.e. a strongly $\mrs$-converging reduction w.r.t. the corresponding Böhm extension $\calB$. Recall that, by Theorem \[thr:strongExt\], the only difference between strongly $\prs$-converging reductions and strongly $\mrs$-converging reductions is the ability of the former to produce $\bot$ subterms. This happens, according to Lemma \[lem:botLimRed\], precisely at volatile positions. We can, therefore, proceed as follows: Given a strongly $\prs$-converging reduction we construct a Böhm-converging reduction by removing reduction steps which cause the volatility of a position in some open prefix of the reduction and then replacing them by a *single* $\to[\bot]$-step. The intuition of this construction is illustrated in Figure \[fig:bohmDestrFig\]. It shows a strongly $\prs$-converging reduction of length $\omega\mult4$ from $s$ to $t$.
In order to maintain readability, we restrict the attention to a particular branch of the term (tree) as indicated in Figure \[fig:bohmDestrFig1\]. The picture shows five positions which are volatile in some open prefix of the reduction. We assume that they are the only volatile positions at least in the considered branch. Note that the positions do not need to occur in all of the terms in the reduction. They might disappear and reappear repeatedly. Each of them, however, appears in infinitely many terms in the reduction, as, by definition of volatility, infinitely many steps take place at each of these positions. In Figure \[fig:bohmDestrFig2\], the prefixes of the reduction that contain a volatile position are indicated by a waved rewrite arrow pointing to a $\bot$. The level of an arrow indicates the position which is volatile. A prefix might have multiple volatile positions. For example, both $\pi_2$ and $\pi_4$ are volatile in the prefix of length $\omega$. But a position might also be volatile for several prefixes. For instance, $\pi_3$ is volatile in the prefix of length $\omega\mult2$ and the prefix of length $\omega\mult4$. By Lemma \[lem:botLimRed\], outermost-volatile positions are responsible for the generation of $\bot$ subterms. By their nature, at some point there are no reductions taking place above outermost-volatile positions. The suffix where this is the case is a *nested* destructive reduction. The subterm where this suffix starts is, therefore, a fragile term and we can replace this suffix with a *single* $\to[\bot]$-step. The segments which are replaced in this way are highlighted by dashed boxes in Figure \[fig:bohmDestrFig2\]. As indicated by the dotted lines, this then also includes reduction steps which occur below the outermost-volatile positions. Therefore, also volatile positions which are not outermost are removed as well. 
Eventually, we obtain a reduction without volatile positions, which is, by Lemma \[lem:totalRed\], a strongly $\mrs$-converging reduction in the Böhm extension, i.e. a Böhm-converging reduction in the original system: \[prop:prsBohm\] Let $\calR$ be a TRS, $\calU$ the set of fragile terms in $\iterms$, and $\calB$ the Böhm extension of $\calR$ w.r.t. $\calU$. Then, for each strongly $\prs$-convergent reduction $s \pato[\calR] t$, there is a Böhm-convergent reduction $s \mato[\calB] t$. Assume that there is a reduction $S =(t_\iota \to[\pi_\iota] t_{\iota + 1})_{\iota < \alpha}$ in $\calR$ that strongly $\prs$-converges to $t_\alpha$. We will construct a strongly $\mrs$-convergent reduction $T\fcolon t_0 \mato[\calB] t_\alpha$ in $\calB$ by removing reduction steps in $S$ that take place at or below outermost-volatile positions of some prefix of $S$ and replacing them by $\to[\bot]$-steps. Let $\pi$ be an outermost-volatile position of some prefix $\prefix{S}{\lambda}$. Then there is some ordinal $\beta < \lambda$ such that no reduction step between $\beta$ and $\lambda$ in $S$ takes place strictly above $\pi$, i.e. $\pi_\iota \not < \pi$ for all $\beta \le \iota < \lambda$. Such an ordinal $\beta$ must exist since otherwise $\pi$ would not be an outermost-volatile position in $\prefix{S}{\lambda}$. Hence, we can construct a destructive reduction $S'\fcolon \atPos{t_\beta}{\pi} \pato \bot$ by taking the subsequence of the segment $\segm{S}{\beta}{\lambda}$ that contains the reduction steps at $\pi$ or below. Note that $\atPos{t_\beta}{\pi}$ might still contain the symbol $\bot$. Since $\bot$ is not relevant for the applicability of rules in $\calR$, each of the $\bot$ symbols in $\atPos{t_\beta}{\pi}$ can be safely replaced by arbitrary total terms, in particular by terms in $\calU$. Let $r$ be a term that is obtained in this way. Then there is a destructive reduction $S''\fcolon r \pato \bot$ that applies the same rules at the same positions as in $S'$.
Hence, $r \in \calU$. By construction, $r$ is a $\bot,\calU$-instance of $\atPos{t_\beta}{\pi}$ which means that $\atPos{t_\beta}{\pi} \in \calU_\bot$. Additionally, $\atPos{t_\beta}{\pi} \neq \bot$ since there is a non-empty reduction $S'\fcolon \atPos{t_\beta}{\pi} \pato \bot$ starting in $\atPos{t_\beta}{\pi}$. Consequently, there is a rule $\atPos{t_\beta}{\pi} \to \bot$ in $\calB$. Let $T'$ be the reduction that is obtained from $\prefix{S}{\lambda}$ by replacing the $\beta$-th step, which we can assume w.l.o.g. to take place at $\pi$, by a step with the rule $\atPos{t_\beta}{\pi} \to \bot$ at the same position $\pi$ and removing all reduction steps $\phi_\iota$ taking place at $\pi$ or below for all $\beta < \iota < \lambda$. Let $t'$ be the term that the reduction $T'$ strongly $\prs$-converges to. $t_\lambda$ and $t'$ can only differ at position $\pi$ or below. However, by construction, we have $t'(\pi) = \bot$ and, by Lemma \[lem:botLimRed\], $t_\lambda(\pi) = \bot$. Consequently, $t' = t_\lambda$. This construction can be performed for all prefixes of $S$ and their respective outermost-volatile positions. Thereby, we obtain a strongly $\prs$-converging reduction $T\fcolon t_0 \pato[\calB] t_\alpha$ for which no prefix has a volatile position. By Lemma \[lem:totalRed\], $T$ is a total reduction. Note that $\calB$ is a TRS over the extended signature $\Sigma' = \Sigma \uplus \set{\bot}$, i.e. terms containing $\bot$ are considered total. Hence, by Theorem \[thr:strongExt\], $T\fcolon t_0 \mato[\calB] t_\alpha$. From Böhm-convergence to Strong p-Convergence {#sec:from-bohm-conv} --------------------------------------------- Next, we establish a compression lemma for destructive reductions, i.e. that each destructive reduction can be compressed to length $\omega$. Before we continue with this, we need to mention the following lemma from Kennaway et al. 
[@kennaway99jflp]: \[lem:procBot\] Let $\calR$ be a left-linear, left-finite TRS and $\calB$ some Böhm extension of $\calR$. Then $s \mato[\calB] t$ implies $s \mato[\calR] s' \mato[\bot] t$ for some term $s'$.[^2] In the next proposition we show that, excluding $\bot$ subterms, the final term of a strongly $\prs$-converging reduction can be approximated arbitrarily well by a finite reduction. This corresponds to Corollary \[cor:mrsFinApprox\] which establishes finite approximations for strongly $\mrs$-convergent reductions. \[prop:prsFinApprox\] Let $\calR$ be a left-linear, left-finite TRS and $s \pato[\calR] t$. Then, for each finite set $P \subseteq \posNonBot{t}$, there is a reduction $s \fto{*}[\calR] t'$ such that $t$ and $t'$ coincide in $P$. Assume that $s \pato[\calR] t$. Then, by Proposition \[prop:prsBohm\], there is a reduction $s \mato[\calB] t$, where $\calB$ is the Böhm extension of $\calR$ w.r.t. the set of total, fragile terms of $\calR$. By Lemma \[lem:procBot\], there is a reduction $s \mato[\calR] s' \mato[\bot] t$. Clearly, $s'$ and $t$ coincide in $\posNonBot{t}$. Let $d = \max\setcom{\len{\pi}}{\pi \in P}$. Since $P$ is finite, $d$ is well-defined. By Corollary \[cor:mrsFinApprox\], there is a reduction $s \fto{*}[\calR] t'$ such that $t'$ and $s'$ coincide up to depth $d$ and, thus, in particular they coincide in $P$. Consequently, since $s'$ and $t$ coincide in $\posNonBot{t} \supseteq P$, $t$ and $t'$ coincide in $P$, too. In order to establish a compression lemma for destructive reductions we need that fragile terms are preserved by finite reductions. We can obtain this from the following more general lemma showing that destructive reductions are preserved by forming projections as constructed in the Infinitary Strip Lemma: \[lem:presPerpRed\] Let $\calR$ be an orthogonal TRS, $S\fcolon t_0 \pato t_\alpha$ a destructive reduction, and $T\fcolon t_0 \pato[U] s_0$ a complete development of a set $U$ of pairwise disjoint redex occurrences. 
Then the projection $\proj{S}{T}\fcolon s_0 \pato s_\alpha$ is also destructive. We consider the situation depicted in Figure \[fig:stripLem\]. Since $S\fcolon t_0 \pato t_\alpha$ is destructive, we have, for each $\beta < \alpha$, some $\beta\le \gamma < \alpha$ such that $v_\gamma = \emptyseq$. If $v_\gamma = \emptyseq$, then also $\emptyseq \in \dEsc{v_\gamma}{U_\gamma}$ unless $\emptyseq \in U_\gamma$. As by Proposition \[prop:disjDesc\], $U_\gamma$ is a set of pairwise disjoint positions, $\emptyseq \in U_\gamma$ implies $U_\gamma = \set{\emptyseq}$. This means that if $v_\gamma = \emptyseq$ and $\emptyseq \in U_\gamma$, then $U_\iota = \emptyset$ for all $\gamma < \iota < \alpha$. Thus, there is at most one $\gamma < \alpha$ with $\emptyseq \in U_\gamma$. Therefore, we have, for each $\beta < \alpha$, some $\beta\le \gamma < \alpha$ such that $\emptyseq \in \dEsc{v_\gamma}{U_\gamma}$. Hence, $\proj{S}{T}$ is destructive. As a consequence of this preservation of destructiveness by forming projections, we obtain that the set of fragile terms is closed under finite reductions: \[lem:perpFinRed\] In each orthogonal TRS, the set of fragile terms is closed under finite reductions. Let $t$ be a fragile term and $T\fcolon t \fto{*} t'$ a finite reduction. Hence, there is a destructive reduction starting in $t$. A straightforward induction proof on the length of $T$, using Lemma \[lem:presPerpRed\], shows that there is a destructive reduction starting in $t'$. Thus, $t'$ is fragile. Now we can show that destructiveness does not need more than $\omega$ steps in orthogonal, left-finite TRSs. This property will be useful for proving the equivalence of root-activeness and fragility of total terms as well as the Compression Lemma for strongly $\prs$-convergent reductions. \[prop:comprPerpRed\] Let $\calR$ be an orthogonal, left-finite TRS and $t$ a partial term in $\calR$.
If there is a destructive reduction starting in $t$, then there is a destructive reduction of length $\omega$ starting in $t$. Let $S\fcolon t_0 \pto{\lambda} \bot$ be a destructive reduction starting in $t_0$. Hence, there is some $\alpha < \lambda$ such that $\prefix{S}{\alpha}\fcolon t_0 \pato s_1$, where $s_1$ is a $\rho$-redex for some $\rho\fcolon l \to r \in R$. Let $P$ be the set of pattern positions of the $\rho$-redex $s_1$, i.e. $P = \posFun{l}$. Due to the left-finiteness of $\calR$, $P$ is finite. Hence, by Proposition \[prop:prsFinApprox\], there is a finite reduction $t_0 \fto{*} s'_1$ such that $s_1$ and $s'_1$ coincide in $P$. Hence, because $\calR$ is left-linear, $s'_1$ is also a $\rho$-redex. Now consider the reduction $T_0\fcolon t_0 \fto{*} s'_1 \to[\rho,\emptyseq] t_1$ ending with a contraction at the root. $T_0$ is of finite length and, according to Lemma \[lem:perpFinRed\], $t_1$ is fragile. Since $t_1$ is again fragile, the above argument can be iterated arbitrarily often, which yields for each $i < \omega$ a finite non-empty reduction $T_i\fcolon t_i \fto{*} t_{i + 1}$ whose last step is a contraction at the root. Then the concatenation $T = \Concat_{i < \omega} T_i$ of these reductions is a destructive reduction of length $\omega$ starting in $t_0$. The above proposition bridges the gap between fragility and root-activeness. Whereas the former concept is defined in terms of transfinite reductions, the latter is defined in terms of finite reductions. By Proposition \[prop:comprPerpRed\], however, a fragile term is always finitely reducible to a redex. This is the key to the observation that fragility is not only quite similar to root-activeness but is, in fact, essentially the same concept. \[prop:rootAct\] Let $\calR$ be an orthogonal, left-finite TRS and $t$ a total term in $\calR$. Then $t$ is root-active iff $t$ is fragile.
The “only if” direction is easy: If $t$ is root-active, then there is a reduction $S$ of length $\omega$ starting in $t$ with infinitely many steps taking place at the root. Hence, $S\fcolon t \pto{\omega} \bot$ is a destructive reduction, which makes $t$ a fragile term. For the converse direction we assume that $t$ is fragile and show that, for each reduction $t \fto{*} s$, there is a reduction $s \fto{*} t'$ to a redex $t'$. By Lemma \[lem:perpFinRed\], $s$ is fragile as well. Hence, there is a destructive reduction $S\fcolon s \pato \bot$ starting in $s$. According to Proposition \[prop:comprPerpRed\], we can assume that $S$ has length $\omega$. Therefore, there is some $n < \omega$ such that $\prefix{S}{n}\fcolon s \fto{*} t'$ for a redex $t'$. To prove the other direction of the equality of strong $\prs$-reachability and Böhm-reachability we need the property that strongly $\mrs$-convergent reductions consisting only of $\to[\bot]$-steps, i.e. contractions of $\rAct_\bot$-terms to $\bot$, can be compressed to length at most $\omega$ as well. In order to show this, we will make use of the following lemma from Kennaway et al. [@kennaway99jflp]: \[lem:botInst\] Let $\rAct$ be the root-active terms of an orthogonal, left-finite TRS and $t\in \ipterms$. If some $\bot,\rAct$-instance of $t$ is in $\rAct$, then every $\bot,\rAct$-instance of $t$ is. \[lem:comprBot\] Consider the Böhm extension of an orthogonal TRS w.r.t. its root-active terms and $S\fcolon s \mato[\bot] t$ with $s \in \iterms$, $t \in \ipterms$. Then there is a strongly $\mrs$-converging reduction $T\fcolon s \mato[\bot] t$ of length at most $\omega$ that is a complete development of a set of disjoint occurrences of root-active terms in $s$. The proof is essentially the same as that of Lemma 7.2.4 from Ketema [@ketema06phd].
Let $S = (t_\iota \to[\pi_\iota] t_{\iota +1})_{\iota < \alpha}$ be the mentioned reduction strongly $\mrs$-converging to $t_\alpha$, and let $\pi$ be a position at which some reduction step in $S$ takes place. That is, there is some $\beta$ such that $\pi_\beta = \pi$. We will prove by induction on $\beta$ that $\atPos{t_0}{\pi} \in \rAct$. Consider the term $\atPos{t_\beta}{\pi}$. Since a $\to[\bot]$-rule is applied here, we have, according to Remark \[rem:cloSub\], that $\atPos{t_\beta}{\pi} \in \rAct_\bot$. Let $V = \posBot{\atPos{t_\beta}{\pi}}$. Hence, for each $v \in V$, there is some $\gamma < \beta$ such that $\pi_\gamma = \pi \concat v$. Therefore, we can apply the induction hypothesis and get that $\atPos{t_0}{\pi\concat v} \in \rAct$ for all $v \in V$. It is clear that we can obtain $\atPos{t_0}{\pi}$ from $\atPos{t_\beta}{\pi}$ by replacing each $\bot$-occurrence at $v \in V$ with the corresponding term $\atPos{t_0}{\pi\concat v}$. That is, $\atPos{t_0}{\pi}$ is a $\bot,\rAct$-instance of $\atPos{t_\beta}{\pi}$. Because $\atPos{t_\beta}{\pi} \in \rAct_\bot$, there is some $\bot,\rAct$-instance of $\atPos{t_\beta}{\pi}$ in $\rAct$. Thus, by Lemma \[lem:botInst\], also $\atPos{t_0}{\pi}$ is in $\rAct$. This closes the proof of the claim. Now let $V = \posBot{t_\alpha}$. Clearly, all positions in $V$ are pairwise disjoint. Moreover, for each $v \in V$, there is a step in $S$ that takes place at $v$. Hence, by the claim shown above, $V$ is a set of occurrences in $t_0$ of terms in $\rAct$. A complete development of $V$ in $t_0$ leads to $t_\alpha$ and can be performed in at most $\omega$ steps by an outermost reduction strategy. The important part of the above lemma is the statement that only terms in $\rAct$ are contracted instead of the general case where a $\to[\bot]$-step contracts a term in $\rAct_\bot \supset \rAct$.
Finally, we have gathered all tools necessary in order to prove the converse direction of the equivalence of strong $\prs$-reachability and Böhm-reachability w.r.t. root-active terms. \[thr:prsEqBohm\] Let $\calR$ be an orthogonal, left-finite TRS and $\calB$ the Böhm extension of $\calR$ w.r.t. its root-active terms. Then $s \pato[\calR] t$ iff $s \mato[\calB] t$. The “only if” direction follows immediately from Proposition \[prop:rootAct\] and Proposition \[prop:prsBohm\]. Now consider the converse direction: Let $s \mato[\calB] t$ be a strongly $\mrs$-convergent reduction in $\calB$. W.l.o.g. we assume $s$ to be total. Due to Lemma \[lem:procBot\], there is a term $s' \in \iterms$ such that there are strongly $\mrs$-convergent reductions $S\fcolon s \mato[\calR] s'$ and $T\fcolon s' \mato[\bot] t$. By Lemma \[lem:comprBot\], we can assume that in $s' \mato[\bot] t$ only pairwise disjoint occurrences of root-active terms are contracted. By Proposition \[prop:rootAct\], each root-active term $r\in \rAct$ is fragile, i.e. we have a destructive reduction $r \pato[\calR] \bot$ starting in $r$. Thus, following Remark \[rem:cloSub\], we can construct a strongly $\prs$-converging reduction $T'\fcolon s' \pato[\calR] t$ by replacing each step $\Cxt{b}{r}\to[\bot] \Cxt{b}{\bot}$ in $T$ with the corresponding reduction $\Cxt{b}{r} \pato[\calR] \Cxt{b}{\bot}$. By combining $T'$ with the strongly $\mrs$-converging reduction $S$, which, according to Theorem \[thr:strongExt\], is also strongly $\prs$-converging, we obtain the strongly $\prs$-converging reduction $S\concat T'\fcolon s \pato[\calR] t$. Corollaries {#sec:corollaries} ----------- With the equivalence of strong $\prs$-reachability and Böhm-reachability established in the previous section, strongly $\prs$-convergent reductions inherit a number of important properties that are enjoyed by Böhm-convergent reductions: \[thr:prsCR\] Every orthogonal, left-finite TRS is infinitarily confluent. 
That is, for each orthogonal, left-finite TRS, $s_1 \pafrom t \pato s_2$ implies $s_1 \pato t' \pafrom s_2$. By Theorem \[thr:prsEqBohm\], this theorem follows from Theorem \[thr:bohmCR\]. Returning to Example \[ex:mconfl\], we can see that, in the setting of strongly $\prs$-converging reductions, the terms $g^\omega$ and $f^\omega$ can now be joined by repeatedly contracting the redex at the root which yields two destructive reductions $g^\omega \pato \bot$ and $f^\omega \pato \bot$, respectively. \[thr:prsWN\] Every orthogonal, left-finite TRS is infinitarily normalising. That is, for each orthogonal, left-finite TRS $\calR$ and a partial term $t$ in $\calR$, there is an $\calR$-normal form strongly $\prs$-reachable from $t$. This follows immediately from Theorem \[thr:prsEqBohm\] and Theorem \[thr:bohmWn\]. Combining Theorem \[thr:prsCR\] and Theorem \[thr:prsWN\], we obtain that each term in an orthogonal TRS has a unique normal form w.r.t. strong $\prs$-convergence. Due to Theorem \[thr:prsEqBohm\], this unique normal form is the Böhm tree w.r.t. root-active terms. Since strongly $\prs$-converging reductions in orthogonal TRS can always be transformed such that they consist of a prefix which is a strongly $\mrs$-convergent reduction and a suffix consisting of nested destructive reductions, we can employ the Compression Lemma for strongly $\mrs$-convergent reductions (Theorem \[thr:mrsCompr\]) and the Compression Lemma for destructive reductions (Proposition \[prop:comprPerpRed\]) to obtain the Compression Lemma for strongly $\prs$-convergent reductions: \[thr:prsCompr\] For each orthogonal, left-finite TRS, $s \pato t$ implies $s \pto{\le \omega} t$. Let $s \pato[\calR] t$. According to Theorem \[thr:prsEqBohm\], we have $s \mato[\calB] t$ for the Böhm extension $\calB$ of $\calR$ w.r.t. $\rAct$ and, therefore, by Lemma \[lem:procBot\], we have reductions $S\fcolon s \mato[\calR] s'$ and $T\fcolon s'\mato[\bot] t$.
Due to Theorem \[thr:mrsCompr\], we can assume $S$ to be of length at most $\omega$ and, due to Theorem \[thr:strongExt\], to be strongly $\prs$-convergent, i.e. $S\fcolon s \pto{\le \omega}[\calR] s'$. If $T$ is the empty reduction, then we are done. If not, then $T$ is a complete development of pairwise disjoint occurrences of root-active terms according to Lemma \[lem:comprBot\]. Hence, each step is of the form $\Cxt{b}{r}\to[\bot] \Cxt{b}{\bot}$ for some root-active term $r$. By Proposition \[prop:rootAct\], for each such term $r$, there is a destructive reduction $r \pato[\calR] \bot$ which we can assume, in accordance with Proposition \[prop:comprPerpRed\], to be of length $\omega$. Hence, each step $\Cxt{b}{r}\to[\bot] \Cxt{b}{\bot}$ can be replaced by the reduction $\Cxt{b}{r} \pto{\omega}[\calR] \Cxt{b}{\bot}$. Concatenating these reductions results in a reduction $T'\fcolon s'\pato[\calR] t$ of length at most $\omega\cdot\omega$. If $S\fcolon s \pto{\le\omega}[\calR] s'$ is of finite length, we can interleave the reduction steps in $T'$ such that we obtain a reduction $T''\fcolon s'\pto{\omega}[\calR] t$ of length $\omega$. Then we have $S\concat T''\fcolon s \pto{\omega}[\calR] t$. If $S\fcolon s \pto{\le\omega}[\calR] s'$ has length $\omega$, we construct a reduction $s \pato[\calR] t$ as follows: As illustrated above, $T'$ consists of destructive reductions taking place at some pairwise disjoint positions. These steps can be interleaved into the reduction $S$ resulting in a reduction $s \pato[\calR] t$ of length $\omega$. The argument is similar to the one employed in the successor case of the induction proof of the Compression Lemma of Kennaway et al. [@kennaway95ic]. We do not know whether full orthogonality is essential for the Compression Lemma. However, as for strongly $\mrs$-convergent reductions, the left-linearity part of it is: Consider the TRS consisting of the rules $f(x,x) \to c, a \to g(a), b \to g(b)$.
Then there is a strongly $\prs$-converging reduction $$f(a,b) \to f(g(a),b) \to f(g(a),g(b)) \to f(g(g(a)),g(b)) \to \dots \; f(g^\omega,g^\omega) \to c$$ of length $\omega+1$. However, there is no strongly $\prs$-converging reduction $f(a,b) \pto{\le\omega} c$ (since there is no such strongly $\mrs$-converging reduction). We can use the Compression Lemma for strongly $\prs$-convergent reductions to obtain a stronger variant of Theorem \[thr:strongExt\] for orthogonal TRSs: \[cor:prsMrsEq\] Let $\calR$ be an orthogonal, left-finite TRS and $s,t \in \iterms$. Then $s\mato t$ iff $s\pato t$. The “only if” direction follows immediately from Theorem \[thr:strongExt\]. For the “if” direction assume a reduction $S\fcolon s \pato t$. According to Theorem \[thr:prsCompr\], there is a reduction $T\fcolon s \pto{\le\omega} t$. Hence, since $s$ is total and totality is preserved by single reduction steps, $T\fcolon s \pto{\le\omega} t$ is total. Applying Theorem \[thr:strongExt\] yields that $T\fcolon s \mto{\le\omega} t$. Notice the similarity of the above corollary with the Compression Lemma. The Compression Lemma states that the reachability relation $\mato$ (as well as $\pato$) is the same whether we consider ordinals beyond $\omega$ or not. Analogously, Corollary \[cor:prsMrsEq\] states that the reachability relation $\pato$ on total terms is the same whether we allow partial convergence in between or not. More apt, however, is the comparison to the following corollary of the Compression Lemma (cf. Corollary \[cor:mrsFinApprox\]): $s \mato t$ implies $s \fto* t$ whenever $t$ is a finite term. In other words, if the final term is finite, we only need finite reductions. Analogously, Corollary \[cor:prsMrsEq\] states that if the initial term and the final term are total, we only need metric convergence. Conclusions {#sec:conclusions} =========== Infinitary term rewriting in the partial order model provides a more fine-grained notion of convergence.
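The role of non-left-linearity in this counterexample can be checked mechanically on finite prefixes. The following sketch (an illustration of ours, not part of the formal development; the tuple encoding of terms and the helper names are assumptions) builds the finite approximants $f(g^n(a), g^n(b))$ and confirms that none of them is an $f(x,x)$-redex, so the final contraction is only enabled in the limit term $f(g^\omega, g^\omega)$:

```python
# Finite exploration of the non-left-linear TRS from the text:
#   f(x,x) -> c,  a -> g(a),  b -> g(b)
# Terms are encoded as nested tuples; constants are plain strings.
# Illustrative sketch only, with hypothetical helper names.

def g_tower(base, n):
    """Build the term g^n(base) as a nested tuple."""
    t = base
    for _ in range(n):
        t = ("g", t)
    return t

def f_redex(term):
    """A term f(s,t) matches the non-left-linear rule f(x,x) -> c
    only if both arguments are syntactically equal."""
    return term[0] == "f" and term[1] == term[2]

# After n steps of a -> g(a) and n steps of b -> g(b) we reach
# f(g^n(a), g^n(b)); the two arguments never coincide for finite n,
# so the rule f(x,x) -> c is never applicable in a finite prefix.
for n in range(6):
    assert not f_redex(("f", g_tower("a", n), g_tower("b", n)))
```

Only syntactic equality of the two arguments enables the non-left-linear rule, which is why the reduction to $c$ needs length $\omega+1$ and cannot be compressed to length at most $\omega$.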
Formally, every meaningful, i.e. $\prs$-continuous, reduction is also $\prs$-converging. However, $\prs$-converging reductions can end in a term containing $\bot$'s indicating positions of local divergence. Theorem \[thr:weakExt\], Theorem \[thr:strongExt\] and Corollary \[cor:prsMrsEq\] show that the partial order model coincides with the metric model but additionally allows a more detailed inspection of non-$\mrs$-converging reductions. Instead of the coarse discrimination between convergence and divergence provided by the metric model, the partial order model allows different levels between full convergence (a total term as result) and full divergence ($\bot$ as result). The equivalence of strong $\prs$-reachability and Böhm-reachability shows that the differences between the metric and the partial order model can be compensated by simply adding rules that allow destructive reductions to be replicated by $\to[\bot]$-steps. By this equivalence, we additionally obtain infinitary normalisation and infinitary confluence for orthogonal systems – a considerable improvement over strong $\mrs$-convergence. Both strong $\prs$-convergence and Böhm-convergence are defined quite differently and have independently justified intentions, yet they still induce the same notion of transfinite reachability. This suggests that this notion of transfinite reachability can be considered a “natural” choice – also because of its properties that admit unique normal forms. Nevertheless, while achieving the same goals as Böhm-extensions, the partial order approach provides a more intuitive and more elegant model for transfinite reductions as it does not need the cumbersomely defined “shortcuts” provided by $\to[\bot]$-steps, which depend on allowing infinite left-hand sides in rewrite rules. Vice versa, destructive reductions in the partial order model provide a justification for admitting these shortcuts.
### Related Work {#sec:related-work .unnumbered} This study of partial order convergence is inspired by Blom [@blom04rta] who investigated strong partial order convergence in lambda calculus and compared it to strong metric convergence. Similarly to our findings for orthogonal term rewriting systems, Blom has shown for lambda calculus that reachability in the metric model coincides with reachability in the partial order model modulo equating so-called $0$-undefined terms. Corradini [@corradini93tapsoft] also studied a partial order model. However, he uses it to develop a theory of parallel reductions which allows simultaneous contraction of a set of mutually independent redexes of left-linear rules. To this end, Corradini defines the semantics of redex contraction in a non-standard way by allowing a partial matching of left-hand sides. Our definition of complete developments also provides, at least for orthogonal systems, a notion of parallel reductions but does so using the standard semantics of redex contraction. ### Future Work {#sec:future-work .unnumbered} While we have studied both weak and strong $\prs$-convergence and have compared them to their respective metric counterparts, we have put the focus on strong $\prs$-convergence. It would be interesting to find out whether the shift to the partial order model has similar benefits for weak convergence, which is known to be rather unruly in the metric model [@simonsen04ipl]. A starting point in this direction would be to find correspondences between weak and strong $\prs$-convergence. For example, in the metric setting we have that $s \mwato[\calR] t$ implies that there is some $t'$ with $s \mato[\calB] t'$ and $t \mato[\calB] t'$ [@kennaway03book Theorem 12.9.14]. If we had the analogous correspondence for $\prs$-convergence, we would immediately obtain infinitary normalisation and confluence for weak $\prs$-convergence. Moreover, we have focused on orthogonal systems in this paper.
It should be easy to generalise our results to almost orthogonal systems. The only difficulty is to deal with the ambiguity of paths when rules are allowed to overlay. This could be resolved by considering equivalence classes of paths instead. The move to weakly orthogonal systems is much more complicated: For strong $\mrs$-convergence Endrullis et al. [@endrullis10rta] have shown that weakly orthogonal systems do not even satisfy the infinitary unique normal form property, a property that orthogonal systems do enjoy [@kennaway95ic]. Due to Theorem \[thr:strongExt\], this means that also in the setting of strong $\prs$-convergence, weakly orthogonal systems do not satisfy this property and are therefore not infinitarily confluent either! Endrullis et al. [@endrullis10rta] have shown that this can be resolved in the metric setting by prohibiting collapsing rules. However, it is not clear whether this result can be transferred to the partial order setting. Another interesting direction to follow is the ability to finitely simulate transfinite reductions by term graph rewriting. For strong $\mrs$-convergence this is possible, at least to some extent [@kennaway94toplas]. We think that a different approach to term graph rewriting, viz. the *double-pushout approach* [@ehrig73swat] or the *equational approach* [@ariola96fi], is more appropriate for the present setting of $\prs$-convergence [@corradini97rep; @bahr09master]. Acknowledgements {#sec:acknowledgements .unnumbered} ================ I am indebted to Bernhard Gramlich for his constant support during the work on my master’s thesis which made this work possible. [^1]: Note that if $S$ is open, the final term $t$ is not explicitly contained in $S$. Hence, the totality of $S$ does not necessarily imply the totality of $t$. [^2]: Strictly speaking, if $s$ is not a total term, i.e. it contains $\bot$, then we have to consider the system that is obtained from $\calR$ by extending its signature to $\Sigma_\bot$.
--- author: - 'Alexandre Wagemakers[^1]' - Javier Used - 'Miguel A. F. Sanjuán' title: Reducing the number of time delays in coupled dynamical systems --- Introduction ============ Time delays appear in a very natural way in any communication from one entity to another. In the context of dynamical systems, it is intrinsic to the transmission of the state through a communication channel. When the time delay is very small compared to the time scale of the processes involved in the dynamical systems, the induced effects of these time delays are barely noticeable, but still present. [ While delayed dynamical systems have infinite dimension, the effective motion of the trajectories evolves on a finite dimensional manifold. When the time delay increases, the complexity of the system grows continuously with the increase of the dimension of the effective manifold [@yanchuk2017spatio].]{} It is easy to find examples on graphs with time delays in very different fields. In physics, such time delays have been studied for coupled semiconductor lasers [@soriano2013complex]. In engineering, we can cite the problem of consensus among agents on a graph with a time delay [@Olfati_2004]. These time delays affect the dynamics of the whole graph and make the analysis and the simulation of the system harder. Another example where time delays can have relevant effects on the dynamics of a graph is the communication between cells of the nervous system [@liang09]. The problem of the synchronization of the dynamical systems on a graph with time-delayed coupling is partially understood [@li_2004; @yeung1999time; @ott_delay], though the analysis is difficult to achieve in most cases. In an attempt to reduce the dimensionality of the system, the method called componentwise time-shift transformation [@lucken_reduction_2013] makes it possible to transform the time delays on the graph.
The method relies on an invariant of the graph: the time delays can be altered as long as the sum of the time delays on the constitutive loops of the graph remains constant. This method was used successfully to demonstrate that we can reduce the number of time delays on any graph by setting $n_z=n_v-1$ time delays to zero, where $n_v$ is the number of vertices of the graph [@lucken_reduction_2013; @lucken_classification_2015]. A new formulation of this method [@wagemakers_2017] has been developed to improve the number of zero time delays. With the application of this new method, we find a number of zero time delays $n_z\geq n_v-1$, and the total sum of the time delays is also considerably reduced. While this last technique is useful in general, we will show here that in some special cases of interest, the number of zero time delays can be increased by a large extent. The main objective of this work is to show that when the graphs possess identical time delays and bidirectional links between pairs, the maximum number of zeros satisfies $n_v-1 \leq n_z \leq n_v^2/4$. The lower bound corresponds to the minimum achievable on any graph, while the upper bound corresponds to the complete bipartite graph with the same number of vertices in each partition. To achieve this goal, first we explain the basic techniques of the componentwise time-shift transformation that allows time delays to be moved around without altering the dynamics. Then, we consider the particular situation of graphs with identical time delays. We also show how finding the maximum number of zero time delays can be reduced to a combinatorial search on a graph. Finally, we use numerical simulations to test the analytical results. Componentwise time-shift transformation ======================================= We consider a graph $G$ with a collection of $n_e$ oriented edges $e_{i}$ and $n_v$ vertices $v_i$. At each vertex we have a very general dynamical system.
The equation set takes the form of a system of coupled delay differential equations $$\label{sys_din} \frac{dx_i}{dt} = f_i(x_i(t), x_j(t-\tau_{k})_{k \in S_i}),$$ with $i=1,...,n_v$, where $S_i$ is the set of indices $k$ such that the edge $e_k$ connects a vertex $j$ to the vertex $i$. We assume a discrete time delay $\tau_{k}$ on the edge $e_k$. The previous system in Eq. (\[sys\_din\]) can be transformed with a redefinition of the time delays $\tau_{k}$ without changing the dynamical properties of the system using the componentwise time-shift transformation [@lucken_reduction_2013; @lucken_classification_2015]. We set $$\label{sys_din_shift} \frac{dy_i}{dt} = f_i(y_i(t), y_j(t-\tilde \tau_{k})_{k \in S_i}),$$ with the following change of variables $$\begin{aligned} \label{delay_shift} y_i(t) = x_i(t-\eta_i) \\ \tilde \tau_{k} = \tau_{k} + \eta_{s(k)} - \eta_{t(k)},\end{aligned}$$ where $\eta_i$ are constants and $s(k)$ is the source vertex of the edge $k$, and $t(k)$ the target vertex of the same edge. The authors in [@lucken_reduction_2013] noticed that the [*algebraic sum*]{} of the time delays around any cycle of the graph is constant for every choice of the time-shifts $\eta_i$. The term [*algebraic sum*]{} means here that, given an oriented cycle in the graph, the time delay associated to the edges on the cycle with the same orientation should be summed up and the time delays on edges with opposite direction subtracted. Now the problem is to find the time-shifts $\eta_i$ associated to each vertex for a desired configuration of time delays $\tilde \tau_{k}$. It is possible to find a vector [$\boldsymbol\eta$]{} based on the topology of the graph and the time delay requirements on each edge [@wagemakers_2017]. Notice that the initial history of the delay differential equation deserves a special treatment if we want to match the trajectory in the phase space for both sets of equations [@lucken_classification_2015].
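The transformation in Eq. (\[delay\_shift\]) and its cycle invariant can be sketched in a few lines. The code below is an illustrative check of ours (the graph, delays, and shift values are arbitrary choices, not from the paper): it applies $\tilde\tau_k = \tau_k + \eta_{s(k)} - \eta_{t(k)}$ to every edge and verifies that the algebraic sum of the time delays around a cycle is unchanged.

```python
# Componentwise time-shift transformation on a small directed graph.
# Edges are (source, target) pairs; eta maps each vertex to its shift.
# Toy example for illustration only.

def shift_delays(edges, tau, eta):
    """Apply tilde_tau_k = tau_k + eta_{s(k)} - eta_{t(k)} per edge."""
    return [t + eta[s] - eta[u] for (s, u), t in zip(edges, tau)]

def cycle_delay(edges, tau, cycle):
    """Algebraic sum of delays along an oriented cycle (vertex list
    v0, v1, ..., v0): edges traversed with the cycle's orientation are
    added, edges present in the opposite direction are subtracted."""
    total = 0
    for v, w in zip(cycle, cycle[1:]):
        if (v, w) in edges:
            total += tau[edges.index((v, w))]
        else:  # the edge exists in the opposite direction
            total -= tau[edges.index((w, v))]
    return total

# A directed 3-cycle 0 -> 1 -> 2 -> 0 with arbitrary delays and shifts.
edges = [(0, 1), (1, 2), (2, 0)]
tau = [1.0, 2.0, 3.0]
eta = {0: 0.0, 1: 1.5, 2: -0.5}

new_tau = shift_delays(edges, tau, eta)
cycle = [0, 1, 2, 0]
# The algebraic sum around the cycle is invariant under the transformation.
assert cycle_delay(edges, tau, cycle) == cycle_delay(edges, new_tau, cycle)
```

The per-vertex shifts cancel telescopically around any closed path, which is exactly the invariant exploited by the reduction method.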
Time-delay reduction on bidirectional graphs ============================================ In the following we assume several general hypotheses. First, we consider directed graphs with bidirectional edges, which means that each pair of connected vertices is joined by a pair of edges pointing in both directions. We make the distinction between this case and undirected graphs since the time delays might be different depending on the direction of the edge. We will also restrict our attention to graphs with identical time delays on each edge. Finally, we assume that the graph is weakly connected. With these general considerations in mind, we show that it is possible to change the distribution of the time delays on the graph according to simple rules. To understand the method, we first limit the study to bipartite graphs. A bipartite graph has two sets of vertices such that there are only edges between these two sets; in other words, there is no edge between any two vertices of the same set. Applying the conservation of the time delay around the loops of the graph, a simple transformation removes half of the time delays in such a graph. First, we label the two sets of vertices $V_L$ and $V_R$ according to the bipartite partition. For simplicity, we label the vertices $R$ if they belong to $V_R$, and $L$ when they belong to $V_L$. If the time delay in the graph is $\tau$, we arbitrarily set a time delay $\tau_{LR}=2\tau$ for the edges going from a vertex $L$ to a vertex $R$, and a time delay $\tau_{RL}=0$ for the edges in the opposite direction. This process is illustrated in Fig. \[fig1\]. Since any cycle on the graph has to alternate between both sets, the number of edges from $R$ to $L$ and from $L$ to $R$ has to be the same. Any cycle of length $n$ therefore has a total time delay $n\cdot\tau$, which is conserved under this asymmetric distribution of time delays.
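The bipartite redistribution just described can be sketched in a few lines of Python (an illustration with arbitrary values, not code from the paper): on $K_{2,2}$ with bidirectional edges and identical delay $\tau$, setting $2\tau$ on the $L \to R$ edges and $0$ on the $R \to L$ edges preserves the total delay $n\cdot\tau$ of any cycle while zeroing half of the edges.

```python
# Bipartite time-delay redistribution on K_{2,2} with bidirectional edges:
# 2*tau on L->R edges, 0 on R->L edges; any cycle keeps its total n*tau.

TAU = 1.5
V_L = {0, 1}          # left partition
V_R = {2, 3}          # right partition
# bidirectional edges between every L-R pair (complete bipartite K_{2,2})
edges = [(u, v) for u in V_L for v in V_R] + [(v, u) for u in V_L for v in V_R]

def new_delay(s, t):
    """Redistributed delay: doubled from L to R, zero from R to L."""
    return 2 * TAU if s in V_L else 0.0

# Oriented 4-cycle 0 -> 2 -> 1 -> 3 -> 0 alternates between the two sets.
cycle = [0, 2, 1, 3]
original = len(cycle) * TAU                     # n * tau with identical delays
transformed = sum(new_delay(a, b)
                  for a, b in zip(cycle, cycle[1:] + cycle[:1]))
n_zero = sum(1 for s, t in edges if new_delay(s, t) == 0.0)
print(transformed, n_zero)  # total cycle delay 6.0 (= 4*tau), 4 zeroed edges
```

Half of the $n_e = 8$ directed edges carry a zero delay, matching $n_z = n_e/2$.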
We claim that this is the optimal time-delay distribution in the sense of the number of zeros on the edges, that is, $n_z=n_e/2$. This analysis leads to an interesting consequence for other graphs that do not have the bipartite property. We can reduce the number of time delays on a bipartite sub-graph without changing the sum of the time delays on the loops of the entire network. To illustrate this effect, we suppose a bipartite network with identical time delays and only one edge between two vertices of the same set, $V_R$ or $V_L$. For the purpose of the discussion, we will call this particular edge $e$. Any cycle has to run through an even number of edges between $V_L$ and $V_R$ since it has to go back and forth between the two sets before returning to its initial vertex. As a consequence, if a cycle passes once through $e$, the total sum of the time delays along the cycle will be $(n+1)\cdot\tau$, with $n$ the even number of trips between $V_L$ and $V_R$. If we reduce the bipartite graph according to the method mentioned earlier, the sum of the time delays along the cycle will remain unchanged. This invariance can be explained by noticing that the sum of the time delays along the edges between the sets $V_L$ and $V_R$ is conserved. It leaves unchanged the sum of the time delays around any cycle of the graph, including the cycles passing through $e$. This reasoning can be extended to any number of edges between the vertices of the same set $V_L$ or $V_R$. We can now express the main result of this section. Given a graph with the general properties listed above, we can find a distribution of time delays with a number of zeros $$\label{theoretic_bound} n_v-1 \leq n_z \leq n_v^2/4.$$ We conjecture that the maximum number of zeros $n_z$ in the graph is given by the bipartite subgraph with the maximum number of edges.
The upper bound corresponds to the complete bipartite subgraph with $n_v^2/4$ edges, while the lower bound can be satisfied on any graph [@lucken_reduction_2013]. The problem of finding the bipartite subgraph with the largest number of edges is a well-known problem in combinatorial optimization named the MAXCUT problem. The problem consists in finding a partition of the vertices that has a maximum number of edges between the two sets. This problem is known to be computationally difficult (NP-hard), but suboptimal solutions can be found in polynomial time [@Goemans_1995]. Figure \[fig2\](a) shows an example of an optimal cut in a simple graph, where the cut intersects 8 edges. In Fig. \[fig2\](b), we show that the cycle passing through the vertices $1 \to 4 \to 2 \to 1 $ has the same cumulative sum of time delays around the cycle in both the original and the transformed graph. In the next section we will show some results of the application of the optimization of graphs to get the largest bipartite subgraph. Once the bipartite subgraph has been found, we can change the distribution of time delays on the subset of edges following the method described earlier. Examples of time-delay reduction on graphs ========================================== We will demonstrate the effectiveness of the method on graphs with well-known characteristics. For our purpose, the only information needed on the graph is its adjacency matrix $A$. It has dimension $n_v\times n_v$, and if an edge connects the vertex $i$ to the vertex $j$, the entry $a_{ij}$ of the adjacency matrix $A$ is $1$, and $0$ otherwise. The MAXCUT problem can be stated as follows $$\label{maxcut_pb} \begin{array}{ll} \textrm{Maximize: }& \displaystyle\frac{1}{2}\sum_{i,j} a_{ij} - \frac{1}{2} c^T A c\\ &\\ \textrm{with }c_k ~ \in \{-1,1\},& \\ \end{array}$$ where $c$ is a column vector of dimension $n_v$. This is an integer program with a quadratic objective that can be solved with standard optimization software.
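Before resorting to an integer-programming solver, the optimization can be checked by exhaustive enumeration on very small graphs. The following Python sketch (illustrative only; the paper's computations use Julia with CPLEX) counts, for every partition, the directed edges crossing the cut directly, rather than through the algebraic objective, and recovers $n_c$ and $n_z = n_c/2$ for a square with one diagonal.

```python
from itertools import product

def max_cut_directed(n_v, und_edges):
    """Exhaustive MAXCUT on a small bidirectional graph.  Each undirected
    edge stands for a pair of directed edges, so a crossing pair adds 2."""
    best = 0
    for c in product((-1, 1), repeat=n_v):
        crossing = sum(2 for i, j in und_edges if c[i] != c[j])
        best = max(best, crossing)
    return best

# Square with one diagonal (vertices 0..3): 5 undirected connections,
# i.e. 10 directed edges.
und_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_c = max_cut_directed(4, und_edges)   # directed edges crossing the cut
n_z = n_c // 2                         # one zero time delay per crossing pair
print(n_c, n_z)  # best cut {0,2} vs {1,3}: n_c = 8, n_z = 4
```

Here the optimal cut isolates the 4-cycle, a complete bipartite subgraph, so $n_z = 4 = n_v^2/4$ attains the upper bound of Eq. (\[theoretic\_bound\]).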
The entry $c_i$ of the vector $c$ classifies the vertex $i$ into the set $V_L$ or $V_R$. Once the algorithm has found a solution, the number of edges that cross the cut is the value $$n_c = \frac{1}{2}\displaystyle\sum_{i,j} a_{ij}- \frac{1}{2} c^T A c.$$ The number of zero time delays allowed on the graph is $n_z=n_c/2$, as described in the previous section. To test the algorithm we will focus on two important models: the Watts-Strogatz small-world and the scale-free models [@newman2010networks]. In the small-world model, we can vary a parameter $p$ such that the graph evolves continuously from a regular graph to a random Erdös-Renyi graph. For $p=0$ the graph is regular, so that each vertex is connected to $k$ neighbors. As the parameter $p$ increases from $0$ to $1$, some of the edges are redirected to create shortcuts in the initial symmetric configuration. When $p$ eventually reaches $1$ we have an Erdös-Renyi random graph where the probability of an edge between two vertices is $p_e\simeq \langle k\rangle/n_v$. ![This figure illustrates a particular cut (in dashed line) of a regular graph where each vertex is connected to $k=6$ nearest neighbors. The cut includes alternate vertices in the same set. The number of edges crossing the cut is $n_v\cdot \left[ \frac{k}{2} \right]$, in this case $8\cdot \left[ \frac{6}{2} \right]=24$[]{data-label="fig_regular_cut"}](fig4.eps){height="4cm"} Figure \[fig3\](a) depicts the number of zero time delays $n_z$ of several realizations of the graph as a function of the mean vertex degree $\langle k\rangle$. We have chosen four values of the probability $p$ of the graph: $p=0$ (regular graph), $p=0.1$, $p=0.5$, and $p=1$ (random graph). For $\langle k\rangle \geq 5$ the tendency is linear in all cases. In Fig. \[fig3\](b) we have the result of the optimization as a function of the number of vertices $n_v$ for $\langle k\rangle=10$. It is remarkable that we also have a linear trend in all cases.
From the analysis of the figures, we observe that the result of the optimization does not depend appreciably on $p$. While we do not have an explanation for this linear behavior, we can give simple arguments based on the two extreme cases $p=0$ and $p=1$. When the graph is regular ($p=0$), we have found a special partition of the graph that yields an analytic formula for the number $n_c$. It simply consists in picking alternate neighbor vertices to form the partition, as shown in Fig. \[fig\_regular\_cut\]. If we count the number of edges from one partition to the other, and use symmetry arguments, we get $$\label{nc_SW} n_c = n_v\cdot \left[ \frac{k}{2} \right],$$ where $[x]$ represents the smallest integer greater than or equal to $x$; notice that this formula works for both even and odd numbers of vertices. It seems however that this cut is not optimal: in many cases the MAXCUT algorithm finds a better solution. Given that $n_z=n_c/2$, it provides us with a simple lower bound for the number of zero time delays in the small-world model with $p=0$. For the case $p=1$, we can partially explain the results by considering a random cut that separates the vertices into two sets with the same number of elements. To achieve this, we just randomly pick half of the vertices to form either the set $V_L$ or $V_R$, and we then construct a random Erdös-Renyi graph. Each vertex from the set $V_L$ has approximately $n_v/2$ vertices to choose from in the set $V_R$, so that there are $(n_v/2)^2$ possible edges. Since the probability of forming an edge between two vertices is $p\simeq \langle k\rangle / n_v$, and the formation is independent for each edge, the average number of edges that will cross the cut is $$\langle n_c \rangle = \frac{\langle k\rangle}{n_v}\cdot\frac{ n_v^2}{4} =\frac{ n_v \cdot \langle k\rangle }{4}.$$ This is indeed linear in $n_v$ and in the mean vertex degree $\langle k\rangle$.
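The random-cut argument can be checked numerically. The following Python sketch (a Monte Carlo illustration, not the paper's code; the parameters $n_v = 200$, $\langle k\rangle = 10$ are arbitrary) draws Erdös-Renyi graphs with edge probability $\langle k\rangle/n_v$, cuts each with a random balanced partition, and compares the average number of crossing edges with the estimate $n_v\langle k\rangle/4$.

```python
import random

# Monte Carlo check of <n_c> = n_v * <k> / 4 for a random balanced cut
# of an Erdos-Renyi graph with edge probability <k>/n_v.

random.seed(42)
n_v, k = 200, 10
p = k / n_v                       # edge probability ~ <k>/n_v

trials, total = 20, 0
for _ in range(trials):
    # random balanced partition: half the vertices go to V_L
    nodes = list(range(n_v))
    random.shuffle(nodes)
    left = set(nodes[: n_v // 2])
    # draw the graph edge by edge and count edges crossing the cut
    crossing = 0
    for i in range(n_v):
        for j in range(i + 1, n_v):
            if random.random() < p and ((i in left) != (j in left)):
                crossing += 1
    total += crossing

mean_crossing = total / trials
expected = n_v * k / 4            # analytic estimate, here 500
print(mean_crossing, expected)
```

With these parameters the empirical mean falls close to the analytic value of $500$ crossing edges.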
The MAXCUT algorithm finds a better solution than this naive random cut, but the linearity of $n_z$ against $n_v$ remains. For other values of $p$, it is not obvious how to obtain $n_c$, and we need numerical simulations to obtain an estimation. While the parameter $p$ has a tremendous effect on the graph-theoretic properties, it seems that it has a very limited effect on the number $n_z$. This is a surprising finding that indicates that topological factors such as the graph diameter and the shortest path length have little relevance here. In the last example, we show some results on scale-free networks, another paradigmatic topology of graphs. The graphs are constructed following the preferential attachment algorithm, and we have optimized the graphs following the same method. We compare three cases: the scale-free model, the small-world model with $p=0.5$, and the Erdös-Renyi model (small-world with $p=1$). The results shown in Fig. \[fig4\] for $n_z$ are almost identical for the three cases. The MAXCUT algorithm does not depend on the chosen type of topology, but a more comprehensive study is required to understand the reason for this similarity. The numerical simulations have been performed with the LightGraphs and JuMP packages of the Julia programming language, along with the IBM optimization software CPLEX for the solution of the integer program. Discussion and Conclusions ========================== When the graph has an identical distribution of time delays, it is possible to find a new distribution of the time delays on the graph so that the global dynamics is not affected. The MAXCUT algorithm finds a bipartite subgraph with the maximum number of edges between the two sets. This partition is the basis for the new distribution of the time delays. The method basically consists in assigning, on the edges connecting the two sets, zero time delays in one direction and twice the initial time delay in the opposite direction.
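The conjectured lower bound discussed below can be spot-checked by brute force on small graphs. Since $\langle k\rangle = 2 n_e^{und}/n_v$, the bound $n_z \geq n_v\langle k\rangle/4$ amounts to saying that an optimal cut crosses at least half of the undirected edges; the Python sketch below (illustrative only, not the Julia/CPLEX code used for the paper's simulations) verifies this on a few small random graphs.

```python
import random
from itertools import product

def max_cut(n_v, edges):
    """Exhaustive MAXCUT: largest number of undirected edges crossing
    any partition of the vertices (feasible only for small n_v)."""
    return max(sum(1 for i, j in edges if c[i] != c[j])
               for c in product((0, 1), repeat=n_v))

random.seed(1)
results = []
for _ in range(5):
    n_v = 8
    # small Erdos-Renyi graph with edge probability 0.5
    edges = [(i, j) for i in range(n_v) for j in range(i + 1, n_v)
             if random.random() < 0.5]
    n_z = max_cut(n_v, edges)   # zero delays: one per crossing pair
    # conjectured bound n_z >= n_v*<k>/4, i.e. 2*n_z >= number of edges
    results.append(2 * n_z >= len(edges))

print(all(results))  # the bound holds on every sample
```

This outcome is expected: a uniformly random partition crosses each edge with probability $1/2$, so the optimal cut always crosses at least half of the edges.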
We have established that, according to this method, the maximum number of zero time delays in the graph that can be achieved is $n_v^2 /4$, while the minimum is $n_v-1$. On the basis of the simulations of the last section, we can conjecture a much tighter lower bound for the special type of graphs that we are studying $$n_z \geq \frac{n_v\cdot k}{4}.$$ It is possible to reach the upper bound $n_z=n_v^2 /4$ when there is a complete bipartite subgraph embedded in the original graph. The numerical simulations also indicate that the topology is not a critical factor for the result of the MAXCUT optimization. It seems that the average node degree and the number of vertices are the two key parameters that affect $n_z$. It would be interesting in its own right to study in more detail the result of the MAXCUT algorithm as a function of different topological factors of the graph. Delay differential equations are in general very difficult to analyze and simulate. We have shown here that it is possible to significantly reduce the dimensionality of a set of coupled delay differential equations over a bidirectional graph with identical time delays. The reduction is at least of the order of the number of vertices. The process of changing the time delays over the graph is simple, but it is hard to find a suitable partition of the nodes. Fortunately, for large graphs there are suboptimal approximations to the MAXCUT algorithm that yield a solution in polynomial time. Furthermore, this method could be extended to directed graphs with identical time delays. This work opens new perspectives on the simulation and the analysis of large systems of delay-coupled dynamical systems. We believe that this method will bring some insight into the behavior of collective dynamics with time delay. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the Spanish State Research Agency (AEI) and the European Regional Development Fund (FEDER) under Project No.
FIS2016-76883-P. MAFS acknowledges the jointly sponsored financial support by the Fulbright Program and the Spanish Ministry of Education (Program No. FMECD-ST-2016).
--- abstract: 'To truly eliminate Cartesian ghosts from the science of consciousness, we must describe consciousness as an aspect of the physical. Integrated Information Theory states that consciousness arises from intrinsic information generated by dynamical systems; however existing formulations of this theory are not applicable to standard models of fundamental physical entities. Modern physics has shown that fields are fundamental entities, and in particular that the electromagnetic field is fundamental. Here I hypothesize that consciousness arises from information intrinsic to fundamental fields. This hypothesis unites fundamental physics with what we know empirically about the neuroscience underlying consciousness, and it bypasses the need to consider quantum effects.' author: - | Adam B. Barrett[^1]\ \ *Sackler Centre for Consciousness Science* and *Department of Informatics*\ University of Sussex, Brighton BN1 9QJ, UK date: '\[Published Feb. 4, 2014 in the *Consciousness Research* specialty section of *Frontiers in Psychology*, article no. 5(63).\]' title: An Integration of Integrated Information Theory with Fundamental Physics --- Introduction {#introduction .unnumbered} ============ The key question in consciousness science is: “Given that consciousness (i.e., subjective experience) exists, what are the physical and biological mechanisms underlying the generation of consciousness?”. From a basic property of our phenomenology, namely that conscious experiences are integrated representations of large amounts of information, Integrated Information Theory (IIT) hypothesizes that, at the most fundamental level of description, consciousness is integrated information, defined as information generated by a whole system, over and above its parts (Tononi, 2008). Further, given the private, non-externally observable nature of consciousness, IIT considers consciousness to be an intrinsic property of matter, as fundamental as mass, charge or energy. 
Thus, more precisely, IIT posits that consciousness is intrinsic integrated information, where by intrinsic information it is meant that which is independent of the frame of reference imposed by outside observers of the system. The quantity of consciousness generated by a system is the amount of intrinsic integrated information generated (Balduzzi and Tononi, 2008), whilst the qualities of that consciousness arise from the precise nature of informational relationships between the parts of the system (Balduzzi and Tononi, 2009). IIT has garnered substantial attention amongst consciousness researchers. However, it has been criticized for its proposed measures of integrated information not successfully being based on an intrinsic perspective (Gamez, 2011; Beaton and Aleksander, 2012; Searle, 2013). The proposed “$\Phi$” measures are applicable only to networks of discrete nodes, and thus for a complex system depend on the observer choosing a particular graining. More broadly, information can only be intrinsic to fundamental physical entities, and descriptions of information in systems modeled at a non-fundamental level necessarily rely on an extrinsic observer’s choice of level (Floridi, 2009, 2010; Gamez, 2011). Here I propose a potential solution to this problem, what might be called the field integrated information hypothesis (FIIH). Modern theoretical physics describes the universe as being fundamentally composed of continuous fields. Electrical signals are the predominant substrate of information processing in brains, and the electromagnetic field that these produce is considered fundamental in physics, i.e., it is not a composite of other fields. Thus, I hypothesize that consciousness arises from information intrinsic to fundamental fields, and propose that, to move IIT forward, what is needed is a measure of intrinsic information applicable to the configuration of a continuous field. The remainder of this article is laid out as follows. 
First I discuss the concept of fundamental fields in physics, and how if one takes the view that consciousness is an intrinsic property of matter, then it must be a property arising from configurations of fields. In the following section, I discuss the hypothesis that consciousness arises from integrated information intrinsic to fundamental fields, the shortcomings of existing approaches to integrated information, and the possibility of constructing a measure that can successfully measure this quantity for field configurations. I then explain how IIT and the FIIH imply a limited form of panpsychism, and why this should not be considered a problem, before contrasting the FIIH with previously proposed field theories of consciousness, such as that of Pockett (2000). Finally, the summary includes some justification for this theoretical approach to consciousness. Fundamental fields and consciousness {#fundamental-fields-and-consciousness .unnumbered} ====================================

|                                           | **Mass (GeV/$c^2$)** | **Electric charge** | **Strong charge** | **Weak charge** |
|-------------------------------------------|----------------------|---------------------|-------------------|-----------------|
| **LEPTONIC MATTER**                       |                      |                     |                   |                 |
| electron neutrino ($\nu_e$)               | $<1.3\times10^{-10}$ | 0                   | No                | Yes             |
| electron (e)                              | 0.0005               | -1                  | No                | Yes             |
| muon neutrino ($\nu_\mu$)                 | $<1.3\times10^{-10}$ | 0                   | No                | Yes             |
| muon ($\mu$)                              | 0.106                | -1                  | No                | Yes             |
| tau neutrino ($\nu_\tau$)                 | $<1.4\times10^{-10}$ | 0                   | No                | Yes             |
| tau ($\tau$)                              | 1.78                 | -1                  | No                | Yes             |
| **QUARK MATTER**                          |                      |                     |                   |                 |
| up (u)                                    | 0.002                | 2/3                 | Yes               | Yes             |
| down (d)                                  | 0.005                | -1/3                | Yes               | Yes             |
| charm (c)                                 | 1.3                  | 2/3                 | Yes               | Yes             |
| strange (s)                               | 0.1                  | -1/3                | Yes               | Yes             |
| top (t)                                   | 173                  | 2/3                 | Yes               | Yes             |
| bottom (b)                                | 4.2                  | -1/3                | Yes               | Yes             |
| **BOSONS**                                |                      |                     |                   |                 |
| Electromagnetic force: photon ($\gamma$)  | 0                    | 0                   | No                | No              |
| Strong force: gluon (g)                   | 0                    | 0                   | Yes               | No              |
| Weak force: $W^-$                         | 80                   | -1                  | No                | No              |
| $W^+$                                     | 80                   | 1                   | No                | No              |
| Z                                         | 91                   | 0                   | No                | No              |
| Gravity: graviton$^*$                     | 0                    | 0                   | No                | No              |
| Higgs mechanism: Higgs (H)                | 126                  | 0                   | No                | Yes             |

: Table of the fields/particles that are considered fundamental.
Familiar matter arises from leptons and quarks, while the forces of nature arise from interactions of matter with “carrier” bosons. Mass is given in giga electron volts per speed of light squared (GeV/$c^2\approx 2\times10^{-27}$kg). Electric charge is in standard units relative to minus the charge of the electron, i.e., one unit equals $1.6\times10^{-19}$ Coulombs. A description of the group-theoretic strong and weak charges is beyond the scope of this article, but the table shows which fields have strong and weak charges. \*The gravity field is considered fundamental and is well-studied, but the gravity particle (graviton) has not to date been explicitly observed; at quantum (i.e., very microscopic) spatial scales, a consistent set of field equations for gravity has yet to be constructed. Contemporary physics postulates that “fields” are the fundamental physical ingredients of the universe, with the more familiar quantum particles arising as the result of microscopic fluctuations propagating across fields; see, e.g., Oerter (2006) for a lay person’s account, or Coughlan et al. (2006) for an introduction for scientists. In theoretical terms, a field is an abstract mathematical entity, which assigns a mathematical object (e.g., scalar, vector) to every point in space and time. (Formally a field is a mapping $F$ from the set $S$ of points in spacetime to a scalar or vector space $X$, $F: S \to X$.) So, in the simplest case, the field has a number associated with it at all points in space. At a very microscopic scale, ripples, i.e., small perturbations, move through this field of numbers, and obey the laws of quantum mechanics. These ripples correspond to the particles that we are composed of, and there is precisely one fundamental field for each species of fundamental particle. At the more macroscopic level, gradients in field values across space give rise to forces acting on particles.
The Earth’s gravitational field, or the electromagnetic field around a statically charged object, are examples of this, and the classical (as opposed to quantum) description is a good approximation at this spatial scale. However, both levels of description can be considered equally fundamental if the field is fundamental, i.e., not some combination of other simpler fields. Note that the electromagnetic and gravitational fields are both examples of fundamental fields, with the corresponding fundamental particles being the photon and the graviton. Particles are divided up into matter particles and force-carrying particles, but all types of particle have associated fields; all the forces of nature can be described by field theories which model interactions, i.e., exchanges of energy, between fields. See Table 1 for a list of fields/particles that are considered fundamental according to this so-called “Standard Model” of particle physics. To be consistent with modern theoretical physics, a theory of consciousness that considers consciousness to be a fundamental attribute of matter must describe how consciousness manifests itself in the behavior of either fundamental fields or quantum particles. Since we know that the brain generates electric fields with a rich spatiotemporal structure, and that, for the main part, information processing in the brain is carried out by electrical signaling between neurons operating mostly in the classical (as opposed to quantum) regime (Koch and Hepp, 2006), empirical evidence favors the former. Thus, on the view that consciousness is a fundamental attribute of matter, it must be the structure and/or dynamics of the electromagnetic field (which is an example of a fundamental field) that is fundamentally the generator of brain-based consciousness. Once one ascribes electromagnetic fields with the potential to generate consciousness, it is natural to ask whether other fields might also have the potential to generate consciousness. 
According to modern physics, there was a symmetry between all fields at the origin of the universe, although these symmetries were broken as the universe began to cool (Georgi and Glashow, 1974; see Hawking, 2011 for a lay-person’s account). It could be argued by Occam’s razor that it makes more sense to posit that potential for consciousness existed at the outset, and hence potential for consciousness is a property of all fields, than that it emerged only during symmetry breaking. However, in practice, it is unlikely that any complex consciousness could exist in any field other than the electromagnetic field, for reasons to do with the physics and chemistry of the electromagnetic field compared with other fields. Considering the four forces: strong, weak, electromagnetic and gravitational, the strong and weak forces don’t propagate over distances much larger than the width of the nucleus of an atom, and gravity alone cannot generate complex structures by virtue of being solely attractive; in contrast, the electromagnetic field can propagate over macroscopic scales, is both repulsive and attractive, and is fundamentally what enables non-trivial chemistry and biology. Considering fields associated with matter, these in general do not have any undulations at spatial scales larger than the quantum scale; the non-trivial structures in these fields are essentially just the ripples associated with the familiar quantum matter particles, i.e., electrons and quarks, and various “exotic” particles detectable in particle physics experiments (see Table 1). Finally, the recently discovered Higgs field has essentially a uniform structure; quantum interactions exist between the Higgs field and many of the other fields, and this is fundamentally the origin of mass in the universe (see e.g., Coughlan et al., 2006; Oerter, 2006). Thus, the physics of the electromagnetic field uniquely lends itself to the generation of complex structures. 
The Field Integrated Information Hypothesis {#the-field-integrated-information-hypothesis .unnumbered} =========================================== Given the above, I propose that the principal conceptual postulates of IIT should be restated as follows. Consciousness arises from information intrinsic to the configuration of a fundamental field. The amount of consciousness generated by a patch of field is the amount of integrated information intrinsic to it. When a patch of field generates a large quantity of intrinsic integrated information, mathematically there is a high-dimensional informational structure associated with it (Tononi, 2008; Balduzzi and Tononi, 2009). The geometrical and topological details of this structure determine the contents of consciousness. The task now is to correctly mathematically characterize intrinsic integrated information, and construct equations to measure it. A true measure of intrinsic integrated information must be frame invariant, just like any fundamental quantity in physics. That is, it must be independent of the point of view of the observer: independent of the units used to quantify distance or time, independent of which direction is up, and independent of the position of the origin of the coordinate system; and also independent of the scale used for quantifying charge, or field strength. The “$\Phi$” measures put forth by existing formulations of IIT (Balduzzi and Tononi, 2008; Barrett and Seth, 2011) are not applicable to fields because they require a system with discrete elements, and fields are continuous in space. One could ask, however, whether a perspective on a system in terms of discrete elements could actually be equivalent to an intrinsic field-based perspective, thus obviating the need for a field-based measure. 
To see explicitly that this is not the case, let us revisit the photodiode, which, according to the existing theory (Tononi, 2008), has 1 bit of intrinsic information by virtue of having two states, on or off. There is a wire inside the photodiode, and the electrons inside the wire are all individually fluctuating amongst many different states. The electromagnetic field generated by the diode, and the circuit to which it is connected has two stable configurations for as long as the circuit is connected. But other more general configurations for an electromagnetic field are ruled out by each of these states. Considering the system at this level of description yields a distinct perspective, and would lead one to deduce that the amount of information generated by the system’s states is some quantity other than 1 bit. Thus the field-based perspective is not equivalent to the observer-dependent discrete perspective. The idea here is that a formula should be obtained that could in theory be applied universally to explore the intrinsic information in any patch of spacetime, without requiring an observer to do any modeling, i.e., one would just measure field values in as fine a graining as possible to get the best possible approximations to the intrinsic informational structure. Only a formula in continuous space and time would allow this. If a discrete formula were to be applied, there would always be the possibility of encountering an informational structure on a finer scale than that of the formula. (Unless the graining required by the formula were the Planck scale, i.e., the scale of the hypothesized superstring, on which continuous models of physics break down; however there do not exist complex structures at that scale.) In practice however, observations of systems are necessarily discrete, so discrete approximations to a continuous formula could be useful for empirical application. 
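The graining dependence described above can be made concrete with a toy computation (illustrative only; this is not a proposed measure of intrinsic information): the same device yields 1 bit when an observer coarse-grains it to on/off, but $\log_2 N$ bits when described in terms of $N$ equally likely microstates, so the quantity of Shannon information depends on the observer's chosen level of description.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Coarse graining: the photodiode treated as a binary on/off element.
coarse = shannon_entropy([0.5, 0.5])      # 1 bit

# Finer graining: the same device described by 16 equally likely
# microstates (e.g. electron configurations in the wire).
fine = shannon_entropy([1 / 16] * 16)     # 4 bits

print(coarse, fine)  # 1.0 4.0
```

Both numbers describe the same physical device; only the observer's partition of its states differs, which is exactly why a frame-dependent discrete description cannot count as intrinsic.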
See Balduzzi (2012) for some recent work on the information-theoretic structure of distributed measurements. We don’t yet know how to properly calculate intrinsic information, so we must remain agnostic about the precise amount of intrinsic integrated information generated by photodiodes, or by anything else. However, the failure of existing approaches does not rule out the construction in the future of a successful formula. While it is beyond the scope of the present paper to make a serious attempt at solving this problem, I speculate that a formula in terms of thermodynamic entropy as opposed to Shannon entropy might be more likely to succeed, as the former is inherently an intrinsic property, whereas the latter was constructed for the purpose of describing an external observer’s knowledge of a system (Floridi, 2009, 2010; Gamez, 2011; Beaton and Aleksander, 2012). Integrated Information Theory and panpsychism {#integrated-information-theory-and-panpsychism .unnumbered} ============================================= Searle (2013) criticizes IIT for its stance that integrated information always produces consciousness, stating that this ludicrously ascribes consciousness to all kinds of everyday objects and would mean that consciousness is “spread thinly like a jam across the universe”. Koch and Tononi (2013) counter that only “local maxima” of integrated information exist (over spatial and temporal scales): “my consciousness, your consciousness, but nothing in between”. If local maxima of intrinsic integrated information in field configurations always generate consciousness, then there must be minute amounts, say “germs”, of consciousness all over the universe, even though there would be no superordinate consciousness amongst groups of people. Thus, IIT and the FIIH do imply a form of panpsychism. However, the phenomenology assigned to an isolated electron in a vacuum, or even a tree, which has no complex electromagnetic field, would be very minimal.
Since the only consciousness we can be certain of is our own, the positing by integrated information theories of germs of consciousness everywhere is no reason to dismiss them. A theory should stand or fall on whether or not it can elegantly and empirically describe human consciousness. For those uncomfortable with subscribing to a panpsychist theory, a possible way round the problem is to assign an attribute “potential consciousness” to matter at the most fundamental level. Then, the quantity of potential consciousness is simply the quantity of integrated intrinsic information. But only when there is a large amount of intrinsic integrated information with a sufficiently rich structure to be worthy of being compared to a typical healthy adult human waking conscious moment, should we say that the integrated information has “actual consciousness” associated with it. A line could thus be drawn somewhere between the potential consciousness of an isolated electron in a vacuum and the actual consciousness generated by my brain as I write this article. The problem with such a distinction however is that potential consciousness would still be assigned phenomenal content, so it is perhaps more elegant to just use a single term “consciousness” for the whole spectrum of integrated information. On the other hand, since consciousness is defined by some as any mental content, but by others as only self-reflective mental content, there is no single terminology that appeals to everybody. The key point, irrespective of the precise definition of consciousness, is that on the theory discussed here, intrinsic integrated information is what underlies subjective experience at the most fundamental level of description. Alternatively, one could further imagine different lines being drawn for different purposes. For example, a threshold of conscious awareness above which surgery cannot be performed; or thresholds at which various people are comfortable eating animals. 
Relation to previous electromagnetic field theories of consciousness {#relation-to-previous-electromagnetic-field-theories-of-consciousness .unnumbered} ==================================================================== There have been several other theories of consciousness put forward that identify consciousness with various types or configurations of fields; see Pockett (2013) for a review. Notably, Pockett’s electromagnetic field theory (EMT) of consciousness (Pockett, 2000, 2011, 2012) posits that “conscious perceptions (and sensations, inasmuch as they can be said to have independent existence) are identical with certain spatiotemporal electromagnetic patterns generated by the normal functioning of waking mammalian brains” (Pockett, 2013). In the most recent formulation of this theory, the key feature of field patterns underlying consciousness is the presence of a neutral region in the middle of a radial pattern. This hypothesis was motivated by the observation that such field patterns appear during recurrent cortical activity (with the neutral region in layer 4), and by the empirical association of consciousness with recurrent processing (Pockett, 2012). A problem common to previous field theories of consciousness (Libet, 1994; Pockett, 2000, 2013; McFadden, 2002) is that they claim that cutting outgoing neural connections from a slab of cortex that generates a conscious experience will not affect the ability to report that conscious experience. EMT argues that the electromagnetic field within such an isolated hypothetical slab would still propagate through space and enable communication between the conscious field generated by the slab and the spatially contiguous larger conscious mental field. This, however, is not compatible with the laws of physics. Any cutting of synapses to or from regions of cortex that are generating consciousness will alter the field, and will therefore alter the conscious experience.
There is no electromagnetic field residing in the brain other than that generated specifically by all of the neural and chemical activity. And it does not make sense to talk of the brain’s electromagnetic field and its firing neurons and synapses as being able to exist independently of each other. On the theory put forward here, neurons can be considered the scaffolding that enables very complex electromagnetic field configurations to be sustained. As far as describing the mechanisms of perception and cognition that generate the specific contents of consciousness in any given scenario is concerned, the current paradigm of associating them with neural activity is of course the only valid and useful level of description. However, in terms of explaining more fundamentally how matter gives rise to consciousness, a description in terms of fields would be much more elegant than a description in terms of entities as complex as neurons. Another shortcoming of previous field theories of consciousness is that none of them relate physical properties of proposed correlates of consciousness to properties of phenomenology, i.e., they do not posit “explanatory correlates of consciousness” (Seth, 2009). The FIIH raises for the first time the possibility of constructing a field theory of consciousness that can account for a fundamental aspect of phenomenology, namely that conscious experiences are integrated representations of large amounts of information. Discussion {#discussion .unnumbered} ========== In this paper I have hypothesized that, at the most fundamental level of description, human consciousness arises from information intrinsic to the complex electromagnetic fields generated by the brain. This “FIIH” builds on the axioms of IIT, namely that consciousness is integrated information, and that consciousness is an intrinsic and fundamental property of matter analogous to mass or charge.
However, it also implies that a new mathematical formalism is required to properly quantify intrinsic integrated information, since electromagnetic fields are continuous in space, and existing “$\Phi$”-type measures of integrated information are applicable only to discrete systems (which require an observer dependent perspective). The idea that consciousness can be identified with certain spatiotemporal electromagnetic patterns has been previously put forward in other electromagnetic field theories of consciousness. But by suggesting that integrated information is the key factor, the theory here connects, for the first time, such electromagnetic field theories of consciousness to basic aspects of phenomenology. The hypothesis is admittedly rather speculative, and any proposed mathematical formula for conscious level in terms of information intrinsic to an electromagnetic field will be difficult to test directly, simply because we do not have the technological tools or the computational resources to record in full detail the three-dimensional electromagnetic field structure generated by the brain. Rather, this can only be sampled at a spatial scale that is sparse compared to the finest scale of its undulations. However, there is a strong case to be made that the theoretical development of the ideas presented here has substantial value. Theories in physics have been vigorously pursued for their logic and beauty, in the absence of imminent direct experimental tests. For example, there is a vast amount of work being conducted on string theory; there, rather than experimental verification, the goal is an elegant explanation of our existing empirical knowledge of particle physics and gravity. If there already existed several analogous theories of consciousness, then one could argue that it would not be useful to add to the speculation. However, there is as yet no compellingly believable set of equations for describing, fundamentally, how consciousness is generated. 
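To make concrete what a discrete, observer-relative “$\Phi$”-type measure looks like, here is a toy “whole-minus-sum” computation for a system of two binary elements, in the spirit of the measures discussed by Balduzzi and Tononi (2008) and Barrett and Seth (2011). This is a hypothetical minimal sketch for illustration only, not any published formula: it compares the time-lagged mutual information of the whole system with the sum over its two parts, under a uniform input distribution (all function names are ours).

```python
from itertools import product
from collections import defaultdict
from math import log2

def mutual_info(joint):
    """I(A;B) in bits for a dict {(a, b): probability}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def phi_whole_minus_sum(step):
    """Whole-minus-sum integrated information of a deterministic update
    rule `step` on two binary elements, with uniform inputs."""
    states = list(product((0, 1), repeat=2))
    joint = {(s, step(s)): 1 / len(states) for s in states}  # p(x_t, x_{t+1})
    whole = mutual_info(joint)
    parts = 0.0
    for i in (0, 1):                      # marginalize onto each element
        marg = defaultdict(float)
        for (s, t), p in joint.items():
            marg[(s[i], t[i])] += p
        parts += mutual_info(marg)
    return whole - parts

copy_step = lambda s: s                   # each element copies its own past
swap_step = lambda s: (s[1], s[0])        # each element copies the *other*
```

The “copy” system, whose elements carry their information independently, scores zero, while the “swap” system, whose elements’ futures depend on each other, scores 2 bits. The example also illustrates the point made above: the measure only exists once an observer has chosen a discrete state space and a partition into parts.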
IIT has potential in this direction, but a major step forward for the theory would be a truly plausible formula for intrinsic information applicable to fundamental physical entities. The FIIH provides a conceptual starting point for achieving this. All this is not to say that such a theory will aid understanding of all aspects of consciousness; indeed, the multi-faceted nature of consciousness requires descriptions at many different levels. Non-reductionist frameworks are required to understand the complexity of the biological machinery that enables the brain to do any kind of information processing, conscious or unconscious; and to understand the differences between conscious and unconscious cognitive processes, neural dynamics and behavior must necessarily be modeled at multiple levels of description. Finally, any theory can, at least indirectly, make predictions. Indeed, IIT has already inspired heuristic measures of information integration/complexity that have been successfully applied to recorded electrophysiological data and are able to distinguish the waking state from diverse unconscious states, i.e., sleep and anaesthesia under various anaesthetics (Massimini et al., 2005; Ferrarelli et al., 2010; Casali et al., 2013). The results are in broad agreement with the predictions of IIT and provide encouragement for further theoretical work on the relationship between information integration and consciousness. Theories built from the FIIH could make new and distinct predictions about the types of structural and/or functional neuronal architectures that are capable of generating consciousness; and new theory can only further inform the quest for ever more reliable measures of consciousness that can be applied to observable brain variables.
Acknowledgements {#acknowledgements .unnumbered} ================ I thank Emily Lydgate and Anil Seth for invaluable discussions during the writing of this paper, and Daniel Bor and David Gamez for very useful comments on draft manuscripts. ABB is funded by EPSRC grant EP/L005131/1. Balduzzi, D., and Tononi, G. (2008). Integrated information in discrete dynamical systems: motivation and theoretical framework. *PLoS Comput. Biol.* 4(6), e1000091. Balduzzi, D., and Tononi, G. (2009). Qualia: the geometry of integrated information. *PLoS Comput. Biol.* 5(8), e1000462. Balduzzi, D. (2012). On the information-theoretic structure of distributed measurements. *EPTCS* 88, 28-42. Barrett, A.B., and Seth, A.K. (2011). Practical measures of integrated information for time-series data. *PLoS Comput. Biol.* 7(1), e1001052. Beaton, M., and Aleksander, I. (2012). World-related integrated information: enactivist and phenomenal perspectives. *Int. J. Mach. Conscious.* 4(2), 439-455. Casali, A.G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K.R., Casarotto, S., Bruno, M.A., Laureys, S., Tononi, G., and Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. *Sci. Trans. Med.* 5(198), 198ra105. Coughlan, G.D., Dodd, J.E., and Gripaios, B.M. (2006). *The Ideas of Particle Physics: An Introduction for Scientists*. Cambridge: Cambridge University Press. Ferrarelli, F., Massimini, M., Sarasso, S., Casali, A., Riedner, B.A., Angelini, G., Tononi, G., and Pearce, R.A. (2010). Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness. *Proc. Natl. Acad. Sci. U. S. A.* 107, 2681-2686. Floridi, L. (2009). Philosophical conceptions of information. *Lect. Notes Comput. Sci.* 5363, 13-53. Floridi, L. (2010). *Information: A Very Short Introduction*. Oxford: Oxford University Press. Gamez, D. (2011). Information and consciousness. *Etica Pol.* 13(2), 215-234. Georgi, H. and Glashow, S.L. 
(1974). Unity of all elementary particle forces. *Phys. Rev. Lett.* 32, 438-441. Hawking, S. (2011). *A Brief History Of Time: From Big Bang To Black Holes*. New York, NY: Bantam. Koch, C., and Hepp, K. (2006). Quantum mechanics in the brain. *Nature* 440, 611-612. Koch, C., and Tononi, G. (2013). Can a photodiode be conscious? *New York Review of Books*. New York, NY: Rea S. Hederman. Libet, B. (1994). A testable field theory of mind-brain interaction. *J. Conscious. Stud.* 1(1), 119-126. Massimini, M., Ferrarelli, F., Huber, R., Esser, S.K., Singh, H., and Tononi, G. (2005). Breakdown of cortical effective connectivity during sleep. *Science* 309, 2228-2232. McFadden, J. (2002). The conscious electromagnetic information (cemi) field theory: the hard problem made easy? *J. Conscious. Stud.* 9(8), 45-60. Oerter, R. (2006). *The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics*. New York, NY: Plume. Pockett, S. (2000). *The Nature of Consciousness: A Hypothesis*. Lincoln, NE: iUniverse.com. Pockett, S. (2011). Initiation of intentional actions and the electromagnetic field theory of consciousness. *Hum. Mente* 15, 159-175. Pockett, S. (2012). The electromagnetic field theory of consciousness: a testable hypothesis about the characteristics of conscious as opposed to non-conscious fields. *J. Conscious. Stud.* 19(11-12), 191-223. Pockett, S. (2013). Field theories of consciousness. *Scholarpedia* 8(12), 4951. Searle, J.R. (2013). Can information theory explain consciousness? *New York Review of Books*. New York, NY: Rea S. Hederman. Seth, A.K. (2009). Explanatory correlates of consciousness: theoretical and computational challenges. *Cogn. Comput.* 1(1), 50-63. Tononi, G. (2008). Consciousness as integrated information: a provisional manifesto. *Biol. Bull.* 215(3), 216-242. [^1]: adam.barrett@sussex.ac.uk
--- abstract: 'We tackle the problem of reflectance estimation from a set of multi-view images, assuming known geometry. The approach we put forward turns the input images into reflectance maps, through a robust variational method. The variational model comprises an image-driven fidelity term and a term which enforces consistency of the reflectance estimates with respect to each view. If illumination is fixed across the views, then reflectance estimation remains under-constrained: a regularization term, which ensures piecewise-smoothness of the reflectance, is thus used. Reflectance is parameterized in the image domain, rather than on the surface, which makes the numerical solution much easier, by resorting to an alternating majorization-minimization approach. Experiments on both synthetic and real-world datasets are carried out to validate the proposed strategy.' author: - 'Jean <span style="font-variant:small-caps;">Mélou</span>$^{1,2}$' - 'Yvain <span style="font-variant:small-caps;">Quéau</span>$^3$' - 'Jean-Denis <span style="font-variant:small-caps;">Durou</span>$^1$' - | \ Fabien <span style="font-variant:small-caps;">Castan</span>$^2$ - 'Daniel <span style="font-variant:small-caps;">Cremers</span>$^3$' bibliography: - 'biblio.bib' date: 'Received: date / Accepted: date' title: 'Variational Reflectance Estimation from Multi-view Images' ---
--- abstract: 'The problem of the universal form of the size spectrum is analyzed. The half-widths of the two wings of the spectrum are introduced, and it is shown that their ratio is very close to the golden fraction. In the appendix it is shown that behind the golden fraction of an image one can find an information basis, i.e. the proportion of the golden fraction corresponds to a certain method of finding an extremum. The method of finding extrema associated with the Fibonacci numbers also leads to proportions which can be seen in nature or can be introduced artificially. The information origin of these proportions is argued theoretically and confirmed by examples from nature and human life.' author: - Victor Kurasov date: ' St.Petersburg State University ' title: Golden fraction in the theory of nucleation --- Universal proportion in the form of the size spectrum ===================================================== It is well known that the phenomenon of the «golden fraction» is widely spread in nature. This fact has been supported by numerous measurements over hundreds of years. The most striking feature is that some fundamental proportions in nature satisfy the golden fraction. It is therefore worth seeking the golden fraction in the process of nucleation. The most natural conditions are the dynamic ones. Under these conditions there is a universal form of the size spectrum derived in [@TMF]. The form of the universal spectrum is given by the following formula $$f= \exp(x-\exp(x))$$ in the special coordinates (see [@TMF]) after a special renormalization. The spectrum has the amplitude $$f_{am} = \exp(-1)$$ which is attained at $x=0$. The relaxation length is ordinarily defined as the length over which the function is diminished by a factor of $\exp(1)$. Thus two lengths appear here: one corresponding to the right wing and one corresponding to the left wing. We shall denote them by $-x_1$ and $x_2$.
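The half-widths just defined solve $f(x)=\exp(-2)$, i.e. $\exp(x)-x=2$; in closed form $x=-W_k(-e^{-2})-2$ for the two real branches of the Lambert W function. They are easily checked numerically, e.g. by bisection; the sketch below is our illustration (the bracketing intervals and tolerance are arbitrary choices, not part of the original text).

```python
from math import exp

def half_width(lo, hi, tol=1e-12):
    """Bisection root of exp(x) - x - 2 = 0 on a bracketing interval."""
    g = lambda x: exp(x) - x - 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:   # root lies in [lo, mid]
            hi = mid
        else:                     # root lies in [mid, hi]
            lo = mid
    return 0.5 * (lo + hi)

x_left  = half_width(-3.0, -1.0)   # the paper's -x_1, approx -1.8414
x_right = half_width( 0.5,  2.0)   # the paper's  x_2, approx  1.1462
ratio = x_right / abs(x_left)      # approx 0.6225, vs golden fraction 0.6180
```

The computed ratio $x_2/x_1 \approx 0.622$ differs from the golden fraction $0.618$ by less than one percent, as stated in the text.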
They can be expressed through the Lambert W-function and have the following values $$x_1 = 1.84$$ $$x_2 = 1.14$$ The ratio $x_2/x_1$ is very close to the golden fraction $$x_2 /x_1 = 0.622$$ This value is very close to the precise value of the golden fraction, $0.618$. The relative error is less than one percent. The situation is clarified by fig.1. ![image](spectr2.eps) [ Fig. 1. The form of the universal spectrum and the golden fraction. The horizontal line is $e$ times smaller than the amplitude and cuts the spectrum at the half-widths.]{} There is as yet no clear interpretation of such a good coincidence of this result with the golden fraction. It is quite possible that it is explained by the information origin of the golden fraction, which is derived in the Appendix. Appendix: The role of the information interaction in the golden fraction ======================================================================= The proportions of a human body satisfy the «golden fraction» rule, as has been stated many times, for example, by Pythagoras, Leonardo da Vinci, etc. But the investigations of Adolf Zeising [@Z] showed that only the main proportion of a male body globally satisfies the rule of the «golden fraction». The global proportion of a female body slightly differs from the golden fraction $1.618$: it is $1.60$. Why does this slight deviation take place? The answer to this question will be given below. It is based on the information origin of the golden fraction, which will be analyzed below together with the incomplete fractions that appear as ratios of Fibonacci numbers. Information origin of the golden fraction ----------------------------------------- We suppose that there is some natural process behind the phenomenon of the golden fraction. What process can it be? At the very least, it is the process of observation. Certainly, the process of observation is an information interaction between the observer and the environment.
What purposes are attained in this interaction? We suppose that the observer wants to reconstruct the shape and the content of the image. The points which produce the maximal information are certainly the bifurcation points. The second important class of points is the points of extrema. Bifurcation points compose the shape of the object and form the information background for finding all other characteristics of the object. Within this shape the points of extrema have to be found. So, the primary task is to find extrema. It is known that practically all methods of finding an extremum, other than the simple comparison of function values at different points, contain a one-dimensional procedure as an elementary step in the global procedure [@kusin]. So it is worth considering the one-dimensional procedure of extremum seeking. Consider an elementary interval $[0,1]$. This interval will be the initial interval in which the extremum (maximum) of the known function exists. Our task is to determine the position of this maximum. To state that there is a maximum inside the given interval it is necessary to have at least three points in the interval. Two points will be at the ends of the interval. This is clear because during the sequential procedure we diminish the initial interval, and the boundary points of a new interval will automatically have been measured already. When we have three points we can only state that there is a maximum at the inner point when the function at the inner point is greater than at the boundary points. But we cannot diminish the interval without a fourth point. Let the inner points be $x_1$ and $x_2$. We have to determine the positions of these points. Let $x_1$ be less than $x_2$. When $f(x_1)>f(x_2)$ then we can reduce $[0,1]$ to $[0,x_2]$. When $f(x_1)<f(x_2)$ then $[0,1]$ can be reduced to $[x_1,1]$.
The symmetry requires that $$x_1=1-x_2$$ Then, to determine the position of $x_1$, one can note that in the interval $[0,x_2]$ it will be necessary to place two points, and it would be very profitable if one of these points coincided with $x_1$. This point will be the left point in the interval $[0,x_2]$. Then $$x_1=x_2*x_2$$ or $$x_1=(1-x_1)*(1-x_1)$$ with a root $$x_1=\frac{3-\sqrt{5}}{2}$$ which belongs to $[0,1]$. The value $$1-x_1 = \frac{-1+\sqrt{5}}{2}=0.618$$ is called the golden fraction. This is precisely the golden fraction mentioned at the beginning. So there arises the hypothesis that the golden fraction in nature is associated with a process of observation and with the procedure of seeking an extremum. This is the main idea of this publication. But it is necessary to confirm this observation. This will be done below. Example of inapplicability of the pure golden fraction ------------------------------------------------------ We shall check the method of the golden fraction on the example of seeking an approximate extremum by two measurements in the inner points of the interval. After only two measurements it is necessary to take the final decision. This is the minimal number of measurements, because one measurement cannot specify an interval smaller than the initial one. Simple analysis shows that the smallest final interval is obtained when the two points are $x_1=1/3$, $x_2=2/3$. This does not correspond to the golden fraction, but $x_2=0.6666$ is rather close to $0.618$. Here $$\label{d} x_1=1-x_2=x_2-x_1$$ The reason for the discrepancy is the finite number of measurements. So it is necessary to analyze the optimal procedures for finding an extremum with a finite number of points. The method to find an extremum in a finite number of measurements ---------------------------------------------------------------- One of the oldest methods of finding extrema is the Fibonacci method, described already by Euclid [@kusin].
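The self-similar placement of points derived above, where each new interval reuses one interior point, is precisely the classical golden-section search. Before turning to the Fibonacci method, it can be sketched as follows (a minimal illustration; the unimodal test function and the tolerance are our arbitrary choices, not part of the original text):

```python
from math import sqrt

def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximum of a unimodal f on [a, b] by golden-section search.

    Each step reuses one interior point, so only one new evaluation is
    needed per iteration -- the property that fixes x1 = (3 - sqrt(5))/2.
    """
    r = (sqrt(5) - 1) / 2              # the golden fraction 0.618...
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                    # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                          # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)
```

Because $r^2 = 1 - r$, the surviving interior point of the old interval is automatically one of the two interior points of the new interval, which is exactly the "profitable coincidence" used in the derivation above.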
Let the process be the one-dimensional search for an extremum restricted to $N$ measurements. The process is supposed to be a sequential one, i.e. the observer draws conclusions about the interval of possible values of the argument at every step of the measurements. We shall call this interval the uncertainty interval $I_N$. Now we consider the last measurement $X_N$. It has to be made in the interval $I_{N-1}$. This interval contains the point of extremum and also the point $E_{N-1}$ at which the extremum over all measurements taken so far is attained. If we take the new point of measurement $X_N$ equal or very close to $E_{N-1}$ then we get no new information about the behavior of the function, and such a measurement is useless. So, it is necessary to have a distance between $X_{N}$ and $E_{N-1}$. Certainly, we do not know this distance and speak only about a lower bound $\delta$ for it. The best estimate for $|I_N|$ is obtained when we put $X_N$ symmetrically to $E_{N-1}$ with respect to the middle of the interval $I_{N-1}$. Then $$|I_{N-1}| = 2 |I_{N}| -\delta$$ This completes the step in the recurrent procedure. Now we come to the previous pair of experiments. The interval $I_{N-1}$ contains $E_{N-2}$. In this interval two experiments will be made. The best experiment of this pair will be $E_{N-1}$. The other experiment will be denoted by $D_{N-1}$. This point will be the boundary between two parts of $I_{N-2}$: one part will be included in the further investigations and the other part will be thrown out. But at the beginning of the experiment it is not known which value of the pair will be the best and which will be thrown out. So, these values have to be symmetric with respect to the ends of the interval, i.e. the distances from these points to the ends of the interval have to be equal.
Since both points are symmetric with respect to the middle of the interval, and one of the points will be the optimal $E_{N-1}$, every point has to be at the distance $L_{N-1}$ from an end of the interval. Then $$L_{N-2} = L_{N-1} + L_{N}$$ These recurrent relations are typical of the Fibonacci numbers. It is necessary to check the initial numbers with $N=1$ and $N=2$, but according to (\[d\]) these numbers are equal, and after the renormalization of $L_1$ and $L_2$ we come to $$F_1=1, F_2=1$$ Then $L_N=F_N$ are the Fibonacci numbers. The successive required proportions will be $$F_2/F_3=2/3, \ \ F_3/F_4=3/5, \ \ F_4/F_5=5/8$$ Already $F_4/F_5$ is very close to the golden fraction, and all subsequent fractions approach the golden fraction even more closely. So it is worth considering only the first fractions. One can see that this method is optimal in the case of $N$ measurements. Examples of proportions ----------------------- When human bodies or some other objects in nature have the mentioned proportions, their image can be grasped rather quickly. So one can speak about an increase of the interaction speed. The time necessary to get the approximate image is smaller when the main extreme points of an image coincide with proportions prescribed by the golden fraction or the Fibonacci fractions. If our hypothesis is true then there will be numerous examples of the Fibonacci fractions $F_2/F_3$ and especially $F_3/F_4$. The higher fractions cannot be observed because they are too close to the golden fraction. Indeed, in many cases it is necessary to get the extremum after two or three measurements. As an example one can consider the professions of drivers, hunters, etc., where it is very important to take decisions immediately. So one comes to the conclusion that there exist observers who have the habit of estimating extrema in the first several steps. An object under such observation will correspond to their habits.
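The Fibonacci procedure described in the preceding subsection can be sketched in code, in the idealization $\delta \to 0$. This is our minimal illustration, not part of the original text: with $n$ measurements, the interior points sit at the fractions $F_{n-1}/F_{n+1}$ and $F_n/F_{n+1}$ of the current interval (so that for $n=2$ one recovers $1/3$ and $2/3$, in agreement with (\[d\])), and each step reuses one previous measurement.

```python
def fibonacci_search_max(f, a, b, n):
    """Fibonacci search for the maximum of a unimodal f on [a, b],
    using exactly n function evaluations (offset delta -> 0, n >= 3)."""
    F = [0, 1, 1]                  # F[k] is the k-th Fibonacci number
    while len(F) < n + 2:
        F.append(F[-1] + F[-2])
    # initial interior points at fractions F_{n-1}/F_{n+1} and F_n/F_{n+1}
    x1 = a + F[n - 1] / F[n + 1] * (b - a)
    x2 = a + F[n] / F[n + 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):      # n - 2 further evaluations
        if f1 > f2:                # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 1] / F[n - k + 1] * (b - a)
            f1 = f(x1)
        else:                      # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k] / F[n - k + 1] * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)
```

Each step shrinks the interval by the factor $F_m/F_{m+1}$, so after $n$ evaluations the uncertainty interval is of order $|I_1|/F_{n+1}$, which is the optimality property claimed above.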
As is known from statistical mechanics, additional time spent on a fixed job corresponds to some surplus energy (because a short time corresponds to a nonequilibrium process, which requires surplus energy). So, the construction of an image with ideal proportions is energetically profitable. Now it is clear that the proportion of a female body, $0.60$, corresponds to $F_3/F_4$, and it is explained by the historical role of the hunter in the pre-historical period. Since it was necessary to take decisions immediately, the hunters used to estimate the image in two or three basic points. In contrast to men, women had enough time for observations in their quiet life, and thus the male body has the proportion of the golden fraction corresponding to an infinite number of observations. Certainly, women cannot immediately transform their bodies to the golden fraction proportion in our society, where the professions of men are now rather calm. But later the evolutionary choice will inevitably bring this proportion to the golden fraction. Women with long legs are sexually attractive now and have more chances to have children. So, sooner or later this proportion will come to the golden fraction. But it takes thousands of years, and for now we have the proportion $F_3/F_4$, which is a trace of men’s professions in pre-historical times. One can see the following interesting example confirming this theory. The kouroi in Greece, created before the classical period, have proportions (see fig.2) corresponding to the female fraction $0.60$. ![image](kuros.eps) [ Fig. 2. Example of a kouros. The ratio $b/(a+b)$ is close to $0.60$ ]{} The explanation is very simple: the sculptors and spectators were mainly men, who regarded the female proportions as the sexually attractive ones. Only in the classical period were these proportions reconsidered and brought to the real proportions of a male body. Is it possible to see the first proportion $F_2/F_3$ in a human body? In nature it does not exist.
But it can be seen in artificial images of women’s clothes in fashion-industry imagery (see fig.3). ![image](mode2.eps) [ Fig. 3. Example of a fashion design. The ratio $b/(a+b)$ is close to $0.66$ ]{} The main ratio mentioned here is close to $F_2/F_3$. One can continue this type of example. The different heights of heels help women to modify the main ratio of the body. One and a half or two centimeters of heel give approximately one percent in the ratio. So heels of three to four centimeters transform the ratio $0.60$ into the golden fraction. This corresponds to the «English heel». High heels of 10 centimeters transform the ratio into the fraction $F_2/F_3$. This is the «French heel». The two types of heels give a clear answer on the applicability of the Fibonacci ratios. Women evidently vote by their heels for the information basis of the harmonic proportions in nature. As a result of the given considerations one can state that the information origin of the appearance of the golden fraction is now clarified. If we start from the principle of minimal energy we can derive the golden fraction analytically, since every observation requires some time and, thus, some additional energy. The facts arising from the incomplete golden fractions, i.e. from the Fibonacci ratios, show experimentally that behind the golden proportions lies the Fibonacci method of extremum determination. One can also mention that it is now clear why the waist line is ordinarily outlined in women’s fashion. Indeed, the waist line goes approximately three centimeters higher than the umbilicus point, which brings the ratio $0.60=F_3/F_4$ to the golden fraction. One can also see that the spatial sequence of different Fibonacci proportions introduces a sequence of different times for the observation of these proportions. Thus there appears a connection between the spatial image and the sequence of times (or the melody) of observation.
This allows one to speak about the space-time connection and about the melody of paintings. [99]{} Kurasov V.B. Theoretical and mathematical physics, 2002, 131:3, 503–528 Adolf Zeising, Neue Lehre von den Proportionen des menschlichen Körpers aus einem bisher unerkannt gebliebenen, die ganze Natur und Kunst durchdringenden morphologischen Grundgesetze entwickelt, Leipzig, 1854 Kusin L.T. Foundations of cybernetics, vol.1, Moscow, Energy, 1973, 504 p.
--- abstract: 'We obtain explicit expressions for all genus one chiral $n$-point functions for free bosonic and lattice vertex operator algebras. We also consider the elliptic properties of these functions.' author: - | Geoffrey Mason[^1]\ Department of Mathematics,\ University of California Santa Cruz,\ CA 95064, U.S.A. - | Michael P. Tuite[^2]\ Department of Mathematical Physics,\ National University of Ireland,\ Galway, Ireland.\ and\ School of Theoretical Physics,\ Dublin Institute for Advanced Studies,\ 10 Burlington Road, Dublin 4, Ireland. title: 'Torus Chiral $n$-Point Functions for Free Boson and Lattice Vertex Operator Algebras' --- Introduction ============ This is the first of several papers devoted to a detailed and mathematically rigorous study of chiral $n$-point functions at all genera. Given a vertex operator algebra (VOA) $V$ (i.e. a chiral conformal field theory) one may define $n$-point functions at genus one following Zhu [@Z] and use various sewing procedures to define such functions at successively higher genera. In order to implement such a procedure in practice, one needs a detailed description of the genus one functions. This itself is a non-trivial issue, and little seems to be currently rigorously known beyond certain global descriptions for some specific theories [@DMN], [@DM]. The purpose of the present paper is to supply the needed information in case $V$ is either a free bosonic or lattice VOA. More precisely, if $V$ is a free bosonic Heisenberg or lattice VOA, $N$ a $V$-module, and $v_{1},\ldots ,v_{n}$ states in $V$, we establish a closed formula below in Theorem \[Big Theorem\] for the genus one $n$-point function $F_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};\tau )$. Roughly speaking, in the free boson case the $n$-point functions are elliptic functions whose detailed structure depends on certain combinatorial data determined by the states $v_{1},\ldots ,v_{n}$. 
In the lattice case, the function is naturally the product of two pieces, one determined by the Heisenberg subalgebra and one which may be described in terms of the lattice and the genus one prime form. We note that the role played by elliptic functions and the prime form in calculating genus one $n$-point functions in string theory has long been discussed by physicists, e.g. [@D; @P], but a rigorous and complete description of these $n$-point functions has been lacking until now. The paper is organized as follows. We begin in Section 2 with a brief review of relevant aspects of free bosonic Heisenberg and even lattice vertex operator algebras. Section 3 contains the main results of this work. We begin with a discussion of free bosonic and lattice VOAs of rank one and later generalise to the rank $l$ case. We first use a recursion formula for $n$-point functions due to Zhu [@Z] to demonstrate that every lattice $n$-point function is a product of a part determined by the free bosonic Heisenberg sub-VOA and a part dependent on lattice vectors only. We also obtain an explicit expression for every free bosonic $n$-point function $F_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};\tau )$ in terms of a combinatorial sum over specific elliptic functions labelled by data determined by the states $v_{1},\ldots ,v_{n}$. We next describe the $n$-point functions for pure lattice states. This involves the identification of such $n$-point functions as a sum of appropriate weights over a certain set of graphs. This combinatorial approach then leads to a closed expression for all such $n$-point functions in terms of the lattice vectors and the genus one prime form. Finally we conclude the section with Theorem \[Big Theorem\], which gives the expression for every rank $l$ lattice $n$-point function. Section 4 concludes the paper with a discussion of these $n$-point functions from the point of view of their symmetry and elliptic properties.
This provides some further insight into the nature of the explicit formulas obtained for $n$-point functions in Section 3. We collect here notation for some of the more frequently occurring functions and symbols that will play a role in our work. $\mathbb{N}=\{1,2,3,\ldots \}$ is the set of positive integers, $\mathbb{Z}$ the integers, $\mathbb{C}$ the complex numbers, $\mathbb{H}$ the complex upper-half plane. We will always take $\tau $ to lie in $\mathbb{H}$, and $z$ will lie in $\mathbb{C}$ unless otherwise noted. We set $q_{z}=\exp (z)$ and $q=q_{2\pi i\tau }=\exp (2\pi i\tau )$. For $n$ symbols $z_{1},\ldots ,z_{n}$ we also set $q_{i}=\exp (z_{i})$ and $z_{ij}=z_{i}-z_{j}$. We now define some elliptic and modular functions. Let $\wp (z,\omega _{1},\omega _{2})$ denote the Weierstrass elliptic $\wp $-function with periods $\omega _{1},\omega _{2}$ and set $$\wp (z,2\pi i,2\pi i\tau )=\frac{1}{z^{2}}+\sum_{k=4,k\text{ even}}^{\infty }(k-1)E_{k}(\tau )z^{k-2}, \label{Weierstrass_function}$$ so that $$E_{k}(\tau )=-\frac{B_{k}}{k!}+\frac{2}{(k-1)!}\sum_{n=1}^{\infty }\sigma _{k-1}(n)q^{n},\quad k\in \mathbb{N},\quad k\text{ even} \label{Eisensteink}$$ is the Eisenstein series of weight $k$ normalized as in [@DLM]; $B_{k}$ is a certain Bernoulli number and $\sigma _{k-1}(n)$ a power sum over positive divisors of $n$. Also set $$E_{k}(\tau )=0,\quad k\in \mathbb{N},\quad k\text{ odd.} \label{eisensteinkodd}$$ We define $$P_{0}(z,\tau )=-\log z+\sum_{k=2}^{\infty }\frac{1}{k}E_{k}(\tau )z^{k}, \label{P0}$$ related to the genus one prime form $K(z,\tau )$ [@Mu] by $$K(z,\tau )=\exp (-P_{0}(z,\tau )).
\label{Primeform}$$ We further define $$P_{n}(z,\tau )=\frac{(-1)^{n}}{(n-1)!}\frac{d^{n}}{dz^{n}}P_{0}(z,\tau )=\frac{1}{z^{n}}+(-1)^{n}\sum_{k=2}^{\infty }\binom{k-1}{n-1}E_{k}(\tau )z^{k-n}. \label{Pndefn}$$ Note that $P_{1}(z,\tau )=\varsigma (z,2\pi i,2\pi i\tau )-E_{2}(\tau )z$, for $\varsigma $ the Weierstrass zeta-function and $P_{2}(z,\tau )=\wp (z,2\pi i,2\pi i\tau )+E_{2}(\tau )$. We note two expansions for $P_{2}$: $$\begin{aligned} P_{2}(z-w,\tau ) &=&\frac{1}{(z-w)^{2}}+\sum_{r,s\in \mathbb{N}}C(r,s,\tau )z^{r-1}w^{s-1}, \label{P2expansion1} \\ P_{2}(z+w_{1}-w_{2},\tau ) &=&\sum_{r,s\in \mathbb{N}}D(r,s,z,\tau )w_{1}^{r-1}w_{2}^{s-1}. \label{P2expansion2}\end{aligned}$$ (expanding the latter in $w_{1},w_{2}$) so that for $r,s\in \mathbb{N}$, $$\begin{aligned} C(r,s) &=& C(r,s,\tau )=(-1)^{r+1}\frac{(r+s-1)!}{(r-1)!(s-1)!}E_{r+s}(\tau ), \label{C(r,s)} \\ D(r,s,z) &=& D(r,s,z,\tau ) =(-1)^{r+1}\frac{(r+s-1)!}{(r-1)!(s-1)!}P_{r+s}(z,\tau ). \label{D(r,s)}\end{aligned}$$ We also define for $r\in \mathbb{N}$, $$\begin{aligned} C(r,0) &=&C(r,0,\tau )=(-1)^{r+1}E_{r}(\tau ), \label{C(r,0)} \\ D(r,0,z) &=& D(r,0,z,\tau ) =(-1)^{r+1}P_{r}(z,\tau ). \label{D(r,0)}\end{aligned}$$ The Dedekind eta-function is defined by $$\eta (\tau )=q^{1/24}\prod_{n=1}^{\infty }(1-q^{n}). \label{etafun}$$ Finally, for a (finite) set $\Phi $ we denote by $\Sigma (\Phi )$ the symmetric group consisting of all permutations of $\Phi $. Set $$\begin{aligned} \mathrm{Inv}(\Phi ) &=&\{\sigma \in \Sigma (\Phi )|\sigma ^{2}=1\},\quad \text{(\textit{involutions} of }\Sigma (\Phi )\text{)},\text{ } \label{InvPhi} \\ \mathrm{Fix}(\sigma ) &=&\{x\in \Phi |\sigma (x)=x\},\quad \text{(\textit{fixed-points} of }\sigma \text{)}, \label{Fsigma} \\ F(\Phi ) &=&\{\sigma \in \mathrm{Inv}(\Phi )|\mathrm{Fix}(\sigma )=\emptyset \},\quad \text{(\textit{fixed-point-free }involutions\textit{\ }of }\Sigma (\Phi )\text{)}.
\nonumber \\ && \label{FPhi}\end{aligned}$$ Vertex Operator Algebras ======================== We discuss some aspects of VOA theory to establish context and notation. For more details, see [@FHL], [@FLM], [@Ka], [@MN]. A vertex operator algebra (VOA) is a quadruple $(V,Y,\mathbf{1},\omega )$ consisting of a $\mathbb{Z}$-graded complex vector space $V=\bigoplus_{n\in \mathbb{Z}}V_{n}$, a linear map $Y:V\rightarrow (\mathrm{End}V)[[z,z^{-1}]]$, and a pair of distinguished vectors (states): the vacuum $\mathbf{1}$ in $V_{0}$, and the conformal vector $\omega $ in $V_{2}$. We adopt mathematical rather than physical notation for vertex operators, so that for a state $v$ in $V$, its image under the $Y$ map is denoted $$Y(v,z)=\sum_{n\in \mathbb{Z}}v(n)z^{-n-1}, \label{Ydefn}$$ with component operators (or Fourier modes) $v(n)\in\mathrm{End}V$ and where $Y(v,z).\mathbf{1}|_{z=0}=v(-1).\mathbf{1}=v$. We generally take $z$ to be a formal variable. A concession to physics notation is made concerning the vertex operator for the conformal vector $\omega $, where we write $$Y(\omega ,z)=\sum_{n\in \mathbb{Z}}L(n)z^{-n-2}.$$ The modes $L(n)$ close on the Virasoro Lie algebra of central charge $c$: $$\lbrack L(m),L(n)]=(m-n)L(m+n)+(m^{3}-m)\frac{c}{12}\delta _{m,-n}.$$ We define the homogeneous space of weight $k$ to be $V_{k}=\{v\in V|L(0)v=kv\}$ where for $v$ in $V_{k}$ we write $wt(v)=k$. Then as an operator on $V$ we have $$v(n):V_{m}\rightarrow V_{m+k-n-1}.$$ In particular, the *zero mode* $o(v)=v(wt(v)-1)$ is a linear operator on each homogeneous space of $V$. Next we consider some particular VOAs, namely Heisenberg VOAs (or free boson theories), and lattice VOAs. We consider an $l$-dimensional complex vector space (i.e., abelian Lie algebra) $\mathfrak{H}$ equipped with a non-degenerate, symmetric, bilinear form $(,)$ and a distinguished orthonormal basis $a_{1},a_{2},\ldots ,a_{l}$.
The corresponding affine Lie algebra is the Heisenberg Lie algebra $\mathfrak{\hat{H}}=\mathfrak{H}\otimes \mathbb{C}[t,t^{-1}]\oplus \mathbb{C}k$ with brackets $[k,\mathfrak{\hat{H}}]=0$ and $$\lbrack a\otimes t^{m},b\otimes t^{n}]=(a,b)m\delta _{m,-n}k. \label{Fockbracket}$$ Corresponding to an element $\lambda $ in the dual space $\mathfrak{H}^{*}$ we consider the Fock space defined by the induced (Verma) module $$M^{\lambda }=U(\mathfrak{\hat{H}})\otimes _{U(\mathfrak{H}\otimes \mathbb{C}[t]\oplus \mathbb{C}k)}\mathbb{C},$$ where $\mathbb{C}$ is the $1$-dimensional space annihilated by $\mathfrak{H}\otimes t\mathbb{C}[t]$ and on which $k$ acts as the identity and $\mathfrak{H}\otimes t^{0}$ via the character $\lambda $; $U$ denotes the universal enveloping algebra. There is a canonical identification of linear spaces $$M^{\lambda }=S(\mathfrak{H}\otimes t^{-1}\mathbb{C}[t^{-1}]),$$ where $S$ denotes the (graded) symmetric algebra. The Heisenberg VOA $M$ corresponds to the case $\lambda =0$ and the Fock states $$v=a_{1}(-1)^{e_{1}}.a_{1}(-2)^{e_{2}}\ldots a_{1}(-n)^{e_{n}}.a_{l}(-1)^{f_{1}}.a_{l}(-2)^{f_{2}}\ldots a_{l}(-p)^{f_{p}}.\mathbf{1,} \label{Fockstate}$$ for non-negative integers $e_{i},\ldots ,f_{j}$ form a basis of $M$. The vacuum $\mathbf{1}$ is canonically identified with the identity of $M_{0}=\mathbb{C}$, while the weight 1 subspace $M_{1}$ may be naturally identified with $\mathfrak{H}$. The vertex operator corresponding to $h$ in $\mathfrak{H}$ is given by $$Y(h,z)=\sum_{n\in \mathbb{Z}}h(n)z^{-n-1}, \label{Yh}$$ where $h(n)$ is the usual operator on $M$. $M$ is a simple VOA. Next we consider the case of lattice VOAs $V_{L}$ associated to a positive-definite, even lattice $L$ ([@B], [@FLM]). Thus $L$ is a free abelian group of rank $l$, say, equipped with a positive definite, integral bilinear form $(,):L\otimes L\rightarrow \mathbb{Z}$ such that $(\alpha ,\alpha )$ is even for $\alpha \in L$.
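As a side illustration (ours, not from the paper): for an integral symmetric Gram matrix $G$ one has $(\alpha ,\alpha )=\alpha ^{T}G\alpha \equiv \sum_{i}\alpha _{i}G_{ii}\bmod 2$, so the lattice is even precisely when every diagonal entry of $G$ is even. A minimal Python sketch, taking the $A_{2}$ root lattice as a sample even lattice:

```python
# Illustrative sketch (not from the paper): evenness of a lattice given an
# integral symmetric Gram matrix G.  Since alpha_i^2 = alpha_i (mod 2), the
# norm alpha^T G alpha is even for all integer alpha iff every G[i][i] is even.

def norm(G, alpha):
    """Inner product (alpha, alpha) = alpha^T G alpha."""
    l = len(alpha)
    return sum(alpha[i] * G[i][j] * alpha[j] for i in range(l) for j in range(l))

def is_even_lattice(G):
    """Even iff all diagonal entries of the (integral, symmetric) Gram matrix are even."""
    return all(G[i][i] % 2 == 0 for i in range(len(G)))

# A2 root lattice: Gram matrix of the two simple roots.
A2 = [[2, -1], [-1, 2]]
assert is_even_lattice(A2)
# A root alpha has (alpha, alpha) = 2, hence e^alpha has conformal weight 1.
assert norm(A2, (1, 0)) // 2 == 1
```

The same Gram data then yields the conformal weight $(\alpha ,\alpha )/2$ of the state $e^{\alpha }$ in the lattice theory.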
Let $\mathfrak{H}$ be the space $% \mathbb{C}\otimes _{\mathbb{Z}}L$ equipped with the $\mathbb{C}$-linear extension of $% (,)$ to $\mathfrak{H}\otimes \mathfrak{H}$ and let $M$ be the corresponding Heisenberg VOA. The Fock space of the lattice theory may be described by the linear space $$V_{L}=M\otimes \mathbb{C}[L]=\sum_{\alpha \in L}M\otimes e^{\alpha }, \label{VLdefn}$$ where $\mathbb{C}[L]$ denotes the group algebra of $L$ with canonical basis $% e^{\alpha }$, $\alpha \in L$. $M$ may be identified with the subspace $% M\otimes e^{0}$ of $V_{L}$, in which case $M$ is a subVOA of $V_{L}$ and the rightmost equation of (\[VLdefn\]) then displays the decomposition of $% V_{L}$ into irreducible $M$-modules. We identify $e^{\alpha }$ with the element $\mathbf{1}\otimes e^{\alpha }$ in $V_{L}$; each of the elements $% e^{\alpha }$ is a primary state of weight $(\alpha ,\alpha )/2$. The vertex operator for $h$ in $\mathfrak{H}$ is again represented in the obvious way by (\[Yh\]). The vertex operator for $e^{\alpha }$ is more complicated (loc. cit.) and is given by $$\begin{aligned} Y(e^{\alpha },z) &=&Y_{-}(e^{\alpha },z)Y_{+}(e^{\alpha },z)e^{\alpha }z^{\alpha }, \nonumber \\ Y_{\pm }(e^{\alpha },z) &=&\exp (\mp \sum_{n>0}\frac{\alpha (\pm n)}{n}% z^{\mp n}). \label{Yealpha}\end{aligned}$$ (The slight inconsistency in notation is more than compensated by its convenience). The operators $e^{\alpha }\in \mathbb{C}[L]$ have group commutator $$e^{\alpha }e^{\beta }e^{-\alpha }e^{-\beta }=(-1)^{(\alpha ,\beta )}, \label{ealphacomm}$$ and $e^{\alpha },z^{\alpha }$ act on any state $u\otimes e^{\beta }\in V_{L}$ as $$\begin{aligned} e^{\alpha }(u\otimes e^{\beta }) &=&\epsilon (\alpha ,\beta )u\otimes e^{\alpha +\beta }, \label{ealpha} \\ z^{\alpha }(u\otimes e^{\beta }) &=&z^{(\alpha ,\beta )}u\otimes e^{\beta }, \label{zalpha}\end{aligned}$$ for cocycle $\epsilon (\alpha ,\beta )=\pm 1$. 
This cocycle can be chosen so that [@FLM] $$\begin{aligned} \epsilon (\alpha ,\beta +\gamma ) &=&\epsilon (\alpha ,\beta )\epsilon (\alpha ,\gamma ), \label{cocycleproduct} \\ \epsilon (\alpha ,-\alpha ) &=&\epsilon (\alpha ,\alpha )=1. \label{cocycleunity}\end{aligned}$$ In the context of his theory of modular-invariance for $n$-point functions at genus 1, Zhu introduced in [@Z] a second VOA $(V,Y[,],\mathbf{1},\tilde{\omega})$ associated to a given VOA $(V,Y(,),\mathbf{1},\omega )$. This will be important in the present paper, and we review some aspects of the construction here. The underlying Fock space of the second VOA is the same space $V$ as the first; moreover they share the same vacuum vector $\mathbf{1}$ and have the same central charge. The new vertex operators are defined by a change of co-ordinates [^3], namely $$Y[v,z]=\sum_{n\in \mathbb{Z}}v[n]z^{-n-1}=Y(q_{z}^{L(0)}v,q_{z}-1), \label{Ysquare}$$ while the new conformal vector $\tilde{\omega}$ is defined to be the state $\omega -\frac{c}{24}\mathbf{1}$. We set $$Y[\tilde{\omega},z]=\sum_{n\in \mathbb{Z}}L[n]z^{-n-2} \label{Ywtilde}$$ and write $wt[v]=k$ if $L[0].v=kv$, $V_{[k]}=\{v\in V|wt[v]=k\}$. States homogeneous with respect to the first degree operator $L(0)$ are *not* necessarily homogeneous with respect to $L[0]$. On the other hand, it transpires (cf. [@Z], [@DLM]) that the two Virasoro algebras enjoy the *same* set of primary states. We have $L[-1]=L(0)+L(-1)$, which leads to the useful relation $$o(L[-1]v)=0, \label{oLminusone}$$ for any state $v$. Inasmuch as the co-ordinate change $z\rightarrow q_{z}=\exp (z)$ maps the complex plane to an infinite cylinder, we sometimes refer to the VOA as being ‘*on the sphere*’, or ‘*on the cylinder*’. The Heisenberg VOA $M$ is a simple example where there is not too much difference between being on the sphere or the cylinder.
This is basically because $M$ is generated by its weight 1 states which are primary for both Virasoro algebras, and because we have $u[1]v=u(1)v=(u,v)\mathbf{1}$ and the commutator formula $$\lbrack u[m],v[n]]=m(u,v)\delta _{m,-n}, \label{bosonsq}$$ for weight 1 states $u,v\in M$ (cf. [@Z], [@DMN] for more details). Torus $n$-point Functions ========================= In this section we will consider $n$-point functions at genus one. A general reference is Zhu’s paper [@Z]. Let $(V,Y,\mathbf{1},\omega )$ be a vertex operator algebra as discussed in section 2 with $N$ a $V$-module. Recall (loc. cit.) that for states $v_{1},\ldots v_{n}\in V$ , the $n$-point function on the torus determined by $N$ is $$F_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)=Tr_{N}Y(q_{1}^{L(0)}v_{1},q_{1})\ldots Y(q_{n}^{L(0)}v_{n},q_{n})q^{L(0)-c/24}, \label{npointfunction}$$ where $q_{i}=q_{z_{i}}$, $1\leq i\leq n$, for auxiliary variables $% z_{1},...,z_{n}$. (\[npointfunction\]) incorporates some cosmetic changes compared to [@Z]: we have adorned our $n$-point functions with an extra factor $q^{-c/24}$ and omitted a factor of $2\pi i$ from the variables $% z_{i} $ (cf. footnote to (\[Ysquare\])). In case $n=1$, (\[npointfunction\]) is the usual trace function which we will often denote by $$Z_{N}(v_{1},\tau )=Tr_{N}o(v_{1})q^{L(0)-c/24}$$ where $o(v_{1})$ again denotes the zero mode, now for the vertex operator $Y(v_{1},z)$ acting on $N$. Note that this trace is independent of $z_{1}$. Taking all $v_{i}=\mathbf{1}$ in (\[npointfunction\]) yields the genus one partition function for $N$: $$Z_{N}(\tau )=Tr_{N}q^{L(0)-c/24}=q^{-c/24}\sum_{m\geq 0}\dim N_{m}q^{m}. \label{ZN}$$ where $N_{m}$ is the subspace of $N$ of homogeneous vectors of conformal weight $m$. 
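As a concrete check of (\[ZN\]) (an illustration of ours, not part of the paper's argument): for the rank one Heisenberg VOA $M$ (with $c=1$) one has $\dim M_{m}=p(m)$, the number of partitions of $m$, so the genus one partition function is $Z_{M}(\tau )=q^{-1/24}\sum_{m\geq 0}p(m)q^{m}=1/\eta (\tau )$. The following Python sketch compares direct partition counting with the coefficients of $\prod_{n\geq 1}(1-q^{n})^{-1}$:

```python
# Illustrative check (not from the paper): for the rank 1 Heisenberg VOA M,
# dim M_m = p(m), so by (ZN) the partition function is
# Z_M(tau) = q^{-1/24} sum_m p(m) q^m = 1/eta(tau).
# We compare p(m) computed directly with the coefficients of prod (1-q^n)^{-1}.

def partitions(m, largest=None):
    """Count partitions of m into parts of size at most `largest`."""
    if largest is None:
        largest = m
    if m == 0:
        return 1
    return sum(partitions(m - k, k) for k in range(1, min(m, largest) + 1))

def eta_inverse_coeffs(N):
    """Coefficients of q^0 .. q^N in prod_{n>=1} (1 - q^n)^{-1}."""
    coeffs = [1] + [0] * N
    for n in range(1, N + 1):
        # multiply the series by (1 - q^n)^{-1} = 1 + q^n + q^{2n} + ...
        for m in range(n, N + 1):
            coeffs[m] += coeffs[m - n]
    return coeffs

N = 10
assert eta_inverse_coeffs(N) == [partitions(m) for m in range(N + 1)]
assert partitions(5) == 7  # 5, 41, 32, 311, 221, 2111, 11111
```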
The following result, which we use later, holds: \[lemma3.1\] For states $v_{1},v_{2},\ldots ,v_{n}$ as above we have $$\begin{aligned} &&F_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)\nonumber\\ &&=Z_{N}(Y[v_{1},z_{1n}].Y[v_{2},z_{2n}]\ldots Y[v_{n-1},z_{n-1n}].v_{n},\tau ) \label{Fnziminuszn} \\ &&=Z_{N}(Y[v_{1},z_{1}].Y[v_{2},z_{2}]\ldots Y[v_{n},z_{n}].\mathbf{1},\tau ) \label{Fnz1zn}\end{aligned}$$ where $z_{ij}=z_{i}-z_{j}$. **Proof**: Recall notation from section 2 for vertex operator algebras on the cylinder. Lemma \[lemma3.1\] is implicit in [@Z], section 4, especially eqn. (4.4.21). We will give a direct proof in the case $n=2$ based on the associativity of vertex operators. The general case follows in similar fashion. Associativity tells us ([@FHL], Proposition 3.3.2) that $$Tr_{N}Y(v_{1},z_{1})Y(v_{2},z_{2})q^{L(0)}=Tr_{N}Y(Y(v_{1},z_{12})v_{2},z_{2})q^{L(0)}. \label{Yassociativity}$$ We also have ([@FHL], eqn.(2.6.4)) $$e^{xL(0)}Y(v,y)e^{-xL(0)}=Y(e^{xL(0)}v,e^{x}y). \label{L0scaling}$$ By (\[Yassociativity\]) and (\[L0scaling\]) it follows that the left-hand-side of (\[Fnz1zn\]) is equal to $$\begin{aligned} &&Tr_{N}Y(Y(q_{1}^{L(0)}v_{1},q_{1}-q_{2})q_{2}^{L(0)}v_{2},q_{2})q^{L(0)-c/24} \\ &=&Tr_{N}Y(q_{2}^{L(0)}Y(q_{z_{12}}^{L(0)}v_{1},q_{z_{12}}-1).v_{2},q_{2})q^{L(0)-c/24} \\ &=&Tr_{N}Y(q_{2}^{L(0)}Y[v_{1},z_{12}].v_{2},q_{2})q^{L(0)-c/24} \\ &=&Z_{N}(Y[v_{1},z_{12}].v_{2},\tau ),\end{aligned}$$ as required. $\smallskip$ Finally, using [@FHL], eqn.(2.3.17), we have in general that $$e^{xL(-1)}Y(v,y)e^{-xL(-1)}=Y(v,y+x). \label{Ttranslation}$$ Hence (\[Fnz1zn\]) follows from $$\begin{aligned} o(Y[v_{1},z_{12}].v_{2}) &=&o(Y[v_{1},z_{12}].e^{-z_{2}L[-1]}.Y[v_{2},z_{2}].\mathbf{1}) \\ &=&o(e^{-z_{2}L[-1]}.Y[v_{1},z_{1}].Y[v_{2},z_{2}].\mathbf{1}) \\ &=&o(Y[v_{1},z_{1}].Y[v_{2},z_{2}].\mathbf{1}),\end{aligned}$$ and using (\[oLminusone\]).
$\qed $ Recall from the previous section the notation for states and modules for Heisenberg and lattice vertex operator algebras. We are going to develop explicit formulas for $n$-point functions in these cases. The final answer is quite elaborate, so we begin with the rank $1$ case. The general case will proceed in exactly the same manner. We fix the following notation: $L$ is a rank $l=1$ even lattice with inner product $(,)$, $M$ the corresponding Heisenberg vertex operator algebra based on the complex space $\mathfrak{H}=\mathbb{% C}\otimes _{\mathbb{Z}}L$, $a\in \mathfrak{H}$ satisfies $(a,a)=1$, $N=M\otimes e^{\beta }$ is a simple $M$-module with $\beta \in L$, $h=(\beta ,\beta )/2$ the conformal weight of the highest weight vector of $N$. We will establish a closed formula for $n$-point expressions of the form $$F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q), \label{nptlattice}$$ where $\alpha _{1},\ldots ,\alpha _{n}\in L$ and $v_{1},\ldots ,v_{n}$ are elements in the canonical Fock basis (\[Fockstate\]) of $M$ on the cylinder. Thus, $v_{1}=a[-1]^{e_{1}}a[-2]^{e_{2}}\ldots $, etc. Note that the individual vertex operators $Y(v_{i}\otimes e^{\alpha _{i}},z_{i})$, $% 1\leq i\leq n$ do not generally act on the module $N$, however their composite does as long as $\alpha _{1}+\ldots +\alpha _{n}=0$. We will always assume that this is the case. It transpires that (\[nptlattice\]) factors as $$Q_{N}.F_{N}(\mathbf{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;\mathbf{1}% \otimes e^{\alpha _{n}},z_{n};q), \label{QNFN}$$ where $Q_{N}$ is independent of the $\alpha _{i}$, and our main task will be to elucidate the structure of the two factors $Q_{N}$ and $F_{N}$. Our results generalize the calculations in [@DMN], which dealt with the case $n=1,\alpha _{1}=0$. We turn to the precise description of $Q_{N}$. 
Consider first a Fock state $v\in M$ given by $$v=a[-1]^{e_{1}}\ldots a[-p]^{e_{p}}.\mathbf{1} \label{vstate}$$ where $e_{1},\ldots ,e_{p}$ are non-negative integers. The state $v$ is determined by a multi-set or *labelled set*, which consists of $e_{1}+e_{2}+\ldots +e_{p}$ elements, the first $e_{1}$ of which are labelled $1$, the next $e_{2}$ labelled $2$, etc. In this way, each of the states $v_{i}$ in (\[nptlattice\]) is associated with a labelled set $\Phi _{i}$, and we let $\Phi =\Phi _{1}\cup \ldots \cup \Phi _{n}$ denote the disjoint union of the $\Phi _{i}$, itself a labelled set. For convenience we often specify an element of $\Phi $ by its label: the reader should bear in mind that this expedient can be misleading because there are generally several distinct elements with the same label. An element $\varphi \in \mathrm{Inv}(\Phi )$ (cf. (\[InvPhi\])), considered as a permutation of $\Phi $, may be represented as a product of cycles, each of length $1$ or $2$: $$\varphi =(r_{1}s_{1})\ldots (r_{b}s_{b})(t_{1})\ldots (t_{c}). \label{phiInv}$$ (\[phiInv\]) tells us that $\Phi =\{r_{1},s_{1},\ldots ,r_{b},s_{b},t_{1},\ldots ,t_{c}\}$, while $\varphi $ exchanges elements with labels $r_{i}$ and $s_{i}$, and fixes elements with labels $t_{1},\ldots ,t_{c}$. Notice that involutions may produce the same permutation of labels yet correspond to distinct permutations of $\Phi $. We will always consider such involutions to be distinct, regardless of labels. Recall the definitions (\[C(r,s)\]) to (\[D(r,0)\]). Let $\Xi $ be a subset of $\Phi $ with $|\Xi |\leq 2$. If $\Xi =\{r,s\}$ has size $2$ with $r\in \Phi _{i},s\in \Phi _{j}$, we define $$\gamma (\Xi )=\left\{ \begin{array}{c} D(r,s,z_{ij},\tau ),\quad i\neq j \\ C(r,s,\tau ),\quad i=j. \end{array} \right. \label{GammaX}$$ Note that $D(r,s,z,\tau )=D(s,r,-z,\tau )$, so that the order in which the arguments $r,s$ appear is of no relevance.
If $\Xi =\{r\}\subseteq \Phi _{k}$ we define $$\gamma (\Xi )=(a,\delta _{r,1}\beta +C(r,0,\tau )\alpha _{k}+\sum_{l>k}D(r,0,z_{kl},\tau )\alpha _{l}). \label{gammaPhik}$$ For $\varphi \in \mathrm{Inv}(\Phi )$ set $$\Gamma (\varphi )=\prod_{\Xi }\gamma (\Xi ) \label{GammaphiInv}$$ where the product ranges over all orbits (cycles) of $\varphi $ in its action on $\Phi $. Finally, set $$Q_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)=\sum_{\varphi \in \mathrm{Inv}(\Phi )}\Gamma (\varphi ) \label{Qnv1vn}$$ We can now formally state our first main result about $n$-point functions. \[Propgennpt\] Let $v_{1},\ldots ,v_{n}$ be states of the form (\[vstate\]) in the rank $1$ free boson theory $M$ and let $\Phi $ be the labelled set determined (as above) by these states. Then the following holds for lattice elements $\alpha _{1},\ldots ,\alpha _{n}$ satisfying $\alpha _{1}+\ldots +\alpha _{n}$ $=0$: $$\begin{aligned} &&F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &=&Q_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)F_{N}(\mathbf{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;\mathbf{1}\otimes e^{\alpha _{n}},z_{n};q). \label{Fnlattice}\end{aligned}$$ **Proof:** The idea is to carefully examine a recursion formula for $n$-point functions due to Zhu ([@Z], Proposition 4.3.3). Bearing in mind the differences in notation between the present paper and [@Z], we quote the following: \[LemZhurec\] Assume that $u_{1},\ldots ,u_{n},b$ are states in a vertex operator algebra $V$, that $N$ is a $V$-module, and that $o(b)$ acts as a scalar on $N$. 
Then $$\begin{aligned} &&F_{N}(b[-1]u_{1},z_{1};\ldots ;u_{n},z_{n};q) \nonumber \\ &&=Tr_{N}o(b)Y(q_{1}^{L(0)}u_{1},q_{1})\ldots Y(q_{n}^{L(0)}u_{n},q_{n})q^{L(0)-c/24} \nonumber \\ &&+\sum_{k\geq 1}E_{2k}(\tau )F_{N}(b[2k-1]u_{1},z_{1};u_{2},z_{2};\ldots ;u_{n},z_{n};q) \nonumber \\ &&+\sum_{m\geq 0}\sum_{k=2}^{n}(-1)^{m+1}P_{m+1}(z_{k1},\tau )F_{N}(u_{1},z_{1};\ldots ;b[m]u_{k},z_{k};...;u_{n},z_{n};q) \nonumber \\ &&-\frac{1}{2}% \sum_{k=1}^{n}F_{N}(u_{1},z_{1};...;b[0]u_{k},z_{k};...;u_{n},z_{n};q). \label{FNb1Zhu}\end{aligned}$$ $\qed $ We apply this result in the case that $N$ and $u_{i}=v_{i}\otimes e^{\alpha _{i}}$ are as discussed in (\[nptlattice\]) - (\[Qnv1vn\]), with $b=a$. The zero mode of $a$ acts on $N$ as multiplication by the scalar $(a,\beta )$ and $a[0].v_{i}\otimes e^{\alpha _{i}}=(a,\alpha _{i})v_{i}\otimes e^{\alpha _{i}}$. As a result of $\alpha _{1}+\ldots +\alpha _{n}$ $=0$ it follows that the last summand in (\[FNb1Zhu\]) vanishes, and (\[FNb1Zhu\]) then reads $$\begin{aligned} &&F_{N}(a[-1]v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&=(a,\beta +\sum_{k=2}^{n}D(1,0,z_{1k})\alpha _{k})F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&+\sum_{k\geq 1}C(2k-1,1)F_{N}(\hat{v}_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&+\sum_{m\geq 1}\sum_{k=2}^{n} D(m,1,z_{k1})F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots \hat{v}_{k}\otimes e^{\alpha _{k}},z_{k};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q). \nonumber \\ && \label{FNa1Zhu}\end{aligned}$$ Here, we have used the (admittedly uninformative) notation $\hat{v}_{1}$ in the second summand to indicate that a factor $a[-2k+1]$ should be removed from the expression of $v_{1}$ as a product (\[vstate\]), and indeed that this should be implemented as often as $a[-2k+1]$ occurs in the expression. 
If $a[-2k+1]$ does not occur in the expression for $v_{1}$ then $\hat{v}_{1}$ is defined to be zero. Similar notation $\hat{v}_{k}$ occurs in the third summand, where it indicates removal of a factor $a[-m]$. Next we develop the analog of (\[FNa1Zhu\]) in which $a[-1]$ is replaced by $a[-p]$ for any positive integer $p$. To this end we take $b=\frac{1}{(p-1)!}L[-1]^{p-1}.a$. We easily calculate that $b[m]=(-1)^{p-1}\binom{m}{p-1}a[m-p+1]$, in particular $b[-1]=a[-p]$ and $b[0]=0$ if $p\geq 2$. Note also that $o(b)=0$ if $p\geq 2$, thanks to (\[oLminusone\]). With this choice of $b$ and $p$, and after some calculation, (\[FNb1Zhu\]) reduces to the next equation. In fact, we can combine the resulting equality (for $p\geq 2$) with the case $p=1$. What obtains is the basic recursive relation satisfied by our $n$-point functions, namely $$\begin{aligned} &&F_{N}(a[-p]v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&=\sum_{k>p/2}C(2k-p,p )F_{N}(\hat{v}_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&+\sum_{m>p-1}\sum_{k=2}^{n}D(m-p+1,p,z_{k1} ). \nonumber \\ &&F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;\hat{v}_{k}\otimes e^{\alpha _{k}},z_{k};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &&+(a,\delta _{p,1}\beta +C(p,0 )\alpha _{1}+\sum_{k=2}^{n}D(p,0,z_{1k} )\alpha _{k}). \nonumber \\ &&F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q), \label{latticerecursion}\end{aligned}$$ where we have used a similar convention to the case $p=1$ regarding symbols $\hat{v}_{1},\hat{v}_{k}$. Close scrutiny of relation (\[latticerecursion\]) reveals how to complete the proof of Proposition \[Propgennpt\], which at this point is a matter of interpreting the recursion formula. We choose an element with label $p$ from the first labelled set $\Phi _{1}$ determined by $v_{1}$.
The first sum on the r.h.s of (\[latticerecursion\]) then corresponds to certain terms in the representation (\[vstate\]) of $v_{1}$. Indeed, as long as $2k>p$, a factor $a[p-2k]$ will give rise to a term $C(2k-p,p,\tau )F_{N}(...)$, and via (\[GammaX\]) we identify $C(2k-p,p,\tau )$ with $\gamma (\Xi )$ where $% \Xi =\{2k-p,p\}$ $\subseteq $ $\Phi _{1}$ and $(2k-p,p)$ is the initial transposition of a putative involution that is to be constructed inductively. Terms in the second (double) summation in (\[latticerecursion\]) are treated similarly - they correspond to expressions $% \gamma (\Xi )F_{N}(...)$ where $\Xi =\{m-p+1,p\}$ and $m-p+1,p$ are labels of elements in $\Phi _{k},\Phi _{1}$ respectively (for $k\neq 1$). The third term in (\[latticerecursion\]) is similarly seen to coincide with $\gamma (\Xi )F_{N}(...)$, now with $\Xi =\{p\}\subseteq $ $\Phi _{1}$. We repeat this process in an inductive manner. It is easy to see that in this way we construct every element of $\mathrm{Inv}(\Phi )$ exactly once, and what emerges is the formula (\[Fnlattice\]). This completes our discussion of the proof of Proposition \[Propgennpt\]. $\qed $ In order to complete our discussion of the rank one $n$-point functions we must of course evaluate the term $F_{N}(\mathbf{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;\mathbf{1}\otimes e^{\alpha _{n}},z_{n};q)$. Before we do that, however, it will be useful to draw some initial conclusions from Proposition \[Propgennpt\]. Taking all lattice elements $\alpha _{1}=\ldots =\alpha _{n}=\beta =0$ corresponds to the case of $n$-point functions in the rank $1$ free bosonic theory $M$. In this case all contributions from orbits of length $1$ vanish and hence the sum over $% \mathrm{Inv}(\Phi )$ reduces to one over $F(\Phi )$ of (\[FPhi\]) only. Furthermore we know that $F_{M}(\mathbf{1},z_{1};\ldots ;\mathbf{1},z_{n};q)$ is just the partition function for $M$ i.e. $1/\eta (\tau )$. 
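The reduction from $\mathrm{Inv}(\Phi )$ to $F(\Phi )$ is easy to explore by brute force. A small Python sketch (ours, for illustration only) enumerates the fixed-point-free involutions of an $n$-element set: there are none for $n$ odd, and $(n-1)!!=(n-1)(n-3)\cdots 1$ of them for $n$ even:

```python
# Illustrative sketch (not from the paper): enumerate the fixed-point-free
# involutions F(Phi) of a finite set, i.e. complete matchings.  There are
# none when |Phi| is odd, and (|Phi|-1)!! of them when |Phi| is even.

def fpf_involutions(elems):
    """Yield each fixed-point-free involution as a list of transpositions."""
    elems = list(elems)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    # The first element must be paired with some partner; recurse on the rest.
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for matching in fpf_involutions(remaining):
            yield [(first, partner)] + matching

def double_factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

assert len(list(fpf_involutions(range(4)))) == double_factorial(3)  # 3
assert len(list(fpf_involutions(range(6)))) == double_factorial(5)  # 15
assert list(fpf_involutions(range(3))) == []                        # odd: none
```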
Thus we arrive at a formula for $n$-point functions for a single free boson: \[gen1ptM\] Let $M$ be the VOA for a single free boson. For Fock states $% v_{1},\ldots ,v_{n}$ as in (\[vstate\]) with corresponding labelled set $% \Phi $ we have $$F_{M}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)=\frac{1}{\eta (\tau )}\sum_{\varphi \in F(\Phi )}\Gamma (\varphi ). \label{BosonFM}$$ $\qed $ The case $n=1$ of Corollary \[gen1ptM\] was established in [@DMN], where it was shown that $$Z_{M}(v;\tau )=\frac{1}{\eta (\tau )}\sum_{\varphi \in F(\Phi )}\prod C(r,s,\tau ), \label{FMonept}$$ for fixed point free involutions $\varphi =\ldots (rs)\ldots $ of the labelled set $\Phi $ labelling $v$ of (\[vstate\]) where the product is taken over all the transpositions $(rs)$ in $\varphi $. An even more special, yet interesting, case arises if in Corollary \[gen1ptM\] we take each state $v_{i}$ to coincide with the conformal weight one state $a$. Thus $v_{i}=a[-1].\mathbf{1}$, $1\leq i\leq n$, $\Phi $ consists of $n$ elements each carrying the label $1$, and elements of $% \Sigma (\Phi )$ may be considered as mappings on the set $\{1,2,\ldots ,n\}$ in the usual way. If $n$ is odd then there are no fixed-point-free involutions acting on $\Phi $, so (\[BosonFM\]) is zero in this case. If $% n $ is even then $\gamma (\Xi )=P_{2}(z_{ij},\tau )$ if $\Xi $ = $\Phi _{i}\cup \Phi _{j}$ for $i\neq j$. Thus we obtain \[nastatesM\] Let $M$ be the VOA for a single free boson. Then for $n$ even, $$F_{M}(a,z_{1};\ldots ;a,z_{n};q)=\frac{1}{\eta (\tau )}\sum_{\varphi \in F(\Phi )}\prod P_{2}(z_{ij},\tau ), \label{BosonFMaa}$$ where the product ranges over the cycles of $\varphi =\ldots (ij)\ldots \qed $ We next consider the case of an $M$-module $N$ $=M\otimes e^{\beta }$. If $% n=1$ we necessarily have $\alpha _{1}=0$, $Z_{N}=q^{(\beta ,\beta )/2}/\eta (\tau )$, and the labelled set $\Phi $ coincides with $\Phi _{1}$. 
If $% \varphi \in \mathrm{Inv}(\Phi )$ and $\Xi =\{r\}$ is an orbit of $\varphi $ of length $1$ then from (\[gammaPhik\]) we get $\gamma (\Xi )=\delta _{r,1}(a,\beta )$. So $\Gamma (\varphi )$ vanishes unless all labels of the set of fixed-points $\mathrm{Fix}(\varphi )$ are equal to $1$. In this case we may write $\varphi =1^{|\Delta |}\varphi _{0}$ to indicate that $\varphi $ fixes a set $\Delta $ of elements with label $1$, and that $\varphi _{0}\ $is the fixed-point-free involution induced by $\varphi $ on the complement $% \Phi \backslash \Delta $. We thus obtain \[gen1ptN\] Let $M$ be as in (\[nptlattice\]) and let $N$ be the $M$-module $M\otimes e^{\beta }$. Take $v$ to be the state (\[vstate\]) and let $\Lambda $ denote the elements in $\Phi $ with label $1$. Then $$\begin{aligned} Z_{N}(v,\tau ) &=&\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )}\sum_{\Delta \subseteq \Lambda }(a,\beta )^{|\Delta |}\sum_{\varphi _{0}\in F(\Phi \backslash \Delta )}\Gamma (\varphi _{0}), \nonumber \\ \Gamma (\varphi _{0}) &=&\prod C(r,s,\tau ), \label{FNoneptlattice}\end{aligned}$$ where $\varphi _{0}=\ldots (rs)\ldots $ acting on $\Phi \backslash \Delta $. $\qed $ Similarly to Corollary \[nastatesM\] we can consider again the special case with each $v_{i}=a$, generalising (\[BosonFMaa\]). In the above notation we therefore have $\Lambda =\Phi $ so that \[nastatesN\] Let $M$ be the VOA for a single free boson with module $% N=M\otimes e^{\beta }$. Then $$F_{N}(a,z_{1};\ldots ;a,z_{n};q)=\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )}% \sum_{\Delta \subseteq \Phi }(a,\beta )^{|\Delta |}\sum_{\varphi _{0}\in F(\Phi \backslash \Delta )}\prod P_{2}(z_{ij},\tau ), \label{BosonFNaa}$$ where the product ranges over the cycles of $\varphi _{0}=\ldots (ij)\ldots \qed $ We now show how (\[BosonFNaa\]) can be interpreted as the generator of all free bosonic $n$-point functions for Fock states (\[Fockstate\]). 
This provides a useful insight into the structure found for these $n$-point functions in terms of the elliptic function $P_{2}(z,\tau )$ and the scalar $(a,\beta )$. \[Propgen\] $F_{N}(a,z_{1};\ldots ;a,z_{n};q)$ is a generating function for the $n$-point functions for all Fock states $v_{1},\ldots ,v_{n}$. **Proof.** This follows from Lemma \[lemma3.1\] and the expansions of $P_{2}$ of (\[P2expansion1\]) and (\[P2expansion2\]). We will illustrate the result for $n=1$ and $n=2$. A general proof can be given along the same lines. From (\[Fnz1zn\]) we obtain $$\begin{aligned} F_{N}(a,z_{1};\ldots ;a,z_{n};q) &=&Z_{N}(Y[a,z_{1}]\ldots Y[a,z_{n}].\mathbf{1},q) \nonumber \\ &=&\sum_{l_{1},\ldots l_{n}\in \mathbb{Z}}Z_{N}(a[-l_{1}]\ldots a[-l_{n}].\mathbf{1},q)z_{1}^{l_{1}-1}\ldots z_{n}^{l_{n}-1}. \nonumber\\ && \label{FMonepointgen}\end{aligned}$$ The 1-point function for the bosonic Fock state $v=a[-l_{1}]\ldots a[-l_{n}].\mathbf{1}$ is clearly the coefficient of $z_{1}^{l_{1}-1}\ldots z_{n}^{l_{n}-1}$ for $l_{1},\ldots ,l_{n}>0$. We then recover (\[FNoneptlattice\]) from the expansion for each $P_{2}(z_{ij},\tau )$ using (\[P2expansion1\]). For $n=2$ consider the $1$-point function $$Z_{N}(Y[Y[a,w_{1}]\ldots Y[a,w_{m}].\mathbf{1},w].Y[Y[a,z_{1}]\ldots Y[a,z_{n}].\mathbf{1},z].\mathbf{1},q). \label{FNtwopointgen}$$ The 2-point function $F_{N}(v_{1},q_{1};v_{2},q_{2};q)$ for $v_{1}=a[-l_{1}]\ldots a[-l_{m}].\mathbf{1}$ and $v_{2}=a[-m_{1}]\ldots a[-m_{n}].\mathbf{1}$ is the coefficient of $\prod_{i=1}^{m}\prod_{j=1}^{n}w_{i}^{l_{i}-1}z_{j}^{m_{j}-1}$ in (\[FNtwopointgen\]). By associativity (\[Yassociativity\]) and using $Y[\mathbf{1},z]=\mathrm{Id}$ eqn.
(\[FNtwopointgen\]) can be expressed as $$Z_{N}(Y[a,w_{1}+w]\ldots Y[a,w_{m}+w].Y[a,z_{1}+z]\ldots Y[a,z_{n}+z].% \mathbf{1},q).$$ Using (\[BosonFNaa\]) this becomes (suppressing the $\tau$ dependence for clarity) $$\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )}\sum_{\Delta \subseteq \Phi }(a,\beta )^{|\Delta |}\sum_{\varphi _{0}\in F(\Phi \backslash \Delta )}P_{2}(w_{ab})\ldots P_{2}(z_{cd})\ldots P_{2}(w-z+w_{e}-z_{f})\ldots$$ where $\varphi _{0}=(ab)\ldots (cd)\ldots (ef)\ldots \in F(\Phi \backslash \Delta )$ with $a,b,e\ldots \in \{1,2,\ldots m\}$ and $c,d,f\ldots \in \{1,2,\ldots n\}$. Then the coefficient of $w_{a}^{l_{a}-1}w_{b}^{l_{b}-1}$ in $P_{2}(w_{ab})$ is $C(l_{a},l_{b})$, the coefficient of $% z_{c}^{m_{c}-1}z_{d}^{m_{d}-1}$ in $P_{2}(z_{cd})$ is $C(m_{c},m_{d})$ from (\[P2expansion1\]) and the coefficient of $w_{e}^{l_{e}-1}z_{f}^{m_{f}-1}$ in $P_{2}(w-z+w_{e}-z_{f})$ is $D(l_{e},m_{f},w-z)$ from (\[P2expansion2\]) leading to the result (\[BosonFM\]) in this case. $% \qed $ We complete our discussion of bosonic $n$-point functions with two global formulas for $1$-point functions. The first shows how to write certain $1$-point functions with respect to $N$ in terms of $1$-point functions with respect to $M$: \[PropZNtoZM\] Let notation be as in Corollary \[gen1ptM\]. Then if $% \varsigma $ is an indeterminate, $$\begin{aligned} &&Z_{N}(\exp (\sum_{m\geq 1}\frac{1}{m}a[-m]\varsigma ^{m}).\mathbf{1},\tau) \nonumber\\ &&=q^{(\beta ,\beta )/2}\exp ((a,\beta )\varsigma )Z_{M}(\exp (\sum_{m\geq 1}% \frac{1}{m}a[-m]\varsigma ^{m}).\mathbf{1},\tau ). \label{FNexpzeta}\end{aligned}$$ \[PropZMexpPrime\] Let $\lambda _{1},\ldots \lambda _{n}\,$be $n$ scalars obeying $\sum_{i=1}^{n}\lambda _{i}=0$. 
Then the following holds: $$Z_{M}(\exp (\sum_{m\geq 1}\frac{a[-m]}{m}\sum_{i=1}^{n}\lambda _{i}z_{i}^{m}).\mathbf{1},\tau )=\frac{1}{\eta (\tau )}\prod_{1\leq i<j\leq n}(\frac{K(z_{ij},\tau )}{z_{ij}})^{\lambda _{i}\lambda _{j}}, \label{FMexpzetaprime}$$ where $K(z,\tau )$ is the prime form of (\[Primeform\]). We divide the proof of Proposition \[PropZNtoZM\] into several steps. \[LemZNaminusonep\] Let $u\in M$ be a state such that $a[1].u=0$. For an integer $p\geq 0$, $$Z_{N}(a[-1]^{p}.u,q)=q^{(\beta ,\beta )/2}Z_{M}(((a,\beta )+a[-1])^{p}.u,q). \label{FNaminus1p}$$ **Proof**: Suppose first that $p=0$. In this case, the lemma says that for a state $v$ as in (\[vstate\]) which also satisfies $e_{1}=0$, the traces of $o(v)$ over $N$ and $M$ differ only by an overall factor of $q^{(\beta ,\beta )/2}$. A moment’s thought shows that this follows from Corollary \[gen1ptM\] because the set $\Lambda $ is empty in this case. This proves the case $p=0$ of the lemma. We prove the general case by induction on $p$. Using Lemma \[LemZhurec\] with $n=1$ we calculate $$\begin{aligned} &&Z_{N}(a[-1]^{p+1}.u,\tau ) \nonumber\\ &&=(a,\beta )Z_{N}(a[-1]^{p}.u,\tau )+\sum_{k\geq 1}E_{2k}(\tau )Z_{N}(a[2k-1].a[-1]^{p}.u,\tau ) \\ &&=q^{(\beta ,\beta )/2}(a,\beta )Z_{M}(((a,\beta )+a[-1])^{p}.u,\tau ) \\ &&+pE_{2}(\tau )q^{(\beta ,\beta )/2}Z_{M}(((a,\beta )+a[-1])^{p-1}.u,\tau ) \\ &&+q^{(\beta ,\beta )/2}\sum_{k\geq 2}E_{2k}(\tau )Z_{M}(a[2k-1].((a,\beta )+a[-1])^{p}.u,\tau ),\end{aligned}$$ from which the result follows by using Lemma \[LemZhurec\] again. $\qed $ If we multiply (\[FNaminus1p\]) by $\varsigma ^{p}$, rearrange the constants and sum over $p$ we find \[LemZNtoZM\] Let notation be as above.
Then $$Z_{N}(\exp (a[-1]\varsigma ).u,\tau )=q^{(\beta ,\beta )/2}\exp ((a,\beta )\varsigma )Z_{M}(\exp (a[-1]\varsigma ).u,\tau ).$$ $\qed $ Choosing $u=\exp (\sum_{m\geq 2}\frac{a[-m]}{m}\varsigma ^{m}).\mathbf{1}\,$we note that Proposition \[PropZNtoZM\] is a special case of Lemma \[LemZNtoZM\]. **Proof of Proposition \[PropZMexpPrime\]**: When expanded as a sum, the left-hand side of (\[FMexpzetaprime\]) can be written in the form $$\sum_{v}\prod_{k=1}^{p}\frac{1}{e_{k}!}(\frac{1}{k}\sum_{i=1}^{n}\lambda _{i}z_{i}^{k})^{e_{k}}Z_{M}(v,\tau ), \label{FMexpb}$$ where $v$ ranges over the basis elements (\[vstate\]). In this regard one should note that, as a consequence of Corollary \[gen1ptM\], $Z_{M}(v,\tau )=0$ whenever $\sum_{k=1}^{p}ke_{k}$ is odd. The argument that we use to establish the equality (\[FMexpzetaprime\]) involves a combinatorial technique that recurs in other work on genus two and higher VOAs [@MT]. Using the case $N=M$ of (\[FNoneptlattice\]) and Corollary \[gen1ptM\], we may write (\[FMexpb\]) as $$\frac{1}{\eta (\tau )}\sum_{v}\prod_{k=1}^{p}\frac{1}{e_{k}!}\sum_{\varphi \in F(\Phi _{v})}\prod_{k=1}^{p}\Gamma (\varphi )(\frac{1}{k}\sum_{i=1}^{n}\lambda _{i}z_{i}^{k})^{e_{k}} \label{efactgamma}$$ where $\Phi _{v}$ is the labelled set determined by $v$. Fix for a moment an element $\varphi $, considered as a product of transpositions. We can represent $\varphi $ by a graph with nodes labelled by positive integers corresponding to the elements of $\Phi _{v}$, two nodes being connected precisely when $\varphi $ interchanges the nodes in question. Such a graph masquerades under various names in combinatorics: it is a *complete matching* (cf. [@LW], for example), and is nothing more than another way to think about fixed-point-free involutions.
Pictorially the complete matching looks like $$\begin{array}{c} r_{1}\bullet \text{-----}\bullet s_{1} \\ r_{2}\bullet \text{-----}\bullet s_{2} \\ \vdots \\ r_{b}\bullet \text{-----}\bullet s_{b} \\ \mathrm{Fig.1} \end{array}$$ We denote the complete matching determined by $\varphi $ by the symbol $\mu _{\varphi }$. Any complete matching on a set labelled by positive integers corresponds to a fixed-point-free involution and a state $% v $. Let us agree that two complete matchings $\mu _{1},\mu _{2}$ are *isomorphic* if there is a bijection from the node set of $\mu _{1}$ to the node set of $\mu _{2}$ that preserves labels. We may identify the node sets of the two matchings, call it $\Phi $, in which case an isomorphism may be realized by an element in the symmetric group $\Sigma (\Phi )$. More precisely, let us define the *label subgroup* of $% \Sigma (\Phi )$ to be the subgroup $\Lambda (\Phi )$ consisting of all permutations of $\Phi $ that preserve labels. Then it is the case that an isomorphism between $\mu _{1}$ and $\mu _{2}$ may be realized by an element in the label subgroup. Note that our notation implies that $|\Lambda (\Phi )|=\prod e_{k}!$. Now consider (\[efactgamma\]). The expression to the right of the second summation, which we denote by $$\hat{\Gamma}(\mu _{\varphi })=\prod_{k=1}^{p}\Gamma (\varphi )(\frac{1}{k}% \sum_{i=1}^{n}\lambda _{i}z_{i}^{k})^{e_{k}} \label{Gammahat}$$ is determined by the complete matching $\mu _{\varphi }$. By what we have said, every complete matching on a set labelled by positive integers occurs in this way as $v$ ranges over the preferred Fock basis of $M$ (on the cylinder) and $\varphi $ ranges over $F(\Phi _{v})$, while the factor $\prod \frac{1}{e_{k}!}$ may be interpreted as averaging over all complete matchings with a given set of labels. The only duplication that occurs is due to *automorphisms* of the complete matching. 
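The combinatorics above can be checked by brute force on small examples. The following sketch (not part of the original argument; the particular labelled matching is a hypothetical example) enumerates complete matchings, confirms the count $(2b-1)!!$ of fixed-point-free involutions of a $2b$-element set, and counts the label-preserving automorphisms of one small matching:

```python
from itertools import permutations
from math import factorial, prod

def complete_matchings(nodes):
    """All partitions of `nodes` into unordered pairs
    (equivalently, all fixed-point-free involutions of the set)."""
    if not nodes:
        return [[]]
    first, rest = nodes[0], nodes[1:]
    out = []
    for i in range(len(rest)):
        pair = (first, rest[i])
        for m in complete_matchings(rest[:i] + rest[i + 1:]):
            out.append([pair] + m)
    return out

# |F(Phi)| = (2b - 1)!! for a set Phi with 2b elements
for b in range(1, 5):
    assert len(complete_matchings(list(range(2 * b)))) == prod(range(1, 2 * b, 2))

# A small labelled matching mu: nodes 0..5 carry the labels below, with
# edges {0,1}, {2,3} (both of edge type (1,1)) and {4,5} (edge type (1,2)).
labels = {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 2}
edges = [frozenset(e) for e in ({0, 1}, {2, 3}, {4, 5})]

# Count automorphisms: permutations in the label subgroup mapping edges to edges.
count = 0
for perm in permutations(labels):
    sigma = dict(zip(labels, perm))
    if all(labels[v] == labels[sigma[v]] for v in labels):
        if {frozenset(sigma[v] for v in e) for e in edges} == set(edges):
            count += 1

assert count == factorial(2) * 2 ** 2  # = 8
```

The automorphism count $2!\cdot 2^{2}=8$ reflects the two interchangeable edges of type $(1,1)$, each of which can also be flipped end-to-end, while the $(1,2)$ edge is rigid.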
The upshot is that expression (\[efactgamma\]) is equal to $$\frac{1}{\eta (\tau )}\sum_{\mu }\frac{\hat{\Gamma}(\mu )}{|\mathrm{Aut}(\mu )|} \label{autinvgammahat}$$ where $\mu $ ranges over all *isomorphism classes* of complete matchings labelled by positive integers. For a given complete matching $\mu $ as in Fig. 1, let $E=E(r,s)$ denote an edge with labels $r,s$, and let $m(E)$ denote the *multiplicity* of $% E $ in $\mu $. Thus we may represent $\mu $ symbolically by its decomposition $\mu =\sum m(E)E$ into isomorphism classes of labelled edges. Now it is evident that there is an isomorphism of groups $$\mathrm{Aut}(\mu )\simeq \prod_{E}\mathrm{Aut}(E)\wr \Sigma _{m(E)},$$ a direct product, indexed by isomorphism classes of labelled edges, of groups which are themselves the (regular) wreathed product of $\mathrm{Aut}% (E)$ and $\Sigma _{m(E)}$. In particular, we have $$|\mathrm{Aut}(\mu )|=\prod_{E}m(E)!|\mathrm{Aut}(E)|^{m(E)}. \label{orderautmu}$$ Note that $|\mathrm{Aut}(E)|\leq 2$, with equality only if the two node labels of $E$ are equal. Next it is easy to see that the expression $\hat{% \Gamma}(\mu )$ is *multiplicative* *over edges*. In other words, we have $$\hat{\Gamma}(\mu )=\prod_{E}\hat{\Gamma}(E)^{m(E)}, \label{gammahatmu}$$ and for an edge $E(r,s)$ we have $$\begin{aligned} \hat{\Gamma}(E) &=&C(r,s,\tau )(\frac{1}{r}\sum_{i=1}^{n}\lambda _{i}z_{i}^{r})(\frac{1}{s}\sum_{j=1}^{n}\lambda _{j}z_{j}^{s}) \nonumber \\ &=&\frac{(-1)^{r+1}}{r+s}\binom{r+s}{s}E_{r+s}(\tau )\sum_{i=1}^{n}\lambda _{i}z_{i}^{r}\sum_{j=1}^{n}\lambda _{j}z_{j}^{s}. 
\label{gammahatE}\end{aligned}$$ Substitute (\[orderautmu\]) and (\[gammahatmu\]) in (\[autinvgammahat\]) to obtain the expression $$\frac{1}{\eta (\tau )}\prod_{E}\exp (\frac{\hat{\Gamma}(E)}{|\mathrm{Aut}(E)|})=\frac{1}{\eta (\tau )}\exp (\sum_{E_{or}}\frac{\hat{\Gamma}(E_{or})}{2}), \label{expgammahatE}$$ where $E$ ranges over all labelled edges $r\bullet $—–$\bullet s$ and $E_{or}$ ranges over all *oriented* edges $r\bullet $—>—$\bullet s$ (which have trivial automorphism group). Using (\[gammahatE\]), the expression (\[expgammahatE\]) is in turn equal to $$\frac{1}{\eta (\tau )}\exp (\frac{1}{2}\sum_{k\geq 1}E_{2k}(\tau )\frac{1}{2k}\sum_{r=0}^{2k}(-1)^{r+1}\binom{2k}{r}\sum_{i=1}^{n}\lambda _{i}z_{i}^{r}\sum_{j=1}^{n}\lambda _{j}z_{j}^{2k-r}). \label{exptwolambda}$$ But $$\frac{1}{2}\sum_{r=0}^{2k}(-1)^{r+1}\binom{2k}{r}\sum_{i=1}^{n}\lambda _{i}z_{i}^{r}\sum_{j=1}^{n}\lambda _{j}z_{j}^{2k-r}=-\sum_{1\leq i<j\leq n}\lambda _{i}\lambda _{j}z_{ij}^{2k},$$ and hence, using (\[P0\]) and (\[Primeform\]), we find (\[exptwolambda\]) becomes $$\frac{1}{\eta (\tau )}\prod_{1\leq i<j\leq n}\exp (-\lambda _{i}\lambda _{j}(P_{0}(z_{ij},\tau )+\log z_{ij}))=\frac{1}{\eta (\tau )}\prod_{1\leq i<j\leq n}[\frac{K(z_{ij},\tau )}{z_{ij}}]^{\lambda _{i}\lambda _{j}}.$$ This completes the proof of Proposition \[PropZMexpPrime\]. $\qed $ We now turn our attention to the second factor $F_{N}$ of (\[QNFN\]) where we abbreviate $\mathbf{1}\otimes e^{\alpha _{i}}\,$ by $e^{\alpha _{i}}$ again. \[Propnptlattice\] Let $M$ and $N$ be as above, and let $\alpha _{1},\ldots ,\alpha _{n}$ be lattice elements in the rank one even lattice $L$ satisfying $\alpha _{1}+\ldots +\alpha _{n}=0$. Then $$\begin{aligned} &&F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q) \nonumber \\ &=&\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )}\prod_{1\leq r\leq n}\exp ((\beta ,\alpha _{r})z_{r})\prod_{1\leq i<j\leq n}\epsilon (\alpha _{i},\alpha _{j})K(z_{ij},\tau )^{(\alpha _{i},\alpha _{j})}.
\label{FNexpalpha}\end{aligned}$$ **Proof**: Use (\[Fnz1zn\]) of Lemma \[lemma3.1\] to rewrite the LHS of (\[FNexpalpha\]) as $$Z_{N}(o(Y[e^{\alpha _{1}},z_{1}]\ldots Y[e^{\alpha _{n}},z_{n}].\mathbf{1}% );q) \label{ZNalpha}$$ Referring to (\[Yealpha\]), by repeated use of the identity $$Y_{+}(e^{\alpha },z)Y_{-}(e^{\beta },w)=(\frac{z-w}{z})^{(\alpha ,\beta )}Y_{-}(e^{\beta },w)Y_{+}(e^{\alpha },z)$$ (for $|z|>|w|$) and using (\[ealpha\]) to (\[cocycleproduct\]) we find (\[ZNalpha\]) is $$\prod_{1\leq r<s\leq n}z_{rs}{}^{(\alpha _{r},\alpha _{s})}\epsilon (\alpha _{r},\alpha _{s})Z_{N}(\exp (\sum_{m>0}\sum_{i=1}^{n}\frac{\alpha _{i}[-m]}{m% }z_{i}^{m}).\mathbf{1,}\tau ). \label{ZNalpha2}$$ The operator corresponding to $m=1$ in the exponential in (\[ZNalpha2\]) may be written in the form $a[-1]\varsigma $ where $\varsigma =$ $% \sum_{k=1}^{n}(a,\alpha _{k})z_{k}$ so that, from Lemma \[LemZNtoZM\], $$\begin{aligned} &&Z_{N}(\exp (\sum_{m>0}\sum_{i=1}^{n}\frac{\alpha _{i}[-m]}{m}z_{i}^{m}).% \mathbf{1,}\tau ) \\ &=&q^{(\beta ,\beta )/2}\prod_{i=1}^{n}\exp ((\beta ,\alpha _{i})z_{i})Z_{M}(\exp (\sum_{m>0}\sum_{i=1}^{n}\frac{\alpha _{i}[-m]}{m}% z_{i}^{m}).\mathbf{1,}\tau )\end{aligned}$$ Now use Proposition \[PropZMexpPrime\] with $\lambda _{i}=(a,\alpha _{i})$ so that we find $$Z_{M}(\exp (\sum_{m>0}\sum_{i=1}^{n}\frac{\alpha _{i}[-m]}{m}z_{i}^{m}).% \mathbf{1,}\tau )=\frac{1}{\eta (\tau )}\prod_{1\leq i<j\leq n}[\frac{% K(z_{ij},\tau )}{z_{ij}}]^{(\alpha _{i},\alpha _{j})}.$$ This completes the proof of the Proposition. $\qed $ We now consider the lattice VOA $V_{L}$ constructed from a rank $% l $ even lattice $L$ as described in section 2. We recall that $% a_{1},a_{2},...a_{l}$ is an orthonormal basis for $\mathfrak{H}$ with respect to the non-degenerate symmetric bilinear form $(,)$. 
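The key combinatorial step at the end of the proof of Proposition \[PropZMexpPrime\] was the binomial identity $\frac{1}{2}\sum_{r=0}^{2k}(-1)^{r+1}\binom{2k}{r}\sum_{i}\lambda _{i}z_{i}^{r}\sum_{j}\lambda _{j}z_{j}^{2k-r}=-\sum_{i<j}\lambda _{i}\lambda _{j}z_{ij}^{2k}$. As a sanity check (a stand-alone sketch, independent of the text), it can be verified exactly at random rational points, with the $\lambda _{i}$ chosen to sum to zero as in the proposition:

```python
from fractions import Fraction
from math import comb
from random import randint, seed

def lhs(lams, zs, k):
    """(1/2) sum_{r=0}^{2k} (-1)^{r+1} C(2k,r) (sum_i l_i z_i^r)(sum_j l_j z_j^{2k-r})."""
    total = Fraction(0)
    for r in range(2 * k + 1):
        a = sum(l * z ** r for l, z in zip(lams, zs))
        b = sum(l * z ** (2 * k - r) for l, z in zip(lams, zs))
        total += (-1) ** (r + 1) * comb(2 * k, r) * a * b
    return total / 2

def rhs(lams, zs, k):
    """- sum_{i<j} l_i l_j (z_i - z_j)^{2k}."""
    n = len(lams)
    return -sum(lams[i] * lams[j] * (zs[i] - zs[j]) ** (2 * k)
                for i in range(n) for j in range(i + 1, n))

seed(0)
for n in (2, 3, 4):
    for k in (1, 2, 3):
        zs = [Fraction(randint(-9, 9), randint(1, 9)) for _ in range(n)]
        lams = [Fraction(randint(-9, 9)) for _ in range(n - 1)]
        lams.append(-sum(lams))  # enforce sum of lambdas = 0
        assert lhs(lams, zs, k) == rhs(lams, zs, k)
```

Exact `Fraction` arithmetic makes the equality check rigorous at each sampled point, rather than a floating-point approximation.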
We let $M$ be the rank $l$ Heisenberg vertex operator algebra and let $N=M\otimes e^{\beta }$ be a simple $M$-module with $\beta \in L$, $h=(\beta ,\beta )/2$ the conformal weight of the highest weight vector of $N$. Then $M\simeq M^{1}\otimes \ldots \otimes M^{l}$, $\,$the tensor product of $l$ copies of the rank $1$ Heisenberg VOA, and is spanned by the Fock states $$v=a_{1}[-1]^{e_{1}}\ldots a_{1}[-p]^{e_{p}}\ldots a_{l}[-1]^{f_{1}}\ldots a_{l}[-q]^{f_{q}}.\mathbf{1} \label{genvstate}$$ where $e_{1},\ldots ,f_{q}\,$are non-negative integers. We now give a general closed formula for all rank $l$ lattice $n$-point functions (\[nptlattice\]) where $v_{1},\ldots ,v_{n}$ are Fock states of the form (\[genvstate\]). Viewing each vector $v_{i}$ as an element of $% M^{1}\otimes \ldots \otimes M^{l}$ we define $\Phi _{i}^{r}\,\,\,$as the labelled set for the $r^{\mathrm{th}}$ tensored vector of $v_{i}$ e.g. $\Phi _{i}^{1}$ contains $\,1$ with multiplicity $e_{1}$, $2$ with multiplicity $% e_{2}$ etc. Therefore, $v_{i}$ is determined by the labelled set $\Phi _{i}=\Phi _{i}^{1}\cup \ldots \cup \Phi _{i}^{l}$, the disjoint union of $l$ sets. We also define $\Phi ^{r}=\bigcup_{1\leq i\leq n}\Phi _{i}^{r}$ to be the labelled set for the $r^{\mathrm{th}}$ tensored vectors of the $n$ vectors $v_{1},\ldots ,v_{n}$. Then we have: \[Big Theorem\] Let $v_{1},\ldots ,v_{n}$ be states of the form (\[genvstate\]) in the rank $l$ free boson theory $M$ and let $\Phi ^{1},\ldots \Phi ^{l}$ be the labelled sets defined as above by these states. 
Then the following holds for lattice elements $\alpha _{1},\ldots ,\alpha _{n}\in L$ satisfying $\alpha _{1}+\ldots +\alpha _{n}$ $=0$: $$\begin{aligned} &&F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q) \nonumber \\ &=&Q_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)F_{N}(e^{\alpha _{1}},z_{1};\ldots ;e^{\alpha _{n}},z_{n};q), \label{FnBigtheorem}\end{aligned}$$ where $$Q_{N}(v_{1},z_{1};\ldots ;v_{n},z_{n};q)=\prod_{1\leq r\leq l}\sum_{\varphi ^{r}\in \mathrm{Inv}(\Phi ^{r})}\Gamma (\varphi ^{r}), \label{GenQn}$$ and $$\begin{aligned} &&F_{N}(e^{\alpha _{1}},z_{1};\ldots ;e^{\alpha _{n}},z_{n};q) \nonumber \\ &=&\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )^{l}}\prod_{1\leq r\leq n}\exp ((\beta ,\alpha _{r})z_{r})\prod_{1\leq i<j\leq n}\epsilon (\alpha _{i},\alpha _{j})K(z_{ij},\tau )^{(\alpha _{i},\alpha _{j})}. \label{Genlattice}\end{aligned}$$ **Proof.** We sketch the proof which follows along the same lines as the rank one case described above. We firstly apply Lemma \[LemZhurec\] to the $r^{\mathrm{th}}$ tensored vectors labelled by $\Phi ^{r}$. Following the same argument for the rank one case in Proposition \[Propgennpt\], this results in (\[FnBigtheorem\]) and (\[GenQn\]). We secondly evaluate the LHS of (\[Genlattice\]) as in Proposition \[Propnptlattice\] using Proposition \[PropZMexpPrime\] with $\lambda _{i}^{r}=(a_{r},\alpha _{i})$ for $r=1,\ldots ,l$ to obtain (\[Genlattice\]). $\qed $ We conclude this section with the first non-trivial examples of lattice $n$-point functions which occur for $n=2$. From Theorem \[Big Theorem\] and recalling (\[cocycleunity\]) we have: \[Cor3.13\] For a rank $l$ lattice theory with $N$ as above and with states $e^{\alpha }$ and $e^{-\alpha }$ we have: $$F_{N}(e^{\alpha },z_{1};e^{-\alpha },z_{2};q)=\frac{q^{(\beta ,\beta )/2}}{% \eta ^{l}(\tau )}\frac{\exp ((\beta ,\alpha )z_{12})}{K(z_{12},\tau )^{(\alpha ,\alpha )}}. 
\label{lattice2pt}$$ $\qed $ Taking the sum over all $\beta \in L$ we immediately obtain: For $V=V_{L}$, the lattice vertex operator algebra for a rank $l$ even lattice $L$, and for states $e^{\alpha }$ and $e^{-\alpha }$ we have: $$F_{V_{L}}(e^{\alpha },z_{1};e^{-\alpha },z_{2};q)=\frac{1}{\eta ^{l}(\tau )}\frac{\Theta _{\alpha ,L}(\tau ,z_{12}/2\pi i)}{K(z_{12},\tau )^{(\alpha ,\alpha )}},$$ where $$\Theta _{\alpha ,L}(\tau ,z)=\sum_{\beta \in L}\exp (2\pi i[\frac{(\beta ,\beta )}{2}\tau +(\beta ,\alpha )z])$$ is a Jacobi form of weight $l/2$ and index $(\alpha ,\alpha )/2$ [@EZ]. $\qed $ The Elliptic Properties of $n$-point Functions ============================================== In this section we will consider the elliptic properties of the $n$-point functions described in the previous section. For vertex operator algebras satisfying the so-called $C_{2}$ condition, Zhu has shown that every $n$-point function is meromorphic and periodic in each parameter $z_{i}$ and is therefore elliptic [@Z]. The $C_{2}$ condition does not hold for simple modules of free bosonic theories, but nevertheless all $n$-point functions (\[FnBigtheorem\]) are found to be meromorphic, and either elliptic for the free bosonic $n$-point functions or quasi-elliptic for the lattice $n$-point functions. In this section we will consider these $n$-point functions from first principles, where our aim is to provide further insight into the structure found for these functions. In particular, we will show that (\[BosonFNaa\]), the generating function for free bosonic $n$-point functions, and the lattice $n$-point function (\[FNexpalpha\]) are the unique elliptic or quasi-elliptic functions determined by permutation symmetry, periodicity and certain natural singularity and normalisation properties.
We begin with a general statement about $n$-point functions: \[Lemma4.1\] The $n$-point function $F_{N}=F_{N}(v_{1}\otimes e^{\alpha _{1}},z_{1};\ldots ;v_{n}\otimes e^{\alpha _{n}},z_{n};q)$ for $\alpha _{1}+\ldots +\alpha _{n}=0$ obeys the following: (i) : $F_{N}$ is symmetric under all permutations of its indices. (ii) : $F_{N}$ is a function of $z_{ij}=z_{i}-z_{j}$. (iii) : $F_{N}$ is non-singular at $z_{ij}\neq 0$ for all $i\neq j$. (iv) : $F_{N}$ is periodic in $z_{i}$ with period $2\pi i$. (v) : $F_{N}$ is quasi-periodic in $z_{i}$ with period $2\pi i\tau $ and multiplier $$q^{(\alpha _{i},\alpha _{i})/2+(\alpha _{i},\beta )}q_{i}^{(\alpha _{i},\alpha _{i})}. \label{quasiperiodmult}$$ **Proof**. (i) Apply the general locality property for vertex operators, e.g. [@Ka], $$(z-w)^{k}Y(u,z).Y(v,w)=(z-w)^{k}Y(v,w).Y(u,z),$$ for $k$ sufficiently large (we write $k$ to avoid confusion with the module $N$), to all adjacent pairs of operators in (\[Fnz1zn\]) of Lemma \[lemma3.1\]. \(ii) This follows from (i) and (\[Fnziminuszn\]) of Lemma \[lemma3.1\]. \(iii) Suppose that $F_{N}$ is singular at $z_{n}=z_{0}$ for some $z_{0}\neq z_{j}$ for all $j=1,\ldots ,n-1$. Using (ii) we may assume that $z_{0}=0$ by redefining $z_{i}$ to be $z_{i}-z_{0}$ for all $i$. But $F_{N}$ cannot be singular at $z_{n}=0$ from (\[Fnz1zn\]) of Lemma \[lemma3.1\] since $Y[v_{n}\otimes e^{\alpha _{n}},z_{n}].\mathbf{1}|_{z_{n}=0}=v_{n}\otimes e^{\alpha _{n}}$, and hence the result follows. \(iv) This follows from the integrality of conformal weights.
\(v) Using (i) we have $$\begin{aligned} F_{N}&=&q^{-c/24}Tr_{N}Y(q_{2}^{L(0)}v_{2}\otimes e^{\alpha _{2}},q_{2}).\nonumber\\ &&\ldots Y(q_{n}^{L(0)}v_{n}\otimes e^{\alpha _{n}},q_{n}).Y(q_{1}^{L(0)}v_{1}\otimes e^{\alpha _{1}},q_{1})q^{L(0)}.\nonumber \end{aligned}$$ Under $z_{1}\rightarrow z_{1}+2\pi i\tau \,$ $\,$we have $F_{N}\rightarrow \hat{F}_{N}$ where $$\begin{aligned} \hat{F}_{N}&=&q^{-c/24}Tr_{N}Y(q_{2}^{L(0)}v_{2}\otimes e^{\alpha _{2}},q_{2}).\nonumber\\ &&\ldots Y(q_{n}^{L(0)}v_{n}\otimes e^{\alpha _{n}},q_{n}).q^{L(0)}.Y(q_{1}^{L(0)}v_{1}\otimes e^{\alpha _{1}},q_{1}), \label{Fnhat}\end{aligned}$$ using (\[L0scaling\]). Consider the co-cycle parts of the vertex operators within $\hat{F}_{N}$. Using (\[ealphacomm\]) to (\[zalpha\]) we see that $$\begin{aligned} &&\prod_{2\leq r\leq n}e^{\alpha _{r}}.q_{r}^{\alpha _{r}}.q^{L(0)}.e^{\alpha _{1}}.q_{1}^{\alpha _{1}}.(v\otimes e^{\beta }) \nonumber \\ &=&q^{(\alpha _{1},\alpha _{1})/2+(\alpha _{1},\beta )}q_{1}^{(\alpha _{1},\alpha _{1})}\prod_{1\leq r\leq n}e^{\alpha _{r}}.q_{r}^{\alpha _{r}}.q^{L(0)}.(v\otimes e^{\beta }). \label{cocycleL0}\end{aligned}$$ Since $N=\bigoplus_{n\in \mathbb{Z}}M_{n}\otimes e^{\beta }$, the graded trace (\[Fnhat\]) decomposes into finite dimensional traces over $M_{n}$ to which we may apply the standard trace property $TrAB=TrBA$ on the remaining parts of the vertex operators within $\hat{F}_{N}$. Hence we obtain $$\hat{F}_{N}=q^{(\alpha _{1},\alpha _{1})/2+(\alpha _{1},\beta )}q_{1}^{(\alpha _{1},\alpha _{1})}F_{N},$$ as required. By (i) we obtain the quasi-periodicity (\[quasiperiodmult\]) for each $z_{i}$. $\qed $ Let us now consider the generating function, $F_{N}(a,z_{1};\ldots ;a,z_{n};q)$, for all Fock state $n$-point functions for the rank one case. Note that $F_{N}(q)\equiv Z_{N}(q)=q^{(\beta ,\beta )/2}/\eta (\tau )$ for $% n=0$. From Lemma \[Lemma4.1\], $F_{N}(a,z_{1};\ldots ;a,z_{n};q)$ $\,$is periodic in each $z_{i}$ with periods $2\pi i\,$and $2\pi i\tau $. 
The singularity structure at $z_{ij}=0$ is determined by \[Lemma4.3\] For $n\geq 2$ and for $i\neq j$, $F_{N}(a,z_{1};\ldots ;a,z_{n};q)$ has the following leading behaviour in its (formal) Laurent expansion in $z_{ij}$ $$F_{N}(a,z_{1};\ldots ;a,z_{n};q)=\frac{1}{z_{ij}^{2}}F_{N}(a,z_{1};\ldots ;\hat{a},\hat{z}_{i};\ldots ;\hat{a},\hat{z}_{j};\ldots ;a,z_{n};q)+\ldots , \label{FNa1tonres}$$ where $\hat{a},\hat{z}_{i}$ and $\hat{a},\hat{z}_{j}$ denote the deletion of the corresponding vertex operators, resulting in an $n-2$ point function. **Proof.** Using Lemma \[Lemma4.1\] (i) it suffices to consider the expansion in $z_{n-1n}$. The result then follows from (\[Fnziminuszn\]) of Lemma \[lemma3.1\] and using $$Y[a,z_{n-1n}].a=\frac{1}{z_{n-1n}^{2}}\mathbf{1}+\sum_{k\geq 1}z_{n-1n}^{k-1}a[-k].a.$$ $\qed $ We also have the following integral normalisation: \[Lemma4.4\] For $n\geq 1$ and each $i=1,\ldots ,n$, $$\frac{1}{2\pi i}\int_{0}^{2\pi i}F_{N}(a,z_{1};\ldots ;a,z_{n};q)dz_{i}=(a,\beta )F_{N}(a,z_{1};\ldots ;\hat{a},\hat{z}_{i};\ldots ;a,z_{n};q), \label{aPeriod}$$ where $\hat{a},\hat{z}_{i}$ denotes the deletion of the corresponding vertex operator giving an $n-1$ point function. **Proof.** Using Lemma \[Lemma4.1\] (i) it suffices to consider $i=1$ only. Then the integral is $$\begin{aligned} &&\frac{1}{2\pi i}\int_{0}^{2\pi i}F_{N}(a,z_{1};\ldots ;a,z_{n};q)dz_{1} \\ &=&Tr_{N}\frac{1}{2\pi i}\oint_{\mathcal{C}_{1}}Y(a,q_{1})dq_{1}Y(q_{2}a,q_{2})\ldots Y(q_{n}a,q_{n})q^{L(0)-1/24} \\ &=&Tr_{N}o(a)Y(q_{2}a,q_{2})\ldots Y(q_{n}a,q_{n})q^{L(0)-1/24} \\ &=&(a,\beta )Tr_{N}Y(q_{2}a,q_{2})\ldots Y(q_{n}a,q_{n})q^{L(0)-1/24},\end{aligned}$$ where $\mathcal{C}_{1}\,$denotes a closed contour surrounding $q_{1}=0$.
$% \qed $ We now show that $F_{N}(a,z_{1};\ldots ;a,z_{n};q)$ $\,$is uniquely determined to be given by (\[BosonFNaa\]) of Corollary \[nastatesN\] as follows: \[Prop4.5\] $F_{N}(a,z_{1};\ldots ;a,z_{n};q)\,$ is the unique meromorphic function in $z_{i}\in \mathbb{C}/\{2\pi i(m+n\tau )|m,n\in \mathbb{Z}% \} $ obeying Lemmas \[Lemma4.1\], \[Lemma4.3\] and \[Lemma4.4\] and is given by (\[BosonFNaa\]). **Proof.** If $F_{N}(a,z_{1};\ldots ;a,z_{n};q)\,$ is meromorphic then it is an elliptic function from Lemma \[Lemma4.1\] (iv) and (v). We prove the required result by induction. For $n=1$, $F_{N}(a,z_{1};q)$ has no poles from Lemma \[Lemma4.1\] (iii) and is therefore constant in $z_{1}$. Then (\[aPeriod\]) of Lemma \[Lemma4.4\] implies $$F_{N}(a,z_{1};q)=(a,\beta )\frac{q^{(\beta ,\beta )/2}}{\eta (\tau )},$$ in agreement with the RHS of (\[BosonFNaa\]) in this case. Next consider the elliptic function $$\begin{aligned} G(z_{1},\ldots ,z_{n})&=&F_{N}(a,z_{1};\ldots ;a,z_{n};q)\nonumber\\ &&-\sum_{i=2}^{n}P_{2}(z_{1i})F_{N}(a,z_{2};\ldots ;\hat{a},\hat{z}% _{i};\ldots ;a,z_{n};q), \nonumber\end{aligned}$$ where $\hat{a},\hat{z}_{i}$ denotes the deletion of the corresponding vertex operator. From Lemma \[Lemma4.3\], $G$ is holomorphic and elliptic in $% z_{1}$ and is therefore independent of $z_{1}$. Next note that although not an elliptic function, $P_{1}(z,\tau )$ is periodic with period $2\pi i$ and so $\int_{0}^{2\pi i}P_{2}(z_{1i},\tau )dz_{1}=0$. Hence Lemma \[Lemma4.4\] implies $$\begin{aligned} F_{N}(a,z_{1};\ldots ;a,z_{n};q) &=&(a,\beta )F_{N}(a,z_{2};\ldots ;a,z_{n};q) \nonumber \\ &&+\sum_{i=2}^{n}P_{2}(z_{1i})F_{N}(a,z_{2};\ldots ;\hat{a},\hat{z}% _{i};\ldots ;a,z_{n};q). \nonumber\\ && \label{Fnrecurrence}\end{aligned}$$ But this recurrence relation is precisely (\[FNa1Zhu\]) with $v_{1}=% \mathbf{1},v_{2}=\ldots =v_{n}=a$ and $\alpha _{i}=0$. Thus we obtain the RHS of (\[BosonFNaa\]) by induction as before. 
$\qed $ The rank $l$ result follows as before by considering the tensor product of $% l $ rank one Heisenberg VOAs. We now consider the lattice $n$-point function $F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)$ $\,$for a rank $l$ lattice. We firstly note the following: \[Lemmaalpharec\] For $n\geq 1$ and for $i\neq j$, the (formal) Laurent expansion in $z_{ij}$ of $F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)$ has leading behaviour $$\begin{aligned} &&F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q) \nonumber \\ &&=\epsilon (\alpha _{i},\alpha _{j})z_{ij}^{(\alpha _{i},\alpha _{j})}F_{N}(e^{\alpha _{1}},z_{1};...\hat{e}^{\alpha _{i}},\hat{z}% _{i};\ldots e^{\alpha _{i}+\alpha _{j}},z_{j};\ldots ;e^{\alpha _{n}},z_{n};q)+\ldots ,\nonumber \\ &&\end{aligned}$$ where $\hat{e}^{\alpha _{i}},\hat{z}_{i}$ denotes the deletion of the corresponding vertex operator resulting in an $n-1$ point function. **Proof.** Using Lemma \[Lemma4.1\] (i) it suffices to consider the expansion in $z_{n-1n}$. The result then follows from (\[ealpha\]), (\[zalpha\]) and (\[Fnziminuszn\]) of Lemma \[lemma3.1\] to find $$Y[e^{\alpha _{n-1}},z_{n-1n}].e^{\alpha _{n}}=\epsilon (\alpha _{n-1},\alpha _{n})z_{n-1n}^{(\alpha _{n-1},\alpha _{n})}e^{\alpha _{n}+\alpha _{n-1}}+\ldots$$ $\qed $ Next recall the following properties for the prime form $K(z,\tau )$ e.g. [@Mu] \[LemmaPrime\] The genus one prime form $K(z,\tau )$ is a holomorphic function on $\mathbb{C}/\{2\pi i(m+n\tau )|m,n\in \mathbb{Z}\}$ given by $$\begin{aligned} K(z,\tau )&=&-\frac{i\theta _{1}(z,\tau )}{\eta (\tau )^{3}}, \label{Kthetaeta} \\ \theta _{1}(z,\tau )&\equiv& \sum_{n\in \mathbb{Z}}\exp (\pi i\tau (n+1/2)^{2}+(n+1/2)(z+i\pi )), \label{theta1}\end{aligned}$$ where $K(z,\tau )$ is quasi-periodic in $z$ with period $2\pi i\,$ and multiplier $-1$ and with period $2\pi i\tau $ and multiplier $% -q^{-1/2}q_{z}^{-1}$. 
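The representation (\[Kthetaeta\])–(\[theta1\]) and the stated quasi-periodicities can be checked numerically. The sketch below (an illustration only; the truncation orders and test points are arbitrary choices) sums the series directly, and also confirms the leading behaviour $K(z,\tau )\approx z$ near the zero at $z=0$:

```python
import cmath

PI = cmath.pi

def theta1(z, tau, N=40):
    """theta_1(z, tau) = sum_n exp(pi*i*tau*(n + 1/2)^2 + (n + 1/2)*(z + i*pi))."""
    return sum(cmath.exp(PI * 1j * tau * (n + 0.5) ** 2 + (n + 0.5) * (z + PI * 1j))
               for n in range(-N, N))

def eta(tau, N=200):
    """Dedekind eta(tau) = q^{1/24} prod_{n >= 1} (1 - q^n), with q = e^{2 pi i tau}."""
    q = cmath.exp(2j * PI * tau)
    out = cmath.exp(2j * PI * tau / 24)
    for n in range(1, N):
        out *= 1 - q ** n
    return out

def K(z, tau):
    """Genus one prime form K(z, tau) = -i * theta_1(z, tau) / eta(tau)^3."""
    return -1j * theta1(z, tau) / eta(tau) ** 3

tau = 0.1 + 1.0j
z = 0.3 + 0.2j
# period 2*pi*i with multiplier -1:
assert abs(K(z + 2j * PI, tau) + K(z, tau)) < 1e-8
# period 2*pi*i*tau with multiplier -q^{-1/2} q_z^{-1}:
mult = -cmath.exp(-1j * PI * tau) * cmath.exp(-z)
assert abs(K(z + 2j * PI * tau, tau) - mult * K(z, tau)) < 1e-8
# simple zero at z = 0, normalised so that K(z, tau) ~ z:
assert abs(K(1e-6, tau) / 1e-6 - 1) < 1e-3
```

The series converge extremely fast for this $\tau$ (the terms decay like $e^{-\pi \mathrm{Im}(\tau )(n+1/2)^{2}}$), so modest truncation orders already give machine-precision agreement.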
Furthermore $K(z,\tau )$ has a unique zero at $z=0$ on $\mathbb{C}/\{2\pi i(m+n\tau )|m,n\in \mathbb{Z}\}$. $\qed $ We finally show that $F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)$ is uniquely determined to be given by (\[Genlattice\]) as follows: \[Propalpha\] $F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)\,$ is the unique meromorphic function in $z_{i}\in \mathbb{C}/\{2\pi i(m+n\tau )|m,n\in \mathbb{Z}\}$ obeying Lemmas \[Lemma4.1\] and \[Lemmaalpharec\]. **Proof.** If $F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)\,$ is meromorphic then consider the meromorphic function $$G(z_{1},\ldots ,z_{n})=\frac{F_{N}(e^{\alpha _{1}},z_{1};...;e^{\alpha _{n}},z_{n};q)}{\prod_{1\leq r\leq n}\exp ((\beta ,\alpha _{r})z_{r})\prod_{1\leq i<j\leq n}\epsilon (\alpha _{i},\alpha _{j})K(z_{ij},\tau )^{(\alpha _{i},\alpha _{j})}}. \label{Galpha}$$ We wish to show that $G=F_{N}(q)=q^{(\beta ,\beta )/2}/\eta (\tau )^{l}$. We prove this by induction on $n$. It is easy to see that $G$ is periodic with periods $2\pi i$, $2\pi i\tau $ using Lemma \[Lemma4.1\] (iv), (v) and Lemma \[LemmaPrime\], and hence $G$ is elliptic in $z_{i}$. $G$ is also a function of $z_{ij}$, and considering the Laurent expansion in $z_{ij}$ one finds that the leading term is given by $$G(z_{1},\ldots ,z_{n})=F_{N}(q)+\ldots$$ using Lemma \[Lemmaalpharec\], (\[cocycleproduct\]) and induction. Hence $G$ is regular at $z_{ij}=0$. But the denominator of $G$ can vanish only at $z_{ij}=0$ (when $(\alpha _{i},\alpha _{j})>0$), and hence $G$ is a holomorphic elliptic function and is therefore constant in $z_{i}$. Thus $G(z_{1},\ldots ,z_{n})=F_{N}(q)$ and the result follows. $\qed $ [DLM]{} Borcherds, R.: Vertex algebras, Kac–Moody algebras and the Monster, Proc. Natl. Acad. Sci. U.S.A. **83** (1986), 3068–3071. D’Hoker, E.: String theory, in: Quantum Fields and Strings: A Course for Mathematicians, ed. Deligne et al., AMS (Providence, 1999). Dong, C., Li, H.
and Mason, G.: Modular-invariance of trace functions in orbifold theory and generalized moonshine, Comm. Math. Phys. **214** (2000), 1–56. Dong, C. and Mason, G.: Monstrous moonshine of higher weight, Acta Math. **185** (2000), 101–121. Dong, C., Mason, G. and Nagatomo, K.: Quasi-modular forms and trace functions associated to free boson and lattice vertex operator algebras, Int. Math. Res. Not. **8** (2001), 409–427. Eichler, M. and Zagier, D.: *The Theory of Jacobi Forms*, Birkhäuser (Boston, 1985). Frenkel, I., Huang, Y.-Z. and Lepowsky, J.: On axiomatic approaches to vertex operator algebras and modules, Mem. Amer. Math. Soc. **104**, no. 494 (1993). Frenkel, I., Lepowsky, J. and Meurman, A.: *Vertex Operator Algebras and the Monster*, Academic Press (New York, 1988). Goddard, P.: Meromorphic conformal field theory, in: Proceedings of the CIRM Luminy Conference, 1988, World Scientific (Singapore, 1989). Kac, V.: *Vertex Algebras for Beginners*, University Lecture Series, Vol. 10, AMS (Boston, 1998). van Lint, J. and Wilson, R.: *A Course in Combinatorics*, Cambridge University Press (Cambridge, 1992). Matsuo, A. and Nagatomo, K.: Axioms for a vertex algebra and the locality of quantum fields, Math. Soc. Japan Memoirs **4** (Tokyo, 1999). Mason, G. and Tuite, M.P.: Work in progress. Mumford, D.: *Tata Lectures on Theta I*, Birkhäuser (Boston, 1983). Polchinski, J.: *String Theory*, Volumes I and II, Cambridge University Press (Cambridge, 1998). Zhu, Y.: Modular invariance of characters of vertex operator algebras, J. Amer. Math. Soc. **9** (1996), 237–302. [^1]: Partial support provided by NSF DMS-9709820 and the Committee on Research, University of California, Santa Cruz [^2]: Supported by an Enterprise Ireland Basic Research Grant and the Millennium Fund, National University of Ireland, Galway [^3]: Concerning the co-ordinate change we follow [@DLM] rather than [@Z]. The latter has $z$ replaced by $2\pi iz$ in (\[Ysquare\]).
This leads to minor discrepancies between the notation in [@Z] and the present paper, which should be borne in mind.
--- abstract: | Let $\Phi$ be a quasi-periodically forced quadratic map, where the rotation constant $\omega$ is a Diophantine irrational. A strange non-chaotic attractor (SNA) is an invariant (under $\Phi$) attracting graph of a nowhere continuous measurable function $\psi$ from the circle $\mathbb{T}$ to $[0,1]$. This paper investigates how a smooth attractor degenerates into a strange one as a parameter ${\beta}$ approaches a critical value ${\beta}_0$, and the asymptotics behind the bifurcation of the attractor from smooth to strange. In our model, the cause of the strange attractor is a so-called torus collision, whereby an attractor collides with a repeller. Our results show that the asymptotic minimum distance between the two colliding invariant curves decreases linearly in the parameter ${\beta}$ as ${\beta}$ approaches the critical parameter value ${\beta}_0$ from below. Furthermore, we show that the supremum of the derivative of the attracting graph is asymptotically bounded, from above and below, by a constant times the reciprocal of the square root of this minimum distance. author: - Thomas Ohlson Timoudas bibliography: - 'references.bib' title: 'Power law asymptotics in the creation of strange attractors in the quasi-periodically forced quadratic family' --- Introduction ============ Model and results ================= Proof of the main theorem ========================= This section has been split into three parts, covering the existence and smoothness of the attractor, the minimum distance to the repelling set, and the growth of the derivative, respectively. We will use the same notation as in \[SectionInduction\]. Throughout this section we will assume that $\lambda$ is a fixed constant, sufficiently large for every result in the previous sections to hold. From now on, we will also assume that $\alpha = \alpha_c$. Note that $\alpha_c$ depends on $\lambda$.
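The basic setting can be illustrated by a small simulation. The sketch below is purely illustrative and not the paper's construction: it assumes the fibre map $p(x) = x(1-x)$ (which satisfies $4p(\tfrac12) = 1$ and $p(1-x) = p(x)$, the properties used later in this section) and a hypothetical forcing function `c` and golden-mean rotation number, chosen only to respect the band $\frac32 \le c(\theta) < 4$ assumed below:

```python
import math

OMEGA = (math.sqrt(5) - 1) / 2  # illustrative Diophantine rotation number

def c(theta, beta):
    """Hypothetical forcing with 3/2 <= c(theta) < 4 for all 0 <= beta < 1."""
    return 1.5 + 2.4 * (1 - beta * math.sin(math.pi * theta) ** 2)

def orbit(theta0, x0, beta, steps):
    """Iterate the skew-product (theta, x) -> (theta + omega mod 1, c(theta) p(x)).

    Since 0 < c(theta) < 4 and p(x) = x(1 - x) <= 1/4 on (0, 1), the fibre
    coordinate remains in (0, 1) for all times, as claimed in the text."""
    theta, x = theta0, x0
    for _ in range(steps):
        x = c(theta, beta) * x * (1 - x)
        theta = (theta + OMEGA) % 1.0
        assert 0.0 < x < 1.0
    return theta, x

theta, x = orbit(0.2, 0.5, 0.9, 10_000)
assert 0.0 < x < 1.0
```

Plotting $x$ against $\theta$ for long orbits at parameters near a torus collision is the standard way such attractors are visualised, but nothing in this sketch depends on the paper's specific choice of forcing.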
A notation we will introduce in this section is $I_{n({\beta})}$, where $0 \leq {\beta}< 1$, and $n = n({\beta})$ is the smallest integer satisfying ${\beta}\in B_{n}$. Existence and regularity of the attractor ----------------------------------------- Here we show that, for every $0 \leq {\beta}< 1$, there is an attractor which is the graph of an invariant smooth ($C^\infty$) function $\psi^{\beta}: \mathbb{T} \to (0,1)$, and that this attractor depends smoothly on ${\beta}$. This is the content of \[SmoothnessOfAttractor\]. In order to accomplish this goal, we will follow a standard argument. We will first show that there is an invariant space $S_n = \mathbb{T} \times B_n \times [\epsilon_n, 1 - \epsilon_n]$ for every $n \geq 0$, such that for $(\theta, {\beta}, x) \in S_n$, we have the uniform bound $$\begin{aligned} \| \partial_x x_k \| \leq const \cdot \delta^k,\end{aligned}$$ for some $0 < \delta < 1$, where $\theta_0 = \theta, x_0 = x$. This will give us, for every $n \geq 1$, a family $\{\psi_{{\beta}, n}: \mathbb{T} \to (0,1)\}_{{\beta}\in B_n}$ of smooth functions, the graph of $\psi_{{\beta}, n}$ being the (unique) attractor corresponding to that ${\beta}$. As we increase $n$, we will obtain a family $\{\psi^{\beta}: \mathbb{T} \to (0,1)\}$ of smooth functions (attracting graphs) for every $0 \leq {\beta}< 1$. \[EverythingAligns\] Assume that ${\beta}\in B_n$ (in particular $0 \leq {\beta}< 1$) for some $n \geq 0$. If $\theta_0 \in \mathbb{T}$ and $x_0 \in (0,1)$, then there is a $t \geq 0$ such that $\theta_t \in \Theta_{n-1}$, and $x_t \in C$. Moreover, if $x_0 \in (\epsilon, 1 - \epsilon)$, there is a $T_\epsilon \geq 0$ such that $t \leq T_\epsilon$. In particular, if $\epsilon = 1/100$, we may choose $T_\epsilon \leq 2M_{n-1} + 1$. Since $\frac32 \leq c(\theta) < 4$ for every $\theta \in \mathbb{T}$ when $0 \leq {\beta}< 1$, it follows that $x_k \in (0,1)$ for every $k \geq 0$ ($0 < x_i < 4 p(\frac12) = 1$).
We will first show that there is an $s \geq 0$ such that $x_s \in [1/100, 99/100]$, and $\theta_s \not\in I_0 \cup (I_0 + \omega)$. Then we will prove the statement from there. Suppose first that $x_0 \in [1/100, 99/100]$. If $\theta_0 \not\in I_0 \cup (I_0 + \omega)$, we are done. Assume instead that $\theta_0 \in I_0 \cup (I_0 + \omega)$. If $x_2 \in [1/100, 99/100]$, we are done. Otherwise, $x_2 \not\in [1/100, 99/100]$, and we fall into one of the cases considered below. Now, suppose instead that $x_0 \not\in [1/100, 99/100]$. Then there is an $s > 0$ such that $x_s \in [1/100, 99/100]$. Let $s$ be the smallest such integer. Since $p(1 - x) = p(x)$, we may assume that $x_0 < 99/100$ (discounting the possibility that $x_0 > 99/100$). By \[TimeOfAscent\], there is a uniform upper bound on $s$, say $s \leq S_\epsilon$, if $x_0 \in (\epsilon, 1 - \epsilon)$. If $\theta_s \not\in I_0 \cup (I_0 + \omega)$, we are done. If instead $\theta_s \in I_0 \cup (I_0 + \omega)$, then since $s$ was the smallest such integer, $x_{s - 1} \not\in [1/100, 99/100]$, and so by \[TwoStepsAfterEntry\], $x_{s + 2} \in [1/100, 99/100]$, and $\theta_{s+2} \not\in I_0 \cup (I_0 + \omega)$. In any case, there is a (uniformly) bounded $s \leq S_\epsilon + 2$, such that $\theta_s \not\in I_0 \cup (I_0 + \omega)$ and $x_s \in [1/100, 99/100]$. We may thus assume (without loss of generality) that $\theta_0 \not\in I_0 \cup (I_0 + \omega)$ and $x_0 \in [1/100, 99/100]$. Recall that $\Theta_{n-1} \cap G_{n-1} = \emptyset$ by \[ThetaInterGEmpty\]. Then \[WhenNotInC\] implies that, at the next time $t \geq 0$ at which $\theta_t \in \Theta_{n-1}$, we have $x_t \in C$. Since $\Theta_{n-1} = \mathbb{T} \backslash \bigcup \limits_{i = 0}^{n-1} \bigcup \limits_{m = -M_i}^{M_i} (I_i + m\omega)$, the maximum number of consecutive iterations spent outside $\Theta_{n-1}$ is $2M_{n-1} + 1$. Thus, setting $T_\epsilon = S_\epsilon + 2M_{n-1} + 3$, the proof is completed. \[Contraction\] Let $n \geq 0$ be arbitrary. 
If ${\beta}\in B_n$, $\theta_0 \in \Theta_{n-1}$, and $x_0, y_0 \in C$, then for each $k > 1$ $$\begin{aligned} |x_k - y_k| < (3/5)^{k/2} |x_0 - y_0|.\end{aligned}$$ Let $0 < s_1 < s_2 < \cdots$ be the times when $\theta_{s_l} \in I_n$. By \[ZoomContractionRate\] $$\begin{aligned} |x_k - y_k| < (3/5)^{(1/2 + 1/2^{n+1})k} |x_0 - y_0|,\end{aligned}$$ for $k \in [1, s_1]$. Since $s_1 \geq M_n \gg 20 \cdot 2^{n+1} K_n$ if $\lambda$ is large enough (as in \[ZoomInductionStep\]), we obtain $$\begin{aligned} |x_{s_1} - y_{s_1}| < (3/5)^{s_1/2 + 20K_n} |x_0 - y_0|.\end{aligned}$$ Suppose that $|x_{s_l} - y_{s_l}| < (3/5)^{s_l/2 + 20K_n} |x_0 - y_0|$ holds for $l \geq 1$. Since ${\beta}\in B_n$, $(iv)_n$ implies that $\theta_{s_l + 2K_n + 20} \in \Theta_{n-1}$, and $x_{s_l + 2K_n + 20} \in C$. Recall that $|c(\theta)p'(x)| < 4 < (5/3)^3$ for every $\theta \in \mathbb{T}$ and $x \in [0,1]$. Now, it follows that $$\begin{aligned} |x_{s_l + k} - y_{s_l + k}| < 4^k \cdot |x_{s_l} - y_{s_l}| < (5/3)^{3k} \cdot (3/5)^{s_l/2 + 20K_n} \cdot |x_0 - y_0|,\end{aligned}$$ for $k \in [1, 2K_n + 20]$. Since $k \leq 2K_n + 20$, and therefore $20K_n - 3k \geq 10K_n \geq k/2$, we get $$\begin{aligned} |x_{s_l + k} - y_{s_l + k}| < (3/5)^{s_l/2 + 20K_n - 3k} \cdot |x_0 - y_0| < (3/5)^{s_l/2 + k/2} \cdot |x_0 - y_0|.\end{aligned}$$ Now, we obtain for $k \in [s_l + 2K_n + 20, s_{l+1}]$ that $$\begin{aligned} |x_k - y_k| &< (3/5)^{(1/2 + 1/2^{n+1})(k - s_l - 2K_n - 20)} \cdot |x_{s_l + 2K_n + 20} - y_{s_l + 2K_n + 20}| <\\ &< (3/5)^{(1/2 + 1/2^{n+1})(k - s_l - 2K_n - 20)} \cdot (3/5)^{(s_l + 2K_n + 20)/2} \cdot |x_0 - y_0| =\\ &= (3/5)^{k/2 + 1/2^{n+1}(k - s_l - 2K_n - 20)} |x_0 - y_0|.\end{aligned}$$ We will now proceed to prove the stronger bound for $k = s_{l+1}$. 
We know that $\frac1{2^{n+1}}(s_{l+1} - s_l - 2K_n - 20) \geq 20 K_n$, since $s_{l+1} - s_l \geq N_n \gg 20 \cdot 2^{n+1} K_n$ (again, see the proof of $(i)_{n+1}$, \[ZoomInductionStep\]), so $$\begin{aligned} |x_{s_{l+1}} - y_{s_{l+1}}| &< (3/5)^{s_{l+1}/2 + 1/2^{n+1}(s_{l+1} - s_l - 2K_n - 20)} |x_0 - y_0| \leq\\ &\leq (3/5)^{s_{l+1}/2 + 20K_n} |x_0 - y_0|\end{aligned}$$ By induction, the statement follows. \[InvariantSubset\] For every $n \geq 0$, there exists an invariant (compact) subset $S_n = \mathbb{T} \times B_n \times [a_n, 1 - a_n]$, where $0 < a_n \leq 1/4$, such that for $(\theta_0, {\beta}, x_0), (\theta_0, {\beta}, y_0) \in S_n$ $$\begin{aligned} |x_k - y_k| < c_n \cdot (3/5)^{k/2} |x_0 - y_0|,\end{aligned}$$ where $c_n > 0$ is a constant depending only on $n$. Suppose that ${\beta}_{\max} < 1$ is the largest ${\beta}\in B_n$. Let $$\begin{aligned} b_n = \max \limits_{{\beta}\in B_n, \theta \in \mathbb{T}} c_{\beta}(\theta)p(1/2) = 1/4 \cdot \left( 3/2 + 5/2{\beta}_{\max} \right) < 1.\end{aligned}$$ We will show that $a_n = 1 - b_n$ will suffice. Let $\theta_0 \in \mathbb{T}, x_0 \in [a_n, 1 - a_n]$. Note that, for every ${\beta}\in B_n$, $$\begin{aligned} \frac98 a_n \leq \frac32 a_n (1 - a_n) \leq c_{\beta}(\theta_0)p(x_0) = x_1 \leq c_{\beta}(\theta_0) p(1/2) \leq b_n = 1 - a_n,\end{aligned}$$ since $1 - a_n \geq \frac34$ and $p(x_0) \geq a_n(1 - a_n)$. That is, $x_1 \in [a_n, 1 - a_n]$. Since this worked for any $\theta_0 \in \mathbb{T}$, this set must be invariant. For the second part, let $\theta_0 \in \mathbb{T}$ and $x_0, y_0 \in [a_n, 1 - a_n]$. According to \[EverythingAligns\], there are $s, t \leq T_n$, such that $\theta_s, \theta_t \in \Theta_{n-1}, x_s, y_t \in C$, where $T_n$ is the same for all these starting values. We may assume without loss of generality that $s \leq t$. Recall that $\Theta_{n-1} \cap G_{n-1} = \emptyset$ by \[ThetaInterGEmpty\]. 
Since $\theta_s \in \Theta_{n-1}, x_s \in C \subset [1/100, 99/100]$, and $\theta_t \in \Theta_{n-1}$, \[WhenNotInC\] implies that $x_t \in C$. Hence $\theta_t \in \Theta_{n-1}$, and $x_t, y_t \in C$. Now, $$\begin{aligned} |x_t - y_t| \leq 4^t \cdot |x_0 - y_0|.\end{aligned}$$ Combining this with \[Contraction\] yields, for every $k \geq 0$, $$\begin{aligned} |x_k - y_k| \leq 4^{T_n} \cdot (5/3)^{T_n/2} \cdot (3/5)^{k/2} |x_0 - y_0|,\end{aligned}$$ which concludes our proof. \[NegativeLyapunov\] For every $(\theta_0, {\beta}, x_0) \in S_n$ ($n \geq 0$), and every $k > 0$, $$\begin{aligned} \left| \frac{\partial x_k}{\partial x_0} \right| < c_n \cdot (3/5)^{k/2},\end{aligned}$$ for some constant $c_n$ depending only on $n$. Choose $x_0$ in the interior of $A_{\beta}$. We have for small enough $|h| > 0$ that $x_0 + h, x_0 \in A_{\beta}$. Considering $x_k(x_0)$ as a function of $x_0$, we have $$\begin{aligned} \left| \frac{\partial x_k}{\partial x_0} \right| &= \left| \lim \limits_{h \to 0} \frac{x_k(x_0 + h) - x_k(x_0)}{h} \right| \\ &< \lim \limits_{h \to 0} \frac{c_n \cdot (3/5)^{k/2}|x_0 + h - x_0|}{|h|} \\ &= c_n \cdot (3/5)^{k/2}.\end{aligned}$$ \[SmoothnessOfAttractor\] There is an invariant curve, the graph of a function $\psi^{\beta}(\theta)$ which is smooth ($C^\infty$) in both ${\beta}$ and $\theta$. This curve attracts the orbits of every point $(\theta, x) \in \mathbb{T} \times (0,1)$. We will use the results in [@StarkRegularityQPF]. In his notation, for a fixed $n \geq 0$, $(\theta, {\beta}) \in X = \mathbb{T} \times B_n$ and $x \in Y = [a_n, 1 - a_n]$ (where $a_n$ is as in \[InvariantSubset\]). Now, by \[NegativeLyapunov\] $$\begin{aligned} \left| D_x x_k \right| < c_n \cdot (3/5)^{k/4}\end{aligned}$$ for every $(\theta_0, {\beta}, x_0) \in S_n = X \times Y$. 
Applying [@StarkRegularityQPF Theorem 2.1], we obtain continuous invariant graphs $\{\psi^{\beta}_n: \mathbb{T} \to (0,1)\}$ for each ${\beta}\in B_n$, attracting all of $\mathbb{T} \times (0,1)$, by \[EverythingAligns\]. Now, [@StarkRegularityQPF Theorem 3.1] implies that each $\psi^{\beta}_n$ is as smooth as $\Phi_{\alpha_c, {\beta}}$, that is $C^\infty$. If ${\beta}\in B_n$, then ${\beta}\in B_m$ and $\psi^{\beta}_m = \psi^{\beta}_n$ for every $m \geq n$, since the attractor is unique. We also recall that $\bigcup \limits_{n = 0}^\infty B_n = [0,1)$. Therefore, we obtain for every $0 \leq {\beta}< 1$ a $C^\infty$ map $$\begin{aligned} \psi^{\beta}: \mathbb{T} \to (0,1),\end{aligned}$$ the graph of which attracts $\mathbb{T} \times (0,1)$. Asymptotic minimal distance between attractor and repeller ---------------------------------------------------------- Here, we show that, when ${\beta}\in B_n$, the curve $\psi^{\beta}$ will be essentially flat in the step before the first peak, i.e. that $\partial_\theta \psi^{\beta}$ is very small on $I_n$, and furthermore that $\psi^{\beta}(I_n)$ will be located in $C$. This will then give us very good bounds on $\partial_\theta \psi^{\beta}(I_n + \omega)$, which will be very close to $\partial_\theta c(I_n)$. That is, $\psi^{\beta}(I_n + \omega)$ will look almost like $c$ does slightly to the left of the peak at $\theta = 0$, that is, sharply increasing. The next part is to show that the value of $\psi^{\beta}(\alpha_c)$ is almost $1/2$, meaning that $\psi^{\beta}(\alpha_c + \omega) \approx c(\alpha_c)p(1/2)$ is close to the “potential maximum”. For $\theta \in I_n + \omega$ not very close to $\alpha_c$, the sharp nature of the peak at $\alpha_c$ will mean that $\psi^{\beta}(\theta + \omega)$ can’t reach as high as $\psi^{\beta}(\alpha_c + \omega)$. This will then give us the asymptotic behaviour of the minimum distance we described. The main results here are \[DerivativeBoundBeforePeak,MinimumDistance\]. 
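The attracting-graph property established above also yields a standard numerical approximation of $\psi^{\beta}$: pull the starting fibre back $n$ steps and iterate forward; by the uniform fibre contraction, the endpoint is essentially independent of the initial height. A sketch, again assuming a purely illustrative forcing `c` (not the paper's):

```python
import math

OMEGA = (math.sqrt(5) - 1) / 2

def c(theta, beta, lam=10.0):
    # illustrative stand-in forcing with values in [3/2, 4); NOT the paper's c
    return 1.5 + 2.5 * beta * math.exp(-lam * math.sin(math.pi * theta) ** 2)

def psi(theta, beta, x0=0.5, n=500):
    # Pullback approximation of the attracting graph: start n steps in the past,
    # at theta - n*omega, and iterate forward; contraction along the fibres
    # makes the endpoint (almost) independent of the initial height x0.
    th, x = (theta - n * OMEGA) % 1.0, x0
    for _ in range(n):
        x = c(th, beta) * x * (1 - x)
        th = (th + OMEGA) % 1.0
    return x
```

Two different initial heights should land (numerically) on the same graph value, e.g. `psi(0.3, 0.6, x0=0.2)` versus `psi(0.3, 0.6, x0=0.8)`.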
\[SmallIteratedDerivatives\] If $\theta_0 \in \Theta_{n-1}$, and $x_0 = x \in C$, then $$\begin{aligned} \left|(\partial_\theta c(\theta_{N-1})) \cdot p(x_{N-1}) + \sum \limits_{j = 1}^{N-1} (\partial_\theta c(\theta_{j-1})) \cdot p(x_{j-1}) \prod \limits_{i=j}^{N-1} c(\theta_i) \cdot p'(x_i)\right| < \lambda^{-1/4},\end{aligned}$$ where $N = N(\theta_0; I_n)$, and $\partial$ is either $\partial_{\beta}$ or $\partial_\theta$. Note that the assumption that $\partial_\theta x_0 = 0$ is equivalent to $$\begin{aligned} |\partial_\theta x_N| = \left|(\partial_\theta c(\theta_{N-1})) \cdot p(x_{N-1}) + \sum \limits_{j = 1}^{N-1} (\partial_\theta c(\theta_{j-1})) \cdot p(x_{j-1}) \prod \limits_{i=j}^{N-1} c(\theta_i) \cdot p'(x_i)\right|,\end{aligned}$$ since then $\partial_\theta x_0 \prod \limits_{i=j}^{N-1} c(\theta_i) \cdot p'(x_i)$ is removed from the expression. Let $s < N$ be the smallest integer such that $\theta_i \not\in I_0 \cup (I_0 + \omega)$ for $s \leq i \leq N$ (that is, $\theta_i$ won’t return to $I_0$ before $i = N$). Since $$\begin{aligned} \theta_0 \in \Theta_{n-1} = \mathbb{T} \backslash \bigcup \limits_{i=0}^{n-1} \bigcup \limits_{k = -M_i}^{M_i} I_i + k\omega,\end{aligned}$$ and also $N(\theta; I_0) \geq M_0$ for $\theta \in I_0$, we deduce that $s \geq M_0$. Recall that $M_0 \gg K_0 = \lambda^{1/28}$, and so $K_0 \gg 10 \log \lambda$ if $\lambda$ is large. Thus, for every $s \leq k \leq N$, $\theta_k \not\in I_0 \cup (I_0 + \omega)$, and $|\partial_\theta c(\theta_k)|, |\partial_{\beta}c(\theta_k)| < \frac1{\sqrt{\lambda}}$ (see \[CDerivativeOutsideI0\]), and also $\prod \limits_{j = k}^{N-1} |c(\theta_j)p'(x_j)| < (3/5)^{(N - k)/2}$ (see \[TailContractionBaseInd\]). Applying \[DerivativeBounds\] for $T = N-1$, assuming $\partial_\theta x_0 = 0$, we obtain that $|\partial_\theta x_N|, |\partial_{\beta}x_N| \leq \lambda^{-1/4}$, which is what we wanted to show. Let $0 \leq {\beta}< 1$ be fixed. 
For each given $(\theta_0, x_0) \in I_0 \times C$, set $T(\theta_0, x_0)$ equal to the smallest positive integer $T > 2$ such that $$\begin{aligned} x_T \geq \frac1{100}.\end{aligned}$$ Set $T(\theta) = T(\theta, \psi^{\beta}(\theta))$. \[DerivativeBoundBeforePeak\] Suppose that $0 \leq {\beta}< 1$, and let $J = J({\beta})$ be an interval such that $$\begin{aligned} I_{m+1} \subseteq J \subseteq I_m,\end{aligned}$$ for some $1 \leq m$, satisfying that, for every $\theta_0 \in J$, $$\begin{aligned} T(\theta_0) \leq (N_m)^{3/4},\end{aligned}$$ where $$\begin{aligned} N_m = \min \limits_{\theta \in (I_m + \omega)} N(\theta; I_m).\end{aligned}$$ Then $$\begin{aligned} |\partial_\theta \psi^{\beta}(\theta)|, |\partial_{\beta}\psi^{\beta}(\theta)| \leq \lambda^{-1/4} + \epsilon(m)\end{aligned}$$ for every $\theta \in J$, where $\epsilon(m) \to 0$ as $m \to \infty$. Moreover, $$\begin{aligned} \psi^{\beta}(J) \subseteq C,\label{InCAtFirstPeak}\end{aligned}$$ and if $m \geq 1$ is large enough, $$\begin{aligned} {\beta}\lambda^{1/7} \leq \partial_\theta \psi^{\beta}(\theta) \leq {\beta}\lambda\end{aligned}$$ for $\theta \in J + \omega$. We will iterate the segment given by $\theta_0 = \theta \in J \subseteq I_0$. For ease of notation, we set $x_0 = \psi^{\beta}(\theta_0)$. Let $0 = s_0 < s_1 < \dots$ be the return times to $J$, that is for $i \geq 0$, $\theta_i \in J \Leftrightarrow i = s_k$ for some $k \geq 0$. Set $\theta_0^{(k)} = \theta_{s_k}, x_0^{(k)} = x_{s_k}$. Recall that $T = T(\theta_0, x_0)$ was defined as the smallest positive integer satisfying that $x_T \geq \frac1{100}$. Now, suppose that $t \geq 0$ is the smallest integer satisfying $$\begin{aligned} x_{T + t} \in C, \theta_{T + t} \in \Theta_{m-1}.\end{aligned}$$ Since $x_T \in [1/100, 99/100]$, \[EverythingAligns\] implies that $t \leq 2M_{m-1} + 1 < K_m \ll \sqrt{N_m}$. 
Set $P = T + t \leq (N_m)^{3/4} + \sqrt{N_m} \leq 2(N_m)^{3/4} \ll N_m$, then $\theta^{(k)}_P \in \Theta_{m-1}, x^{(k)}_P \in C$ for every $k \geq 0$. Now, \[InCWhenReturnToBad\] implies that $$\begin{aligned} \psi^{\beta}(\theta^{(k+1)}_0) = x^{(k+1)}_0 = x_{s_{k+1}} \in C,\end{aligned}$$ for every $k \geq 1$, or that $\psi^{\beta}(J) \subseteq C$. Additionally, \[TailContraction\] gives that $$\begin{aligned} \prod \limits_{i=P}^{U_k - 1} |c(\theta^{(k)}_i) p'(x^{(k)}_i)| \leq (3/5)^{(U_k - P)/2}\end{aligned}$$ where we have set $U_j = s_{j+1} - s_j$. Since $\theta^{(k)}_P \in \Theta_{m-1}, x^{(k)}_P \in C$, \[SmallIteratedDerivatives\] implies that $$\begin{aligned} |\partial_\theta x^{(k)}_{U_k}| &= |(\partial_\theta c(\theta^{(k)}_{U_k-1})) p(x^{(k)}_{U_k-1}) + \partial_\theta x^{(k)}_P \prod \limits_{i=P}^{U_k - 1} c(\theta^{(k)}_i) p'(x^{(k)}_i) +\\ &+ \sum \limits_{j = P+1}^{U_k-1} \partial_\theta c(\theta^{(k)}_{j-1}) p(x^{(k)}_{j-1}) \prod \limits_{i = j}^{U_k-1} c(\theta^{(k)}_i) p'(x^{(k)}_i)| \leq\\ &\leq |\partial_\theta x^{(k)}_P| \cdot (3/5)^{(U_k - P)/2} + \lambda^{-1/4}.\end{aligned}$$ Similarly, recalling that $|c(\theta) \cdot p'(x)| \leq 4$, $$\begin{aligned} |\partial_\theta x^{(k)}_P| &\leq |\partial_\theta x^{(k)}_0| \cdot \prod \limits_{i=0}^{P - 1} |c(\theta^{(k)}_i) p'(x^{(k)}_i)| +\\ &+ \|\partial_\theta c\| \left(1 + \sum \limits_{j = 1}^{P-1} \prod \limits_{i = j}^{P-1} |c(\theta^{(k)}_i) p'(x^{(k)}_i)| \right) \leq\\ &\leq |\partial_\theta x^{(k)}_0| \cdot 4^P + \|\partial_\theta c\| \sum \limits_{j = 0}^{P-1} 4^{P-1-j} =\\ &= |\partial_\theta x^{(k)}_0| \cdot 4^P + \|\partial_\theta c\| \frac{4^P - 1}{3},\end{aligned}$$ where $\| \cdot \|$ denotes the sup-norm. 
Putting it together, we obtain, since $U_k \geq N_m \gg P$, that $$\begin{aligned} |\partial_\theta x^{(k)}_{U_k}| &\leq \left( |\partial_\theta x^{(k)}_0| \cdot 4^P + \|\partial_\theta c\| \frac{4^P - 1}{3} \right) (3/5)^{(U_k - P)/2} + \lambda^{-1/4} \leq\\ &\leq |\partial_\theta x^{(k)}_0| \cdot \epsilon(m) + \|\partial_\theta c\| \epsilon(m) + \lambda^{-1/4},\end{aligned}$$ where $$\begin{aligned} \epsilon(m) = 4^P \cdot (3/5)^{(N_m - P)/2} \leq 4^P \cdot (3/5)^{N_m/2 - (N_m)^{3/4}} \to 0,\end{aligned}$$ as $m \to \infty$. By induction, since $x^{(k)}_{U_k} = x_{s_{k+1}}$, we get for every $k \geq 0$ that $$\begin{aligned} |\partial_\theta x_{s_{k+1}}| &\leq |\partial_\theta x^{(0)}_0| \epsilon(m)^{k+1} + \|\partial_\theta c\| \sum \limits_{j=1}^{k+1} \epsilon(m)^j + \lambda^{-1/4} \sum \limits_{j=0}^{k} \epsilon(m)^j \leq\\ &\leq \left( |\partial_\theta x^{(0)}_0| + \|\partial_\theta c\| + \lambda^{-1/4} \right) \cdot \epsilon(m) + \lambda^{-1/4}.\end{aligned}$$ By passing to a subsequence $\{s_{k'}\}$ of $\{s_k\}$ which satisfies $\theta_{s_{k'}} \to \theta_0$, and noting that $$\begin{aligned} \partial_\theta x_{s_{k'}} &= \partial_\theta \psi^{\beta}(\theta_{s_{k'}}) =\\ &= \partial_\theta \psi^{\beta}(\theta_0) + \partial_\theta^2 \psi^{\beta}(\theta_0) (\theta_{s_{k'}} - \theta_0) + o(\theta_{s_{k'}} - \theta_0) = \partial_\theta \psi^{\beta}(\theta_0) + o(1),\end{aligned}$$ as $k' \to \infty$, we obtain the inequality $$\begin{aligned} |\partial_\theta \psi^{\beta}(\theta_0)| \cdot (1 - \epsilon(m)) + o(1) \leq \left(\|\partial_\theta c\| + \lambda^{-1/4} \right) \cdot \epsilon(m) + \lambda^{-1/4},\end{aligned}$$ which we can write as $$\begin{aligned} |\partial_\theta \psi^{\beta}(\theta_0)| \leq \lambda^{-1/4} + \epsilon'(m),\end{aligned}$$ for some $\epsilon'(m)$ going to 0 as $m$ goes to infinity. The proof is exactly the same for $\partial_{\beta}\psi^{\beta}$. 
By \[CDerivativeAroundSecondPeak\], ${\beta}\lambda^{1/6} < \partial_\theta c_{\alpha_c, {\beta}= 1}(\theta) < {\beta}\lambda$ for every $\theta \in I_0 + \omega$. When $\theta \in J$, then $\psi^{\beta}(\theta) \in C$. Therefore $$\begin{aligned} \frac{3}{10} < \frac32 \cdot p(1/3 + 1/100) \leq p(\psi^{\beta}(\theta)) \leq 4p(1/3 + 1/100) < 95/100.\end{aligned}$$ Recall that $|\partial_\theta \psi^{\beta}(\theta)| < \lambda^{-1/4} + \epsilon(m)$, where $\epsilon(m) \to 0$ as $m \to \infty$. Since $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + \omega) = \left(\partial_\theta c(\theta) \right) \cdot p(\psi^{\beta}(\theta)) + c(\theta) \cdot p'(\psi^{\beta}(\theta)) \cdot \partial_\theta \psi^{\beta}(\theta),\end{aligned}$$ assuming that $\lambda$ is very large, we obtain after a straightforward computation that $$\begin{aligned} {\beta}\lambda^{1/7} < \partial_\theta \psi^{\beta}(\theta + \omega) < {\beta}\lambda.\end{aligned}$$ \[BigDerivative\] There is an $n_0 \geq 0$ such that, for every $n \geq n_0$, and every ${\beta}\in B_n \backslash B_{n-1}$ (sufficiently close to 1) $$\begin{aligned} {\beta}\lambda^{1/7} \leq \partial_\theta \psi^{\beta}(\theta) \leq {\beta}\lambda\end{aligned}$$ for every $\theta \in I_n + \omega$, assuming that $\lambda > 0$ is sufficiently large. Moreover $$\begin{aligned} \psi^{\beta}(I_n) \subseteq C.\end{aligned}$$ Let $0 \leq {\beta}< 1$ sufficiently close to 1 be given, and choose $J = I_{n}$, where $n = n({\beta})$. Now \[QuickReturnFromWorstToGood\] tells us that $$\begin{aligned} x_{2K_n + 20} \in C, \theta_{2K_n + 20} \in \Theta_{n-1},\end{aligned}$$ that is $\max \limits_{\theta_0 \in I_n} T(\theta_0) \leq 2K_n + 20$, where $K_n \ll \sqrt{N_n} \leq (N_n)^{3/4}$. Both statements now follow immediately from \[DerivativeBoundBeforePeak\]. 
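The derivative estimates above all rest on the variational recursion $\partial x_{k+1} = (\partial c)(\theta_k)\,p(x_k) + c(\theta_k)\,p'(x_k)\,\partial x_k$ obtained by differentiating the fibre map. A sketch propagating this recursion for $\partial = \partial_{\theta_0}$ alongside the orbit, using the same illustrative forcing as before (not the paper's $c$), so it can be checked against a finite difference:

```python
import math

OMEGA = (math.sqrt(5) - 1) / 2

def c(theta, beta, lam=10.0):
    # illustrative forcing with values in [3/2, 4); NOT the paper's c
    return 1.5 + 2.5 * beta * math.exp(-lam * math.sin(math.pi * theta) ** 2)

def dc(theta, beta, lam=10.0):
    # exact theta-derivative of the illustrative forcing above
    s = math.sin(math.pi * theta)
    return (-2.5 * beta * lam * math.pi
            * math.sin(2 * math.pi * theta) * math.exp(-lam * s * s))

def orbit_with_derivative(theta0, x0, beta, n):
    # Propagate (x_k, dx_k) with dx_k = d x_k / d theta_0 via
    #   dx_{k+1} = c'(theta_k) p(x_k) + c(theta_k) p'(x_k) dx_k,  dx_0 = 0,
    # where theta_k = theta_0 + k*omega, p(x) = x(1-x), p'(x) = 1 - 2x.
    th, x, dx = theta0, x0, 0.0
    for _ in range(n):
        x, dx = (c(th, beta) * x * (1 - x),
                 dc(th, beta) * x * (1 - x) + c(th, beta) * (1 - 2 * x) * dx)
        th = (th + OMEGA) % 1.0
    return x, dx
```

Comparing the returned `dx` against a central finite difference of the final `x` over `theta0` confirms the recursion.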
\[ParamDerivativeAtAlpha\] There is an $0 < \epsilon \leq 1$ such that, for every $1 - \epsilon \leq {\beta}< 1$, $$\begin{aligned} \frac1{3} \leq \partial_{\beta}\psi^{\beta}(\alpha_c) \leq \frac52,\end{aligned}$$ provided that $\lambda > 0$ is sufficiently large. Moreover, $$\begin{aligned} \lim \limits_{{\beta}\to 1^-} \psi^{\beta}(\alpha_c) = 1/2,\end{aligned}$$ and, as ${\beta}\to 1^-$, $$\begin{aligned} |\psi^{\beta}(\alpha_c) - 1/2| = O(1-{\beta}).\label{DistanceFromOneHalfForAlphaC}\end{aligned}$$ For ${\beta}$ sufficiently close to 1, \[BigDerivative\] implies that $\psi^{\beta}(I_n) \subseteq C$, and that $|\partial_{\beta}\psi^{\beta}(\theta)| < \lambda^{-1/4} + \epsilon(n)$ for $\theta \in I_n$, where $\epsilon(n) \to 0$ as $n \to \infty$. By invariance of $\psi^{\beta}$ under the map $\Phi_{\alpha_c, {\beta}}$, $$\begin{aligned} \partial_{\beta}\psi(\alpha_c) = \partial_{\beta}c(\alpha_c - \omega) p(\psi(\alpha_c - \omega)) + c(\alpha_c - \omega) p'(\psi(\alpha_c - \omega)) \partial_{\beta}\psi^{\beta}(\alpha_c - \omega).\end{aligned}$$ By definition of the set $\mathcal{A}_0 \ni \alpha_c$, $2\lambda^{-2/3} \leq 0 - (\alpha_c - \omega) \leq \lambda^{-2/5}/2$, which means that $$\begin{aligned} c(\alpha_c - \omega) - c(0) = \lambda^{-2/5}/2 \partial_\theta c(0) + o(\lambda^{-2/5}) = o(\lambda^{-2/5}),\end{aligned}$$ or that $c(\alpha_c - \omega) = \frac32 + {\beta}\frac52 + o(\lambda^{-2/5})$. This implies that $\partial_{\beta}c(\alpha_c - \omega) = \frac52 + o(\lambda^{-2/5})$. Therefore $$\begin{aligned} (\frac52 + o(\lambda^{-2/5})) (\frac13 - \frac1{100}) - \lambda^{-1/4} - \epsilon(n) \leq \partial_{\beta}\psi(\alpha_c) \leq (\frac52 + o(\lambda^{-2/5})) (\frac13 + \frac1{100}) + 4\lambda^{-1/4} + 4\epsilon(n),\end{aligned}$$ or $$\begin{aligned} \frac13 \leq \frac23 - o(\lambda^{-1/10}) - \epsilon(n) \leq \partial_{\beta}\psi(\alpha_c) \leq \frac54 + o(\lambda^{-1/10}) + 4\epsilon(n) \leq \frac52,\end{aligned}$$ if $n$ and $\lambda$ are sufficiently large. 
Suppose that $\theta_0 = \alpha_c - M_n \omega$, $x_0 \in C$. In [@BjerkSNA], it was proved that, if ${\beta}= 1$, then $$\begin{aligned} \lim \limits_{n \to \infty} x_{M_n} = 1/2.\end{aligned}$$ Letting $x_{M_n}({\beta})$ (a smooth function in ${\beta}$) be as above, but corresponding to a ${\beta}\in [0,1]$ sufficiently close to 1, we obtain uniform bounds on $$\begin{aligned} \partial_{\beta}x_{M_n}({\beta}).\end{aligned}$$ Since $$\begin{aligned} x_{M_n}({\beta}) - x_{M_n}(1) = \partial_{\beta}x_{M_n}(\tilde{{\beta}})({\beta}- 1),\end{aligned}$$ for some ${\beta}\leq \tilde{{\beta}} \leq 1$, we have, given any $\epsilon > 0$ and large enough $n \geq 0$, $$\begin{aligned} |x_{M_n}({\beta}) - 1/2| &= |(x_{M_n}({\beta}) - x_{M_n}(1)) + (x_{M_n}(1) - 1/2)| < 2\epsilon,\end{aligned}$$ uniformly in $n$, for ${\beta}$ sufficiently close to 1. From this, it follows that $$\begin{aligned} \lim \limits_{{\beta}\to 1^-} \psi^{\beta}(\alpha_c) = \lim \limits_{n \to \infty} \lim \limits_{{\beta}\to 1^-} x_{M_n}({\beta}) = 1/2.\end{aligned}$$ By the mean value theorem $$\begin{aligned} \psi^{\beta}(\alpha_c) = \lim \limits_{\tilde{{\beta}} \to 1^-} \psi^{\tilde{{\beta}}}(\alpha_c) + \partial_{\beta}\psi^{\tilde{{\beta}}}(\alpha_c)({\beta}- \tilde{{\beta}}) + o({\beta}- \tilde{{\beta}}) = 1/2 + O(1 - {\beta}),\end{aligned}$$ since $\frac1{3} \leq \partial_{\beta}\psi^{\beta}(\alpha_c) \leq \frac52$. Let $T_1({\beta}, \theta)$ be defined, for every $\theta \in I_0 + 3\omega$, as the smallest integer $0 \leq T_1({\beta}, \theta)$ such that $\psi^{\beta}(\theta + T_1({\beta}, \theta) \cdot \omega) \geq \frac1{100}$. By its very definition $\max \limits_{\theta \in \mathbb{T}} T_1({\beta}, \theta) \leq M_C({\beta})$, where $M_C({\beta})$ is the constant appearing in \[Bn\]. Hence, if $$\begin{aligned} 2K_{n-1} - 2 < \max \limits_{\theta \in I_0 + 3\omega} T_1({\beta}, \theta) \leq 2K_n - 2,\end{aligned}$$ then ${\beta}\in B_n \backslash B_{n-1}$. 
Set $$\begin{aligned} T_1({\beta}) = \max \limits_{\theta \in \mathbb{T}} T_1({\beta}, \theta).\end{aligned}$$ \[MinimumDistance\] Suppose that ${\beta}< 1$ is sufficiently close to 1, and that ${\beta}\in B_n \backslash B_{n-1}$, i.e. that $$\begin{aligned} 2K_{n-1} - 2 < T_1({\beta}) \leq 2K_n - 2.\end{aligned}$$ Then the minimum distance between the repelling set and the attractor is attained in $I_n + 3\omega$, and is asymptotically linear in ${\beta}$. Specifically, $$\begin{aligned} \delta({\beta}) = c_{{\beta}= 1}(\alpha_c + \omega) \cdot \frac58(1 - {\beta}) + o(1-{\beta})\label{MinimumValue}\end{aligned}$$ asymptotically as ${\beta}\to 1^-$. Moreover, $$\begin{aligned} \psi^{\beta}(\alpha_c) = \frac38 + {\beta}\frac58 + o(1-{\beta}).\label{PsiInAlphaC}\end{aligned}$$ If $\psi(\theta) \in (a, 1/10)$, where $0 \leq a < 1/10$, then $4 \psi(\theta) \geq \psi(\theta + \omega) \geq \frac54 \psi(\theta)$ (see \[AscentFromBottom\]), or $\psi(\theta + \omega) \in [\frac54 a, 99/100]$. Similarly, if $\psi(\theta) \in (9/10, b)$, where $9/10 < b \leq 1$, then $\psi(\theta + \omega) \in (\frac54(1-b), 99/100)$ (since $p(1 - x) = p(x)$). As long as $\theta \not\in I_0 \cup (I_0 + \omega)$, then $\psi(\theta) \in [1/100, 99/100]$ implies that $\psi(\theta + \omega) \in [1/100, 2/5] \subset [1/100, 99/100]$ (see \[CloseToCIfAwayFromPeak\]). One implication of this is that a value strictly greater than $99/100$ can never be attained for a $\theta \not\in (I_0 + \omega) \cup (I_0 + 2\omega)$. Another one is that, if a value strictly less than 1/100 is attained, the minimum has to be attained in the iteration immediately following a value greater than 99/100, i.e., for $\theta \in (I_0 + 2\omega) \cup (I_0 + 3\omega)$. This means that we only need to analyze $\psi^{\beta}(\theta)$ for $\theta \in (I_0 + \omega) \cup (I_0 + 2\omega) \cup (I_0 + 3\omega)$. 
We know that the part of $\psi^{\beta}$ lying below $1/100$ even in these intervals will rise with each iteration, meaning that the lowest part, the one closest to 0, must come from a previous value strictly greater than 99/100. Therefore, we are interested in seeing how far above 99/100 $\psi^{\beta}$ can get. By the above discussion, necessarily $\psi(\theta) \leq 2/5$ for $\theta \in I_0$, and so the theoretical maximum for $I_0 + \omega$ is $$\begin{aligned} \psi^{\beta}(\theta) \leq 4 p(2/5) = 24/25.\end{aligned}$$ The theoretical minimum coming from that is at least $1/25$. Thus, we turn to $I_0 + 2\omega$. By \[DistanceFromOneHalfForAlphaC\], $$\begin{aligned} |\psi^{\beta}(\alpha_c) - 1/2| = O(1-{\beta}).\end{aligned}$$ Therefore $$\begin{aligned} \psi^{\beta}(\alpha_c + \omega) = c(\alpha_c) p(1/2 + O(1-{\beta})) = \left(\frac32 + {\beta}\frac52 \right) \left(\frac14 + O((1-{\beta})^2) \right) = \frac38 + {\beta}\frac58 + o(1-{\beta}),\end{aligned}$$ and $$\begin{aligned} 1 - \psi^{\beta}(\alpha_c + \omega) = \frac58 (1-{\beta}) + o(1-{\beta}).\end{aligned}$$ Note that this maximum is, up to the error term $o(1-{\beta})$, equal to the theoretical maximum $c(\alpha_c)p(1/2)$. Therefore, the minimum is at most $$\begin{aligned} \psi^{\beta}(\alpha_c + 2\omega) = c_{\beta}(\alpha_c + \omega) p(\psi^{\beta}(\alpha_c + \omega)) \leq 4 (1 - \psi^{\beta}(\alpha_c + \omega)) \leq \frac52 (1-{\beta}) + o(1-{\beta}),\label{ValueAtAlphaCPlus2Omega}\end{aligned}$$ and, for $\theta \in I_n + 2\omega$, at least $$\begin{aligned} \psi^{\beta}(\theta + \omega) \geq \frac54 (1 - \psi^{\beta}(\theta)) \geq 1 - \psi^{\beta}(\theta) \geq \frac58 (1-{\beta}) \geq \frac12(1-{\beta}),\end{aligned}$$ for ${\beta}$ sufficiently close to 1. 
More specifically, we have $$\begin{aligned} \psi^{\beta}(\theta + \omega) = c_{\beta}(\theta)\psi^{\beta}(\theta)(1 - \psi^{\beta}(\theta)).\end{aligned}$$ There is some $\tilde{\theta}$ between $\theta$ and $\alpha_c$, such that $$\begin{aligned} \psi^{\beta}(\theta) = \psi^{\beta}(\alpha_c) + \partial_\theta \psi^{\beta}(\tilde{\theta})(\theta - \alpha_c).\end{aligned}$$ A quick Taylor expansion gives $$\begin{aligned} p(y) = p(x) + (1 - 2x)(y - x) - (y - x)^2.\end{aligned}$$ Since $c(\theta) = c(\alpha_c) + \partial^2_\theta c(\alpha_c) (\theta - \alpha_c)^2 + o( (\theta - \alpha_c)^2)$ for $\theta$ very close to $\alpha_c$, such as for $\theta \in I_n + \omega$, and $\psi^{\beta}(\alpha_c) = 1/2 - \partial_{\beta}\psi^{\tilde{{\beta}}}(\alpha_c)(1-{\beta})$ for some $\tilde{{\beta}}$ between 1 and ${\beta}$, this means that $$\begin{aligned} \psi^{\beta}(\theta + \omega) &= \left( c_{\beta}(\alpha_c) + o(\theta - \alpha_c) \right) \left( p(\psi^{\beta}(\alpha_c)) + \left( -2\partial_{\beta}\psi^{\tilde{{\beta}}}(\alpha_c)(1-{\beta}) \right) \partial_\theta \psi^{\beta}(\tilde{\theta})(\theta - \alpha_c) + o(\theta - \alpha_c) \right) =\\ &= \psi^{\beta}(\alpha_c + \omega) - A_2({\beta}, \theta) \cdot (\theta - \alpha_c),\end{aligned}$$ for some constant $A_2({\beta}, \theta) > \frac12 \lambda^{-1/7}$, since $\partial_\theta \psi^{\beta}(\tilde{\theta}) \geq {\beta}\lambda^{1/7}$ (see \[BigDerivative\]) and $\partial_{\beta}\psi^{\tilde{{\beta}}}(\alpha_c) \geq \frac13$ (see \[ParamDerivativeAtAlpha\]). 
Similarly, in the next iteration, we obtain $$\begin{aligned} \psi^{\beta}(\theta + 2\omega) &= c_{\beta}(\theta + \omega) \left( p(\psi^{\beta}(\alpha_c + \omega)) - (1 + O(1-{\beta}))(-A_2({\beta}, \theta) (\theta - \alpha_c) + o(\theta - \alpha_c)) \right) =\\ &= c_{\beta}(\theta + \omega) \left( p(\psi^{\beta}(\alpha_c + \omega)) + A_2({\beta}, \theta) (\theta - \alpha_c) + o(\theta - \alpha_c) + o(1-{\beta}) \right).\end{aligned}$$ Since $c_{\beta}(\theta + \omega) = c_{\beta}(\alpha_c + \omega) + A_3(\theta)(\theta - \alpha_c) + o(\theta - \alpha_c)$, where $|A_3(\theta)| = |\partial_\theta c_{\beta}(\alpha_c + \omega)| \leq \lambda^{-1/2}$ (see \[CDerivativeOutsideI0\]), this reduces to $$\begin{aligned} \psi^{\beta}(\theta + 2\omega) &= c_{\beta}(\alpha_c + \omega) \left( p(\psi^{\beta}(\alpha_c + \omega)) + A_2({\beta}, \theta) (\theta - \alpha_c) + o(\theta - \alpha_c) + o(1-{\beta}) \right) +\\ &+ A_3(\theta)(\theta - \alpha_c) \left( p(\psi^{\beta}(\alpha_c + \omega)) + A_2({\beta}, \theta) (\theta - \alpha_c) + o(\theta - \alpha_c) + o(1-{\beta}) \right) =\\ &= c_{\beta}(\alpha_c + \omega) p(\psi^{\beta}(\alpha_c + \omega)) + K_4({\beta}, \theta)(\theta - \alpha_c) + o(1-{\beta}),\end{aligned}$$ where $K_4({\beta},\theta) > 0$. This gives us immediately the asymptotic on the distance, since $p(\psi^{\beta}(\alpha_c + \omega)) = \frac58(1-{\beta}) + o(1-{\beta})$, as shown above. If we can prove that no point outside $I_n + 2\omega$ reaches as high as this, we are done. 
Recall \[TimeOfAscent\], stating that $$\begin{aligned} T_1({\beta}) \leq \max \limits_{\theta \in I_n} \log_{5/4} \frac1{20\psi(\theta + 3\omega)} \leq \log_{5/4} \frac1{10(1-{\beta})}.\end{aligned}$$ This of course means that $$\begin{aligned} 2K_{n-1} - 2 \leq T_1({\beta}) \leq \log_{5/4} \frac1{10(1-{\beta})}.\end{aligned}$$ By definition of $I_n$, $|I_n| = (4/5)^{K_{n-1}}$, or $$\begin{aligned} |I_n| = (4/5)^{K_{n-1}} \geq (4/5)^{T_1({\beta})/2 + 1} \geq \frac45 \sqrt{10(1-{\beta})} \geq 2 \sqrt{1-{\beta}}.\end{aligned}$$ Since $I_n$ is centred at $\alpha_c$, this means that $$\begin{aligned} [\alpha_c - \sqrt{\lambda^{1/2}(1-{\beta})}\lambda^{-1/4}, \alpha_c + \sqrt{\lambda^{1/2}(1-{\beta})}\lambda^{-1/4}] \subset I_n.\end{aligned}$$ Invoking \[AlphaPeakZoom\], we obtain the set inclusion $$\begin{aligned} \{ \theta \in I_0 + \omega : c_{\alpha_c, {\beta}}(\theta) \geq \left( \frac32 + {\beta}\frac52 \right)(1 - \lambda^{1/2}(1 - {\beta})) \} \subset I_n.\end{aligned}$$ Hence, the theoretical maximum attained for $\theta \in (I_0 \backslash I_n) + 2\omega$ is $$\begin{aligned} \psi^{\beta}(\theta) < \left( \frac32 + {\beta}\frac52 \right)(1 - \lambda^{1/2}(1 - {\beta})) p(1/2) = \left( \frac38 + {\beta}\frac58 \right)(1 - \lambda^{1/2}(1 - {\beta})),\end{aligned}$$ which is by the order of $1-{\beta}$ less than the maximum in $I_n$. Hence, the minimum for $\theta + \omega \in (I_0 \backslash I_n) + 3\omega$ satisfies $$\begin{aligned} \psi^{\beta}(\theta + \omega) \geq \frac54 (1 - \psi^{\beta}(\theta)) \geq \left( \frac58 + \frac38\lambda^{1/2} \right)(1 - {\beta}) > \psi^{\beta}(\alpha_c + 2\omega),\end{aligned}$$ which is bigger than the minimum attained in $I_n + 3\omega$. 
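The leading term in \[MinimumValue\] boils down to one line of arithmetic: at the peak, $\psi^{\beta}(\alpha_c + \omega) \approx c(\alpha_c)p(1/2)$ with $c(\alpha_c) = \frac32 + \frac52{\beta}$, so the gap to the potential maximum 1 is exactly $\frac58(1-{\beta})$ at leading order. A sketch checking this arithmetic (the $o(1-{\beta})$ corrections are dropped):

```python
def peak_value(beta):
    # Leading-order attractor height one step after the peak:
    # psi(alpha_c + omega) ~ c(alpha_c) * p(1/2), c(alpha_c) = 3/2 + (5/2) beta
    return (1.5 + 2.5 * beta) * 0.25  # p(1/2) = 1/4

def gap(beta):
    # distance from the "potential maximum" 1; equals (5/8)(1 - beta) exactly here
    return 1.0 - peak_value(beta)
```

Feeding this gap through one more iteration, $\psi^{\beta}(\alpha_c + 2\omega) = c_{\beta}(\alpha_c + \omega)\,p(\psi^{\beta}(\alpha_c + \omega))$, reproduces the linear law \[MinimumValue\] up to $o(1-{\beta})$.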
Asymptotic growth of the maximum derivative of the attractor ------------------------------------------------------------ The basic idea in this section is that the derivative in the interval $I_{n({\beta})} + \omega$, which is centered at $\alpha_c$ where 1/2 is almost attained, is large and approximately linear in ${\beta}$. In the next iteration, this means that this segment becomes approximately quadratic around the maximum point, which is almost at $\alpha_c + \omega$. The approximately quadratic shape around the minimum point (almost $\alpha_c + 2\omega$) is retained in the next iteration. The derivative at a point $\theta + 2\omega \in I_{n({\beta})} + 3\omega$ will be approximately proportional to $(\theta - \alpha_c)$, and the value $\psi^{\beta}(\theta + 2\omega)$ will be approximately $(1-{\beta}) + (\theta - \alpha_c)^2$. Expanding the derivative at $\theta + (2 + T)\omega$ as a recurrence relation (as we have done several times before), the dominant term as $T$ grows will behave like $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + 2\omega) \cdot \prod_{k=0}^{T} c(\theta_k) \cdot p'(x_k) \sim \frac{\partial_\theta \psi^{\beta}(\theta + 2\omega)}{\psi^{\beta}(\theta + 2\omega)} \sim \frac{\theta - \alpha_c}{(1-{\beta}) + (\theta - \alpha_c)^2},\end{aligned}$$ when $T = T_1({\beta}, \theta)$ (see \[DefinitionOfT1\] for the definition). In practice, we will work with a slightly enlarged set $J_{\beta}\supseteq I_{n({\beta})} + \omega$ which is centered at $\alpha_c$. This set will be of size $\gtrsim \sqrt{1-{\beta}}$. This allows us to choose $(\theta - \alpha_c) \sim \sqrt{1-{\beta}}$, which maximizes $$\begin{aligned} \frac{\theta - \alpha_c}{(1-{\beta}) + (\theta - \alpha_c)^2} \sim \frac1{\sqrt{1-{\beta}}}.\end{aligned}$$ The last step is showing that the derivative can’t grow much more. 
The worst case would be when we get close to the peak only a few iterations after $T_1({\beta}, \theta)$ (when we have come back to the contracting region), potentially causing the derivative to grow further. If this were to occur, we would only visit parts so far from the peaks that it wouldn’t have much effect on the derivative, since we would need a much longer time to get back to the “worst parts” of the peaks. We show this by considering two cases: - We just recently changed scales from some $I_m$ to $I_{m+1}$ (due to an increase in ${\beta}$). In this case, we show that actually we may work with $I_m$, as if it were the appropriate scale, having all the constants work to our advantage (which they wouldn’t have, had we been forced to work with $I_{m+1}$). - We changed scales a long time ago, meaning that $\frac1{\sqrt{1-{\beta}}}$ is large enough to withstand the relatively small products coming from having come close to the peak, even the ones using the estimates that were inappropriate in the former case. This last bit is the content of \[DerivativeGrowth\], the main result in this section. \[DerivativeGrowsDuringExpansion\] There is a constant $K > 0$ such that if $|\partial_\theta x_0| \geq K$ and $x_0 \leq \frac1{100}$, then for any $0 \leq {\beta}< 1$ $$\begin{aligned} |\partial_\theta x_1| > |\partial_\theta x_0|.\end{aligned}$$ Since $x_0 \leq 1/100$, $c_{\beta}(\theta_0) p'(x_0) \geq \frac32 (1 - \frac2{100}) \geq \frac54$. Now $$\begin{aligned} |\partial_\theta x_1| &= | (\partial_\theta c_{\beta}(\theta_0)) \cdot p(x_0) + c_{\beta}(\theta_0) \cdot p'(x_0) \cdot \partial_\theta x_0| \geq\\ &\geq |c_{\beta}(\theta_0) \cdot p'(x_0) \cdot \partial_\theta x_0| - |\partial_\theta c_{\beta}(\theta_0) \cdot p(x_0)| \geq\\ &\geq \frac54 \cdot |\partial_\theta x_0| - |\partial_\theta c_{\beta}(\theta_0)|.\end{aligned}$$ If $|\partial_\theta x_0| \geq K := 1 + 5 \sup_{\theta, {\beta}} |\partial_\theta c_{\beta}(\theta)|$, then $\frac14 |\partial_\theta x_0| > |\partial_\theta c_{\beta}(\theta_0)|$, so $\frac54 |\partial_\theta x_0| - |\partial_\theta c_{\beta}(\theta_0)| > |\partial_\theta x_0|$, and the conclusion follows.
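A one-step numerical illustration of \[DerivativeGrowsDuringExpansion\] follows. It is a sketch only: the fibre map is taken to be the logistic $p(x) = x(1-x)$ (consistent with $p(1/2) = 1/4$ and $p'(x) = 1 - 2x$ used throughout), and the values of $c_{\beta}(\theta_0)$ and $|\partial_\theta c_{\beta}|$ are invented for the example:

```python
# Hypothetical instance of the lemma: p(x) = x(1 - x) is assumed, and the
# constants c and dc stand in for c_beta(theta_0) >= 3/2 and a bound on
# |d/dtheta c_beta|; none of these values come from the paper itself.
def p(x):
    return x * (1.0 - x)

def dp(x):
    return 1.0 - 2.0 * x

c = 1.5            # lower bound for c_beta(theta_0) used in the proof
dc = 1.0           # assumed bound on |d/dtheta c_beta|
x0 = 0.005         # x0 <= 1/100, i.e. deep in the expanding region
dx0 = 10.0         # |d/dtheta x0| >= K, with K large enough

# one step of the derivative recurrence:
#   d/dtheta x1 = (d/dtheta c) p(x0) + c p'(x0) (d/dtheta x0)
dx1 = dc * p(x0) + c * dp(x0) * dx0
assert abs(dx1) > abs(dx0)     # the derivative has grown, as the lemma asserts
```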
Recall that we defined $T_1({\beta}, \theta)$, for $\theta \in I_{n({\beta})} + 3\omega$, as the smallest integer $T_1({\beta}, \theta) \geq 0$ such that $\psi^{\beta}(\theta + T_1({\beta}, \theta) \cdot \omega) \geq \frac1{100}$. Set $$\begin{aligned} T_1({\beta}) = \max \limits_{\theta \in I_{n({\beta})} + 3\omega} T_1({\beta}, \theta).\label{DefinitionOfT1}\end{aligned}$$ \[MayChooseSquareRoot\] When ${\beta}< 1$ is sufficiently close to 1, the following holds: If $2K_{n-1} - 2 < T_1({\beta}) \leq 2K_n - 2$, then there is an interval $J_{\beta}\subseteq I_n + 2\omega$, centered at the point $\alpha_c$, satisfying $$\begin{aligned} {\beta}\lambda^{1/7} \leq \partial_\theta \psi^{\beta}(\theta) \leq {\beta}\lambda,\label{JDerivativeInequality}\end{aligned}$$ for every $\theta \in J_{\beta}$, and $$\begin{aligned} |J_{\beta}| \geq \frac45 (\sqrt{1 - {\beta}})^{1/\eta},\end{aligned}$$ where $\eta = \frac{T_1({\beta})}{2K_{n-1} - 2} > 1$. By \[TimeOfAscent\], $$\begin{aligned} T_1({\beta}) = \max \limits_{\theta \in I_{n({\beta})} + 3\omega} \log_{5/4} \frac1{20\psi^{\beta}(\theta)}.\end{aligned}$$ Now, \[MinimumValue\] implies that $$\begin{aligned} \min \limits_{\theta \in I_{n({\beta})} + 3\omega} \psi^{\beta}(\theta) \geq \frac32 \cdot \frac58(1-{\beta}) + o(1-{\beta}) \geq \frac12(1-{\beta}).\end{aligned}$$ Therefore $$\begin{aligned} T_1({\beta}) \leq \log_{5/4} \frac1{10(1-{\beta})}.\end{aligned}$$ This bound implies that any such $J_{\beta}$ can include at least the interval $I_n$, which is centered at $\alpha_c$. Now, recalling that $T_1({\beta}) = \eta (2K_{n-1} - 2)$, or $K_{n-1} = \frac{T_1}{2\eta} + 1$, we get $$\begin{aligned} |I_n| = (4/5)^{K_{n-1}} = (4/5)^{T_1({\beta})/(2\eta) + 1} \geq (4/5) \cdot (\sqrt{10 (1-{\beta})})^{1/\eta} \geq (4/5) \cdot (\sqrt{1-{\beta}})^{1/\eta}.\end{aligned}$$ Hence $J_{\beta}= I_n$ satisfies the conclusions.
From this point on, let $J_{\beta}$ denote the largest interval centered at $\alpha_c$, and satisfying the conclusion in \[MayChooseSquareRoot\]. Suppose that $0 \leq {\beta}< 1$, and $n = n({\beta})$. If $T_1({\beta}) \leq K_{n-1}^{3/2}(2K_{n-1} - 2)$, then $$\begin{aligned} J_{\beta}\supseteq I_{n-1} + \omega.\end{aligned}$$ By our assumptions on $T_1({\beta})$, $$\begin{aligned} T_1({\beta}) \leq K_{n-1}^{3/2}(2K_{n-1} - 2) \ll K_{n-1}^3 \sim (N_{n-1})^{3/4}.\end{aligned}$$ Applying \[DerivativeBoundBeforePeak\] to the set $J = I_{n-1}$, the statement follows, since every $T_1({\beta}, \theta) \leq T_1({\beta}) \leq (N_{n-1})^{3/4}$. Since $I_{n-1}$ is centered at $\alpha_c$, the set $I_{n-1}$ satisfies the conclusions in \[MayChooseSquareRoot\]. We recall that $J_{\beta}+ \omega$ is where the maximum of the graph is located, and $J_{\beta}+ 2\omega$ will be the location of the minimum. \[DerivativeAfterPeak\] Assume that $0 \leq {\beta}< 1$ is sufficiently close to 1. Then there are numbers $\frac1K < A_1({\beta}, \theta) < K, \frac1K < A_2({\beta},\theta) < K$ (where $K > 0$), depending only on ${\beta}$ and $\theta$, such that, for every $\theta \in J_{\beta}$, - $\partial_\theta \psi^{\beta}(\theta + \omega) = -A_1({\beta},\theta) \cdot (\theta - \alpha_c) + O(1-{\beta})$, and - $\partial_\theta \psi^{\beta}(\theta + 2\omega) = A_2({\beta},\theta) \cdot (\theta - \alpha_c) + O(1-{\beta})$. Throughout this entire proof, we will make use of the previous result that $\psi^{\beta}(\alpha_c) = 1/2 + O(1-{\beta})$ (see \[DistanceFromOneHalfForAlphaC\]). Let $\theta +\omega \in J_{\beta}+ \omega$ be arbitrary. 
We have the usual recurrence relation $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + \omega) &= \partial_\theta c_{\beta}(\theta) \cdot p(\psi^{\beta}(\theta)) + c_{\beta}(\theta) \cdot p'(\psi^{\beta}(\theta)) \cdot \partial_\theta \psi^{\beta}(\theta).\end{aligned}$$ We will analyze each term in detail, starting with $$\begin{aligned} \partial_\theta c_{\beta}(\theta) &= \partial_\theta c_{\beta}(\alpha_c) + \partial_\theta^2 c_{\beta}(\alpha_c) (\theta - \alpha_c) + o(\theta - \alpha_c) =\\ &= \partial_\theta^2 c_{\beta}(\alpha_c) (\theta - \alpha_c) + o(\theta - \alpha_c),\end{aligned}$$ where we used that $\partial_\theta c_{\beta}(\alpha_c) = 0$ and $\partial_\theta^2 c_{\beta}(\alpha_c) \leq 0$, since $\alpha_c$ is a local maximum for $c_{\beta}$. Next, $$\begin{aligned} p(\psi^{\beta}(\theta)) &= p(\psi^{\beta}(\alpha_c)) + p'(\psi^{\beta}(\alpha_c)) \partial_\theta \psi^{\beta}(\alpha_c) (\theta - \alpha_c) + o(\theta - \alpha_c) =\\ &= 1/4 + o(1-{\beta}) + O(1-{\beta})O(\theta - \alpha_c) + o(\theta - \alpha_c) = 1/4 + o(1-{\beta}) + o(\theta - \alpha_c),\end{aligned}$$ since $\theta - \alpha_c = o(1)$ as ${\beta}\to 1^-$ (they lie in successively smaller intervals $I_{n({\beta})}$).
Putting this together, the effect of the first term is $$\begin{aligned} \partial_\theta c_{\beta}(\theta) p(\psi^{\beta}(\theta)) = (1/4) \partial_\theta^2 c_{\beta}(\alpha_c) (\theta - \alpha_c) + o(1-{\beta}) + o(\theta - \alpha_c).\end{aligned}$$ The second term can be similarly analyzed, starting with $$\begin{aligned} p'(\psi^{\beta}(\theta)) &= p'(\psi^{\beta}(\alpha_c) + \partial_\theta \psi^{\beta}(\alpha_c)(\theta - \alpha_c) + o(\theta - \alpha_c)) =\\ &= 1 - 2\left(1/2 + O(1-{\beta}) + \partial_\theta \psi^{\beta}(\alpha_c)(\theta - \alpha_c) + o(\theta - \alpha_c)\right) =\\ &= O(1-{\beta}) - 2\partial_\theta \psi^{\beta}(\alpha_c)(\theta - \alpha_c) + o(\theta - \alpha_c).\end{aligned}$$ Therefore $$\begin{aligned} c_{\beta}(\theta) \cdot p'(\psi^{\beta}(\theta)) \cdot \partial_\theta \psi^{\beta}(\theta) = - 2\partial_\theta \psi^{\beta}(\alpha_c)\partial_\theta \psi^{\beta}(\theta)(\theta - \alpha_c) + O(1-{\beta}) + o(\theta - \alpha_c).\end{aligned}$$ We thus obtain the equality $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + \omega) = (1/4) \partial_\theta^2 c_{\beta}(\alpha_c) (\theta - \alpha_c) - 2\partial_\theta \psi^{\beta}(\alpha_c)\partial_\theta \psi^{\beta}(\theta)(\theta - \alpha_c) + O(1-{\beta}) + o(\theta - \alpha_c),\end{aligned}$$ or, recalling that $\partial_\theta^2 c_{\beta}(\alpha_c) \leq 0$ and $\partial_\theta \psi^{\beta}(\alpha_c)\partial_\theta \psi^{\beta}(\theta) > {\beta}^2(\lambda^{1/7})^2$, the bounds $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + \omega) = -A_1({\beta}, \theta) (\theta - \alpha_c) + O(1-{\beta}),\end{aligned}$$ where $\frac1K < A_1({\beta}, \theta) < K$, for some $K > 0$, as ${\beta}\to 1^-$.
In the next iteration, for $\theta + 2\omega \in J_{\beta}+ 2\omega$, we have $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + 2\omega) &= \partial_\theta c_{\beta}(\theta + \omega) \cdot p(\psi^{\beta}(\theta + \omega)) + c_{\beta}(\theta + \omega) \cdot p'(\psi^{\beta}(\theta + \omega)) \cdot \partial_\theta \psi^{\beta}(\theta + \omega).\end{aligned}$$ The first term is $O(1-{\beta}) + o(\alpha_c - \theta)$, since $$\begin{aligned} p(\psi^{\beta}(\theta + \omega)) &= p(\psi^{\beta}(\alpha_c + \omega)) + p'(\psi^{\beta}(\alpha_c + \omega)) \partial_\theta \psi^{\beta}(\alpha_c + \omega) (\theta - \alpha_c) + o(\theta - \alpha_c) =\\ &= O(1-{\beta}) + (O(\theta - \alpha_c) + O(1-{\beta}))(\theta - \alpha_c) + o(\theta - \alpha_c).\end{aligned}$$ For the second term, note that $$\begin{aligned} p'(\psi^{\beta}(\theta + \omega)) &= p'(\psi^{\beta}(\alpha_c + \omega) + \partial_\theta \psi^{\beta}(\alpha_c + \omega)(\theta - \alpha_c) + o(\theta - \alpha_c)) =\\ &= 1 - 2\left(\psi^{\beta}(\alpha_c + \omega) + \partial_\theta \psi^{\beta}(\alpha_c + \omega)(\theta - \alpha_c) + o(\theta - \alpha_c)\right) =\\ &= -\psi^{\beta}(\alpha_c + \omega) + O(1-{\beta}) + O(\theta - \alpha_c) =\\ &= -\left( \frac38 + {\beta}\frac58 \right) + O(1-{\beta}) + O(\theta - \alpha_c),\end{aligned}$$ resulting in (note that the two minus signs cancel), by the previous estimate of $\partial_\theta \psi^{\beta}(\theta + \omega)$, $$\begin{aligned} \partial_\theta \psi^{\beta}(\theta + 2\omega) &= c(\theta + \omega) \cdot \left( -\left( \frac38 + {\beta}\frac58 \right) \right) \cdot \left( -A_1({\beta}, \theta) \right) (\theta - \alpha_c) + O(1-{\beta}) + o(\theta - \alpha_c) =\\ &= c(\theta + \omega) \left( \frac38 + {\beta}\frac58 \right) A_1({\beta}, \theta) (\theta - \alpha_c) + O(1-{\beta}) + o(\theta - \alpha_c) =\\ &= A_2({\beta}, \theta) (\theta - \alpha_c) + O(1-{\beta}),\end{aligned}$$ where $\frac1K < A_2({\beta}, \theta) < K$ (for some $K > 0$).
The lemma below says that the attracting curve is approximately quadratic around $\theta_{\max} + \omega$ (approximately where the global minimum is located). If we could control the higher derivatives sufficiently well, the proof would be straightforward. \[DifferenceInValuesAfterPeak\] Suppose that $0 \leq {\beta}< 1$ is sufficiently close to 1. Then there is a number $\frac1K < A_3({\beta}, \theta) < K$ (where $K > 0$), depending only on ${\beta}$ and $\theta$, such that $$\begin{aligned} \psi^{\beta}(\theta + 2\omega) - \psi^{\beta}(\alpha_c + 2\omega) = A_3({\beta}, \theta)(\alpha_c - \theta)^2 + o(1-{\beta}),\end{aligned}$$ for every $\theta + 2\omega \in J_{\beta}+ 2\omega$ (the difference is nonnegative to leading order, since the minimum is attained near $\alpha_c + 2\omega$). We remind ourselves that $\psi^{\beta}(\alpha_c + \omega) = \frac38 + {\beta}\frac58 + o(1-{\beta})$ (see \[PsiInAlphaC\]), and therefore $1 - \psi^{\beta}(\alpha_c + \omega) = \frac58(1 - {\beta}) + o(1-{\beta}) = O(1-{\beta})$. We also remind ourselves that $\psi^{\beta}(\alpha_c) = 1/2 + O(1-{\beta})$ (see \[DistanceFromOneHalfForAlphaC\]). We begin by analyzing the differences $$\begin{aligned} \psi^{\beta}(\alpha_c + \omega) - \psi^{\beta}(\theta + \omega),\end{aligned}$$ where $\theta + \omega \in J_{\beta}+ \omega$.
Now $$\begin{aligned} \psi^{\beta}(\alpha_c + \omega) - \psi^{\beta}(\theta + \omega) &= c_{\beta}(\alpha_c) p(\psi^{\beta}(\alpha_c)) - c_{\beta}(\theta) p(\psi^{\beta}(\theta)) =\\ &= c_{\beta}(\theta) \left( p(\psi^{\beta}(\alpha_c)) - p(\psi^{\beta}(\theta)) \right) +\\ &+ \left( c_{\beta}(\alpha_c) - c_{\beta}(\theta) \right) p(\psi^{\beta}(\alpha_c)).\end{aligned}$$ We know that $$\begin{aligned} \psi^{\beta}(\alpha_c) - \psi^{\beta}(\theta) &= \partial_\theta \psi^{\beta}(\alpha_c) (\alpha_c - \theta) + o(\theta - \alpha_c).\end{aligned}$$ Since $p$ is quadratic, expanding around $y$ gives the exact identity $$\begin{aligned} p(y) - p(x) = (1 - 2y)(y - x) + (y - x)^2.\end{aligned}$$ Now, $$\begin{aligned} p(\psi^{\beta}(\alpha_c)) - p(\psi^{\beta}(\theta)) &= (1 - 2\psi^{\beta}(\alpha_c)) \partial_\theta \psi^{\beta}(\alpha_c) (\alpha_c - \theta) + \left( \partial_\theta \psi^{\beta}(\alpha_c) \right)^2 (\alpha_c - \theta)^2 + o((\alpha_c - \theta)^2) =\\ &= o(1-{\beta}) + \left( \partial_\theta \psi^{\beta}(\alpha_c) \right)^2 (\alpha_c - \theta)^2 + o((\alpha_c - \theta)^2),\end{aligned}$$ since $(1 - 2\psi^{\beta}(\alpha_c)) = 1 - 2 \cdot (1/2 + O(1-{\beta})) = O(1-{\beta})$, and $(\theta - \alpha_c) = o(1)$ as ${\beta}\to 1^-$ (the interval $I_{n({\beta})}$ shrinks).
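Because $p$ is quadratic, second-order expansions of $p(y) - p(x)$ carry no remainder. The check below assumes $p(x) = x(1-x)$ (consistent with $p(1/2) = 1/4$ and $p'(x) = 1 - 2x$ used in this section) and verifies the exact expansions around both endpoints:

```python
import random

def p(x):
    # assumed form of the fibre map, consistent with p(1/2) = 1/4
    # and p'(x) = 1 - 2x used throughout this section
    return x * (1.0 - x)

rng = random.Random(0)
for _ in range(1000):
    x, y = rng.random(), rng.random()
    lhs = p(y) - p(x)
    around_x = (1 - 2 * x) * (y - x) - (y - x) ** 2   # expansion around x
    around_y = (1 - 2 * y) * (y - x) + (y - x) ** 2   # expansion around y
    assert abs(lhs - around_x) < 1e-12
    assert abs(lhs - around_y) < 1e-12
```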
Hence, the first term is $$\begin{aligned} c_{\beta}(\theta) \left( p(\psi^{\beta}(\alpha_c)) - p(\psi^{\beta}(\theta)) \right) = c_{\beta}(\theta) \left( \partial_\theta \psi^{\beta}(\alpha_c) \right)^2 (\alpha_c - \theta)^2 + o((\alpha_c - \theta)^2) + o(1-{\beta}).\end{aligned}$$ Taylor series expansions around $\alpha_c$ yield, since $\partial_\theta c_{\beta}(\alpha_c) = 0$, $$\begin{aligned} c_{\beta}(\alpha_c) - c_{\beta}(\theta) &= - \left( \partial_\theta c_{\beta}(\alpha_c) (\theta - \alpha_c) + \frac12 \partial^2_\theta c_{\beta}(\alpha_c) (\theta - \alpha_c)^2 + o((\theta - \alpha_c)^2) \right) =\\ &= -\frac12 \partial^2_\theta c_{\beta}(\alpha_c) (\theta - \alpha_c)^2 + o((\theta - \alpha_c)^2),\end{aligned}$$ where for some constant $K \geq 0$, $0 \leq -\partial^2_\theta c_{\beta}(\alpha_c) \leq K$ for all $0 \leq {\beta}< 1$, since $c_{\beta}(\theta)$ has a local maximum at $\alpha_c$. Therefore, the total effect is $$\begin{aligned} \psi^{\beta}(\alpha_c + \omega) - \psi^{\beta}(\theta + \omega) = K(\theta, {\beta})(\alpha_c - \theta)^2 + o(1-{\beta})\end{aligned}$$ where $K(\theta, {\beta}) = c_{\beta}(\theta) \left( \partial_\theta \psi^{\beta}(\alpha_c) \right)^2 - \frac12 \partial^2_\theta c_{\beta}(\alpha_c)\, p(\psi^{\beta}(\alpha_c))$ satisfies $\frac1K < K(\theta, {\beta}) < K$ (see \[BigDerivative\]) for some $K > 0$.
Turning to the next iteration (the one we are interested in), where $\theta + 2\omega \in J_{\beta}+ 2\omega$, we have $$\begin{aligned} \psi^{\beta}(\alpha_c + 2\omega) - \psi^{\beta}(\theta + 2\omega) &= c_{\beta}(\theta + \omega) \left( p(\psi^{\beta}(\alpha_c + \omega)) - p(\psi^{\beta}(\theta + \omega)) \right) +\\ &+ \left( c_{\beta}(\alpha_c + \omega) - c_{\beta}(\theta + \omega) \right) p(\psi^{\beta}(\alpha_c + \omega)).\end{aligned}$$ As before $$\begin{aligned} \psi^{\beta}(\alpha_c + \omega) - \psi^{\beta}(\theta + \omega) &= \partial_\theta \psi^{\beta}(\theta + \omega) (\alpha_c - \theta) + o(\theta - \alpha_c),\end{aligned}$$ where $\partial_\theta \psi^{\beta}(\theta + \omega) = -A_1({\beta},\theta) \cdot (\theta - \alpha_c) + O(1-{\beta})$ and $\frac1K < A_1({\beta}, \theta) < K$ for some $K > 0$, and $$\begin{aligned} p(\psi^{\beta}(\alpha_c + \omega)) - p(\psi^{\beta}(\theta + \omega)) &= \left( 1 - 2\psi^{\beta}(\alpha_c + \omega) \right) \partial_\theta \psi^{\beta}(\theta + \omega) (\alpha_c - \theta) +\\ &+ \left( \partial_\theta \psi^{\beta}(\theta + \omega) \right)^2 (\alpha_c - \theta)^2 + o((\alpha_c - \theta)^2) =\\ &= \left( O(1-{\beta}) - (\frac38 + {\beta}\frac58) \right) \partial_\theta \psi^{\beta}(\theta + \omega) (\alpha_c - \theta) + o((\alpha_c - \theta)^2) =\\ &= -(\frac38 + {\beta}\frac58) A_1({\beta},\theta) \cdot (\theta - \alpha_c)^2 + o(1-{\beta}) + o((\alpha_c - \theta)^2).\end{aligned}$$ The first term is therefore equal to $$\begin{aligned} - A_3({\beta}, \theta)(\alpha_c - \theta)^2 + o(1-{\beta}),\end{aligned}$$ for some $\frac1K < A_3({\beta}, \theta) < K$ (for some $K > 0$), as we have shown above. The next term satisfies that $c_{\beta}(\alpha_c + \omega) - c_{\beta}(\theta + \omega) = O(\alpha_c - \theta)$ and $p(\psi^{\beta}(\alpha_c + \omega)) = O(1-{\beta})$.
Therefore $$\begin{aligned} \psi^{\beta}(\alpha_c + 2\omega) - \psi^{\beta}(\theta + 2\omega) &= -A_3(\theta, {\beta})(\alpha_c - \theta)^2 + o((\alpha_c - \theta)(1-{\beta})) + o(1-{\beta}),\end{aligned}$$ or, since $\alpha_c - \theta = o(1)$ as ${\beta}\to 1^-$ (they belong to increasingly smaller intervals $I_{n({\beta})}$), $$\begin{aligned} \psi^{\beta}(\theta + 2\omega) - \psi^{\beta}(\alpha_c + 2\omega)&= A_3({\beta}, \theta)(\alpha_c - \theta)^2 + o(1-{\beta}),\end{aligned}$$ where $\frac1K < A_3({\beta}, \theta) < K$, as above. For $\theta \in J_{\beta}+ 2\omega$, we have that, for ${\beta}< 1$ sufficiently close to 1 $$\begin{aligned} \max_{\theta \in \{\theta + (3 + k)\omega : \theta \in J_{\beta}, 0 \leq k \leq T_1({\beta}, \theta)\} } |\partial_\theta \psi^{\beta}(\theta)| = \max_{\theta \in \{\theta + (3 + T_1({\beta}, \theta)) \cdot \omega : \theta \in J_{\beta}\} } |\partial_\theta \psi^{\beta}(\theta)|.\end{aligned}$$ and asymptotically, there is a constant $K > 0$, such that $$\begin{aligned} \frac1K \cdot \frac{1}{\sqrt{1-{\beta}}} \leq \max_{\theta \in \{\theta + (3 + T_1({\beta}, \theta)) \cdot \omega : \theta \in J_{\beta}\} } |\partial_\theta \psi^{\beta}(\theta)| \leq K \cdot \frac{1}{\sqrt{1-{\beta}}},\label{DerivativeEstimateAtRecoveryPoint}\end{aligned}$$ as ${\beta}\to 1^-$. Let $\theta_0 \in J_{\beta}+ 2\omega \supseteq I_n + 3\omega$, and set $x_0 = \psi^{\beta}(\theta_0)$. By \[DerivativeAfterPeak\] $$\begin{aligned} \partial_\theta x_0 = A_2({\beta}, \theta) (\theta - \alpha_c) + O(1-{\beta}),\end{aligned}$$ and by \[DifferenceInValuesAfterPeak\] $$\begin{aligned} x_0 = \psi^{\beta}(\alpha_c + 2\omega) + A_3({\beta}, \theta) (\alpha_c - \theta)^2 + o(1-{\beta}).\end{aligned}$$ Since $\psi^{\beta}(\alpha_c + 2\omega) = K({\beta}) (1-{\beta})$, where $\frac1K < K({\beta}) < K$ (for some $K > 0$), this gives us $$\begin{aligned} x_0 = K({\beta})(1-{\beta}) + A_3({\beta}, \theta)(\alpha_c - \theta)^2 + o(1-{\beta}).\end{aligned}$$ Let $\theta - \alpha_c = L \cdot \sqrt{1-{\beta}}$.
By \[MayChooseSquareRoot\], it is possible to choose $L$ close to 1. Thus, we have $$\begin{aligned} \partial_\theta x_0 = L \cdot A_2({\beta}, \theta) \sqrt{1 - {\beta}} + o(\sqrt{1-{\beta}}),\end{aligned}$$ since $O(1-{\beta}) = o(\sqrt{1-{\beta}})$, and $$\begin{aligned} x_0 = K({\beta})(1-{\beta}) + A_3({\beta}, \theta) \cdot L^2 \cdot (1-{\beta}) + o(\sqrt{1-{\beta}}).\end{aligned}$$ Now, by \[EachIterationAtBottomAlmostLikeDerivative\], there are constants $0 < D_1 \leq D_2$ such that $$\begin{aligned} D_1 \cdot \frac1{x_0} \leq \prod_{k=0}^{T_1({\beta}, \theta_0)} c(\theta_k) \cdot p'(x_k) \leq D_2 \cdot \frac1{x_0}.\end{aligned}$$ Hence, suppressing the dependence on parameters in the notation of $K, A_2, A_3$, $$\begin{aligned} D_1\frac{L \cdot A_2 + \epsilon({\beta})}{K + A_3 \cdot L^2 + \epsilon({\beta})} \cdot \frac{\sqrt{1-{\beta}}}{1-{\beta}} \leq |\partial_\theta x_0 \cdot \prod_{k=0}^{T_1({\beta}, \theta_0)} c(\theta_k) \cdot p'(x_k)| \leq D_2\frac{L \cdot A_2+ \epsilon({\beta})}{K + A_3 \cdot L^2 + \epsilon({\beta})} \cdot \frac{\sqrt{1-{\beta}}}{1-{\beta}},\end{aligned}$$ where $\epsilon({\beta}) \to 0$ as ${\beta}\to 1^-$. If $L$ is very big, then $L^2$ would dominate the denominator, and we would have $$\begin{aligned} \frac{L \cdot A_2 + \epsilon}{K + A_3 \cdot L^2 + \epsilon} \sim \frac1L.\end{aligned}$$ If $L$ is very small, then $K$ would dominate the denominator, and we would have $$\begin{aligned} \frac{L \cdot A_2 + \epsilon}{K + A_3 \cdot L^2 + \epsilon} \sim L.\end{aligned}$$ Hence, the maximum is obtained by choosing $L \sim 1$. By \[SumProductsSmallAfterPeak\], $$\begin{aligned} \sum \limits_{k=0}^{N-1} \partial_\theta c(\theta_k) \cdot p(x_k) \cdot \prod \limits_{j=k+1}^{N} c(\theta_j) \cdot p'(x_j) = o(x_0^\gamma) = o((1-{\beta})^\gamma),\end{aligned}$$ for every $\gamma < 0$.
Hence, the derivative satisfies $$\begin{aligned} const + const_1 \frac{1}{\sqrt{1-{\beta}}} + o(1-{\beta}) \leq |\partial_\theta x_{T_1({\beta}, \theta_0)}| \leq const + const_2 \frac{1}{\sqrt{1-{\beta}}} + o(1-{\beta}).\end{aligned}$$ Once the derivative has grown to a certain point, it will grow monotonically (see \[DerivativeGrowsDuringExpansion\]). Therefore, as ${\beta}$ gets closer to 1, the derivative must grow past this point, and the maximum would be attained for $|\partial_\theta x_{T_1({\beta}, \theta_0)}|$. This is a good time to remind ourselves that the integers $N_n$ satisfy $\theta_0 \in I_n \Rightarrow \theta_i \not\in I_n$ for $0 < i < N_n$. \[DerivativeGrowth\] Suppose that $0 \leq {\beta}< 1$. Asymptotically, there is a constant $K > 0$, such that $$\begin{aligned} \frac1K \cdot \frac{1}{\sqrt{1-{\beta}}} \leq \max \limits_{\theta \in \mathbb{T}} |\partial_\theta \psi^{\beta}(\theta)| \leq K \cdot \frac{1}{\sqrt{1-{\beta}}},\end{aligned}$$ as ${\beta}\to 1^-$. Let $0 \leq {\beta}< 1$ be given, and set $n = n({\beta})$, $J = J_{\beta}$. Recall the definition of $T_1({\beta})$ given in \[DefinitionOfT1\]. Suppose that $2K_{n-1} - 2 < T_1({\beta}) \leq (K_{n-1})^{3/2}(2K_{n-1} - 2)$. Then $T_1({\beta}) \ll K_{n-1}^3 \sim (N_{n-1})^{3/4}$, and \[BigDerivative\] implies that $I_{n-1} \subseteq J$. In this case, set $m = n - 2$, to get $$\begin{aligned} K_m^{5/2} \ll K_{m+1} < T_1({\beta}) \ll K_{m+1}^3 \sim (N_{m+1})^{3/4}.\end{aligned}$$ Otherwise, if $(K_{n-1})^{3/2}(2K_{n-1} - 2) < T_1({\beta}) \leq 2K_n - 2$, set $m = n - 1$. By our choice of $m$ $$\begin{aligned} K_m^{5/2} < T_1({\beta}) \ll K_{m+1}^3 \sim (N_{m+1})^{3/4}.\label{T1Bounds}\end{aligned}$$ Let $\{J + k\omega\}_{k = 0}^M$ be a minimal (in the sense that $M > 0$ is the smallest possible) cover of $\mathbb{T}$.
We know that $$\begin{aligned} \max_{\theta \in \{\theta + (3 + k)\omega : \theta \in J, 0 \leq k \leq T_1({\beta}, \theta)\} } |\partial_\theta \psi^{\beta}(\theta)| = \max_{\theta \in \{\theta + (3 + T_1({\beta}, \theta)) \omega : \theta \in J\} } |\partial_\theta \psi^{\beta}(\theta)|.\end{aligned}$$ Therefore, the part of the cover over which we have no control so far is $$\begin{aligned} \{\theta + (3 + T_1({\beta}, \theta) + k) \omega : \theta \in J, 1 \leq k \leq M - (3 + T_1({\beta}, \theta)) \}.\end{aligned}$$ Pick $\theta_0 = \theta + (3 + T_1({\beta}, \theta)) \omega$, where $\theta \in J$. Set $T_1 = T_1({\beta}, \theta)$ and $x_0 = \psi^{\beta}(\theta_0)$. Suppose that $t \geq 0$ is the smallest integer satisfying $$\begin{aligned} x_{t} \in C.\end{aligned}$$ We wish to get an upper bound on $t$. There are two possibilities; either $\theta_0 \in I_0 \cup (I_0 + \omega)$, or not. In the case $\theta_0 \in I_0 \cup (I_0 + \omega)$, suppose that $\theta_0 \in (I_k \backslash I_{k+1}) \cup ((I_k \backslash I_{k+1}) + \omega)$, where necessarily $k \leq m$ since $T_1 \ll (N_{m+1})^{3/4} < N_{m+1}$. Then \[QuickReturnFromWorstToGood\] implies that $x_{2K_k + 20} \in C$, and therefore $t \leq 2K_m + 20$. In the case $\theta_0 \not\in I_0 \cup (I_0 + \omega)$, there are two possibilities; either $x_t \in C$ for $t \leq 20$, or $\theta_i \in I_0$ for some $i < 20$. This follows since $\theta_0, \dots, \theta_{19} \not\in I_0 \cup (I_0 + \omega)$ implies that $x_{20} \in C$, by \[20IterationsToC\]. Suppose then that $t > 20$, i.e. that $\theta_i \in I_0$, for some $i < 20$, say $\theta_i \in I_k \backslash I_{k+1}$ where $k \leq m$. It follows that $x_{i + 2K_k + 20} \in C$, or $t \leq i + 2K_k + 20 \leq 2K_m + 39$. Thus, we obtain the upper bound $t < 3K_m$ on the smallest $t \geq 0$ satisfying $x_t \in C$. We are now in a position to invoke \[LocalControlOnProducts\] for $x_t \in C$.
As long as $k \leq N(\theta_0; J)$, this gives us the estimates $$\begin{aligned} \prod \limits_{i = j}^{k-1} |c(\theta_i) \cdot p'(x_i)| = \prod \limits_{i = j}^{t-1} |c(\theta_i) \cdot p'(x_i)| \prod \limits_{i = t}^{k-1} |c(\theta_i) \cdot p'(x_i)| \leq 4^{3K_m} \cdot 4^{4K_m} \cdot (3/5)^{\left(1 - \frac1{M_0} \right)(k-t)/2},\end{aligned}$$ when $0 \leq j < t$, and $$\begin{aligned} \prod \limits_{i = j}^{k-1} |c(\theta_i) \cdot p'(x_i)| \leq 4^{4K_m} \cdot (3/5)^{\left(1 - \frac1{M_0} \right)(k-j)/2},\end{aligned}$$ when $t \leq j < k$. Now, $$\begin{aligned} \sum \limits_{j = 1}^{k-1} \prod \limits_{i = j}^{k-1} |c(\theta_i) \cdot p'(x_i)| &\leq 4^{7K_m} \cdot \sum \limits_{j = 1}^{k-1} (3/5)^{\left(1 - \frac1{M_0} \right)(k-j)/2} \leq\\ &\leq 4^{7K_m} \cdot \sum \limits_{j = 1}^{\infty} (3/5)^{\left(1 - \frac1{M_0} \right)j/2} = 4^{7K_m} \cdot A\\\end{aligned}$$ where $A > 0$ is some constant, as long as $k \leq N(\theta_0; J)$. Since $K_m < T_1({\beta})^{2/5}$ (see \[T1Bounds\]), we get $4^{7K_m} = O(4^{2T_1({\beta})/5}) = O(\frac1{(1-{\beta})^{2/5}}) = o(\frac1{\sqrt{1-{\beta}}})$. Therefore $$\begin{aligned} |\partial_\theta x_k| &\leq \|\partial_\theta c\| + |\partial_\theta x_0| \cdot \prod \limits_{i=0}^{k - 1} |c(\theta_i) \cdot p'(x_i)| +\\ &+ \|\partial_\theta c\| \sum \limits_{j = 1}^{k-1} \prod \limits_{i = j}^{k-1} |c(\theta_i) \cdot p'(x_i)| \leq\\ &\leq \|\partial_\theta c\| \left( 1 + 4^{7K_m} \cdot A \right) + |\partial_\theta x_0| \cdot 4^{7K_m} \cdot (3/5)^{\left(1 - \frac1{M_0} \right)(k-3K_m)/2} \leq\\ &\leq |\partial_\theta x_0| \cdot o\left( \frac1{\sqrt{1-{\beta}}} \right) + const,\end{aligned}$$ where the constant satisfies $const = o(\frac1{\sqrt{1-{\beta}}})$ as ${\beta}\to 1^-$, and therefore is negligible.
Since we already have the bounds on $|\partial_\theta x_0|$ in \[DerivativeEstimateAtRecoveryPoint\], this gives us the asymptotic inequalities $$\begin{aligned} \frac1K \cdot \frac{1}{\sqrt{1-{\beta}}} \leq \max \limits_{\theta \in \mathbb{T}} |\partial_\theta \psi^{\beta}(\theta)| \quad \text{and} \quad |\partial_\theta x_k| \leq K \cdot \frac{1}{\sqrt{1-{\beta}}},\end{aligned}$$ where $K > 0$, as ${\beta}\to 1^-$, as long as $k \leq N(\theta_0; J)$. When $k = N(\theta_0; J)$, we are back in the interval $J$, where we already know the derivative, and the derivative of its iterates. We may therefore terminate the process at this point. Acknowledgement {#acknowledgement .unnumbered} =============== I want to thank Kristian Bjerklöv for our many valuable discussions during the conception of this article. This research was partially supported by a Swedish Research Council grant. Some technical lemmas ===================== In the appendix, we will fix ${\beta}$, and write $c = c_{\beta}$. All the constants are independent of ${\beta}\in [0,1]$, or can be chosen to be independent for these ${\beta}$.
--- abstract: 'Previous research has indicated the possible existence of a liquid-liquid critical point (LLCP) in models of silica at high pressure. To clarify this interesting question we run extended molecular dynamics simulations of two different silica models (WAC and BKS) and perform a detailed analysis of the liquid at temperatures much lower than those previously simulated. We find no LLCP in either model within the accessible temperature range, although it is closely approached in the case of the WAC potential near 4000 K and 5 GPa. Comparing our results with those obtained for other tetrahedral liquids, and relating the average Si-O-Si bond angle and liquid density at the model glass temperature to those of the ice-like $\beta$-cristobalite structure, we conclude that the absence of a critical point can be attributed to insufficient “stiffness” in the bond angle. We hypothesize that a modification of the potential to mildly favor larger average bond angles will generate a LLCP in a temperature range that is accessible to simulation. The tendency to crystallize in these models is extremely weak in the pressure range studied, although this tendency will undoubtedly increase with increasing stiffness.' author: - Erik Lascaris - Mahin Hemmati - 'Sergey V. Buldyrev' - 'H. Eugene Stanley' - 'C. Austen Angell' date: 5 May 2014 title: 'Search for a liquid-liquid critical point in models of silica' --- Introduction ============ Silica (SiO$_2$) is one of the most important and widely used materials in today’s world. One could say that the fact of its ubiquity is as clear as window glass. Because silica is an excellent insulator and can be easily created through thermal oxidation of the silicon substrate, SiO$_2$ is also the insulator of choice in the semiconductor industry. 
Optical fibers made from pure silica are widely used by the telecommunications industry and, because silica and silicates make up over 90% of the Earth’s crust, SiO$_2$ plays a major role in the geosciences. Liquid silica is the extreme case of a “strong” liquid. When cooled toward the glass transition, its viscosity increases gradually, following the Arrhenius law $\log \eta \propto 1/T$. In contrast, the viscosity of so-called “fragile” liquids rises much more steeply as the glass transition is approached. Melts rich in silica, but modified by other oxides to lower their viscosities, are “strong” liquids that vitrify slowly, and so are preferred by glassblowers, who need time to work their magic. Simulations have indicated that liquid silica does not behave like a strong liquid for all temperatures, however. Using the BKS model [@vanBeestPRL1990] (see Appendix A), Vollmayr [*et al.*]{} found that at very high temperatures the diffusivity deviates greatly from the Arrhenius law (and thus behaves like that of a fragile liquid), and that the temperature-dependence of the diffusion better fits the Vogel-Fulcher law [@VollmayrPRB1996]. It was later shown by Horbach and Kob [@HorbachPRB1999] that the temperature-dependence can also be fitted well by a power law of the form $D \propto (T - T_{\text{MCT}})^{\gamma}$ in which the exponent $\gamma$ is close to 2.1 (compared to 1.4 for water) and $T_{\text{MCT}} \approx 3330$ K. This temperature dependence is often found in simple liquids and has been described in terms of mode-coupling theory (MCT) [@GotzeLHSS1989; @GotzeRPP1992]. A deviation from the Arrhenius law has also been measured in other models of silica [@HemmatiMineralogy2000], and small deviations from a pure Arrhenius law were found for the viscosity in experimental data [@HessCG1996; @RosslerJNCS1998].
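A power-law fit of this kind can be reproduced in a few lines. The sketch below uses synthetic diffusivities generated from the MCT form with $T_{\text{MCT}} = 3330$ K and $\gamma = 2.1$ (values quoted above; the data are invented, not the published BKS results), and recovers the parameters by a grid search over $T_{\text{MCT}}$ combined with a log-log line fit:

```python
import numpy as np

# Synthetic diffusivities from D = D0 * (T - T_MCT)^gamma; parameters are
# loosely inspired by the BKS values quoted in the text, not real data.
D0, T_mct_true, gamma_true = 1e-9, 3330.0, 2.1
T = np.linspace(3600.0, 6000.0, 30)   # K
D = D0 * (T - T_mct_true) ** gamma_true

# For each candidate T_MCT, log D is linear in log(T - T_MCT), so an
# ordinary least-squares line fit applies; keep the candidate with the
# smallest residual.
best = None
for T_mct in np.arange(3000.0, 3599.0, 1.0):
    x = np.log(T - T_mct)
    gamma, logD0 = np.polyfit(x, np.log(D), 1)
    resid = np.sum((np.polyval((gamma, logD0), x) - np.log(D)) ** 2)
    if best is None or resid < best[0]:
        best = (resid, T_mct, gamma)

_, T_mct_fit, gamma_fit = best
assert abs(T_mct_fit - T_mct_true) < 1.0
assert abs(gamma_fit - gamma_true) < 0.01
```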
This transition from fragile to strong upon cooling (often called the “fragile-to-strong crossover”) has also been found in simulations of other tetrahedral liquids, such as BeF$_2$ [@AngellPCCP2000], silicon [@SastryNatM2003; @AshwinPRL2004], and water [@ItoNat1999; @GalloJCP2012; @XuPNAS2005]. This phenomenon is not restricted to tetrahedral liquids, however. For example, it has been proposed that the fragile-to-strong crossover might be a behavior common to all metallic glass-forming liquids [@ZhangJCP2010; @LadJCP2012]. In addition to the fragile-to-strong crossover, it has been proposed that liquid silica also has a liquid-liquid critical point (LLCP) [@PoolePRL1997; @SaikaVoivodPRE2000; @AngellAIP2013] much like that proposed for liquid water [@PooleNat1992]. These phenomena may be related. It was recently shown that in analog plastic crystal systems many strong glass-formers are accompanied by a singularity (a lambda-type order-disorder transition) at high temperatures, and that in silica this singularity could be a LLCP [@AngellAIP2013]. The fragile-to-strong crossover arises simultaneously with a large increase of the isobaric heat capacity $C_P$. If a LLCP exists in silica, this heat capacity maximum should have its origin in its critical fluctuations. The discovery of a LLCP in liquid silica would thus provide a unifying thermodynamic explanation for the behavior of liquid silica. Methods {#SEC:Methods} ======= We consider here two different models of silica, the BKS model by van Beest [*et al.*]{} [@vanBeestPRL1990] and the WAC model (also known as the TRIM model for silica) introduced by Woodcock [*et al.*]{} [@WoodcockJCP1976]. Both models represent SiO$_2$ as a simple 1:2 mixture of Si ions and O ions, i.e., without any explicit bonds. One difference between the two models is that WAC uses full formal charges while in BKS partial charges are used. For a detailed description of both models, see Appendix A. 
All simulations are done using Gromacs 4.6.1 [@Gromacs4], with $N=1500$ ions, using the Ewald sum (PME) for electrostatics, and the v-rescale thermostat [@BussiJCP2007] to keep the temperature constant. Most simulations are done in the constant-volume/constant-temperature ($NVT$) ensemble. For the few constant-pressure ($N\!PT$) simulations we use the Parrinello-Rahman barostat. For most of the simulations we use a time step of 1 fs, but at very low temperatures we increase the time step to 4 fs to speed up the simulations to approximately 250 ns/day. We carefully determine the temperatures below which the 4 fs time step gives the same results as the 1 fs time step, and do not include any 4 fs data that show even a small difference in pressure, energy, or diffusion. As a measure of the equilibration time, we define $\tau$ as the time at which $\sqrt{\left< r_{\text{O}}(t)^{2} \right>} = 0.56$ nm, i.e., the average time required for an O ion to move twice its 0.28 nm diameter. Most simulations run for over $10\,\tau$, well beyond the time necessary for the system to reach equilibrium. For the range of temperatures and pressures considered here, the root mean squared displacement of the O ion is roughly 1.1–1.6 times that of the Si ion, this factor being the largest at low temperatures and low pressures. An important structural feature is the coordination number of Si by O, since a tetrahedral network is defined by 4-coordination of the network centers. We calculate the Si coordination number by the usual method, integrating the Si-O radial distribution function up to the first minimum. For both models, and at all state points considered here (below 10 GPa), the coordination number lies between 4.0 and 4.9. The coordination number is the largest at high densities, and levels off to 4 when the density is decreased and the pressure reaches zero and becomes negative.
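The coordination-number procedure just described can be sketched as follows. Everything in the snippet is illustrative: the $g(r)$ is a synthetic single-shell toy curve and the oxygen number density is invented, so only the method itself — integrating $g(r)\,r^2$ up to the first minimum — mirrors the text:

```python
import numpy as np

# Sketch of the coordination-number estimate described above: integrate the
# Si-O radial distribution function g(r) up to its first minimum,
#   n_SiO = 4 * pi * rho_O * integral_0^{r_min} g(r) r^2 dr.
# Both g(r) and rho_O below are synthetic toy values, not BKS/WAC output.
rho_O = 50.0                       # hypothetical O number density (nm^-3)
r = np.linspace(0.01, 0.5, 5000)   # nm
g = (1.0
     + 8.0 * np.exp(-((r - 0.16) / 0.015) ** 2)   # first (Si-O) peak
     - 1.0 * np.exp(-((r - 0.24) / 0.02) ** 2))   # dip marking the first minimum

peak = np.argmax(g)                     # index of the first peak
r_min_idx = peak + np.argmin(g[peak:])  # first minimum after that peak
dr = r[1] - r[0]
# Riemann-sum integration of g(r) r^2 up to the first minimum
n_coord = 4.0 * np.pi * rho_O * np.sum(g[:r_min_idx] * r[:r_min_idx] ** 2) * dr
```

For this toy $g(r)$ the estimate lands in the tetrahedral range; with real simulation data, `r`, `g`, and `rho_O` would come from the trajectory analysis.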
Isochores ========= The most direct method of locating a critical point is to calculate the pressure $P$ as a function of temperature $T$ along different isochores. In a $PT$-diagram the isochores cross within the coexistence region and at the critical point. At those state points (at a given $P$ and $T$) the system is a combination of two different phases with different densities. One can also locate a critical point by plotting the isotherms in a $PV$-diagram in order to determine the region in which the slope of the isotherms becomes zero (critical point) or negative (coexistence region). Because it is easier to determine whether two lines are crossing than whether a curve is flat, we study the isochores. Figure \[FIG:isochores\_BKS\_WAC\] shows the $PT$-diagrams with the isochores of BKS and WAC. ![ Isochores of liquid BKS silica (panel a) and liquid WAC silica (panel b). Thin black/brown lines are the isochores, the temperature of maximum density (TMD) is indicated by a thick black line, and green diamonds indicate part of the liquid-vacuum spinodal. Blue question marks indicate the approximate locations where a LLCP has been predicted by previous studies [@SaikaVoivodPRE2000; @AngellAIP2013]. The location of a LLCP can be identified by where the isochores cross. It seems a LLCP in BKS is unlikely, as the isochores do not approach each other. The isochores in WAC do approach each other, and might converge at the predicted point. However, at low temperatures the isochores near 2.3 g/cm$^3$ obtain a negative curvature. If this curvature becomes more negative as $T$ goes down, then it is possible that the isochores will not cross below 3500 K. We conclude that for the temperatures currently accessible, the isochores alone are insufficient to demonstrate a LLCP in WAC. []{data-label="FIG:isochores_BKS_WAC"}](isochores-BKS.eps "fig:"){width="\linewidth"}\ ![ Isochores of liquid BKS silica (panel a) and liquid WAC silica (panel b). 
Thin black/brown lines are the isochores, the temperature of maximum density (TMD) is indicated by a thick black line, and green diamonds indicate part of the liquid-vacuum spinodal. Blue question marks indicate the approximate locations where a LLCP has been predicted by previous studies [@SaikaVoivodPRE2000; @AngellAIP2013]. The location of a LLCP can be identified by where the isochores cross. It seems a LLCP in BKS is unlikely, as the isochores do not approach each other. The isochores in WAC do approach each other, and might converge at the predicted point. However, at low temperatures the isochores near 2.3 g/cm$^3$ obtain a negative curvature. If this curvature becomes more negative as $T$ goes down, then it is possible that the isochores will not cross below 3500 K. We conclude that for the temperatures currently accessible, the isochores alone are insufficient to demonstrate a LLCP in WAC. []{data-label="FIG:isochores_BKS_WAC"}](isochores-WAC.eps "fig:"){width="0.95\linewidth"} Both diagrams are similar. There is a clear density anomaly to the left of the temperature of maximum density (TMD), and if we raise the temperature by approximately 4000 K then the BKS isochores match those of WAC reasonably well. Thus, based on the isochores in Fig. \[FIG:isochores\_BKS\_WAC\], one could say that BKS and WAC are very similar systems, and that they mainly differ in a shift of temperature. At very low $P$ and high $T$ the liquid phase is bound by the liquid-gas (or liquid-vacuum) spinodal, and lowering $P$ below the spinodal leads to spontaneous bubble formation. At very low $T$ the liquid becomes a glass, and the diffusion coefficient drops rapidly. Because the time it takes to equilibrate the system is inversely proportional to the rate of diffusion, simulations require too much time once the oxygen diffusion $D_{\text{O}}$ drops below $\sim 10^{-8}$ cm$^2$/s, which is where the isochores stop in Fig. \[FIG:isochores\_BKS\_WAC\]. 
For both models this limit is reached at a higher temperature for low $P$ than for high $P$. This is caused by the diffusion anomaly (an increase in $P$ leads to an [*increase*]{} in diffusion), which is present in both the BKS and WAC models. No crystallization was observed unless the pressure was raised to values far outside the range of our detailed studies (e.g., above 40 GPa the WAC liquid spontaneously crystallizes into an 8-coordinated crystal). Normally, crystallization is readily detected by a rapid drift of the energy to lower values. However, when the diffusivity is very low (as in the present system, in the domain of greatest interest) the situation is different and crystal growth can be unobservably slow. More direct tests are then needed. In the present case we have sought information on crystal growth and melting by creating a crystal front (half of the simulation box filled with the liquid, interfacing with half a box of the topologically closest crystal) and have watched the crystal front recede at high temperature. However, the attempt to determine the melting point by lowering the temperature and observing a reversal of the interface motion was unsuccessful, because the growth rate became unobservably small (over microseconds of observation) before any reversal was seen. We conclude that, since this crystal front was put in by hand, the possibility of crystallization by [*spontaneous*]{} nucleation (always the slowest step) followed by crystal growth is negligible on our simulation time scales. Based on the fitting and extrapolation of data, previous studies have predicted a liquid-liquid critical point (LLCP) in both WAC and BKS [@SaikaVoivodPRE2000]. With the increase in computing power, and using the techniques to speed up the simulations discussed in Sec. \[SEC:Methods\], we are able to obtain data at lower temperatures than was previously possible. Our results for BKS (Fig. \[FIG:isochores\_BKS\_WAC\]a) show that for $T>2500$ K the isochores are nearly parallel, and therefore a LLCP in BKS is very unlikely.
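The isochore-crossing analysis can be made concrete with a toy numerical sketch: fit $P(T)$ along two isochores and look for real crossing temperatures of the fitted curves. The data below (`P_lo`, `P_hi`) are made up for illustration; real isochores would call for a higher fit degree and the caution about extrapolation discussed in the text:

```python
import numpy as np

def isochore_crossing_T(T1, P1, T2, P2, deg=1):
    """Fit P(T) along two isochores with degree-`deg` polynomials
    and return the real roots of their difference, i.e. the
    temperatures at which the fitted isochores cross."""
    diff = np.polysub(np.polyfit(T1, P1, deg), np.polyfit(T2, P2, deg))
    roots = np.roots(diff)
    return np.sort(roots[np.isreal(roots)].real)

# Two synthetic linear isochores crossing at T = 5000 K.
T = np.array([3000.0, 4000.0, 5000.0])
P_lo = 0.001 * T          # made-up low-density isochore
P_hi = 10.0 - 0.001 * T   # made-up high-density isochore
```

A crossing of the fits within the sampled temperature range is direct evidence; an extrapolated crossing below the lowest equilibrated temperature is not, which is exactly the difficulty encountered for WAC.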
On the other hand, the isochores of the WAC model (Fig. \[FIG:isochores\_BKS\_WAC\]b) show a more interesting behavior in that they clearly approach one another at low $T$ in the vicinity of $P \approx 5$ GPa. If we only consider the WAC isochores above 4000 K, then extrapolation would predict that the isochores cross around 3500 K and 5 GPa. However, below 4000 K we see that the isochores are starting to display a negative curvature in the $PT$-plane. This signals an approach to a density minimum, which is the low-$T$ boundary of the density anomaly region. The negative curvature makes it hard to perform an extrapolation that convincingly shows that the isochores cross at lower $T$. We can therefore only conclude that (for the temperatures currently accessible) the isochores are insufficient to prove or disprove the existence of a LLCP in WAC. Response functions ================== Upon approaching a critical point, the response functions should diverge. Although true divergence occurs only in the thermodynamic limit $N \to \infty$, a large maximum should still be visible in response functions such as the isothermal compressibility $K_T$ and the isobaric heat capacity $C_P$ even when the box size is relatively small. Calculations using the Ising model and finite size scaling techniques applied to simulation results have shown that (for sufficiently large boxes) the location of the critical point is very close to where both $K_T$ and $C_P$ reach their global maximum [@KesselringJCP2013; @LascarisAIP2013]. If a LLCP truly exists in WAC, then the $PT$-diagrams of $C_P$ and $K_T$ should show a large $C_P$ maximum close to where $K_T$ has a maximum—exactly where the isochores come together and where the LLCP has been predicted to be. 
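One sanity check on such response-function surfaces is the thermodynamic identity $C_P - C_V = VT\alpha_P^2/K_T$, which is applied to the data below. As a minimal sketch, with self-consistent dummy values rather than simulation output:

```python
def cp_consistency_residual(V, T, alpha_P, K_T, C_V, C_P):
    """Residual of the identity V*T*alpha_P**2/K_T + C_V - C_P = 0,
    which should vanish (to within statistical error) for mutually
    consistent response functions."""
    return V * T * alpha_P**2 / K_T + C_V - C_P
```

For the surfaces reported here the residual stays below 1 J/(mol K), consistent with statistical noise rather than a systematic error in the smooth-surface technique.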
![image](WAC_KT-PT_12sep2013.eps){width="0.45\linewidth"}(a) ![image](WAC_CP-PT_12sep2013.eps){width="0.45\linewidth"}(b)\ ![image](WAC_alphaP-PT_12sep2013.eps){width="0.45\linewidth"}(c) ![image](WAC_CV-PT_12sep2013.eps){width="0.45\linewidth"}(d) Figure \[FIG:response\_functions\_WAC\] shows four response functions for WAC: (a) the isothermal compressibility $K_T$, (b) the isobaric heat capacity $C_P$, (c) the isobaric thermal expansivity $\alpha_P$, and (d) the isochoric heat capacity $C_V$. These have been obtained using $NVT$ simulations together with the smooth surface technique described in Appendix \[appendix\_smooth\_surface\]. To check the results generated by this technique, we determine whether the response functions satisfy the thermodynamic relation $VT \alpha_{P}^{2}/K_T + C_V - C_P = 0$. Because of statistical errors in the data we find slight deviations from zero, but these are less than 1 J/(mol K) in magnitude. The compressibility $K_T$ in Fig. \[FIG:response\_functions\_WAC\]a shows a clear global maximum near $P \approx 5$ GPa and $T \approx 4000$ K, because this is where the isochores in Fig. \[FIG:isochores\_BKS\_WAC\]b are closest together in terms of pressure. It is quite likely that below 4000 K this maximum increases further. If WAC has a LLCP then $C_P$ should also have a maximum in that vicinity. However, Fig. \[FIG:response\_functions\_WAC\]b shows that this is not the case. There is a clear global $C_P$ maximum, but it is located near $P \approx 1$ GPa and $T \approx 6000$ K, which is far from the global $K_T$ maximum. Therefore, based on the response functions, we conclude that WAC does not have a LLCP. The isobaric thermal expansivity $\alpha_P$ (Fig. \[FIG:response\_functions\_WAC\]c) has a global minimum between the global maxima of $C_P$ and $K_T$ (Figs. \[FIG:response\_functions\_WAC\]a,b).
This should come as no surprise, since $C_P \propto \left< (\Delta S)^2 \right>$ arises from fluctuations in entropy and $K_T \propto \left< (\Delta V)^2 \right>$ from volume fluctuations, while the expansivity $\alpha_P \propto \left< \Delta S \Delta V \right>$ arises from a combination of both. Even though the global maxima occur at different places, the slopes $dP/dT$ of the loci of local maxima are the same, so it seems likely they have a common origin. Because the system is not quite critical, the enthalpy fluctuations that determine the heat capacity can be statistically independent of the density fluctuations. The variation of the heat capacity with temperature at constant pressure, over the temperature range in which the system remains in equilibrium, is shown in Fig. \[FIG:smoothingspline\_BKS\_WAC\]. Fig. \[FIG:smoothingspline\_BKS\_WAC\]b is basically a cross-section of Fig. \[FIG:response\_functions\_WAC\]b. We note first that at moderately high pressures, 8 GPa, there is no difference between the WAC and BKS models. In each case the heat capacity reaches about 35 J/(K mol) before the diffusion becomes so slow that we can no longer equilibrate. This is 1.4 times the vibrational heat capacity of $3R \approx 25$ J/(K mol), as is typical of moderately fragile inorganic liquids (e.g. anorthite, ZnCl$_2$) right before ergodicity is broken [@AngellJNCS1985; @HemmatiJCP2001]. However, at pressures between zero and 5 GPa, a major difference is seen between the models. Near the TMD we have $C_P \approx C_V$ (because the expansivity is very small), so we can compare our data with the $C_V$ data of Scheidler [*et al.*]{} [@ScheidlerPRB2001] for the case of BKS at $P=0$. The agreement is quantitative, up to the point where the earlier study was cut off. Our data confirm the existence of a peak in the equilibrium heat capacity, an unusual behavior that was not reported in Ref.
[@ScheidlerPRB2001] but had been noted in the earlier study of Saika-Voivod [*et al.*]{} [@SaikaVoivodNat2001] and was emphasized in Ref. [@AngellAIP2013]. Although BKS is far from having a critical point, the existence of this $C_V$ maximum reveals the tendency of this system—which accords well with many aspects of experimental silica—to develop the same anomalous entropy fluctuations, and an analog of the Widom line made famous by water models. For the WAC model (which approaches criticality much more closely than BKS does, as we have already seen in Fig. \[FIG:isochores\_BKS\_WAC\]), this heat capacity peak becomes much more prominent, reminiscent of the behavior of the Jagla model near its critical point. $C_P$ reaches a value almost twice that of the vibrational component; behavior unseen in any previous inorganic system except for BeF$_2$ which is a WAC silica analog [@HemmatiJCP2001]. ![ Comparison of the heat capacities of BKS (panel a) and WAC (panel b), obtained by calculating the smoothing spline of $H(T)$ at constant $P$, followed by taking its derivative (a slightly different method than was used in Fig. \[FIG:response\_functions\_WAC\]b). At 8 GPa there is no significant difference between the WAC and BKS models, but below 5 GPa WAC has a large maximum in the range 5000–8000 K (also clearly visible in Fig. \[FIG:response\_functions\_WAC\]b). In panel b we have included $C_V$ data of Scheidler [*et al.*]{} [@ScheidlerPRB2001] (red diamonds), which shows a maximum around 4500 K. Near the TMD (around 5000 K for $P=0$) the expansivity is small, which means that $C_V \approx C_P$, in agreement with our results. For BKS this maximum is less clear in $C_P$, though still visible. Because of small fluctuations in the data, it is difficult to obtain a fit of $H(T)$ that produces a perfect estimate of $C_P = dH/dT$, leading to artificial oscillations in $C_P$. A larger data set would reduce this artifact. 
In addition, the smoothing spline method assumes zero curvature at the end-points of the data, and this leads to artifacts at very low $T$ and very high $T$. For clarity, we have removed the parts of the curves below the temperature at which $C_P$ starts to bend toward a constant $C_P$ value. []{data-label="FIG:smoothingspline_BKS_WAC"}](Cp-vs-T_smoothingspline_BKS.eps "fig:"){width="0.95\linewidth"} ![ Comparison of the heat capacities of BKS (panel a) and WAC (panel b), obtained by calculating the smoothing spline of $H(T)$ at constant $P$, followed by taking its derivative (a slightly different method than was used in Fig. \[FIG:response\_functions\_WAC\]b). At 8 GPa there is no significant difference between the WAC and BKS models, but below 5 GPa WAC has a large maximum in the range 5000–8000 K (also clearly visible in Fig. \[FIG:response\_functions\_WAC\]b). In panel b we have included $C_V$ data of Scheidler [*et al.*]{} [@ScheidlerPRB2001] (red diamonds), which shows a maximum around 4500 K. Near the TMD (around 5000 K for $P=0$) the expansivity is small, which means that $C_V \approx C_P$, in agreement with our results. For BKS this maximum is less clear in $C_P$, though still visible. Because of small fluctuations in the data, it is difficult to obtain a fit of $H(T)$ that produces a perfect estimate of $C_P = dH/dT$, leading to artificial oscillations in $C_P$. A larger data set would reduce this artifact. In addition, the smoothing spline method assumes zero curvature at the end-points of the data, and this leads to artifacts at very low $T$ and very high $T$. For clarity, we have removed the parts of the curves below the temperature at which $C_P$ starts to bend toward a constant $C_P$ value.
[]{data-label="FIG:smoothingspline_BKS_WAC"}](Cp-vs-T_smoothingspline_WAC.eps "fig:"){width="0.95\linewidth"} Discussion ========== We find no LLCP in either model within the accessible temperature range, although it is closely approached in the case of the WAC potential near 4000 K and 5 GPa. The isochores of BKS, which are the most direct indicators of criticality in a physical system, fail to converge into a critical point. In the case of WAC we cannot conclude anything from the isochores, but an analysis of the global extrema of the response functions indicates that there is no LLCP in WAC because the global $C_P$ maximum and the global $K_T$ maximum are significantly separated in the $PT$-plane. Liquid silica forms a tetrahedral network of bonds, and below we will show that the lack of a LLCP is related to the openness of this network structure, which in turn is related to the stiffness of the inter-tetrahedral bond angles. In addition we will argue that criticality in WAC could be achieved with an adaptation of the pair potential. The occurrence of a LLCP requires two competing liquid structures that can be in a (meta-stable) equilibrium with each other. In the case of a tetrahedral network-forming liquid the two relevant structures are usually: (i) a high-density collapsed structure that is highly diffusive, and (ii) a low-density open network structure that is more rigid, i.e., one that is still a liquid but less diffusive and more structured. Because the high-density structure occupies a smaller volume but has higher entropy (more disorder), the competition between these two structures is accompanied by a region with a density anomaly: $\alpha_P \propto \left<\Delta S \Delta V\right> <0$. The high-density structure is very stable and is the dominant structure at high temperatures, but the low-density structure requires a more delicate balance of forces in order to be stable. 
If the bonds in the liquid are too flexible, the liquid collapses into the high-density structure. On the other hand, if the bonds are too rigid the liquid can no longer flow and becomes a glass. There are several studies that address this situation. The 2006 study of Molinero [*et al.*]{} [@MolineroPRL2006] shows how reducing the three-body repulsion parameter $\lambda$ in the Stillinger-Weber potential [@StillingerPRB1985] (which controls the bond angle stiffness) causes the first order liquid-liquid phase transition of silicon ($\lambda = 21$) to disappear at $P=0$ when $\lambda < 20.25$ (see Fig. \[FIG:modSW\]). This transition occurs between a low-density liquid and a high-density liquid, where both liquids are metastable with respect to the diamond cubic (dc) crystal. Crystallization to the dc crystal always occurs from the low-density liquid. When $\lambda > 21.5$ crystallization happens so fast that it is no longer possible to accurately determine the temperature $T_{LL}$ at which the phase transition occurs for $P=0$. Simulations of the Stillinger-Weber model indicate that the LLCP for $\lambda=21$ is located at $-0.60$ GPa and 1120 K [@VasishtNP2011]. Since each value of $\lambda$ defines a unique system with a unique critical pressure, the vanishing of the liquid-liquid transition at $\lambda<20.25$ implies that this is the $\lambda$ value for which the LLCP is at $P=0$. Isochore-crossing studies conducted elsewhere [@KapkoPRIVATE2013] show that this is indeed the case, with $T_c \approx 700$ K for $P_c=0$. It is clear that decreasing $\lambda$ means decreasing the tetrahedrality and increasing density. When $\lambda<20.25$ the LLCP shifts to positive pressures, and therefore the phase transition line can no longer be seen in Fig. \[FIG:modSW\], as it only considers $P=0$. 
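The role of $\lambda$ is easy to see from the angular part of the Stillinger-Weber three-body term, which penalizes deviations of a bond angle from the tetrahedral value $\cos\theta_0 = -1/3$. A sketch of just this angular factor (the radial cutoff functions of the full potential are omitted, so this is illustrative rather than a working potential):

```python
import numpy as np

def sw_angular_penalty(theta, lam, eps=1.0):
    """Angular factor of the Stillinger-Weber three-body energy,
    lam * eps * (cos(theta) + 1/3)**2: zero at the tetrahedral
    angle arccos(-1/3) ~ 109.47 deg, growing with the deviation.
    Larger lam means stiffer bond angles; smaller lam lets the
    network bend and collapse into the high-density structure."""
    return lam * eps * (np.cos(theta) + 1.0 / 3.0) ** 2
```

Doubling $\lambda$ doubles the energetic cost of any given angular distortion, which is why $\lambda$ acts as a direct dial on tetrahedrality.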
We thus lack the information to determine exactly for which $\lambda$ there is no LLCP at [*any*]{} pressure, but it is certain that this happens at some value $\lambda>0$, since in the most extreme case of $\lambda=0$ we are left with a simple Lennard-Jones-like model that has no LLCP. ![ Phase diagram of the modified Stillinger-Weber potential in terms of the tetrahedral repulsion parameter $\lambda$ and temperature $T$, at zero pressure [@MolineroPRL2006]. The black triangles indicate the melting line of the diamond cubic (dc) crystal, while the green squares denote the melting line of the bcc crystal. The dashed line separates the dc and bcc regions. Yellow circles indicate the transition temperature $T_{\text{LL}}$ at which the liquid-liquid phase transition line crosses the $P=0$ isobar for that particular value of $\lambda$. Silicon is represented by $\lambda=21$ and has a liquid-liquid critical point at $-0.60$ GPa [@VasishtNP2011], and therefore all LLCPs for $\lambda>20.25$ lie at negative pressures (there is a LLCP for each value of $\lambda$). For $\lambda<20.25$ the LLCPs are at positive pressures and therefore the phase transition line can no longer be seen in this diagram. When $\lambda$ is large the system easily crystallizes, and therefore the phase transition line at $P=0$ can no longer be accurately located when $\lambda > 21.5$. []{data-label="FIG:modSW"}](mod-SW.eps){width="0.8\linewidth"} That weakening the tetrahedrality (i.e., making the tetrahedral bonds more flexible) leads to the removal of a LLCP, was also shown in 2012 by Tu and co-authors using a different monatomic model [@TuEPL2012]. The Hamiltonian of this model includes a term that lowers the energy when particles are aligned along near-tetrahedral angles and thus favors a diamond cubic ground state. The study of Ref. 
[@TuEPL2012] considers two versions: one that allows broad flexibility of the inter-tetrahedral bond angles (leading to weak tetrahedrality), and another in which the bond angle is more constrained (giving rise to strong tetrahedrality). The behavior for strong tetrahedrality is shown in Fig. \[FIG:isochores\_strong\_Tu\], and we see that the isochores converge into a critical point. If the tetrahedrality is weakened slightly, then the isochores separate, the LLCP disappears, and the diagram starts to resemble that of Fig. \[FIG:isochores\_BKS\_WAC\]b for WAC. It should be mentioned that a separation of the global $C_P$ and $K_T$ maxima also occurs in the weak tetrahedrality version (as is the case for WAC), while the $C_P$ and $K_T$ maxima are close together and near the LLCP in the strong version of the model. ![ Isochores of the Tu model for the strong tetrahedrality version, which has a LLCP [@TuEPL2012]. The gray area indicates the density anomaly region. By reducing the tetrahedrality, the Tu model can be smoothly changed into the weak tetrahedrality version, which does not have a LLCP. The isochores of WAC (Fig. \[FIG:isochores\_BKS\_WAC\]b) show no LLCP but closely resemble those of the strong Tu model. We can interpret this as meaning that WAC is [*close*]{} to having a LLCP, but not close enough. If we were to enhance the tetrahedrality of WAC, it is likely that a LLCP would appear. []{data-label="FIG:isochores_strong_Tu"}](isochores_strong_Tu.eps){width="0.5\linewidth"} Finally, we should consider the simulations done on “patchy” colloids by Sciortino and coworkers. Using the Kern-Frenkel (KF) model [@KernJCP2003] (which consists of particles with tetrahedrally arranged sticky points), these authors demonstrated that the colloids developed tetrahedral network topologies, with each particle being surrounded by four others, which is not itself surprising.
More interesting was the finding that, when the effective sizes of the patches were varied, conditions could be found in which not only were the relaxation kinetics strictly Arrhenius in form, but also the amorphous state became the free energy ground state of the system, over a wide range of densities [@SmallenburgNatP2013]. This corresponds to a more dramatic stabilization of the amorphous state than the kinetic stability observed in our work. It signifies an absolute stability against crystallization on any time scale, i.e., the system has become an “ideal glassformer” [@KapkoJCP2013]. Studies with the KF model have also demonstrated that highly directional bonds are needed to observe spontaneous crystallization in tetrahedrally interacting particles [@RomanoJCP2011], in agreement with the results found by Molinero [*et al.*]{} using the Stillinger-Weber family of potentials. Since the KF colloids can be used to describe different tetrahedral models, they further our understanding of tetrahedral liquids such as ST2 and mW water, Stillinger-Weber silicon, and BKS silica. Surprisingly, there exists a mapping from these models to the KF model, using only a single parameter: the patch width [@SaikaVoivodJCP2013]. The patch width is related to the flexibility of the bonds between the particles, and it is therefore likely that spontaneous crystallization and the existence of a LLCP are related to bond angle flexibility. All of these studies show that the occurrence of a LLCP becomes less likely when the parameters controlling tetrahedrality are weakened. Unfortunately, the BKS and WAC models do not have an explicit parameter that controls tetrahedrality, such as the parameter $\lambda$ in the Stillinger-Weber model. In the Stillinger-Weber model there is a direct relation between the value of $\lambda$ and the tetrahedrality of the liquid, measured by the orientational order parameter $q$ as defined by Errington and Debenedetti [@ErringtonNat2001].
This parameter is constructed such that its average value $\left< q \right>$ will equal zero if all atoms are randomly distributed within the liquid, while $q=1$ for each atom within a perfect tetrahedral network (such as in a cubic diamond lattice). For silica the situation is more complicated. It is not immediately clear how to define the tetrahedrality of a system that consists of two types of atoms. One way would be to find for each Si atom its four nearest neighboring Si atoms and compute $\left< q \right>$ for this subset of atoms. However, this measure would completely ignore the positions of the O atoms which form ionic bridges between the Si atoms. Since the O-Si-O bond angle deviates very little from the perfect tetrahedral angle of $109^{\circ}$ [@VollmayrPRB1996], it makes sense to focus on the inter-tetrahedral Si-O-Si bond angle instead. It is commonly agreed that structures such as diamond cubic have maximum tetrahedrality, and for silica this corresponds to a system where all Si-O-Si bond angles are equal to $180^{\circ}$ (such as $\beta$-cristobalite). How much the inter-tetrahedral Si-O-Si bond angles differ from $180^{\circ}$ can thus be employed as a measure of the tetrahedrality, and we have therefore calculated this bond angle distribution for both BKS and WAC. The location of the maximum in the Si-O-Si bond angle distribution (i.e., the most probable angle) is a parameter that one could use to quantify the tetrahedrality. If we denote the most probable angle at the lowest accessible temperature ($T_g$) as $\theta_{\text{max}}$, then the tetrahedrality parameter $t$ can be defined as $t \equiv \theta_{\text{max}}/180^{\circ}$, where $0 < t < 1$. Since the “openness” of the structure will increase with the average Si-O-Si angle, one could also define the tetrahedrality using the volume ratio, i.e., $t \equiv V^{*} / V_{\text{dc}}$, which would require much less effort to calculate. 
Here $V_{\text{dc}}$ is the volume of the perfect diamond cubic crystal and $V^{*}$ is the system volume at some corresponding state, for instance at the TMD (which is less arbitrary than $T_g$). Let us consider the angular relations and the mechanical forces that determine them in more detail. In terms of the familiar ball-and-stick model, the Si-O-Si bond could be represented by two sticks connected at the oxygen atom, with a spring in between the sticks. This spring constrains the bond angle to some preferred bond angle $\theta_0$, while the value of its spring constant $k_2$ (the [*stiffness*]{}) dictates how flexible the bond angle is. From the bond angle probability distribution $\mathcal{P}(\theta)$, it is possible to estimate the values of the preferred bond angle $\theta_0$ and the bond angle stiffness $k_2$. To extract the Si-O-Si bond angles from the data, we consider each O ion together with its two nearest Si neighbors and calculate the angle between the two Si-O bonds. In Fig. \[FIG:bond\_angles\] we show the resulting probability distributions $\mathcal{P}(\theta)$ of the Si-O-Si angle $\theta$ for BKS and WAC at zero pressure. These curves have been measured in previous studies [@VollmayrPRB1996; @HemmatiMineralogy2000], but in less detail. As the temperature decreases, the width of the distribution decreases and the maximum shifts toward $180^{\circ}$. This implies that the liquid becomes more structured and stiffer. This is to be expected, since at high temperatures there are more thermal fluctuations and therefore $\mathcal{P}(\theta)$ has a broader distribution. ![ Probability distribution of the Si-O-Si bond angle $\mathcal{P}(\theta)$ in liquid silica for (a) the BKS model and (b) the WAC model. As $T$ goes down, the most probable angle moves closer to $180^{\circ}$ while simultaneously the width of the distribution decreases.
The first phenomenon causes the liquid to expand upon cooling, while a reduction in width means that the bonds become stiffer, which leads to a decrease in diffusion. Both phenomena are related (see below) and are much stronger for WAC than for BKS. Instead of $\mathcal{P}(\theta)$ it is better to consider $\mathcal{P}(\cos\theta) = \mathcal{P}(\theta) / \sin\theta$, since a completely random distribution such as in the vapor has $\mathcal{P}(\theta) \propto \sin\theta$ while $\mathcal{P}(\cos\theta)$ is uniform (see inset of panel a). For both models and all temperatures $\mathcal{P}(\cos\theta)$ resembles a normal distribution with mean $180^{\circ}$. This indicates that the preferred angle is in fact $180^{\circ}$, and that the width of $\mathcal{P}(\cos\theta)$ determines both the location of the peak in $\mathcal{P}(\theta)$ as well as its width. []{data-label="FIG:bond_angles"}](angle-histo-BKS-P0.eps "fig:"){width="\linewidth"}\ ![ Probability distribution of the Si-O-Si bond angle $\mathcal{P}(\theta)$ in liquid silica for (a) the BKS model and (b) the WAC model. As $T$ goes down, the most probable angle moves closer to $180^{\circ}$ while simultaneously the width of the distribution decreases. The first phenomenon causes the liquid to expand upon cooling, while a reduction in width means that the bonds become stiffer, which leads to a decrease in diffusion. Both phenomena are related (see below) and are much stronger for WAC than for BKS. Instead of $\mathcal{P}(\theta)$ it is better to consider $\mathcal{P}(\cos\theta) = \mathcal{P}(\theta) / \sin\theta$, since a completely random distribution such as in the vapor has $\mathcal{P}(\theta) \propto \sin\theta$ while $\mathcal{P}(\cos\theta)$ is uniform (see inset of panel a). For both models and all temperatures $\mathcal{P}(\cos\theta)$ resembles a normal distribution with mean $180^{\circ}$. 
This indicates that the preferred angle is in fact $180^{\circ}$, and that the width of $\mathcal{P}(\cos\theta)$ determines both the location of the peak in $\mathcal{P}(\theta)$ as well as its width. []{data-label="FIG:bond_angles"}](angle-histo-WAC-P0.eps "fig:"){width="\linewidth"} Plotting $\mathcal{P}(\theta)$ may not be the best way of presenting the bond angle distribution, as this distribution is biased toward $90^{\circ}$ angles. This is particularly clear from the distribution of the vapor (the thin black line in Fig. \[FIG:bond\_angles\]a). The ions in the vapor have no preferred position with respect to their neighbors, yet $\mathcal{P}(\theta)$ is not uniform but proportional to $\sin\theta$. This is related to the fact that the infinitesimal area element of the unit sphere is $dA = \sin\theta\,d\theta\,d\phi$ rather than $d\theta\,d\phi$. As $\theta \to 180^{\circ}$ the area element $dA$ approaches zero, and therefore $\mathcal{P}(\theta) = 0$ at $\theta = 180^{\circ}$. Instead of $\mathcal{P}(\theta)$ it is better to consider the probability distribution $\mathcal{P}(\cos\theta) = \mathcal{P}(\theta)/\sin\theta$, as is shown in the insets of Fig. \[FIG:bond\_angles\]. The $\mathcal{P}(\cos\theta)$ distribution of the vapor is a uniform distribution (inset of Fig. \[FIG:bond\_angles\]a). For the liquid, the distribution $\mathcal{P}(\cos\theta)$ is approximately a normal distribution with its mean at $\theta_0 = 180^{\circ}$. Evidently the most probable inter-tetrahedral angle (the location of the $\mathcal{P}(\theta)$-peak) is purely an effect of the width of this normal distribution combined with the fact that $dA \propto \sin\theta$. It is possible to interpret the bond angle distribution in terms of an effective potential $U_{\text{eff}}(\theta)$, assuming that $\mathcal{P}(\cos\theta) \propto \exp[ -U_{\text{eff}}(\theta) / k_{B}T ]$. When the effective potential is harmonic, i.e. 
$U_{\text{eff}} = \tfrac{1}{2} k_2 (\theta - \theta_0)^2$, the resulting probability distribution is a normal distribution with mean $\theta_0$ and a width that depends on temperature $T$ and stiffness $k_2$. In general the effective potential will not be perfectly harmonic and includes anharmonic terms. Because $\cos\theta$ is an even function about $\theta=180^{\circ}$, it is required that $\mathcal{P}(\cos\theta)$ is as well, and therefore also $U_{\text{eff}}(\theta)$. Consequently, the leading-order anharmonic term in $U_{\text{eff}}(\theta)$ is of the fourth order. The Si-O-Si bond angle distribution can thus be described by $$\begin{aligned} \mathcal{P}(\theta) = A \sin\theta \exp[ -U_{\text{eff}}(\theta) / k_{B}T ] \label{EQ:Ptheta}\end{aligned}$$ with $U_{\text{eff}}$ a Taylor series about the mean angle $\theta_0 = 180^{\circ}$, $$\begin{aligned} U_{\text{eff}}(\theta) = \frac{1}{2} k_2 (\theta-\theta_0)^2 + \frac{1}{4!} k_4 (\theta-\theta_0)^4 + \dots \label{EQ:Ueff}\end{aligned}$$ Here $A$ is a temperature-dependent normalization constant that ensures that the total probability $\int\mathcal{P}(\theta)\,d\theta = \int\mathcal{P}(\cos\theta)\,d\!\cos\theta$ is equal to one, and $k_B$ is the Boltzmann constant. The probability distributions of Fig. \[FIG:bond\_angles\] can be fitted quite well with Eqs. \[EQ:Ptheta\] and \[EQ:Ueff\], even when the sixth power and higher-order terms are ignored. The resulting values for the stiffness $k_2$ are shown in Fig. \[FIG:k2\]. It is immediately clear that WAC is far more rigid than BKS. For BKS the stiffness does not vary much with temperature, while increasing the pressure makes the bonds slightly less stiff. The same is true for WAC at high $T$, but below 5 GPa the stiffness shows an increase when the liquid is cooled. This increase is exactly where $C_P$ has its maximum in Fig. 
\[FIG:response\_functions\_WAC\]b, and thus we may argue that the increase in $C_P$ is due to a structural change, namely the stiffening of the tetrahedral network. ![ Stiffness of the Si-O-Si bond angle for both WAC (solid lines, top) and BKS (dashed lines, bottom). For both models the stiffness $k_2$ goes down with increasing pressure. It is clear that BKS has more flexible bonds (small $k_2$), and that WAC is more rigid (large $k_2$) and therefore “more tetrahedral”. In addition WAC shows a transition at low $T$ for $P \leq 5$ GPa to a state with an even higher stiffness. []{data-label="FIG:k2"}](BKS-WAC-k2.eps){width="\linewidth"} From the isochores in Fig. \[FIG:isochores\_BKS\_WAC\]b it is clear that WAC is very close to having a LLCP. If we compare the results of previous studies done on tetrahedral liquids [@MolineroPRL2006; @TuEPL2012] with our results for BKS and WAC, then we see that the tetrahedrality of BKS is far too small (i.e., the inter-tetrahedral bond angles are not sufficiently stiff) to have a LLCP, and that WAC is close, but not close enough. However, it might be possible to make a small change to the WAC potential to enhance its tetrahedrality. One simple way to achieve this would be to add a repulsive term similar to the three-body interaction of the Stillinger-Weber model. This term should penalize any Si-O-Si configuration with an angle less than $180^{\circ}$ with a repulsive energy determined by the intensity parameter $\lambda$ and the size of the deviation. The $\lambda$ value associated with this interaction should be carefully chosen; if $\lambda$ is too small no LLCP will arise, while applying a $\lambda$ that is too large will likely lead to crystallization into a diamond ($\beta$-cristobalite) structure. It would be interesting to see at what value of $k_2$ this criticality is introduced, and if this value is the same across other tetrahedral models as well, but this is beyond the scope of the present project. 
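The extraction of $k_2$ from a bond angle histogram can be sketched in a few lines. The snippet below is our own illustration (not the code used in this study), in arbitrary units with $k_{B}T=1$ and an assumed stiffness value: it tabulates $\mathcal{P}(\cos\theta)$ for a purely harmonic $U_{\text{eff}}$, shows that the corresponding $\mathcal{P}(\theta)=\sin\theta\,\mathcal{P}(\cos\theta)$ peaks below $180^{\circ}$, and recovers the stiffness by a linear fit of $-k_{B}T\ln\mathcal{P}(\cos\theta)$ against $\frac{1}{2}(\theta-\theta_0)^2$, i.e., the harmonic part of Eq. \[EQ:Ueff\].

```python
import numpy as np

# Illustrative parameters (arbitrary units, k_B*T = 1); not the paper's values.
kBT = 1.0
k2_true = 5.0          # assumed harmonic stiffness of U_eff
theta0 = np.pi         # preferred Si-O-Si angle, 180 degrees

# Tabulate P(cos theta) for a purely harmonic effective potential,
# P(cos theta) ∝ exp(-U_eff / k_B T) with U_eff = (1/2) k2 (theta - theta0)^2.
theta = np.linspace(0.5 * np.pi, np.pi - 1e-4, 400)
P_cos = np.exp(-0.5 * k2_true * (theta - theta0) ** 2 / kBT)

# The biased distribution P(theta) = sin(theta) * P(cos theta) peaks below
# 180 degrees even though U_eff is minimal at exactly 180 degrees.
P_theta = np.sin(theta) * P_cos
theta_peak = theta[np.argmax(P_theta)]

# Recover k2: -k_B T * ln P(cos theta) is linear in (theta - theta0)^2 / 2.
slope, _ = np.polyfit(0.5 * (theta - theta0) ** 2, -kBT * np.log(P_cos), 1)
k2_fit = slope
```

With simulation data one would histogram the measured angles first and include the quartic term of Eq. \[EQ:Ueff\] in the fit.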
The results presented here are also relevant to the possible existence of a LLCP in different water models, and highlight the importance of a thorough analysis of the O-H-O bond angle distribution. Such an analysis, possibly with the use of a bond angle stiffness parameter such as $k_2$, might be able to predict if a particular water model will have a LLCP. Unfortunately, to the best of our knowledge, it is currently not possible to measure these angles directly in experiments, as significant help from computer simulations is required to obtain the angular structure of liquid water [@SharpACR2010; @SoperPRL2000]. Conclusion ========== Although it has been suggested, based on a combination of simulation and theoretical considerations [@SaikaVoivodPRE2000], that both BKS and WAC have LLCPs at temperatures beyond the accessible simulation range, our study suggests that neither BKS nor WAC can reach a critical point. We have compared our results to those of other tetrahedral models [@MolineroPRL2006; @TuEPL2012], analyzed the bond angle distributions, and conclude that the lack of a LLCP in both BKS and WAC is due to a lack of stiffness in the inter-tetrahedral Si-O-Si bond angles. WAC is close to criticality, but BKS shows little sign of a LLCP, and since the latter is considered to be the more realistic model for experimental silica, we expect that no LLCP occurs in real silica either. However, this does not mean that manifestations of criticality can never be observed. As Chatterjee and Debenedetti [@ChatterjeeJCP2006] have shown theoretically, even a weak tendency toward criticality (as in BKS) can be amplified into a liquid-liquid phase separation in a binary system. 
Indeed this notion has been exploited elsewhere [@AngellVarna1996] to interpret the (much-studied [@CharlesJACS1966; @CharlesJACS1967; @CharlesPCG1969; @GalakhovCHAPTER1973; @DoremusBOOK1973; @HallerJACS1974; @MorishitaJACS2004] but incompletely understood) splitting out of an almost pure SiO$_2$ phase from such simple systems as the Na$_2$O-SiO$_2$ and Li$_2$O-SiO$_2$ binary glasses during supercooling. Acknowledgments =============== We would like to thank P. Debenedetti, V. Molinero, H. Aragão, and C. Calero for the many valuable discussions. EL and HES thank the National Science Foundation (NSF) Chemistry Division for support (Grant No. CHE 12-13217). SVB thanks the Dr. Bernard W. Gamson Computational Science Center at Yeshiva College for support. CAA acknowledges the support of this research through the National Science Foundation (NSF) experimental chemistry program under collaborative Grant No. CHE 12-13265. WAC and BKS silica {#appendix_WAC_BKS} ================== One of the simplest models for silica is the WAC model introduced by L. V. Woodcock, C. A. Angell, and P. Cheeseman [@WoodcockJCP1976]. The model is sometimes also known as the Transferable Ion Model (TRIM) because its potential is rather general and can also be used to model other ionic liquids [@HemmatiJNCS1997]. In the WAC model, the material consists of a 1:2 mixture of Si$^{4+}$ and O$^{2-}$ ions, without any explicit bonds.
Apart from the electrostatic force, the ions also interact with each other via an exponential term: $$\begin{aligned} & U_{\text{WAC}}(r_{ij}) \equiv \frac{1}{4 \pi \varepsilon_0} \frac{z_i z_j e^2}{r_{ij}} + a_{ij} \left( 1+\frac{z_i}{n_i}+\frac{z_j}{n_j} \right) \times \nonumber \\ & \qquad \exp \left[ B_{ij} (\sigma_i + \sigma_j - r_{ij}) \right] % \label{EQ:def_WAC_1}\end{aligned}$$ Here the subscripts $i,j \in \text{Si,O}$ indicate the species of the two ions involved, $z_i$ the charge of each ion ($z_{\text{Si}}=+4$, $z_{\text{O}}=-2$), $n_{\text{Si}}=n_{\text{O}}=8$ the number of outer shell electrons, and $\sigma_i$ the size of each ion ($\sigma_{\text{Si}}=0.1310$ nm, $\sigma_{\text{O}}=0.1420$ nm). For WAC silica the parameters $a_{ij}$ and $B_{ij}$ are the same for all pairs: $a_{ij} = 0.19$ perg (1 perg $= 10^{-12}$ erg) $\approx 11.44$ kJ/mol and $B_{ij}=34.48$ nm$^{-1}$ [@HemmatiJNCS1997]. The potential can also be written as $$\begin{aligned} U_{\text{WAC}}(r_{ij}) = \frac{1}{4 \pi \varepsilon_0} \frac{q_i q_j}{r_{ij}} + A_{ij} \exp(-B_{ij} r_{ij}), \label{EQ:def_WAC_2}\end{aligned}$$ with $A_{\text{SiSi}} =1.917\,991\,469 \times 10^5$ kJ/mol, $A_{\text{SiO}} = 1.751\,644\,217 \times 10^5$ kJ/mol, and $A_{\text{OO}} = 1.023\,823\,519 \times 10^5$ kJ/mol. The second model that we consider here is BKS, currently one of the most popular models; it was introduced by B. W. H. van Beest, G. J. Kramer, and R. A. van Santen [@vanBeestPRL1990] and is similar to WAC. Silica is again modeled as a simple 1:2 mixture of Si- and O-ions, without explicit bonds. To produce results that better match experiments and [*ab initio*]{} simulations, and to be able to effectively represent screening effects, the charges in BKS are not integer values of $e$ but instead are given by $q_{\text{Si}}=+2.4e$ and $q_{\text{O}}=-1.2e$.
In addition to this, the BKS potential also differs from the WAC model in that it includes an attractive $r^{-6}$ term: $$\begin{aligned} U_{\text{BKS}}(r_{ij}) \equiv \frac{1}{4 \pi \varepsilon_0} \frac{q_i q_j}{r_{ij}} + A_{ij} \exp(-B_{ij} r_{ij}) - C_{ij} r_{ij}^{-6}. % \label{EQ:def_BKS_1}\end{aligned}$$ In BKS there is no interaction between two Si-ions apart from the electrostatics, i.e. $A_{\text{SiSi}} = B_{\text{SiSi}} = C_{\text{SiSi}} = 0$. The parameters for the Si-O pair are $A_{\text{SiO}} \equiv 18\,003.7572$ eV, $B_{\text{SiO}} \equiv 4.87318$ Å$^{-1}$, and $C_{\text{SiO}} \equiv 133.5381$ eVÅ$^6$. For the O-O interaction, the numbers are $A_{\text{OO}} \equiv 1388.7730$ eV, $B_{\text{OO}} \equiv 2.76$ Å$^{-1}$, and $C_{\text{OO}} \equiv 175$ eVÅ$^6$. Although the BKS model has been quite successful in simulations of quartz and amorphous silica, at temperatures above $\sim 5000$ K two ions can come very close, causing problems. As $r \to 0$ the BKS potential diverges to $-\infty$ and the two ions fuse together, a non-physical phenomenon that is an artifact of the model. One way to solve this issue is by including an additional repulsive term at very small $r$, e.g., by adding an $r^{-30}$ term [@SaikaVoivodPRE2000]. When such a large power is used, however, a small time step is required to prevent large forces, which leads to much slower simulations. Because of this, we instead adjust the BKS potential at small $r$ by adding a second-degree polynomial for $r\leq r_{\text{s}}$. Here $r_{\text{s}}$ is the point at which the original BKS force has an inflection, i.e., where $d^{2}F_{\text{BKS}}/dr^{2} = -d^{3}U_{\text{BKS}}/dr^{3} = 0$. We choose the coefficients of the polynomial such that the new potential $U(r)$ has no inflection at $r = r_{\text{s}}$.
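The short-distance pathology is easy to reproduce numerically. The following sketch is our own illustration (not code from this work); it assumes the Coulomb constant $e^2/4\pi\varepsilon_0 \approx 138.935$ kJ mol$^{-1}$ nm $e^{-2}$ and works in kJ/mol and nm, evaluating the unmodified Si-O pair energies of WAC and BKS with the parameter values quoted in this appendix. At $r=0.05$ nm the WAC pair is still strongly repulsive, whereas the BKS energy has already fallen far below its value near the bonding distance.

```python
import math

KE = 138.935458  # e^2/(4 pi eps0) in kJ mol^-1 nm e^-2 (approximate)

def u_wac_sio(r):
    # WAC Si-O pair, second form of the potential; r in nm, energy in kJ/mol.
    q_si, q_o = 4.0, -2.0                # formal charges, in units of e
    A, B = 1.751644217e5, 34.48          # kJ/mol, nm^-1
    return KE * q_si * q_o / r + A * math.exp(-B * r)

def u_bks_sio(r):
    # Unmodified BKS Si-O pair; r in nm, energy in kJ/mol.
    q_si, q_o = 2.4, -1.2                # partial charges, in units of e
    A, B, C = 1.737098076e6, 48.7318, 1.288446484e-2
    return KE * q_si * q_o / r + A * math.exp(-B * r) - C * r**-6

r_bond, r_close = 0.16, 0.05             # nm: near-bonding vs very close approach
```

Calling `u_bks_sio(r_close)` shows the unphysical plunge of the $-C_{ij}r_{ij}^{-6}$ term, while `u_wac_sio(r_close)` remains positive (repulsive), which is why WAC needs no short-range fix.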
Adding the polynomial still leads to $U(r) \to -\infty$ when $r \to 0$, but increases the height of the energy barrier sufficiently to allow us to simulate the high temperatures we wish to explore. Choosing a short-range correction to BKS has been found to have little effect on the simulation results, and merely prevents the ions from fusing. To further speed up the simulations, we modify the BKS potential as described by K. Vollmayr, W. Kob, and K. Binder in Ref. [@VollmayrPRB1996], and truncate and shift the potential at $r_{\text{c}}=0.55$ nm. Although this truncation leads to a shift in pressure, it otherwise produces approximately the same results [@VollmayrPRB1996]. In conclusion, the modified BKS potential we use is given by $$\begin{aligned} & U'_{\text{BKS}}(r_{ij}) = \frac{1}{4 \pi \varepsilon_0}\frac{q_i q_j}{r_{ij}} \nonumber \\ &+ \left\{ \begin{array}{lll} a_{ij} r_{ij}^2 + b_{ij} r_{ij} + c_{ij} - \frac{1}{4 \pi \varepsilon_0}\frac{q_i q_j}{r_{ij}} & & (r_{ij} < r_{\text{s}}) \\ A_{ij} \exp(-B_{ij} r_{ij}) - C_{ij} r_{ij}^{-6} - U_{\text{c},ij} & & (r_{\text{s}} < r_{ij} < r_{\text{c}}) \\ 0 & & (r_{ij} > r_{\text{c}}), \\ \end{array} \right. \label{EQ:def_mod_BKS}\end{aligned}$$ with the parameter values for $ij=\text{SiO}$ and $ij=\text{OO}$ listed in Table \[TAB:parameters\_mod\_BKS\]. For the Si-Si interaction the potential is $U'_{\text{BKS}}(r_{\text{SiSi}}) = \frac{1}{4 \pi \varepsilon_0} q_{\text{Si}}^2 /r_{ij}$ and does not involve any cutoffs, apart from the real-space cutoff of the Ewald sum. 
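As a consistency check, the piecewise potential of Eq. \[EQ:def\_mod\_BKS\] can be implemented directly from the Si-O parameters of Table \[TAB:parameters\_mod\_BKS\]. This is an illustrative sketch (our own, assuming the Coulomb constant $e^2/4\pi\varepsilon_0 \approx 138.935$ kJ mol$^{-1}$ nm $e^{-2}$): the polynomial branch joins the BKS branch at $r_{\text{s}}$ to within the rounding of the published digits, and the shift $U_{\text{c}}$ makes the non-Coulomb part vanish at the cutoff $r_{\text{c}}$.

```python
import math

KE = 138.935458      # e^2/(4 pi eps0) in kJ mol^-1 nm e^-2 (approximate)
Q2 = 2.4 * (-1.2)    # q_Si * q_O in units of e^2

# Si-O parameters of the modified BKS potential (table values; kJ/mol, nm).
a, b, c = 2.678430850e5, -7.343377221e4, 2.353960789e3
A, B, C = 1.737098076e6, 48.7318, 1.288446484e-2
U_c = -0.465464470
r_s, r_c = 0.139018528, 0.55

def u_mod_bks_sio(r):
    # Coulomb part plus the piecewise short/medium/long-range part.
    coul = KE * Q2 / r
    if r < r_s:
        return a * r * r + b * r + c      # polynomial replaces everything here
    if r < r_c:
        return coul + A * math.exp(-B * r) - C * r**-6 - U_c
    return coul                            # only Coulomb beyond the cutoff

# The polynomial was constructed to join the BKS branch at r_s, and U_c
# shifts the non-Coulomb part so that it is (nearly) zero at the cutoff r_c.
jump = u_mod_bks_sio(r_s - 1e-9) - u_mod_bks_sio(r_s + 1e-9)
tail = A * math.exp(-B * r_c) - C * r_c**-6 - U_c
```

The residual mismatch at $r_{\text{s}}$ is of the order of the rounding of the tabulated digits and of the Coulomb constant, i.e., a small fraction of $k_BT$ at the temperatures of interest.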
|                   | Si-O                            | O-O                             | units           |
|-------------------|---------------------------------|---------------------------------|-----------------|
| $a_{ij}$          | $2.678\,430\,850\times10^{5}$   | $9.208\,901\,230\times10^{4}$   | kJ/(mol nm$^2$) |
| $b_{ij}$          | $-7.343\,377\,221\times10^{4}$  | $-4.873\,373\,066\times10^{4}$  | kJ/(mol nm)     |
| $c_{ij}$          | $2.353\,960\,789\times10^{3}$   | $7.337\,042\,047\times10^{3}$   | kJ/mol          |
| $A_{ij}$          | $1.737\,098\,076\times10^{6}$   | $1.339\,961\,920\times10^{5}$   | kJ/mol          |
| $B_{ij}$          | $48.7318$                       | $27.6$                          | nm$^{-1}$       |
| $C_{ij}$          | $1.288\,446\,484\times10^{-2}$  | $1.688\,492\,907\times10^{-2}$  | kJ nm$^6$/mol   |
| $U_{\text{c},ij}$ | $-0.465\,464\,470$              | $-0.575\,753\,031$              | kJ/mol          |
| $r_{\text{s}}$    | $0.139\,018\,528$               | $0.195\,499\,453$               | nm              |
| $r_{\text{c}}$    | $0.55$                          | $0.55$                          | nm              |

: Parameters of the modified BKS potential of Eq. (\[EQ:def\_mod\_BKS\]). Because Si-Si only has the (repulsive) Coulomb interaction, all parameters are zero for Si-Si. One mol here indicates one mol of ions, not one mol of SiO$_2$ molecules. []{data-label="TAB:parameters_mod_BKS"}

Calculation of response functions via surface fits {#appendix_smooth_surface} ================================================== In order to construct isobaric response functions from a large set of constant-volume ($NVT$) data, some type of fit or interpolation is needed. For example, to calculate $C_P = (\partial H / \partial T)_P$ we consider the enthalpy $H$ as a function of both $P$ and $T$ and fit the data $[P,T,H]$ with a smooth 3-dimensional surface $H(P,T)$. Abrupt changes in $H(P,T)$ lead to large spikes in its derivative $\partial H / \partial T$, and thus the $H(P,T)$ surface must be smooth if we are to obtain a meaningful $C_P$. Fitting a surface rather than a curve has the additional advantage that more data is taken into account, resulting in better statistics. An alternative approach is to calculate $C_P$ via fluctuations in $H$, but it has been shown [@LascarisAIP2013] that first fitting $H(T)$ and then taking a derivative leads to cleaner results.
It is of course easier to calculate $C_P$ by doing constant-pressure ($N\!PT$) simulations instead, but then one would have the same problem with calculating $C_V$. We conclude that we can easily calculate all response functions if we apply a smooth surface fit $f(x,y)$ to a set of 3-dimensional points $z_k(x_k,y_k)$. Fitting a surface to a set of points means striking a balance between the “smoothness” of the fit and the fitting error induced. One measure of smoothness is the Laplacian $\nabla^2 f$, since a small Laplacian means little change in the slope of $f(x,y)$, and thus a smoother function. Hence, to obtain a smooth surface fit $f(x,y)$ through the data points $z_k(x_k,y_k)$ with $k = 1,2,\dots,N$, we minimize $$\begin{aligned} J = \sum_{k=1}^{N} w_k \left[ f(x_k,y_k) - z_k \right]^2 + \iint \left| \nabla^2 f(x,y) \right|^2 dx\,dy. \label{EQ:smooth_surface}\end{aligned}$$ The weights $w_k$ provide the balance between the smoothness and the fitting error. If we set $w_k$ too low, we obtain a very smooth fitting function $f(x,y)$ that poorly represents the data. If we set $w_k$ too high, the function $f(x,y)$ will go through all the data points but will show large variations. Because large variations in the surface lead to even larger variations in the derivatives, the $H(P,T)$ surface must be very smooth when we calculate $C_P$. Fortunately, introducing small fitting errors does not cause problems, because the simulation data already suffers from small statistical errors. If the underlying response function is in fact smooth, then it is possible to use the fitting errors to partially cancel the statistical errors. Minimization of the functional $J$ in Eq. \[EQ:smooth\_surface\] is not a new concept. For example, the csaps function in MATLAB applies a similar minimization scheme to calculate a cubic smoothing spline.
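A one-dimensional analogue of the minimization of Eq. \[EQ:smooth\_surface\] (a curve instead of a surface, with a second-difference penalty in place of the Laplacian) already shows the mechanics. The sketch below is illustrative, not the fitting code used here: the minimizer of the discretized $J$ solves a small linear system, and since $J(f)\le J(z)$ by construction, the fit is guaranteed to be at least as smooth as the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1D data z_k at points x_k (a stand-in for, e.g., H(T) samples).
n = 80
x = np.linspace(0.0, 1.0, n)
z = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(n)

# Second-difference operator D: (D f)_i = f_i - 2 f_{i+1} + f_{i+2}.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0

# Minimize J(f) = w * ||f - z||^2 + ||D f||^2; the normal equations are
# (w I + D^T D) f = w z, a single small linear solve.
w = 1.0e-3
f = np.linalg.solve(w * np.eye(n) + D.T @ D, w * z)

# Since J(f) <= J(z) = ||D z||^2, both terms of J(f) are bounded by ||D z||^2.
smooth_fit = np.sum((D @ f) ** 2)
smooth_data = np.sum((D @ z) ** 2)
```

The surface version replaces $D$ by a discrete Laplacian on a 2-d grid; a derivative such as $C_P$ is then read off the smooth fit with, e.g., central differences.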
As opposed to this MATLAB function, we do not impose the constraint that $f(x,y)$ is a tensor product spline, but instead represent $f(x,y)$ by a set of $100 \times 100$ points $(x_i,y_j,f_{ij})$ placed on a regular grid $(x_i,y_j)$. Bilinear interpolation is used to estimate the value of $f(x,y)$ between these grid points, and the derivatives and the Laplacian are calculated using finite (central) differences. To compensate for the reduced number of data points near the edges of the domain, we recommend that higher-order differences near the edges be used. [10]{} , [G. J. Kramer]{}, and [R. A. van Santen]{}, , 1955 (1990). , [W. Kob]{}, and [K. Binder]{}, , 15808 (1996). and [W. Kob]{}, , 3169 (1999). , Liquids, Freezing and the Glass Transition, in [*Proceedings of the Les Houches Summer School of Theoretical Physics, Session LI*]{}, edited by [J.-P. Hansen]{}, [D. Levesque]{}, and [J. Zinn-Justin]{}, pp. 287–503, North-Holland, Amsterdam, 1991, 1989. and [L. Sjögren]{}, , 241 (1992). and [C. A. Angell]{}, Comparison of Pair Potential Models for the Simulation of Liquid [SiO2]{}: Thermodynamic, Angular-Distribution, and Diffusional Properties, in [*Physics Meets Mineralogy: Condensed Matter Physics in the Geosciences*]{}, edited by [H. Aoki]{}, [Y. Syono]{}, and [R. J. Hemley]{}, chapter 6.1, pp. 325–339, Cambridge University Press, Cambridge, England, 2000. , [D. B. Dingwell]{}, and [E. Rössler]{}, , 155 (1996). , [K.-U. Hess]{}, and [V. N. Novikov]{}, , 207 (1998). , [R. D. Bressel]{}, [M. Hemmatti]{}, [E. J. Sare]{}, and [J. C. Tucker]{}, , 1559 (2000). and [C. A. Angell]{}, , 739 (2003). , [U. V. Waghmare]{}, and [S. Sastry]{}, , 175701 (2004). , [C. T. Moynihan]{}, and [C. A. Angell]{}, , 492 (1999). and [M. Rovere]{}, , 164503 (2012). , [P. Kumar]{}, [S. V. Buldyrev]{}, [S.-H. Chen]{}, [P. H. Poole]{}, [F. Sciortino]{}, and [H. E. Stanley]{}, , 16558 (2005). , [L. Hu]{}, [Y. Yue]{}, and [J. C. Mauro]{}, , 014508 (2010). , [N. Jakse]{}, and [A. 
Pasturel]{}, , 104509 (2012). , [M. Hemmati]{}, and [C. A. Angell]{}, , 2281 (1997). , [F. Sciortino]{}, and [P. H. Poole]{}, , 011202 (2000). and [M. Hemmati]{}, Glass Transitions and Critical Points in Orientationally Disordered Crystals and Structural Glassformers: (“Strong” Liquids are More Interesting Than We Thought), in [*4th International Symposium on Slow Dynamics in Complex Systems*]{}, edited by [M. Tokuyama]{} and [I. Oppenheimer]{}, volume 1518, p. 9, AIP Conf. Proc., 2013. , [F. Sciortino]{}, [U. Essmann]{}, and [H. E. Stanley]{}, , 324 (1992). , [C. A. Angell]{}, and [P. Cheeseman]{}, , 1565 (1976). , [C. Kutzner]{}, [D. van der Spoel]{}, and [E. Lindahl]{}, , 435 (2008). , [D. Donadio]{}, and [M. Parrinello]{}, , 014101 (2007). , [E. Lascaris]{}, [G. Franzese]{}, [S. V. Buldyrev]{}, [H. J. Herrmann]{}, and [H. E. Stanley]{}, , 244506 (2013). , [T. A. Kesselring]{}, [G. Franzese]{}, [S. V. Buldyrev]{}, [H. J. Herrmann]{}, and [H. E. Stanley]{}, Response functions near the liquid-liquid critical point of [ST2]{} water, in [*4th International Symposium on Slow Dynamics in Complex Systems*]{}, edited by [M. Tokuyama]{} and [I. Oppenheimer]{}, volume 1518, pp. 520–526, AIP Conf. Proc., 2013. , , 1 (1985). , [C. T. Moynihan]{}, and [C. A. Angell]{}, , 6663 (2001). , [W. Kob]{}, [A. Latz]{}, [J. Horbach]{}, and [ K. Binder]{}, , 104204 (2001). , [P. H. Poole]{}, and [F. Sciortino]{}, , 514 (2001). , [S. Sastry]{}, and [C. A. Angell]{}, , 075701 (2006). and [T. A. Weber]{}, , 5262 (1985). , [S. Saw]{}, and [S. Sastry]{}, , 549 (2011). , private communication, 2013. , [S. V. Buldyrev]{}, [Z. Liu]{}, [H. Fang]{}, and [H. E. Stanley]{}, , 56005 (2012). and [D. Frenkel]{}, , 9882 (2003). and [F. Sciortino]{}, , 554 (2013). , [Z. Zhao]{}, [D. V. Matyushov]{}, and [C. A. Angell]{}, , 12A549 (2013). , [E. Sanz]{}, and [F. Sciortino]{}, , 174502 (2011). , [F. Smallenburg]{}, and [F. Sciortino]{}, , 234901 (2013). and [P. G. Debenedetti]{}, , 318 (2001). and [J. M. 
Vanderkooi]{}, , 231 (2010). and [M. A. Ricci]{}, , 2881 (2000). and [P. G. Debenedetti]{}, , 154503 (2006). , [P. H. Poole]{}, and [M. Hemmati]{}, A New Interpretation of Liquid-Liquid Unmixing in Classical Alkali Silicate Glasses, in [*Proc. 12th East European Glass Conf. (Varna, Bulgaria)*]{}, edited by [B. Samunova]{} and [Y. Demetriew]{}, pp. 100–109, 1996. , , 55 (1966). , , 631 (1967). , , 169 (1969). and [B. G. Varshal]{}, Causes of phase separation in simple silicate systems, in [*Phase-Separation Phenomena in Glasses*]{}, edited by [E. A. Porai-Koshits]{}, volume 8 of [*The Structure of Glass*]{}, pp. 7–11, Consultants Bureau, New York, 1973. , , Wiley, New York, 1973. , [D. H. Blackburn]{}, and [J. H. Simmons]{}, , 120 (1974). , [A. Navrotsky]{}, and [M. C. Wilding]{}, , 1550 (2004). and [C. A. Angell]{}, , 236 (1997).
--- abstract: 'Many statistical estimation procedures lead to nonconvex optimization problems. Algorithms to solve these problems are often guaranteed to output a stationary point of the optimization problem. Oracle inequalities are an important theoretical instrument to assess the statistical performance of an estimator. Oracle results have focused on the theoretical properties of the uncomputable (global) minimum or maximum. In the present work, a general framework for deriving oracle inequalities, previously used for convex optimization problems, is extended to stationary points. A main new ingredient of these oracle inequalities is that they are *sharp*: they show closeness to the best approximation within the model plus a remainder term. We apply this framework to different estimation problems.' address: | Seminar für Statistik\ ETH Zürich\ 8092 Zürich\ Switzerland\ \ author: - - bibliography: - 'myreferences.bib' title: 'Sharp oracle inequalities for stationary points of nonconvex penalized M-estimators' ---
--- abstract: 'Let $T$ be a random field invariant under the action of a compact group $G$. We give conditions ensuring that independence of the random Fourier coefficients is equivalent to Gaussianity. As a consequence, in general it is not possible to simulate a non-Gaussian invariant random field through its Fourier expansion using independent coefficients.' author: - | P.Baldi, D.Marinucci\ [*Dipartimento di Matematica, Università di Roma [*Tor Vergata*]{}, Italy*]{}\ V.S.Varadarajan\ [*Department of Mathematics, University of California at Los Angeles*]{} bibliography: - 'bibbase.bib' title: On the characterization of isotropic Gaussian fields on homogeneous spaces of compact groups --- [*Key words and phrases*]{} Isotropic Random Fields, Fourier expansions, Characterization of Gaussian Random Fields. [*AMS 2000 subject classification:*]{} Primary 60B15; secondary 60E05,43A30. Introduction {#intro} ============ Recently the topic of rotationally invariant [*real*]{} random fields on the sphere $\mathbb{S^2}$ has attracted increasing interest, due to applications to the statistical analysis of Cosmological and Astrophysical data (see [@MR2065205], [@mari2006a] and [@Kim]). Some results concerning their structure and their spectral decomposition have been obtained in [@BM06], where a peculiar feature has been pointed out, namely that if the development into spherical harmonics $$T=\sum_{\ell=1}^\infty \sum_{m=-\ell}^{\ell} a_{\ell,m}Y_{\ell,m}$$ of a rotationally invariant random field $T$ is such that the coefficients $a_{\ell,m}$, $\ell=1,2,\dots$, $0\le m\le \ell$, are independent, then the field is necessarily Gaussian (the other coefficients are constrained by the condition $a_{\ell,-m}=(-1)^m\overline a_{\ell,m}$). This fact (independence of the coefficients+isotropy$\Rightarrow$Gaussianity) is not true for isotropic random fields on other structures, such as the torus or ${{\mathbb{Z}}}$ (situations in which the acting group is Abelian).
This property implies in particular that non-Gaussian rotationally invariant random fields on the sphere [*cannot*]{} be simulated using independent coefficients. In this note we show that this is a typical phenomenon for homogeneous spaces of compact non-Abelian groups. This should be understood as a contribution to a much more complicated issue, i.e. the characterization of the isotropy of a random field in terms of its random Fourier expansion. In §2 and 3 we review some background material on harmonic analysis and spectral representations for random fields. §4 contains the main results, while an auxiliary proposition is deferred to §5. The Peter-Weyl decomposition {#Peter-Weil} ============================ Let ${\mathscr}X$ be a compact topological space and $G$ a compact group acting on ${\mathscr}X$ transitively. We denote by $m_G$ the Haar measure of $G$. We know that there exists on ${\mathscr}X$ a probability measure $m$ that is invariant by the action of $G$, denoted $x\to g^{-1}x$, $g\in G$. We assume that both $m$ and $m_G$ are normalized and have total mass equal to $1$. We shall write $L^2({\mathscr}X)$ or simply $L^2$ instead of $L^2({\mathscr}X,m)$. Unless otherwise stated the spaces $L^2$ are spaces of [*complex valued*]{} square integrable functions. We denote by $L_g$ the action of $G$ on $L^2$, that is $L_gf(x)= f(g^{-1}x)$. Let ${\widehat {\cl X}}$ be the set of equivalence classes of irreducible unitary representations of $G$ which occur in the decomposition of $L^2(\cl X, m)$. Since the action of $G$ commutes with the complex conjugation on $L^2(\cl X, m)$, it is clear that for any irreducible subspace $H$, its conjugate subspace $\overline H$ is also irreducible. If $H=\overline H$, we can find orthonormal bases $(\phi_k)$ for $H$ which are stable under conjugation; for instance we can choose the $\phi_k$ to be real.
If $H\not=\overline H$, then there are two cases according to whether the action of $G$ on $\overline H$ is, or is not, equivalent to the action on $H$. If the two actions are inequivalent, then automatically $H\perp\overline H$. If the actions are equivalent, it is possible that $H$ and $\overline H$ are not orthogonal to each other. In this case $H\cap \overline H=0$ as both are irreducible and $S=H+\overline H$ is stable under $G$ and conjugation. We can then find $K\subset S$ stable under $G$ and irreducible such that $\overline K\perp K$ and $S=K\oplus \overline K$ is an orthogonal direct sum. The proof of this is postponed to the Appendix so as not to interrupt the main flow of the argument. We thus obtain the following orthogonal decomposition of $L^2(\cl X, m)$, [*compatible with complex conjugation*]{}: $$\label{e.pw-dec2} L^2(\cl X, m)=\bigoplus_{i\in {\cl I}^o}H_i\oplus \bigoplus_{i\in {\cl I}^+}(H_i\oplus \overline {H_i})$$ where the direct sums are orthogonal and $$i\in {\cl I}^o\Leftrightarrow H_i=\overline H_i,\qquad i\in {\cl I}^+\Leftrightarrow H_i\perp\overline H_i.$$ We can therefore choose an orthonormal basis $(\phi_{ik})$ for $L^2(\cl X, m)$ such that for $i\in {\cl I}^o$, $(\phi_{ik})_{1\le k\le d_i}$ is an orthonormal basis of $H_i$ stable under conjugation, while, for $i\in {\cl I}^+$, $(\phi_{ik})_{1\le k\le d_i}$ is an orthonormal basis for $H_i$, where $d_i$ is the dimension of $H_i$; then, for $i\in {\cl I}^+$, $(\overline {\phi_{ik}})_{1\le k\le d_i}$ is an orthonormal basis for $\overline H_i$. Such an orthonormal basis $(\phi_{ik})_{ik}$ of $L^2(\cl X, m)$ is said to be [*compatible with complex conjugation*]{}. ${\mathscr}X=\mathbb{S}^1$, the one dimensional torus. Here $\widehat G={{\mathbb{Z}}}$ and $H_k$, $k\in{{\mathbb{Z}}}$ is generated by the function $\gamma_k({\theta})={{\rm e}}^{ik{\theta}}$. $\overline H_k=H_{-k}$ and $\overline H_k\perp H_{k}$ for $k\not=0$. All of the $H_k$’s are one-dimensional.
Recall that the irreducible representations of a compact topological group $G$ are all one-dimensional if and only if $G$ is Abelian. $G=SO(3)$, ${\mathscr}X=\mathbb{S}^2$, the sphere. A popular choice of a basis of $L^2({\mathscr}X,m)$ is the spherical harmonics, $(Y_{\ell,m})_{-\ell\le m\le\ell}$, $\ell\in{{\mathbb{N}}}$ (see [@MR1143783]). $H_\ell={\rm span}((Y_{\ell,m})_{-\ell\le m\le\ell})$ are subspaces of $L^2({\mathscr}X,m)$ on which $G$ acts irreducibly. We have $\overline Y_{\ell,m}=(-1)^mY_{\ell,-m}$ and $Y_{\ell,0}$ is real. By choosing $\phi_{\ell,m}=Y_{\ell,m}$ for $m\ge0$ and $\phi_{\ell,m}=(-1)^mY_{\ell,m}$ for $m<0$, we find a basis of $H_\ell$ such that if $\phi$ is an element of the basis, then the same is true for $\overline\phi$. Here $\dim(H_\ell)=2\ell+1$, $\overline H_\ell=H_\ell$, so that in the decomposition (\[e.pw-dec2\]) there are no subspaces of the form $H_i$ for $i\in{\cl I}^+$. The Karhunen-Loève expansion ============================ We consider on ${\mathscr}X$ a real [*centered*]{} square integrable random field $(T(x))_{x\in {\mathscr}X}$. We assume that there exists a probability space $(\Omega,\cl F,P)$ on which the r.v.’s $T(x)$ are defined and that $(x,\omega)\to T(x,\omega)$ is $\cl B(\cl X)\otimes\cl F$ measurable, $\cl B(\cl X)$ denoting the Borel $\sigma$-field of $\cl X$. We assume that $$\label{eq-l2bound} \E\Bigl[\int_{{\mathscr}X}T(x)^2\, dm(x)\Bigr]=M<+\infty$$ which in particular entails that $x\to T(x,\omega)$ belongs to $L^2(m)$ a.s. Let us recall the main elementary facts concerning the Karhunen-Loève expansion for such fields.
We can associate to $T$ the bilinear form on $L^2(m)$ $$\label{eq-bilinear} T(f,g)= \E\Bigl[\int_{\cl X}T(x) f(x)\, dm(x) \int_{\cl X}T(y) g(y)\, dm(y)\Bigr]$$ By (\[eq-l2bound\]) and the Schwarz inequality one easily gets that $$|T(f,g)|\le M\Vert f\Vert_2\Vert g\Vert_2\ .$$ Therefore, by the Riesz representation theorem there exists a function $R\in L^2(\cl X\times\cl X,m\otimes m)$ such that $$T(f,g)=\int_{\cl X\times\cl X}f(x)g(y)R(x,y)\, dm(x)dm(y)\ .$$ We can therefore define a continuous linear operator $R:L^2(m)\to L^2(m)$ $$Rf(x)=\int_{\cl X} R(x,y)f(y)\,dm(y)\ .$$ It can even be proved that the linear operator $R$ is of trace class and therefore compact (see [@MR2169627] for details). Since it is self-adjoint there exists an orthonormal basis of $L^2(\cl X,m)$ that is formed by eigenvectors of $R$. Let us define, for $\phi\in L^2(\cl X,m)$, $$a(\phi)=\int_{{\mathscr}X}T(x)\phi(x)\, dm(x)\ ,$$ Let $\lambda$ be an eigenvalue of $R$ and denote by $E_\lambda$ the corresponding eigenspace. Then the following is well-known. \[prop1\]*Let $\phi\in E_\lambda$. If $\psi\in L^2(\cl X,m)$ is orthogonal to $\phi$, $a(\psi)$ is orthogonal to $a(\phi)$ in $L^2(\Omega,\P)$. Moreover $\E[|a(\phi)|^2]=\lambda\Vert\phi\Vert_2^2$. If $\phi$ is orthogonal to $\overline\phi$, then the r.v.’s $\Re a(\phi)$ and $\Im a(\phi)$ are orthogonal and have the same variance. If the field $T$ is Gaussian, $a(\phi)$ is a Gaussian r.v. If moreover $\phi$ is orthogonal to $\overline\phi$, then $a(\phi)$ is a complex centered Gaussian r.v. (that is $\Re a(\phi)$ and $\Im a(\phi)$ are centered, Gaussian, independent and have the same variance).* [[*Proof*]{}. 
]{}a) We have $${\displaylines}{ \E[a(\phi)\overline a(\psi)]=\E\Bigl[ \int_{{\mathscr}X}T(x)\phi(x)\, dm(x) \int_{{\mathscr}X}T(y)\overline\psi(y)\, dm(y)\Bigr]=\cr =\int_{{\mathscr}X\times {\mathscr}X}R(x,y)\phi(x)\overline\psi(y) \, dm(x)\, dm(y)=\lambda\int_{{\mathscr}X}\phi(y)\overline\psi(y) \, dm(y)=\lambda\langle \phi,\psi\rangle\ .\cr }$$ From this relation, by choosing first $\psi$ orthogonal to $\phi$ and then $\psi=\phi$, the statement follows. b) From the computation in a), as $a(\overline\phi)=\overline{ a(\phi)}$, one gets $\E[a(\phi)^2]=\lambda\langle \phi,\overline\phi\rangle$. Therefore, if $\phi$ is orthogonal to $\overline\phi$, $\E[a(\phi)^2]=0$ which is equivalent to $\Re a(\phi)$ and $\Im a(\phi)$ being orthogonal and having the same variance. c) It is immediate that $a(\phi)$ is Gaussian. If $\phi$ is orthogonal to $\overline\phi$, $a(\phi)$ is a complex centered Gaussian r.v., thanks to b). $\blacksquare$ If $(\phi_k)_k$ is an orthonormal basis that is formed by eigenvectors of $R$, then under the assumption (\[eq-l2bound\]) it is well-known that the following expansion holds $$\label{e.kr-dev2} T(x)=\sum_{k=1}^\infty a(\phi_k)\phi_k(x)$$ which is called the Karhunen-Loève expansion. This is intended in the sense of $L^2(\cl X,m)$ a.s. in $\omega$. Stronger assumptions (e.g., continuity in square mean of $x\to T(x)$) also ensure that the convergence takes place in $L^2(\Omega,\P)$ for every $x$ (see, e.g., [@MR838963], p. 210). More relevant properties are true if we assume in addition that the random field is invariant by the action of $G$. Recall that the field $T$ is said to be (weakly) [*invariant*]{} by the action of $G$ if, for $f_1,\dots,f_m\in L^2(\cl X)$ the joint laws of $(T(f_1),\dots,T(f_m))$ and $(T(L_g f_1),\dots,T(L_g f_m))$ are equal for every $g\in G$.
Here we write $$T(f)=\int_{\cl X}T(x)f(x)\, dm(x),\qquad f\in L^2(\cl X)\ .$$ If, in addition, the field is assumed to be continuous in square mean, this implies that for every $x_1,\dots, x_m\in{\mathscr}X$, $(T(x_1),\dots,T(x_m))$ and $(T(g^{-1}x_1),\dots,T(g^{-1}x_m))$ have the same joint laws for every $g\in G$. If the field is invariant then it is immediate that the covariance function $R$ enjoys the invariance property $$\label{inv:R} R(x,y)=R(g^{-1}x,g^{-1}y)\qquad \hbox{a.e.\ for every $g\in G$}$$ which also reads as $$\label{kh-inv} L_g(Rf)=R(L_gf)\ .$$ Then, thanks to (\[kh-inv\]), it is clear that $G$ acts on $E_\lambda$. Therefore $E_\lambda$ is the direct sum of some of the $H_i$’s introduced in the previous section. Moreover it is a finite direct sum, unless $\lambda=0$, as the nonzero eigenvalues of a compact operator have finite multiplicity. It turns out therefore that the basis $(\phi_{ik})_{ik}$ of $L^2$ introduced in the previous section is always formed by eigenvectors of $R$. Moreover, if some of the $H_i$’s are of dimension $>1$, some of the eigenvalues of $R$ necessarily have a multiplicity that is strictly larger than $1$. As pointed out in §\[Peter-Weil\], this phenomenon is related to the non commutativity of $G$. For more details on the Karhunen-Loève expansion and group representations see [@PecPyc2005h]. Remark that if the random field is isotropic and satisfies (\[eq-l2bound\]), then (\[e.kr-dev2\]) follows by the Peter-Weyl theorem. Actually (\[eq-l2bound\]) entails that, for almost every $\omega$, $x\to T(x)$ belongs to $L^2(\cl X,m)$. An important issue when dealing with isotropic random fields is simulation.
In this regard, a natural starting point is the Karhunen-Loève expansion: one can actually sample r.v.’s $\alpha(\phi_k)$ (centered and standardized) and write $$\label{rem-sim} T_n(x)=\sum_{k=1}^n\sqrt{\lambda_k}\,\alpha(\phi_k) \phi_k(x)$$ where the sequence $(\lambda_k)_k$ is summable. The point of course is what conditions, in addition to those already pointed out, should be imposed in order that (\[rem-sim\]) defines an isotropic field. In order to have a real field, it will be necessary that $$\label{conjugate-cond} \alpha(\overline\phi_k)=\overline{\alpha(\phi_k)}\ .$$ Our main result (see next section) is that if the $\alpha(\phi_k)$’s are independent r.v.’s (abiding nonetheless by condition (\[conjugate-cond\])), then the coefficients, and therefore the field itself, are Gaussian. If $H_i\subset L^2({\mathscr}X,m)$ is a subspace on which $G$ acts irreducibly, then one can consider the random field $$T_{H_i}(x)=\sum a(\phi_j)\phi_j(x)$$ where the $\phi_j$ form an orthonormal basis of $H_i$. As remarked before, all functions in $H_i$ are eigenvectors of $R$ associated to the same eigenvalue $\lambda$. Putting together this fact with (\[e.kr-dev2\]) and (\[e.pw-dec2\]) we obtain the decomposition $$T=\sum_{i\in {\mathscr}I^\circ}T_{H^\circ_i}+\sum_{i\in {\mathscr}I^+}(T_{H^+_i}+T_{H^-_i})\ .$$ Let $T$ be a centered random field satisfying assumption (\[eq-l2bound\]) over the torus $\mathbb{T}$, whose Karhunen-Loève expansion is $$T({\theta})=\sum_{-\infty}^{+\infty} a_k\,e^{ik{\theta}},\qquad {\theta}\in\mathbb{T}\ .$$ Then, if $T$ is invariant by the action of $\mathbb{T}$ itself, the fields $(T({\theta}))_{\theta}$ and $(T({\theta}+{\theta}'))_{\theta}$ are equi-distributed, which implies that the two sequences of r.v.’s $$\label{torus-inv1} (a_k)_{-\infty<k<+\infty}\qquad\mbox{and}\qquad (e^{ik{\theta}'}a_k)_{-\infty<k<+\infty}$$ have the same finite-dimensional distributions for every ${\theta}'\in\mathbb{T}$.
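Condition (\[torus-inv1\]) is easy to probe numerically: independent coefficients with rotationally invariant (here deliberately non-Gaussian) laws, folded so that $a_{-k}=\overline a_k$, produce a real field whose covariance depends only on the angle difference. A minimal sketch (the number of modes, the eigenvalues $\lambda_k$ and the law of the moduli are arbitrary choices made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 200_000
lam = np.array([1.0, 0.5, 0.25])           # eigenvalues lambda_k (arbitrary)

# a_k = sqrt(lam_k) * r_k * e^{i phi_k}: rotationally invariant but not Gaussian
r = rng.uniform(0.5, 1.5, size=(n, K))     # non-Gaussian modulus
phi = rng.uniform(0, 2 * np.pi, size=(n, K))
a = np.sqrt(lam) * r * np.exp(1j * phi)

def T(theta):
    """T(theta) = sum_{k=1}^K (a_k e^{ik theta} + conj(a_k) e^{-ik theta}), per sample."""
    return 2.0 * np.real(a @ np.exp(1j * np.arange(1, K + 1) * theta))

def cov(t1, t2):
    x, y = T(t1), T(t2)
    return float(np.mean(x * y) - x.mean() * y.mean())

s = 0.7                                    # arbitrary rotation of the torus
c1, c2 = cov(0.3, 1.1), cov(0.3 + s, 1.1 + s)
print(c1, c2)                              # equal up to Monte Carlo error
```

Both estimates agree up to sampling error, reflecting the invariance of the laws of the $a_k$ under multiplication by $e^{ik\theta'}$.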
Actually one can restrict the attention to the coefficients $(a_k)_{0\le k<+\infty}$, as necessarily $a_{-k}=\overline a_k$. Conversely it is clear that if the two sequences in (\[torus-inv1\]) have the same distribution for every ${\theta}'\in\mathbb{T}$, then the field is invariant. Condition (\[torus-inv1\]) implies in particular that, for every $k\not=0$, the distribution of $a_k$ must be invariant by rotation (i.e. by multiplication by a complex number of modulus $1$). If one assumes moreover that the r.v.’s $a_k$ are independent, then every choice of a rotationally invariant distribution for $a_k$, $0< k<+\infty$, gives rise to a random field that is invariant with respect to the action of $\mathbb{T}$. Independent coefficients and non-Abelian groups =================================================== In this section we prove our main results showing that, if the group $G$ is non commutative and under some mild additional assumptions, independence of the coefficients of the Fourier development implies their Gaussianity and, therefore, also that the random field must be Gaussian. We stress that we [*do not*]{} assume independence of the real and imaginary parts of the random coefficients. \[plus\] *Let $\cl X$ be a homogeneous space of the compact group $G$. Let $H^+_i\subset L^2({\mathscr}X,m)$ be a subspace on which $G$ acts irreducibly, having a dimension $\ge 2$ and such that if $f\in H^+_i$ then $\overline f\not\in H^+_i$. Let $(\phi_k)_k$ be an orthonormal basis of $H^+_i$ and consider the random field $$T_{H^+_i}(x)=\sum_k a_k\phi_k(x)$$ for a family of r.v.’s $(a_k)_k\subset L^2(\Omega,\P)$. Then, if the r.v.’s $a_k$ are independent, the field $T_{H^+_i}$ is $G$-invariant if and only if the r.v.’s $(a_k)_k$ are jointly Gaussian and $\E(|a_k|^2)=c$ (and therefore also the field $T_{H^+_i}$ is Gaussian).* [[*Proof*]{}.
]{}Since $G$ acts irreducibly on $H^+_i$, we have $$\phi_k(g^{-1}x)=\sum_{\ell=1}^{d_i}D_{k,\ell}(g)\phi_\ell(x)\ ,$$ $d_i$ being the dimension of $H^+_i$ and $D(g)$ being the representative matrix of the action of $g\in G$. Therefore $$T(g^{-1}x)=\sum_{\ell=1}^{d_i} \tilde a_\ell \phi_\ell(x)$$ where $$\tilde a_\ell=\sum_{k=1}^{d_i} D_{k,\ell}(g)a_k\ .$$ If the field is $G$-invariant, then the vectors $(\tilde a_\ell)_\ell$ have the same joint distribution as $(a_k)_k$ and in particular the $(\tilde a_\ell)_\ell$ are independent. One can then apply the Skitovich-Darmois theorem below (see [@MR0346969] e.g.) as soon as it is proved that $g\in G$ can be chosen so that $D_{k,\ell}(g)\not=0$ for every $k,\ell$. This will follow from the considerations below, where it is proved that the set $Z_{k,\ell}$ of the zeros of $D_{k,\ell}$ has measure zero. Indeed, let $G_1$ be the image of $G$ in the representation space so that $G_1$ is also a connected compact group, and is moreover a Lie group since it is a closed subgroup of the unitary group ${\rm U}(d_i)$. If the representation is non trivial, then $G_1\not=\{1\}$ and in fact has positive dimension, and the $D_{k,\ell}$ are really functions on $G_1$. For any fixed $k, \ell$ the irreducibility of the action of $G_1$ implies that $D_{k,\ell}$ is not identically zero. Indeed, if this were not the case, we must have $(g\phi_\ell, \phi_k)=0$ for all $g\in G_1$, so that the span of the $g\phi_\ell$ is orthogonal to $\phi_k$; this span, being $G_1$-invariant and nonzero, must be the whole space by the irreducibility, and so we have a contradiction. Since $D_{k\ell}$ is a non zero analytic function on $G_1$, it follows from standard results that $Z_{k\ell}$ has measure zero. Hence $\bigcup _{k\ell}Z_{k\ell}$ has measure zero also, and so its complement in $G_1$ is non empty. $\blacksquare$ We use the following version of the Skitovich-Darmois theorem, which was actually proved by S. G. Ghurye and I. 
Olkin [@MR0137201] (see also [@MR0346969]). *Let $X_1,\dots ,X_r$ be mutually independent random vectors in $\mathbb{R}^{n}$. If the linear statistics $$L_{1}=\sum_{j=1}^{r}A_{j}X_{j}, \qquad L_{2}=\sum_{j=1}^{r}B_{j}X_{j}$$ are independent for some real nonsingular $n\times n$ matrices $A_{j},B_{j}$, $j=1,\dots ,r$, then each of the vectors $X_{1},\dots ,X_{r}$ is normally distributed.* We now investigate the case of the random field $T_H$, when $H$ is a subspace such that $\overline H=H$. In this case we can consider a basis of the form $\phi_{-k},\dots,\phi_k$, $k\le \ell$, with $\phi_{-k}=\overline \phi_k$. The basis may contain a real function $\phi_0$, if $\dim H$ is odd. Let us assume that the random coefficients $a_k$, $k\ge 0$, are independent. Recall that $a_{-k}=\overline{a_{k}}$. The argument can be implemented along the same lines as in Proposition \[plus\]. More precisely, if $m_1\ge 0$, $m_2\ge 0$, the two complex r.v.’s $$\label{tilde1} \begin{array}{c} \displaystyle \widetilde a_{m_{1}}=\sum_{m=-\ell}^\ell D_{m,m_{1}}(g)a_{m}\\ \displaystyle \widetilde a_{m_{2}}=\sum_{m=-\ell}^\ell D_{m ,m_{2}}(g)a_{m} \end{array}$$ have the same joint distribution as $a_{m_{1}}$ and $a_{m_{2}}$. Therefore, if $m_1\not =m_2$, they are independent. Moreover $a_{-m}=\overline{a_{m}}$, so that the previous relation can be written $$\begin{aligned} \widetilde a_{m_{1}}&=D_{0, m_{1}}(g)a_{0}+ \sum_{m=1}^\ell \Bigl(D_{m, m_{1}}(g)a_{m}+D_{-m, m_{1}}(g) \overline{a_{m}}\Bigr)\\ \widetilde a_{m_{2}}&=D_{0, m_{2}}(g)a_{0}+ \sum_{m=1}^\ell\Bigl( D_{m, m_{2}}(g)a_{m}+D_{-m, m_{2}}(g) \overline{a_{m}}\Bigr)\end{aligned}$$ In order to apply the Skitovich-Darmois theorem, we must ensure that $g\in G$ can be chosen so that the real linear maps $$\label{condition} z\to D_{m, m_{i}}(g)z+D_{-m, m_i}(g)\overline z, \qquad m=1,\dots,\ell,\ i=1,2$$ are all nonsingular. It is immediate that this condition is equivalent to imposing that $|D_{m, m_{i}}(g)|\not=|D_{-m, m_{i}}(g)|$.
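The way the theorem enters can be illustrated numerically in the simplest scalar situation $r=2$, $A_1=A_2=B_1=1$, $B_2=-1$: for independent centered $X_1,X_2$ the statistics $X_1+X_2$ and $X_1-X_2$ are always uncorrelated, but they are independent only when the $X_i$ are Gaussian. The following sketch (a numerical illustration only, not part of the argument) detects the residual dependence in the uniform case through the correlation of the squared statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

def sq_corr(x1, x2):
    """Correlation of (X1+X2)^2 and (X1-X2)^2, a simple proxy for dependence
    of the two linear statistics beyond their (vanishing) covariance."""
    l1, l2 = (x1 + x2) ** 2, (x1 - x2) ** 2
    return float(np.corrcoef(l1, l2)[0, 1])

# Uniform case: L1, L2 uncorrelated yet dependent
u = sq_corr(rng.uniform(-1, 1, n), rng.uniform(-1, 1, n))
# Gaussian case: L1, L2 genuinely independent
g = sq_corr(rng.standard_normal(n), rng.standard_normal(n))
print(u, g)   # u is markedly negative (about -0.43), g is near 0
```

Indeed, for centered $X$ with fourth moment $\mu_4$ and variance $\sigma^2$ one computes $\mathrm{Cov}(L_1^2,L_2^2)=2\mu_4-6\sigma^4$, which vanishes exactly at the Gaussian value $\mu_4=3\sigma^4$.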
We show below that (\[condition\]) is satisfied for some well-known examples of groups and homogeneous spaces. We do not know whether (\[condition\]) is always satisfied for every compact group. We are therefore stating our result conditional upon (\[condition\]) being fulfilled. \[assum0\] There exist $g\in G$, $0\le m_1<m_2\le\ell$ such that $$|D_{m, m_{i}}(g)|\not=|D_{-m, m_{i}}(g)|$$ for every $0\le m\le \ell$. We have therefore proved the following. \[zero\] *Let $\cl X$ be a homogeneous space of the compact group $G$. Let $H_i\subset L^2({\mathscr}X,m)$ be a subspace on which $G$ acts irreducibly, having a dimension $d> 2$ and such that $\overline H_i=H_i$. Let $(\phi_k)_k$ be an orthonormal basis of $H_i$ such that $\phi_{-k}=\overline \phi_k$ and consider the random field $$T_{H_i}(x)=\sum_k a_k\phi_k(x)$$ where the r.v.’s $a_k$, $k\ge 0$, are centered, square integrable, independent and $a_{-k}=\overline a_k$. Then $T_{H_i}$ is $G$-invariant if and only if the r.v.’s $(a_k)_{k\ge 0}$ are jointly Gaussian and $\E(|a_k|^2)=c$ (and therefore also the field $T_{H_i}$ is Gaussian).* Putting together Propositions \[plus\] and \[zero\] we obtain our main result. \[main\] Let $\cl X$ be a homogeneous space of the compact group $G$. Consider the decomposition (\[e.pw-dec2\]) and let $\big((\phi_{ik})_{i\in {\mathscr}I^\circ}, (\phi_{ik}, \overline\phi_{ik})_{i\in {\mathscr}I^+} \big)$ be a basis of $L^2(G)$ adapted to that decomposition. Let $$T=\sum_{i\in {\mathscr}I^\circ}\sum_k a_{ik}\phi_{ik}+\sum_{i\in {\mathscr}I^+}\sum_k \big(a_{ik}\phi_{ik}+\overline a_{ik}\overline \phi_{ik}\big)$$ be a random field on $\cl X$, where the series above are intended to be converging in square mean. Assume that $T$ is isotropic with respect to the action of $G$ and that the coefficients $(a_{ik})_{i\in {\mathscr}I^\circ, k\ge 0}, (a_{ik})_{i\in {\mathscr}I^+}$ are independent.
Assume moreover that: a) the only one-dimensional irreducible representation appearing in (\[e.pw-dec2\]) is the trivial one, given by the constants; b) there are no $2$-dimensional subspaces $H\subset L^2(\cl X)$ invariant under the action of $G$ and such that $\overline H=H$; c) the random coefficient corresponding to the trivial representation vanishes; d) for every $H\subset L^2(\cl X)$ that is irreducible under the action of $G$ and such that $\overline H=H$, Assumption \[assum0\] holds. Then the coefficients $(a_{ik})_{i\in {\mathscr}I^\circ, k\ge 0}, (a_{ik})_{i\in {\mathscr}I^+}$ are Gaussian and the field itself is Gaussian. The following statements clarify the meaning of assumptions a)–d). The following result gives a condition ensuring that assumption b) of Theorem \[main\] is satisfied. \[prop411\] Let $U$ be an irreducible unitary $2$-dimensional representation of $G$ and let $H_1$ and $H_2$ be the two corresponding subspaces of $L^2(G)$ in the Peter-Weyl decomposition. If $U$ has values in $SU(2)$, then $\overline{H}_1=H_2\not=H_1$. [[*Proof*]{}. ]{}If we write $$U(g)=\begin{pmatrix}a(g)&b(g)\cr c(g)&d(g)\cr \end{pmatrix}$$ then one can assume that $H_1$ is generated by the functions $a$ and $c$, whereas $H_2$ is generated by $b$ and $d$. It suffices now to show that $\overline a$ is orthogonal to both $a$ and $c$. But, since the matrix $U(g)$ belongs to $SU(2)$, we have $\overline{a(g)}=d(g)\in H_2$. $\blacksquare$ Recall that the commutator $G_0$ of a topological group $G$ is the closed subgroup generated by the elements of the form $xyx^{-1}y^{-1}$. \[semisimple\] Let $G$ be a compact group such that its commutator $G_0$ coincides with $G$ itself. Then assumptions a) and b) of Theorem \[main\] are satisfied. In particular these assumptions are satisfied if $G$ is a semisimple Lie group. [[*Proof*]{}. ]{}Recall that if $G_0=G$, then $G$ cannot have a nontrivial abelian quotient.
If there were a unitary representation $U$ whose determinant is not identically equal to $1$, then $g\to\det(U(g))$ would be a homomorphism onto the torus $\mathbb{T}$ and therefore $G$ would possess $\mathbb{T}$ as a quotient. The same argument proves that $G$ cannot have a one dimensional unitary representation other than the trivial one. One can therefore apply Proposition \[prop411\] and b) is satisfied. $\blacksquare$ It is easy to prove that Assumption \[assum0\] is satisfied when $\cl X=\mathbb S^2$ and $G=SO(3)$. As mentioned in [@BM06], this can be established using explicit expressions of the representation coefficients as provided e.g. in [@MR1022665]. In the same line of arguments it is also easy to check the same in the cases $\cl X=SO(3)$, $G=SO(3)$ and $\cl X=SU(2)$, $G=SU(2)$. As for condition c) of Theorem \[main\], let us remark that the coefficient of the trivial representation corresponds to the empirical mean of the field. As any random field can be decomposed into the sum of its empirical mean plus a field whose coefficient of the trivial representation vanishes, our result can be interpreted in terms of Gaussianity of this second component. Appendix {#sec-appendix} ======== \[raja5\] Let $V$ be a finite dimensional Hilbert space on which $G$ acts unitarily, and let $V$ be equipped with a conjugation $\sigma\ (v\mapsto \bar v)$ commuting with the action of $G$. Let $H$ be an irreducible $G$-invariant subspace and suppose that $V=H+\overline H$. If the actions of $G$ on $H$ and $\overline H$ are inequivalent, then $\overline H\perp H$ and $V=H\oplus \overline H$. If the actions of $G$ on $H$ and $\overline H$ are equivalent, then either $H=\overline H$ or we can find an irreducible $G$-invariant subspace $K$ of $V$ such that $\overline K\perp K$ and $V=K\oplus \overline K$. [[*Proof*]{}. ]{}Let $P$ be the orthogonal projection $V\to \overline H$ and $A$ its restriction to $H$.
Then, for every $h\in H$, $h'\in\overline H$ and $g\in G$, we have $$\langle g(Ah),h'\rangle=\langle Ah,g^{-1}h'\rangle=\langle h,g^{-1}h'\rangle=\langle gh,h'\rangle=\langle A(gh),h'\rangle\ .$$ From this we get that $G$ acts on $A(H)$. The action of $G$ on $\overline H$ being irreducible, we have either $A(H)=\{0\}$ or $A(H)=\overline H$. In the first case $H$ is already orthogonal to $\overline H$. Otherwise $A$ intertwines the actions on $H$ and on $\overline H$, so that these are equivalent and $V=H\oplus H^\perp$. $V$ being the sum of two copies of the representation on $H$, there is a [*unitary*]{} isomorphism $V\simeq H\otimes {{\mathbb{C}}}^2$ where ${{\mathbb{C}}}^2$ is given the standard scalar product. So we assume that $V=H\otimes {{\mathbb{C}}}^2$. $G$ acts only on the first component, so that $G$ acts irreducibly on every subspace of the form $H\otimes Z$, $Z$ being a one dimensional subspace of ${{\mathbb{C}}}^2$. Let us identify the action of $\sigma$ on $H\otimes {{\mathbb{C}}}^2$. Let $\sigma_0$ be the conjugation on $V$ defined by $\sigma_0(u\otimes v)=u\otimes \bar v$ where $v\to \bar v$ is the standard conjugation $(z_1,z_2)\to (\overline {z_1}, \overline {z_2})$. Then $\sigma\sigma_0$ is a [*linear operator*]{} commuting with $G$ and so is of the form $1\otimes L$ where $L\colon{{\mathbb{C}}}^2\to {{\mathbb{C}}}^2$ is a linear operator. Hence $$\sigma (u\otimes v)=\sigma\sigma_0 (u\otimes \overline v)=u\otimes L\bar v.$$ If $Z$ is any one dimensional subspace of ${{\mathbb{C}}}^2$, $H\otimes Z$ is $G$-invariant and irreducible, and we want to show that for some $Z$, $H\otimes Z\perp H\otimes Z^\sigma$, i.e., $Z\perp Z^\sigma$, where $Z^\sigma=\sigma(Z)$. For any such $Z$, let $v$ be a nonzero vector in it; then the condition $Z\perp Z^\sigma$ becomes $(v, L\bar v)=0$ where $(,)$ is the scalar product in ${{\mathbb{C}}}^2$. Since $(,)$ is Hermitian, $B(v,w):=(v,L\bar w)$ is [*bilinear*]{} and we want $v$ to satisfy $B(v,v)=0$.
This is actually standard: indeed, replacing $B$ by $B+B^T$ (which just doubles the quadratic form) we may assume that $B$ is symmetric. If $B$ is degenerate, there is a nonzero $v$ such that $B(v,w)=0$ for all $w$, hence $B(v,v)=0$. If $B$ is nondegenerate, there is a basis $v_1, v_2$ for ${{\mathbb{C}}}^2$ such that $B(v_i,v_j)=\delta_{ij}$. Then, if $w=v_1+iv_2$ where $i=\sqrt {-1}$, $B(w,w)=0$. $\blacksquare$
--- abstract: 'The article is devoted to the problem of Hilbert-Schmidt type analytic extensions in Hardy spaces over the infinite-dimensional unitary group endowed with an invariant probability measure. Reproducing kernels of Hardy spaces, integral formulas of analytic extensions and their boundary values are considered.' address: | 1 Pigonia str.\ 35-310 Rzeszów\ Poland author: - Oleh Lopushansky date: 'January 11, 2015' title: 'The Hilbert-Schmidt analyticity associated with infinite-dimensional unitary groups' --- [^1] Introduction ============ The paper deals with the problem of Hilbert-Schmidt type analytic extensions in the Hardy space ${H}^2_\chi$ of complex functions over the infinite-dimensional group $U(\infty)=\bigcup\left\{ U(m)\colon m\in\mathbb{N}\right\}$ endowed with an invariant probability measure $\chi$ where $U(m)$ are subgroups of unitary $m\times m$-matrices. The measure $\chi$ is defined as a projective limit $\chi=\varprojlim\chi_m$ of the Haar probability measures $\chi_m$ on $U(m)$. Moreover, $\chi$ is supported by a projective limit $\mathfrak{U}=\varprojlim U(m)$ and is invariant under the right action of $U^2(\infty):= {U(\infty)\times U(\infty)}$ on $\mathfrak{U}$. A goal of this work is to find integral formulas for Hilbert-Schmidt analytic extensions of functions from ${H}^2_\chi$ and to describe their radial boundary values on the open unit ball in a Hilbert space $\mathsf{E}$ where $U(\infty)$ acts irreducibly. The measure $\chi$ on $\mathfrak{U}$ was described by G. Olshanski [@Olshanski2003], Y. Neretin [@Neretin2002]. The space $\mathfrak{U}$ is related to D. Pickrell’s space of virtual Grassmannian [@Pickrell]. Hardy spaces in infinite-dimensional settings were discussed in the works of B. Cole, T.W. Gamelin [@ColeGamelin86], B. Ørsted, K.-H. Neeb [@OrtedNeeb98]. Spaces of analytic functions of Hilbert-Schmidt holomorphy types were considered by T.A.W. Dwyer III [@Dwyer71], H. Petersson [@Petersson2001].
More general classes of analytic functions associated with coherent sequences of polynomial ideals were described by D. Carando, V. Dimant, S. Muro [@Carado09]. Integral formulas for analytic functions employing Wiener measures on infinite-dimensional Banach spaces were suggested by D. Pinasco, I. Zalduendo [@PinascoZalduendo05]. Note that spaces of integrable functions with respect to invariant measures over infinite-dimensional groups have been widely applied in stochastic processes [@BorodinOlshanski05; @Borodin11], as well as in other areas. This paper presents the following results. In Theorem \[irrep1\], we describe an orthogonal basis in the Hardy space ${H}^2_\chi$ indexed by Young diagrams and consisting of $\chi$-essentially bounded functions. Using this basis, in Theorem \[Cauchy1\] the reproducing kernel of ${H}^2_\chi$ is calculated. It also allows us to define an antilinear isometric isomorphism $\mathcal{J}$ between ${H}^2_\chi$ and the symmetric Fock space $\Gamma$ generated by $\mathsf{E}$. This isomorphism equips ${H}^2_\chi$ with a suitable infinite-dimensional analytic structure. By means of $\mathcal{J}$, we establish in Theorem \[hard3\] an integral formula for Hilbert-Schmidt analytic extensions of functions from ${H}^2_\chi$ on the open unit ball $\mathsf{B}\subset\mathsf{E}$. The radial boundary values of these analytic extensions are described in Theorem \[car:hardy2\]. Background on invariant measure =============================== Let $U(m)$ $(m\in\mathbb{N})$ be the group of unitary $(m\times m)$-matrices. We endow $U(\infty)=\bigcup U(m)$ with the inductive topology determined by the continuous inclusions $U(m)\looparrowright U(\infty)$ which assign to any $u_m\in U(m)$ the matrix ${\begin{bmatrix} u_m & 0\\ 0 &\mathbbm{1}\\ \end{bmatrix}\in U(\infty)}$.
The right action over $U(\infty)$ is defined via $$\label{right0} u.g=w^{-1}uv,\qquad u\in U(\infty),\quad g=(v, w)\in U^2(\infty)$$ (the right action over $U(m)$ is defined similarly with ${u\in U(m)}$ and $g={(v, w)\in U^2(m)}$ where $U^2(m):= U(m)\times U(m)$). Following [@Neretin2002; @Olshanski2003], every $u_m\in U(m)$ with $m>1$ can be written as $u_m=\begin{bmatrix} z_{m-1} & a \\ b & t \\ \end{bmatrix}$ where $z_{m-1}$ is an $(m-1)\times(m-1)$-matrix and $t\in\mathbb{C}$. It was proven that the Livšic-type mapping (which is not a group homomorphism) $$\begin{aligned} \label{projective} &\pi^m_{m-1}\colon{u_m}\longmapsto u_{m-1}:=\left\{\!\!\begin{array}{lc} z_{m-1}-[a(1+t)^{-1}b]&\!\!\!\!: t\not=-1 \\ z_{m-1}&\!\!\!\!: t=-1 \\ \end{array}\right.\end{aligned}$$ from $U(m)$ onto $U(m-1)$ is Borel and surjective. Consider the projective limit $\mathfrak{U}=\varprojlim U(m)$ taken with respect to $\pi_{m-1}^m$. The embedding $\rho\colon U(\infty)\looparrowright\mathfrak{U}$ assigns to every $u_m\in U(m)$ the stabilized sequence $u=(u_k)_{k\in\mathbb{N}}$ (see [@Olshanski2003 n.4]) so that $$\label{rho} \rho\colon U(m)\ni u_m\longmapsto(u_k)\in\mathfrak{U},\qquad u_k=\left\{\begin{array}{ccl} \pi^m_k(u_m)&:&k<m,\\ u_m &:& k=m, \\ \begin{bmatrix} u_m& 0 \\ 0 &1\\ \end{bmatrix} &:&k>m \end{array}\right.$$ where the projections $\pi_m\colon\mathfrak{U}\ni u\longrightarrow u_m\in U(m)$ such that ${\pi_{m-1}^m\circ\pi_m}=\pi_{m-1}$ are surjective and $\pi^m_k:=\pi^{k+1}_k\circ\ldots\circ \pi^m_{m-1}$ for ${k<m}$. Using , the right action of $U^2(\infty)$ over $\mathfrak{U}$ can be defined as $$\label{right} \pi_m(u.g)=w^{-1}\pi_m(u)v,\qquad u\in \mathfrak{U}$$ where $m$ is so large that $g=(v,w)\in U^2(m)$ (see [@Olshanski2003 Def 4.5]). We endow every group $U(m)$ with the probability Haar measure $\chi_m$. It is known [@Neretin2002 Thm 1.6] that the pushforward of $\chi_m$ to $U(m-1)$ under $\pi_{m-1}^m$ is the probability Haar measure $\chi_{m-1}$ on $U(m-1)$.
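The pushforward property can be probed numerically in the smallest case $m=2$, where the Livšic-type map sends $U(2)$ onto $U(1)$: applied to Haar-distributed matrices, it should produce points uniformly distributed on the unit circle. A sketch (Haar unitaries are sampled as a uniform phase times an $SU(2)$ matrix built from a random unit quaternion; sample sizes and tolerances are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_u2(n):
    """n Haar-distributed 2x2 unitaries: a uniform phase times an SU(2)
    matrix [[x+iy, w+iv], [-w+iv, x-iy]] from a uniform unit quaternion."""
    q = rng.standard_normal((n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    x, y, w, v = q.T
    su = np.empty((n, 2, 2), dtype=complex)
    su[:, 0, 0] = x + 1j * y
    su[:, 0, 1] = w + 1j * v
    su[:, 1, 0] = -w + 1j * v
    su[:, 1, 1] = x - 1j * y
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
    return phase[:, None, None] * su

def livsic(u):
    """pi^2_1: [[z, a], [b, t]] -> z - a b / (1 + t)  (t = -1 occurs on a null set)."""
    z, a, b, t = u[:, 0, 0], u[:, 0, 1], u[:, 1, 0], u[:, 1, 1]
    return z - a * b / (1 + t)

img = livsic(haar_u2(20_000))
print(np.max(np.abs(np.abs(img) - 1)))  # image lies in U(1): all moduli equal 1
print(np.abs(img.mean()))               # pushforward uniform on the circle: mean near 0
```

The first check confirms that the image of a unitary matrix is again unitary; the second that the empirical mean vanishes, as it does for the Haar measure on $U(1)$.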
Let $U'(m)$ be the subset in $U(m)$ of matrices which do not have $-1$ as an eigenvalue. Then $U'(m)$ is open in $U(m)$ and ${U(m)\setminus U'(m)}$ is $\chi_m$-negligible. Moreover, the restriction $\pi_{m-1}^m\colon U'(m)\longrightarrow U'(m-1)$ is continuous and surjective [@Olshanski2003 Lem. 3.11]. Following [@Olshanski2003 Lem. 4.8], [@Neretin2002 n.3.1], via the Kolmogorov consistency theorem we uniquely define on $\mathfrak{U}$ the probability measure $\chi$ which is the projective limit under the mapping , i.e., we put $$\label{proj1} \chi=\varprojlim\chi_m\quad\text{with}\quad \chi_m=\chi\circ\pi_m^{-1} \quad\text{for all}\quad m\in\mathbb{N}.$$ If $\mathfrak{U}'=\varprojlim U'(m)$ is the projective limit with respect to $\pi_{m-1}^m\mid_{U'(m)}$ then $\mathfrak{U}\setminus\mathfrak{U}'$ is $\chi$-negligible, because $\chi_m$ is zero on $U(m)\setminus U'(m)$ for any $m$. A complex-valued function on $\mathfrak{U}$ is called cylindrical if it has the form $f=f_m\circ \pi_m$ for a certain $m\in\mathbb{N}$ and a complex function $f_m$ on $U(m)$ [@Olshanski2003 Def. 4.5]. By $L_\chi^\infty$ we denote the closed linear hull of all cylindrical $\chi$-essentially bounded Borel functions endowed with the norm $\|f\|_{L_\chi^\infty}=\mathop{\rm ess\,sup}_{u\in\mathfrak{U}}|f(u)|$. The measure is a probability measure and is $U^2(\infty)$-invariant under the right actions over $\mathfrak{U}$ [@Neretin2002 Prop. 3.2]. Moreover, this measure is Radon so that $$\label{inv} \int_\mathfrak{U}f(u.g)\,d\chi(u)=\int_\mathfrak{U}f(u)\,d\chi(u),\qquad g\in U^2(\infty),\quad f\in L_\chi^\infty$$ and it satisfies the property: $(\chi\circ\pi_m^{-1})(K)=\chi_m(K)$ for any compact set $K$ in $U(m)$ [@lopushansky2013 Lem. 1]. Using the invariance property and the Fubini theorem (see [@lopushansky2013 Lem.
2]), we obtain $$\begin{aligned} \label{inv1} \int_\mathfrak{U} f\,d\chi&=\int_\mathfrak{U}d\chi(u)\int_{U^2(m)}f(u.g)\,d(\chi_m\otimes\chi_m)(g),\\\label{inv2} \int_\mathfrak{U}f\,d\chi&=\frac{1}{2\pi}\int_\mathfrak{U}\!d\chi(u)\int_{-\pi}^{\pi}f\left[\exp(\mathbbm{i}\vartheta)u\right]\,d\vartheta\end{aligned}$$ for all $f\in L_\chi^\infty$. The closed linear hull of cylindrical complex functions endowed with the norm $\|f\|_{L^2_\chi}=\left(\int_\mathfrak{U}|f|^2\,d\chi\right)^{1/2}$ is denoted by $L^2_\chi$. It is clear that $L^\infty_\chi\looparrowright L^2_\chi$ and $\|f\|_{L^2_\chi}\le\|f\|_{L^\infty_\chi}$ for all ${f\in{L}^\infty_\chi}$. Hardy spaces ============ Throughout the paper $\mathsf{E}$ is a separable complex Hilbert space with an orthonormal basis $\left\{\mathfrak{e}_k\colon k\in\mathbb{N}\right\}$, scalar product $\langle\cdot\mid\cdot\rangle$ and norm $\|\cdot\|={\langle\cdot\mid\cdot\rangle^{1/2}}$. So, for any element $x\in\mathsf{E}$ the following Fourier decomposition holds, $$\label{Fx} x=\sum\mathfrak{e}_k\hat{x}_k,\qquad \hat{x}_k=\langle x\mid\mathfrak{e}_k\rangle.$$ In what follows, let $\mathsf{B}=\left\{x\in\mathsf{E}\colon\|x\|<1\right\}$ and $\mathsf{S}=\left\{x\in\mathsf{E}\colon\|x\|=1\right\}$. Let $\mathsf{E}^{\otimes n}$ be the complete $n$th tensor power of $\mathsf{E}$ endowed with the scalar product and norm $$\big\langle \psi\mid\phi\big\rangle=\langle x_1\mid y_1\rangle\ldots\langle x_n\mid y_n\rangle, \qquad \left\|\psi\right\|=\left\langle\psi\mid\psi\right\rangle^{1/2}$$ for all $\psi=x_1\otimes\ldots\otimes x_n$, $\phi=y_1\otimes\ldots\otimes y_n\in\mathsf{E}^{\otimes n}$ with $x_i,y_i\in\mathsf{E}$ ${(i=1,\ldots,n)}$. 
As $\sigma\colon\{1,\ldots,n\}\longmapsto\{\sigma(1),\ldots,\sigma(n)\}$ runs through all $n$-element permutations, the symmetric complete $n$th tensor power $\mathsf{E}^{\odot n}$ is defined to be a codomain of the orthogonal projector $$\mathsf{E}^{\otimes n}\ni\psi\longmapsto {x_1\odot\ldots\odot x_n}:= \frac{1}{n!} \sum_\sigma{x_{\sigma(1)}\otimes\ldots\otimes x_{\sigma(n)}}\in\mathsf{E}^{\odot n}.$$ Note that $x^{\otimes n}={x\otimes\ldots\otimes x}={x\odot\ldots\odot x}=x^{\odot n}$. Put $\mathsf{E}^{\otimes 0}=\mathsf{E}^{\odot 0}=\mathbb{C}$. Let $\lambda=(\lambda_1,\ldots,\lambda_m)\in\mathbb{N}^m$ be a partition of an integer ${n\in\mathbb{N}}$ with ${m\le n}$ and ${\lambda_1\ge\lambda_2\ge\ldots\ge\lambda_m>0}$, i.e., $|\lambda|=n$ where $|\lambda|:=\lambda_1+\ldots+\lambda_m$. We identify partitions with Young diagrams. By $\ell(\lambda)=m$ we denote the length of $\lambda$ defined as the number of rows in $\lambda$. Let $\mathbb{Y}$ denote all Young diagrams and $\mathbb{Y}_n:=\left\{\lambda\in\mathbb{Y}\colon|\lambda|=n\right\}$. Assume that $\mathbb{Y}$ includes the empty partition $\emptyset = (0, 0, \ldots )$. An orthogonal basis in $\mathsf{E}^{\odot n}$ is formed by the system of symmetric tensor products (see e.g. [@BerezanskiKondratiev95 Sec. 2.2.2]) $$\mathfrak{e}^{\odot\mathbb{Y}_n}=\bigcup_{\lambda\in\mathbb{Y}_n} \left\{\mathfrak{e}^{\odot\lambda}_\imath:= {\mathfrak{e}^{\otimes\lambda_1}_{\imath_1}\odot\ldots\odot\mathfrak{e}^{\otimes\lambda_m}_{\imath_m}} \colon\imath\in\mathbb{N}^m_\ast, \ m=\ell(\lambda)\right\}, \quad \mathfrak{e}^{\odot\emptyset}_\imath=1$$ where $\mathbb{N}^m_*:= \left\{\imath=\left({\imath_1},\ldots,{\imath_m}\right)\in\mathbb{N}^m\colon\imath_j\neq\imath_k, \,\forall\,j\neq k\right\}$.
As is well known, $$\label{normfock} \left\|\mathfrak{e}^{\odot\lambda}_\imath\right\|^2= \frac{\lambda!}{n!},\qquad\lambda!:=\lambda_1!\cdot\ldots\cdot\lambda_m!.$$ In what follows, we will use the fact that for every ${\psi\in\mathsf{E}^{\odot n}}$ one can uniquely define the so-called [*Hilbert-Schmidt $n$-homogeneous polynomial*]{} $$\psi^*(x):={\left\langle x^{\otimes n}\mid\psi\right\rangle},\qquad {x\in\mathsf{E}}.$$ In fact, the polarization formula for symmetric tensor products (see [@Floret97 1.5]) $$\label{polarization} z_1\odot\dots \odot z_n=\frac{1}{2^nn!} \sum_{\theta_1,\ldots,\theta_n=\pm 1} \theta_1\dots \theta_n\,x^{\otimes n},\quad x=\sum_{k=1}^n\theta_k z_k$$ $(z_1,\ldots, z_n\in\mathsf{E})$ implies that the $n$-homogeneous polynomial ${\left\langle x^{\otimes n}\mid\psi\right\rangle}$ is uniquely defined by $\psi$, because the set of elements ${z_1\odot\dots\odot z_n}$ is total in ${\mathsf{E}^{\odot n}}$. Using the embedding , we define the $\mathsf{E}$-valued mapping $$\zeta\colon{\mathfrak{U}\ni u\longmapsto\rho^{-1}(u)\mathfrak{e}_1}$$ which does not depend on the choice of $\mathfrak{e}_1$ in $$\mathsf{S}(\infty):=\left\{\zeta(u)\colon u\in\mathfrak{U}\right\}=\bigcup\left\{\mathsf{S}(m)\colon m\in\mathbb{N}\right\}$$ where $\mathsf{S}(m)$ is the $m$-dimensional unit sphere. In fact, for each stabilized sequence $u=(u_k)\in\mathfrak{U}$ there exists an index $m$ such that $\rho^{-1}(u)\mathfrak{e}_1=u_k\mathfrak{e}_1$ belongs to $\mathsf{S}(m)$ for all ${k\ge m}$. On the other hand, for each $\mathfrak{e}\in\mathsf{S}(k)$ there exists ${v\in U(k)}$ such that $v\mathfrak{e}=\mathfrak{e}_1$. Defining ${u.g\in\mathfrak{U}}$ with ${g=(1,v)\in U^2(k)}$ by means of -, we have $\rho^{-1}(u.g)\mathfrak{e}=\pi_k(u.g)\mathfrak{e}=\pi_k(u)\mathfrak{e}_1 =\rho^{-1}(u)\mathfrak{e}_1$.
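The norm formula (\[normfock\]) can be checked directly on a small example by building the symmetrized tensor as an explicit array. The sketch below (illustrative; it uses $\lambda=(2,1)$, $n=3$ in $\mathbb{C}^2$, with real arithmetic since the basis vectors are real) averages the permuted elementary tensors and compares the squared norm with $\lambda!/n!$:

```python
import numpy as np
from itertools import permutations
from math import factorial

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
factors = [e1, e1, e2]          # e1^{(x)2} (.) e2: lambda = (2, 1), n = 3

def sym_tensor(vs):
    """(1/n!) sum over sigma of v_{sigma(1)} (x) ... (x) v_{sigma(n)} as an ndarray."""
    n = len(vs)
    t = np.zeros((2,) * n)
    for sigma in permutations(range(n)):
        term = np.array(1.0)
        for i in sigma:
            term = np.multiply.outer(term, vs[i])
        t += term
    return t / factorial(n)

t = sym_tensor(factors)
norm2 = float(np.sum(t ** 2))
print(norm2, factorial(2) * factorial(1) / factorial(3))   # both 1/3
```

Each of the three distinct arrangements of the factors receives weight $1/3$, so the squared norm is $3\cdot(1/3)^2=1/3=\lambda!/n!$, in agreement with (\[normfock\]).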
Consider the following system of cylindrical Borel functions $$\varepsilon_k(u):= \left\langle\zeta(u)\mathrel{\big|}\mathfrak{e}_k\right\rangle, \qquad k\in\mathbb{N}$$ where $\varepsilon_k:=\mathfrak{e}_k^*\circ\zeta$. Using $\zeta$, we may define the $\mathsf{E}^{\odot n}$-valued Borel mapping $$\zeta^{\otimes n}\colon\mathfrak{U}\ni u\longmapsto {\underbrace{\zeta(u)\otimes\ldots\otimes\zeta(u)}}_n,\qquad\zeta^{\otimes 0}\equiv1.$$ The following assertion, which is a consequence of the polarization formula , is proved in [@lopushansky2013 Lem. 3]. \[irrep\] The equality $\mathsf{S}(\infty)=\left\{\zeta(u)\colon u\in\mathfrak{U}'\right\}$ holds. As a consequence, to every ${\psi\in\mathsf{E}^{\odot n}_\imath}$ there uniquely corresponds the function in $L^\infty_\chi$ $$\psi_\zeta(u):=\left\langle\zeta^{\otimes n}(u)\mathrel{\big|}\psi\right\rangle,\qquad u\in\mathfrak{U}$$ given by continuous restriction to $\mathfrak{U}'$. In particular, to every ${\mathfrak{e}^{\odot\lambda}_\imath\in\mathfrak{e}^{\odot\mathbb{Y}_n}}$ there corresponds in $L^\infty_\chi$ the cylindrical function in the variable $u\in\mathfrak{U}$, $$\label{base2} \varepsilon^{\lambda}_\imath(u) :=\left\langle\zeta^{\otimes n}(u)\mathrel{\big|} \mathfrak{e}^{\odot \lambda}_\imath\right\rangle=\prod_{k=1}^{\ell(\lambda)} \left\langle\zeta(u)\mathrel{\big|}\mathfrak{e}_{\imath_k}\right\rangle^{\lambda_k}.$$ Lemma \[irrep\] straightforwardly implies that the system $\mathfrak{e}^{\odot\mathbb{Y}}:=\bigcup\mathfrak{e}^{\odot\mathbb{Y}_n}$ of tensor products $\mathfrak{e}^{\odot\lambda}_\imath= \mathfrak{e}^{\otimes\lambda_1}_{\imath_1}\odot\ldots\odot\mathfrak{e}^{\otimes\lambda_m}_{\imath_m}$, indexed by $\lambda={(\lambda_1,\ldots,\lambda_m)\in\mathbb{Y}}$ and $\imath={\left({\imath_1},\ldots,{\imath_m}\right)\in\mathbb{N}^m_\ast}$ with $m=\ell(\lambda)$, uniquely defines the appropriate system $$\varepsilon^\mathbb{Y}:=\bigcup_{\lambda\in\mathbb{Y}} \left\{\varepsilon^{\lambda}_\imath:= 
\varepsilon^{\lambda_1}_{\imath_1}\odot\ldots\odot\varepsilon^{\lambda_m}_{\imath_m} \colon\imath\in\mathbb{N}^m_\ast, \ m=\ell(\lambda)\right\},\quad \varepsilon^\emptyset_\imath\equiv1,$$ of $\chi$-essentially bounded cylindrical functions in the variable $u\in\mathfrak{U}$ that possess continuous restrictions to $\mathfrak{U}'$. \[irrep1\] For any $\imath\in\mathbb{N}^m_\ast$ and $\psi,\phi\in\mathsf{E}^{\odot n}_\imath$, the following equality holds, $$\label{polin} \binom{n+m-1}{n}\int_\mathfrak{U}\phi_\zeta\,\bar\psi_\zeta\,d\chi=\left\langle\psi\mid \phi\right\rangle.$$ As a consequence, given $(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^m_\ast$ with $m=\ell(\lambda)$, the system $\varepsilon^\mathbb{Y}$ of functions $\varepsilon^\lambda_\imath$ is orthogonal in the space $L_\chi^2$ and $$\label{norm} \left\|\varepsilon^{\lambda}_\imath\right\|_{L^2_\chi}=\left(\frac{(m-1)! \lambda!}{(m-1+|\lambda|)!}\right)^{1/2}.$$ Let $\mathsf{E}_\imath$ with $\imath=\left(\imath_1,\ldots,\imath_m\right)\in\mathbb{N}^m_\ast$ be the $m$-dimensional subspace in $\mathsf{E}$ spanned by $\left\{\mathfrak{e}_{\imath_1},\ldots,\mathfrak{e}_{\imath_m}\right\}$ and $U(\imath)$ be the unitary subgroup of $U(\infty)$ acting in $\mathsf{E}_\imath$. The symbol $\mathsf{E}^{\odot n}_\imath$ means the $n$th symmetric tensor power of $\mathsf{E}_\imath$. Briefly denote $\psi_\dag[v\zeta(u)]:= \big\langle\big([v\rho^{-1}(u)]\mathfrak{e}_1\big)^{\otimes n}\mathop{\big|}\psi\big\rangle$ with ${\psi\in\mathsf{E}^{\odot n}_\imath}$ for all ${v\in U(\imath)}$ and ${u\in\mathfrak{U}}$. Using with $U(\imath)$ instead of $U(m)$, we have $$\label{iso0} \int_\mathfrak{U}\phi_\zeta\,\bar\psi_\zeta\,d\chi=\int_\mathfrak{U}d\chi(u) \int_{U(\imath)}\phi_\dag[v\zeta(u)]\cdot\bar\psi_\dag[v\zeta(u)]\,d\chi_\imath(v)$$ for all $\psi,\phi\in\mathsf{E}^{\odot n}_\imath$. 
It is clear that $$\Big|\int_{U(\imath)}\phi_\dag\,\bar\psi_\dag\,d\chi_\imath\Big|\le \sup_{v\in {U(\imath)}}\big|\phi_\dag[v\zeta(u)]\big|\, \big|\psi_\dag[v\zeta(u)]\big|\le\|\phi\|\,\|\psi\|$$ for all ${u\in\mathfrak{U}}$. Hence, the corresponding sesquilinear form in is continuous on $\mathsf{E}^{\odot n}_\imath$. Thus, there exists a linear bounded operator $A$ over $\mathsf{E}^{\odot n}_\imath$ such that $$\left\langle A\psi\mid\phi\right\rangle=\int_{U(\imath)}\phi_\dag\,\bar\psi_\dag\,d\chi_\imath.$$ Next we show that $A$ commutes with all operators $w^{\otimes n}\in\mathscr{L}\left(\mathsf{E}^{\odot n}_\imath\right)$ with $w\in {U(\imath)}$ acting as $w^{\otimes n}x^{\otimes n}=(wx)^{\otimes n}$, ${(x\in\mathsf{E}_\imath)}$. Invariant properties of $\chi_\imath$ under the right action yield $$\begin{split} &\left\langle(A\circ w^{\otimes n})\psi\mid\phi\right\rangle=\\ &=\int_{U(\imath)}\left\langle[v\zeta(u)]^{\otimes n}\mid\phi\right\rangle \overline{\left\langle[v\zeta(u)]^{\otimes n}\mid w^{\otimes n}\psi\right\rangle}d\chi_\imath(v)\\ &=\int_{U(\imath)}\left\langle[w^{-1}v\zeta(u)]^{\otimes n} \mid (w^{-1})^{\otimes n}\phi\right\rangle \overline{\left\langle[w^{-1}v\zeta(u)]^{\otimes n} \mid \psi\right\rangle}d\chi_\imath(v)\\ &=\int_{U(\imath)}\left\langle[v\zeta(u)]^{\otimes n}\mid (w^{-1})^{\otimes n}\phi\right\rangle \overline{\left\langle[v\zeta(u)]^{\otimes n}\mid \psi\right\rangle}d\chi_\imath(v)\\ &=\left\langle A\psi\mid(w^{-1})^{\otimes n}\phi\right\rangle =\left\langle(w^{\otimes n}\circ A)\psi\mid\phi\right\rangle, \end{split}$$ where $w^{-1}\in {U(\imath)}$ is the Hermitian adjoint matrix of $w$. Hence, the equality $$\label{schur} A\circ w^{\otimes n}=w^{\otimes n}\circ A,\qquad {w\in U(\imath)}$$ holds. Let us check that the operator $A$, satisfying the condition , is proportional to the identity operator on $\mathsf{E}^{\odot n}_\imath$. 
To this end we form the $n$th tensor power of the unitary group $U(\imath)$, $$[U(\imath)]^{\otimes n}=\left\{ w^{\otimes n}\in\mathscr{L}\left(\mathsf{E}^{\odot n}_\imath\right)\colon w\in U(\imath)\right\}, \qquad [U(\imath)]^{\otimes 0}=1.$$ Clearly, $[U(\imath)]^{\otimes n}$ is a unitary group over $\mathsf{E}^{\odot n}_\imath$. Let us check that the corresponding unitary representation $$\label{diag} U(\imath)\ni w\longmapsto w^{\otimes n}\in\mathscr{L}\left(\mathsf{E}^{\odot n}_\imath\right)$$ is irreducible. This means that there is no subspace in $\mathsf{E}^{\odot n}_\imath$ other than $\{0\}$ and the whole space which is invariant under the action of $[U(\imath)]^{\otimes n}$. Suppose, on the contrary, that there is an element ${\psi\in\mathsf{E}^{\odot n}_\imath}$ such that the equality ${\big\langle\big([w\rho^{-1}(u)]\mathfrak{e}_1\big)^{\otimes n}\mathop{\big|}\psi\big\rangle=0}$ holds for all ${w\in U(\imath)}$ and ${u\in U(\infty)}$. By Lemma \[irrep\] the elements $w\rho^{-1}(u)$ act transitively on $\mathsf{S}(\infty)$. Hence, by $n$-homogeneity, we obtain ${\langle x^{\otimes n}\mid\psi\rangle=0}$ for all ${x\in\mathsf{E}_\imath}$. Applying the polarization formula , we get ${\psi=0}$. Hence, is irreducible. Thus, we can apply to the Schur lemma [@HewittRoss70 Thm 21.30]: a non-zero matrix which commutes with all matrices of an irreducible representation is a constant multiple of the unit matrix. As a result, we obtain that the operator $A$, satisfying , is proportional to the identity operator on $\mathsf{E}^{\odot n}_\imath$ i.e. $A =\alpha_{(n,\imath)}\mathbbm{1}_{\mathsf{E}^{\odot n}_\imath}$ with a constant $\alpha_{(n,\imath)}>0$. 
It follows that $$\label{MainEq} \int_{U(\imath)}\phi_\dag\,\bar\psi_\dag\,d\chi_\imath= \alpha_{(n,\imath)}\left\langle \psi\mid\phi\right\rangle,\qquad \phi,\psi\in\mathsf{E}^{\odot n}_\imath.$$ In particular, the subsystem of cylindrical functions $\varepsilon^\lambda_\imath$ with a fixed ${\imath\in\mathbb{N}^m_\ast}$ is orthogonal in $L_\chi^2$, because the corresponding system of tensor products $\mathfrak{e}^{\odot \lambda}_\imath$ indexed by $\lambda\in\mathbb{Y}_n$ with ${\ell(\lambda)=m}$ forms an orthogonal basis in $\mathsf{E}^{\odot n}_\imath$. It remains to note that the set of all indices $\imath={\left({\imath_1},\ldots,{\imath_m}\right)\in\mathbb{N}^m_\ast}$ with all $m=\ell(\lambda)$ is directed with respect to the set-theoretic embedding, i.e., for any $\imath,\imath'$ there exists $\imath''$ so that $\imath\cup\imath'\subset\imath''$. This fact and the above reasoning imply that the whole system $\varepsilon^\mathbb{Y}$ is also orthogonal in $L_\chi^2$. Taking into account , we can choose $\phi=\psi=\mathfrak{e}^{\odot\lambda}_\imath\sqrt{n!/\lambda!}$ in . As a result, we obtain $$\alpha_{(n,\imath)}=\frac{n!}{\lambda !}\int_{U(\imath)}\left|\varepsilon^\lambda_\imath\right|^2d\chi_\imath =\frac{n!}{\lambda !}\left\|\varepsilon^\lambda_\imath\right\|^2_{L^2_\chi}.$$ The well known formula [@RudinFT80 1.4.9] for the unitary $m$-dimensional group gives $$\int_{U(\imath)}\left|\varepsilon^\lambda_\imath\right|^2d\chi_\imath= \frac{\lambda !(m-1)!}{(n+m-1)!},\qquad|\lambda|=n,\quad{\ell(\lambda)=m}.$$ Using the last two formulas, we arrive at the relation $$\label{rudin} \alpha_{(n,\imath)}= \frac{n!}{\lambda !}\int_{U(\imath)}\left|\varepsilon^\lambda_\imath\right|^2d\chi_\imath =\frac{n!}{\lambda !}\,\frac{\lambda !(m-1)!}{(n+m-1)!}=\frac{n!(m-1)!}{(n+m-1)!}.$$ Combining and , we get and, as a consequence, . 
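For instance, for $\lambda=(1,1)$, so that $m=2$ and $n=|\lambda|=2$, the norm formula gives $\left\|\varepsilon^{(1,1)}_\imath\right\|_{L^2_\chi}=\left(\frac{1!\cdot 1!\cdot 1!}{3!}\right)^{1/2}=\frac{1}{\sqrt{6}}$ and $\alpha_{(2,\imath)}=\frac{2!\,1!}{3!}=\frac{1}{3}$. In general $$\binom{n+m-1}{n}\alpha_{(n,\imath)} =\frac{(n+m-1)!}{n!\,(m-1)!}\cdot\frac{n!\,(m-1)!}{(n+m-1)!}=1,$$ so the binomial factor appearing in the scalar product formula of Theorem \[irrep1\] is exactly $\alpha_{(n,\imath)}^{-1}$. 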
By $H_\chi^2$ we denote the Hardy space over $U(\infty)$ defined as the $L^2_\chi$-closure of the complex linear span of the orthogonal system $\varepsilon^\mathbb{Y}$. Let the space $H_\chi^{2,n}$ be the $L^2_\chi$-closure of the complex linear span of the subsystem $\varepsilon^{\mathbb{Y}_n}:= \big\{\varepsilon^\lambda_\imath\in\varepsilon^\mathbb{Y}\colon{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast}\big\}$ with a fixed ${n\in\mathbb{Z}_+}$. \[ortog\] For any positive integers $n\neq k$ the orthogonality $H_\chi^{2,n}\perp H_\chi^{2,k}$ in $L^2_\chi$ holds. As a consequence, the following orthogonal decomposition holds, $$\label{ort} H_\chi^2=\mathbb{C}\oplus H_\chi^{2,1}\oplus H_\chi^{2,2}\oplus\ldots.$$ The orthogonal property ${\varepsilon^\mu_\jmath\perp\varepsilon^{\lambda}_\imath}$ with ${|\mu|\neq|\lambda|}$ for any ${\imath\in\mathbb{N}^{\ell(\lambda)}_\ast}$ and ${\jmath\in\mathbb{N}^{\ell(\mu)}_\ast}$ follows from , since $$\begin{split} \int_\mathfrak{U}\varepsilon^\mu_\jmath\,\bar\varepsilon^\lambda_\imath\,d\chi&= \int_\mathfrak{U} \varepsilon^\mu_\jmath\big(\exp(\mathbbm{i}\vartheta)u\big)\, \bar\varepsilon^\lambda_\imath\big(\exp(\mathbbm{i}\vartheta)u\big)d\chi(u)\\ &=\frac{1}{2\pi}\int_\mathfrak{U}\varepsilon^\mu_\jmath\,\bar\varepsilon^\lambda_\imath\,d\chi \int_{-\pi}^\pi{\exp\big(\mathbbm{i}(|\mu|-|\lambda|)\vartheta\big)}\,d\vartheta=0 \end{split}$$ for all $\lambda\in\mathbb{Y}$ and $\mu\in\mathbb{Y}\setminus\{\emptyset\}$. This yields $H_\chi^{2,|\mu|}\perp H_\chi^{2,|\lambda|}$ in the space $L^2_\chi$. Reproducing kernels =================== Let us construct the reproducing kernel of $H_\chi^2$. We refer to [@Sait] regarding reproducing kernels. 
\[ReprodP\] For every $u,v\in\mathfrak{U}$ there exists ${q\in\mathbb{N}}$ such that the reproducing kernel of the subspace $H_\chi^{2,n}$ in $L^2_\chi$ has the form $$\label{reprod} \begin{split} \mathfrak{h}_n(v,u)&=\sum_{m\le q} \binom{n+m-1}{n} \left\langle\zeta(v)\mid\zeta(u)\right\rangle^n\\ &=\sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\varepsilon^\lambda_\imath(v)\,\bar\varepsilon^\lambda_\imath(u)} {\|\varepsilon^{\lambda}_\imath\|^2_{L^2_\chi}}, \qquad u,v\in\mathfrak{U}. \end{split}$$ Note that $\mathfrak{h}_0\equiv1$. From it follows that for each stabilized sequence ${u\in\mathfrak{U}}$ there exists ${u_m\in U(m)}$ with a certain $m=m(u)$ such that $u=\rho(u_m)$. So, the element $\zeta(u)=\rho^{-1}(u)\mathfrak{e}_1$ is located on the $m$-dimensional sphere $\mathsf{S}(m)$. It means that its Fourier series $\zeta(u)={\sum{\mathfrak e}_k\varepsilon_k(u)}$ has $m(u)$ terms. The tensor multinomial theorem yields the Fourier decomposition $$[\zeta(u)]^{\otimes n}= \left(\sum{\mathfrak e}_k\varepsilon_k(u)\right)^{\otimes n}=\sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{n!}{\lambda!}\mathfrak{e}^{\odot\lambda}_\imath\,\varepsilon^{\lambda}_\imath(u)$$ in the space $\mathsf{E}^{\odot n}$. 
Using the formula , we obtain $$\begin{aligned} &\left\langle \zeta(v)\mid\zeta(u)\right\rangle^n= \left\langle [\zeta(v)]^{\otimes n}\mid[\zeta(u)]^{\otimes n}\right\rangle\\ &=\sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast}\Big(\frac{n!}{\lambda!}\Big)^2 \left\langle \mathfrak{e}^{\odot\lambda}_\imath\mid\mathfrak{e}^{\odot\lambda}_\imath\right\rangle \varepsilon^\lambda_\imath(v)\,\bar\varepsilon^\lambda_\imath(u) =\sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\varepsilon^\lambda_\imath(v)\,\bar\varepsilon^\lambda_\imath(u)} {\|\varepsilon^{\lambda}_\imath\|^2_{L^2_\chi}}\end{aligned}$$ where $\left\langle \zeta(v)\mid\zeta(u)\right\rangle$ is decomposed into $q=\min\{m(u),m(v)\}$ summands by virtue of orthogonality. Multiplying both sides by $\binom{n+m-1}{n}$ and summing over all $m\le q$, we get . It follows that $\int_\mathfrak{U}\mathfrak{h}_n(v,u)\varepsilon^\lambda_\imath(u)\,d\chi(u)=\varepsilon^\lambda_\imath(v)$ for each ${v\in\mathfrak{U}}$. By Theorem \[irrep1\] the system $\varepsilon^{\mathbb{Y}_n}$ of functions $\varepsilon^\lambda_\imath$ forms an orthogonal basis in $H_\chi^{2,n}$. So, the integral operator $$\label{Taypol} \int_\mathfrak{U}\mathfrak{h}_n(v,u)\psi_\zeta(u)\,d\chi(u)=\psi_\zeta(v),\qquad {\psi_\zeta\in H_\chi^{2,n}}$$ acts identically on $H_\chi^{2,n}$. Thus, the kernel is reproducing in $H_\chi^{2,n}$. Let us consider the complex-valued kernel $$\mathfrak{h}(z;v,u)=\prod_{m\le\min\{m(u),m(v)\}} \left[{\phantom{\big|}}\!\!1-z\left\langle\zeta(v)\mid\zeta(u)\right\rangle\right]^{-m},\quad u,v\in\mathfrak{U},\quad |z|<1$$ where $m(u)$ is the number of terms in the Fourier series $\zeta(u)={\sum{\mathfrak e}_k\varepsilon_k(u)}$. \[Cauchy1\] The expansion $\mathfrak{h}(z;v,u)=\sum z^n\mathfrak{h}_n(v,u)$ holds for any ${u,v\in\mathfrak{U}}$ and ${|z|<1}$. 
The kernel $\mathfrak{h}(1;v,u)=\sum\mathfrak{h}_n(v,u)$ is reproducing in $H^2_\chi$ in the sense that $$\label{sum} \int_\mathfrak{U}\mathfrak{h}(1;v,u)f(u)\,d\chi(u)=f(v),\qquad{f\in H_\chi^2},\quad v\in\mathfrak{U}.$$ Let $q=\min\{m(u),m(v)\}$ and $m\le q$. As is well known [@RudinFT80 1.4.10], $$\label{prod} \left[{\phantom{\big|}}\!\!1- z\left\langle\zeta(v)\mid \zeta(u)\right\rangle\right]^{-m}=\sum_{n\in\mathbb{Z}_+}\binom{n+m-1}{n} \left\langle z\zeta(v)\mid\zeta(u)\right\rangle^n$$ for all ${|z|<1}$. By the Vandermonde identity, we have $$\begin{aligned} &\binom{n+m-1}{n}\left\langle z\zeta(v)\mid\zeta(u)\right\rangle^n= \binom{r+k+p+l-2}{r+k}\left\langle z\zeta(v)\mid\zeta(u)\right\rangle^{r+k}\\ &=\sum_{r=0}^n\binom{r+p-1}{r}\binom{n-r+l-1}{n-r}\left\langle z\zeta(v)\mid\zeta(u)\right\rangle^{r+k}\end{aligned}$$ for all $n=r+k$ and $m=p+l-1$. Applying this identity recursively to the series with any ${m\le q}$ and using Lemma \[ReprodP\], we obtain $$\begin{aligned} \mathfrak{h}(z;v,u)&=\prod_{m\le q}\sum_{n\in\mathbb{Z}_+} \binom{n+m-1}{n}\left\langle z\zeta(v)\mid\zeta(u)\right\rangle^n\\ &=\sum_{n\in\mathbb{Z}_+}z^n \sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\varepsilon^\lambda_\imath(v)\,\bar\varepsilon^\lambda_\imath(u)} {\|\varepsilon^{\lambda}_\imath\|^2_{L^2_\chi}}=\sum_{n\in\mathbb{Z}_+}z^n\mathfrak{h}_n(v,u).\end{aligned}$$ Hence, the required expansion holds. By we have $f=\sum_nf_n$ for any ${f\in H_\chi^2}$ where $f_n\in H_\chi^{2,n}$ is the orthogonal projection of $f$. Observing that $\mathfrak{h}_k(\cdot,u)\perp f_n(\cdot)$ with $n\neq k$ holds in $L^2_\chi$, we obtain $$\int_\mathfrak{U}\mathfrak{h}(1;v,u)f(u)\,d\chi(u)= \sum \int_\mathfrak{U}\mathfrak{h}_n(v,u)f_n(u)\,d\chi(u)=\sum f_n(v)=f(v)$$ for all $v\in\mathfrak{U}$ and ${f\in H_\chi^2}$. Hence, is valid. The Hilbert-Schmidt analyticity =============================== Recall (see e.g. 
[@G]) that a function $f$ on an open domain in a Banach space is said to be analytic if it is Gâteaux analytic and norm continuous. Similarly to [@Dwyer71; @Petersson2001], we say that $f$ is [*Hilbert-Schmidt analytic*]{} if its Taylor coefficients are Hilbert-Schmidt polynomials. Now we describe a space $H^2$ of Hilbert-Schmidt analytic complex functions on the open ball $\mathsf{B}$. The symmetric Fock space is defined to be the orthogonal sum $$\Gamma=\bigoplus_{n\in\mathbb{Z}_+}\mathsf{E}^{\odot n},\qquad \langle \psi\mid\phi\rangle=\sum_{n\in\mathbb{Z}_+}\langle \psi_n\mid\phi_n\rangle$$ for all elements $\psi=\bigoplus_n\psi_n$, $\phi=\bigoplus_n\phi_n\in\Gamma$ with ${\psi_n,\phi_n\in\mathsf{E}^{\odot n}}$. The subset $\left\{x^{\otimes n}\colon x\in\mathsf{B}\right\}$ is total in $\mathsf{E}^{\odot n}$ by virtue of . It follows that the subset $\left\{(1-x)^{-\otimes1}\colon x\in\mathsf{B}\right\}$ is total in $\Gamma$, where we denote $$(1-x)^{-\otimes1}:=\sum x^{\otimes n},\qquad x^{\otimes 0}=1.$$ The $\Gamma$-valued function $(1-x)^{-\otimes1}$ in the variable $x\in\mathsf{B}$ is analytic, since $$\label{ob} \left\|(1-x)^{-\otimes1}\right\|^2=\sum\|x\|^{2n}=\left(1-\|x\|^2\right)^{-1}<\infty.$$ Let us define the Hilbert space of analytic complex functions in the variable $x\in\mathsf{B}$, associated with the Fock space $\Gamma$, as follows $$H^2=\left\{\psi^*(x)=\left\langle(1-x)^{-\otimes1}\mid\psi\right\rangle\colon \psi\in\Gamma\right\},\qquad \left\|\psi^*\right\|_{H^2}:=\left\|\psi\right\|$$ for all $x\in\mathsf{B}$. This description is correct, because each function $\psi^*$ in the variable $x\in\mathsf{B}$ is analytic by virtue of [@Herv Prop. 2.4.2], as a composition of the analytic $\Gamma$-valued function $(1-x)^{-\otimes1}$ in the variable ${x\in\mathsf{B}}$ and the linear functional $\left\langle\cdot\mid\psi\right\rangle$ on $\Gamma$. 
Similarly, we define the closed subspace in $H^2$ of $n$-homogeneous Hilbert-Schmidt polynomials $\psi_n^*$ in the variable $x\in\mathsf{E}$ as $$H^2_n=\left\{\psi_n^*(x)=\left\langle x^{\otimes n}\mid\psi_n\right\rangle\colon \psi_n\in\mathsf{E}^{\odot n}\right\}.$$ Differentiating at zero any function $\psi^*={\bigoplus\psi^*_n\in H^2}$ with ${\psi^*_n\in H^2_n}$, we obtain that its Taylor coefficients at zero $(n!)^{-1}d^n_0\psi^*=\psi^*_n$ are Hilbert-Schmidt polynomials. Hence, every function from $H^2$ is Hilbert-Schmidt analytic. Clearly, the following orthogonal decomposition holds, $$\label{iso} H^2=\mathbb{C}\oplus H_1^2\oplus H_2^2\oplus\ldots.$$ One can show that $\left(H^2_n\right)_n$ is a coherent sequence of polynomial ideals over $\mathsf{E}$ in the meaning of [@Carado09 Def. 1.1]. For each pair ${(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast}$, we can uniquely assign the Hilbert-Schmidt $n$-homogeneous polynomial $$\hat{x}^\lambda_\imath:=\left\langle x^{\otimes n}\mathrel{\big|}\mathfrak{e}^{\odot \lambda}_\imath\right\rangle,\qquad x\in\mathsf{E},$$ defined via the Fourier coefficients $\hat{x}_k:=\mathfrak{e}_k^*(x)={\langle x\mid\mathfrak{e}_k\rangle}$ of an element ${x\in\mathsf{E}}$. 
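For instance, for $\lambda=(1,1)$ and $\imath=(1,2)$ one has $$\hat{x}^{(1,1)}_{(1,2)}=\left\langle x\otimes x\mathrel{\big|} \tfrac{1}{2}\left(\mathfrak{e}_1\otimes\mathfrak{e}_2+\mathfrak{e}_2\otimes\mathfrak{e}_1\right)\right\rangle =\hat{x}_1\hat{x}_2,$$ and in general $\hat{x}^\lambda_\imath=\hat{x}_{\imath_1}^{\lambda_1}\cdot\ldots\cdot\hat{x}_{\imath_m}^{\lambda_m}$ is the monomial in the Fourier coefficients of $x$, in full analogy with the product formula for $\varepsilon^{\lambda}_\imath$. 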
Taking into account , the tensor multinomial theorem yields the following orthogonal decompositions with respect to the basis $\mathfrak{e}^{\odot\mathbb{Y}}$ in $\Gamma$, $$\label{Tayl} (1-x)^{-\otimes1}= \sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\hat{x}^\lambda_\imath\mathfrak{e}^{\odot\lambda}_\imath}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2},\qquad x\in\mathsf{B}.$$ Hence, any function ${\psi^*\in H^2}$ has the orthogonal expansion $$\label{Pn1} \psi^*(x)= \left\langle(1-x)^{-\otimes1}\mid\psi\right\rangle=\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \hat\psi_{(\lambda,\imath)}{\hat{x}^\lambda_\imath},\qquad x\in\mathsf{B}$$ where $\hat\psi_{(\lambda,\imath)}:=\langle\mathfrak{e}^{\odot\lambda}_\imath\mid\psi\rangle \|\mathfrak{e}^{\odot\lambda}_\imath\|^{-2}$ are the Fourier coefficients of ${\psi\in\Gamma}$ with respect to the basis $\mathfrak{e}^{\odot\mathbb{Y}}$ and, moreover, $\|\psi^*\|_{H^2}^2=\sum_{(\lambda,\imath)} |\langle\mathfrak{e}^{\odot\lambda}_\imath\mid\psi\rangle|^2\|\mathfrak{e}^{\odot\lambda}_\imath\|^{-2}$. Thus, $\|\psi^*\|_{H^2}$ is a Hilbert-Schmidt type norm on $H^2$. 
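For instance, for $\psi=\mathfrak{e}_1\odot\mathfrak{e}_2$ the only nonzero Fourier coefficient is $\hat\psi_{((1,1),(1,2))}=\tfrac{1}{2}\cdot2=1$, so $\psi^*(x)=\hat{x}_1\hat{x}_2$ and $$\|\psi^*\|^2_{H^2}=\left|\left\langle\mathfrak{e}_1\odot\mathfrak{e}_2\mid\psi\right\rangle\right|^2 \left\|\mathfrak{e}_1\odot\mathfrak{e}_2\right\|^{-2} =\left(\tfrac{1}{2}\right)^2\cdot 2=\tfrac{1}{2}=\|\psi\|^2,$$ in accordance with the definition $\|\psi^*\|_{H^2}=\|\psi\|$. 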
Integral formulas ================= The one-to-one correspondence $\mathfrak{e}^{\odot \lambda}_\imath\leftrightarrow\varepsilon^\lambda_\imath$ allows us to construct an antilinear isometric isomorphism $\mathcal{J}\colon\Gamma\longrightarrow{H}^2_\chi$ and its adjoint ${\mathcal{J}^*\colon{H}^2_\chi\longrightarrow\Gamma}$ by the following change of orthonormal bases $$\mathcal{J}\colon\Gamma\ni\mathfrak{e}^{\odot \lambda}_\imath\left\|\mathfrak{e}^{\odot\lambda}_\imath\right\|^{-1} \longmapsto\varepsilon^\lambda_\imath\left\|\varepsilon^\lambda_\imath\right\|^{-1}_{L^2_\chi}\in{H}^2_\chi, \qquad \lambda\in\mathbb{Y},\quad \imath\in\mathbb{N}^{\ell(\lambda)}_\ast.$$ Clearly, $\mathcal{J}^*\colon\varepsilon^\lambda_\imath\left\|\varepsilon^\lambda_\imath\right\|^{-1}_{L^2_\chi} \longmapsto\mathfrak{e}^{\odot \lambda}_\imath\left\|\mathfrak{e}^{\odot\lambda}_\imath\right\|^{-1}$, because $\left\langle\mathcal{J}\mathfrak{e}^{\odot\lambda}_\imath\mathrel{\big|} f\right\rangle_{\!{L^2_\chi}}= \left\langle\mathfrak{e}^{\odot\lambda}_\imath\mathrel{\big|} \mathcal{J}^*f\right\rangle$ for any ${f\in H^2_\chi}$. Using Theorem \[irrep1\], for any element ${\psi\in\Gamma}$ with the Fourier coefficients $\hat\psi_{(\lambda,\imath)}=\langle\mathfrak{e}^{\odot\lambda}_\imath\mid\psi\rangle \|\mathfrak{e}^{\odot\lambda}_\imath\|^{-2}$, we obtain $$\mathcal{J}\psi=\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \hat\psi_{(\lambda,\imath)} \frac{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2}{\|\varepsilon^\lambda_\imath\|^2_{L^2_\chi}} \varepsilon^\lambda_\imath\quad\text{where}\quad \frac{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2}{\|\varepsilon^\lambda_\imath\|^2_{L^2_\chi}}= \frac{(\ell(\lambda)-1+|\lambda|)!}{(\ell(\lambda)-1)! |\lambda|!}.$$ In particular, $\mathcal{J}x=\sum\hat{x}_k\varepsilon_k$ for any elements ${x\in\mathsf{E}}$ with the Fourier coefficients $\hat{x}_k={\langle x\mid\mathfrak{e}_k\rangle}$. 
Moreover, $\|\mathcal{J}x\|_{L^2_\chi}^2=\sum|\hat{x}_k|^2=\|x\|^2$. In what follows, we assign to each $x\in\mathsf{E}$ the $L^2_\chi$-valued function $$x_\mathcal{J}\colon\mathfrak{U}\ni u\longmapsto(\mathcal{J}x)(u).$$ \[infty\] The function $\mathcal{J}(1-x)^{-\otimes1}=(1-x_\mathcal{J})^{-1}$ in the variable $u\in\mathfrak{U}$ takes values in $L_\chi^2$ for all ${x\in\mathsf{B}}$. Applying $\mathcal{J}$ to the decompositions and , we obtain $$\label{7} \begin{split} \mathcal{J}(1-x)^{-\otimes1}&=\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\hat{x}^\lambda_\imath\varepsilon^\lambda_\imath}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2}\\ &=\sum_{n\in\mathbb{Z}_+}\Big(\sum_{k\in\mathbb{N}}\hat{x}_k\varepsilon_k\Big)^n=(1-x_\mathcal{J})^{-1} \end{split}$$ where the following orthogonal series with a fixed $n\in\mathbb{N}$, $$\label{8} x_\mathcal{J}^n=\Big(\sum_{k\in\mathbb{N}}\hat{x}_k\varepsilon_k\Big)^n= \sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\hat{x}^\lambda_\imath\varepsilon^\lambda_\imath}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2},$$ is convergent in $L^2_\chi$. Moreover, taking into account the orthogonality, we get $$\begin{aligned} \left\|(1-x_\mathcal{J})^{-1}\right\|_{L^2_\chi}^2&=\sum_{n\in\mathbb{Z}_+} \sum_{(\lambda,\imath)\in\mathbb{Y}_n\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{|\hat{x}^\lambda_\imath|^2}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2}\\ &=\sum_{n\in\mathbb{Z}_+}\Big(\sum_{k\in\mathbb{N}}|\hat{x}_k|^2\Big)^n=\left(1-\|x\|^2\right)^{-1}.\end{aligned}$$ Hence, the function $(1-x_\mathcal{J})^{-1}$ with $x\in\mathsf{B}$ takes values in $L_\chi^2$. Let $f={\sum_n f_n\in{H}^2_\chi}$ with $f_n\in H_\chi^{2,n}$. Then ${\mathcal{J}^* f\in\Gamma}$ and ${\mathcal{J}^* f_n\in\mathsf{E}^{\odot n}}$. Briefly denote $\tilde{f}:=(\mathcal{J}^* f)^*\in H^2$ and $\tilde{f}_n:=(\mathcal{J}^* f_n)^*\in H_n^2$. 
Thus, $$\begin{aligned} \tilde{f}(x)&=\left\langle(1-x)^{-\otimes1}\mid\mathcal{J}^* f\right\rangle,\qquad x\in\mathsf{B},\\ \tilde{f}_n(x)&=\left\langle{x}^{\otimes n}\mid\mathcal{J}^* f_n\right\rangle,\qquad x\in\mathsf{E}.\end{aligned}$$ \[hard3\] Each Hilbert-Schmidt analytic function $\tilde{f}\in H^2$ has the integral representation $$\label{laplaceA} \tilde{f}(x)=\int_\mathfrak{U}\frac{f\,d\chi}{1-x_\mathcal{J}},\qquad x\in\mathsf{B}$$ and its Taylor coefficients at zero have the form $$\label{TaylorL} \frac{d^n_0\tilde{f}(x)}{n!}=\int_\mathfrak{U}x_\mathcal{J}^nf_n\,d\chi,\qquad x\in\mathsf{E}.$$ The mapping $f\longmapsto \tilde{f}$ produces the linear isometry ${H}^2_\chi\simeq{H}^2$. Consider the Fourier decomposition of $f$ with respect to the basis $\varepsilon^\mathbb{Y}$ and its $\mathcal{J}^*$-image, respectively $$f=\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \hat{f}_{(\lambda,\imath)}{\varepsilon^\lambda_\imath},\qquad \mathcal{J}^*f=\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \bar{\hat{f}}_{(\lambda,\imath)} \frac{\|\varepsilon^\lambda_\imath\|^2_{L^2_\chi}}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2} \mathfrak{e}^{\odot\lambda}_\imath$$ where $\hat{f}_{(\lambda,\imath)}=\|\varepsilon^{\lambda}_\imath\|_{L^2_\chi}^{-2} \int_\mathfrak{U}{f}\,\bar\varepsilon^\lambda_\imath\,d\chi$. 
Substituting $\hat{f}_{(\lambda,\imath)}$ into $\tilde{f}=(\mathcal{J}^* f)^*$ and using the orthogonal property and the relations and , we obtain $$\begin{aligned} \tilde{f}(x)& =\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\hat{f}_{(\lambda,\imath)} \hat{x}^\lambda_\imath\left\langle \mathfrak{e}^{\odot\lambda}_\imath\mid\mathfrak{e}^{\odot\lambda}_\imath\right\rangle \|\varepsilon^\lambda_\imath\|^2_{L^2_\chi}}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^4}\\ &=\int_\mathfrak{U}\sum_{(\lambda,\imath)\in\mathbb{Y}\times\mathbb{N}^{\ell(\lambda)}_\ast} \frac{\hat{x}^\lambda_\imath\varepsilon^{\lambda}_\imath}{\|\mathfrak{e}^{\odot\lambda}_\imath\|^2}f\,d\chi =\int_\mathfrak{U}\frac{f\,d\chi}{1-x_\mathcal{J}}.\end{aligned}$$ Hence, holds. Using , we similarly obtain $$\label{n} \tilde{f}_n(x)=\left\langle{x}^{\otimes n}\mathrel{\big|} \mathcal{J}^*f_n\right\rangle =\int_\mathfrak{U}x_\mathcal{J}^nf_n\,d\chi.$$ Taking into account and the orthogonal decomposition , we get $$\label{r} \tilde{f}\left(\alpha{x}\right)= \left\langle(1-\alpha{x})^{-\otimes1}\mathrel{\big|}\mathcal{J}^*f\right\rangle= \sum\alpha^n\int_\mathfrak{U}x_\mathcal{J}^nf_nd\chi,\quad {|\alpha|\le1}.$$ Note that $\tilde{f}\left(\alpha{x}\right)$ is analytic in $\alpha$ for all ${x\in\mathsf{B}}$. Differentiating $\tilde{f}\left(\alpha{x}\right)$ at $\alpha=0$ and using the $n$-homogeneity of derivatives, we obtain $$\frac{d^n}{d\alpha^n} \sum\alpha^n\int_\mathfrak{U}x_\mathcal{J}^n{f}_n\,d\chi \mathrel{\Big|}_{\alpha=0}=n!\int_\mathfrak{U}x_\mathcal{J}^n{f}_n\,d\chi.$$ Hence, the functions coincide with the Taylor coefficients at zero of $\tilde{f}$. Finally, since the image of $\varepsilon^\mathbb{Y}$ under $\mathcal{J}^*$ coincides with $\mathfrak{e}^{\odot\mathbb{Y}}$, the mapping $H^2_\chi\ni f\longmapsto \tilde{f}\in H^2$ is an isometry. 
Radial boundary values ====================== Using , for each $f={\sum_n f_n\in{H}^2_\chi}$ with $f_n\in H_\chi^{2,n}$ we can rewrite as $$\tilde{f}(rx)=\left\langle(1-r{x})^{-\otimes1}\mathrel{\big|}\mathcal{J}^*f\right\rangle =\int_\mathfrak{U}\frac{f\,d\chi}{1-rx_\mathcal{J}},\qquad x\in\mathsf{K},\quad {r\in[0,1)}$$ where $\mathsf{K}=\left\{x\in\mathsf{E}\colon\|x\|\le1\right\}$. \[car:hardy2\] The integral transform $\mathcal{C}_r\colon{f}\longmapsto \mathcal{C}_r[f]$, defined as $$\label{CauchyB} \mathcal{C}_r[f](x):=\int_\mathfrak{U}\frac{f\,d\chi}{1-rx_\mathcal{J}},\qquad x\in\mathsf{K},\quad{r\in[0,1)},$$ belongs to the space of bounded linear operators $\mathscr{L}(H^2_\chi,H^2)$. The radial boundary values of ${\mathcal{C}_r[f]\in H^2}$ are equal to $\tilde{f}\in H^2$ in the following sense: $$\label{boudval} \lim_{r\nearrow1}\big\|\mathcal{C}_r[f]-\tilde{f}\big\|_{H^2}=0.$$ Moreover, the following equality holds, $$\label{h2norm} \|\tilde{f}\|_{H^2}^2=\sup_{r\in[0,1)}\left\|\mathcal{C}_r[f]\right\|^2_{H^2}.$$ Theorem \[hard3\] and imply the equality $\mathcal{C}_r[f]=\sum r^n\tilde{f}_n$ for any ${r\in[0,1)}$. By , we have $\tilde{f}_k\perp\tilde{f}_n$ as $n\neq k$ in $H^2$. It follows that $$\left\|\mathcal{C}_r[f]\right\|^2_{H^2}=\left\|\sum r^n\tilde{f}_n\right\|^2_{H^2} =\sum r^{2n}\|\tilde{f}_n\|^2_{H^2}= \sum r^{2n}\|f_n\|^2_{L_\chi^2},$$ since $\mathcal{J}^*$ acts isometrically from $H^{2,n}_\chi$ onto the space $\mathsf{E}^{\odot n}$ which is antilinearly isometric to $H^2_n$ by definition. Similarly, we obtain that $$\big\|\mathcal{C}_r[f]-\tilde{f}\big\|^2_{H^2}= \sum\left(1-r^{n}\right)^2\|f_n\|^2_{L_\chi^2}\longrightarrow0,\qquad r\to1.$$ Moreover, the Cauchy-Schwarz inequality implies that $$\left\|\mathcal{C}_r[f]\right\|^2_{H^2} \le\frac{1}{(1-r^4)^{1/2}}\Big(\sum\left\|f_n\right\|^4_{L_\chi^2}\Big)^{1/2} \le\frac{\|f\|^2_{L^2_\chi}}{(1-r^4)^{1/2}}$$ for all $ f\in H^2_\chi$. 
Hence, the operator $\mathcal{C}_r$ belongs to $\mathscr{L}(H^2_\chi,H^2)$ for all $r\in [0,1)$. Finally, the equalities $$\sup_{r\in[0,1)}\left\|\mathcal{C}_r[f]\right\|^2_{H^2}=\sup_{r\in [0,1)}\sum r^{2n}\|\tilde{f}_n\|^2_{H^2} =\sum\|\tilde{f}_n\|^2_{H^2}=\|\tilde{f}\|^2_{H^2}$$ give the required formula . , *Spectral methods in infinite-dimensional analysis*. Springer, 1995. A. Borodin and G. Olshanski, *Harmonic analysis on the infinite-dimensional unitary group and determinantal point processes*. Ann. Math. **161** (2005), 1319–1422. A. Borodin, *Determinantal point processes*, in *Oxford Handbook of Random Matrix Theory* (G. Akemann, J. Baik, and P. Di Francesco, eds.), Oxford Univ. Press, 2011. D. Carando, V. Dimant and S. Muro, *Coherent sequences of polynomial ideals on Banach spaces*. Math. Nachr. **282**(8) (2009), 1111–1133. B. Cole and T.W. Gamelin, *Representing measures and Hardy spaces for the infinite polydisk algebra*. Proc. London Math. Soc. **53** (1986), 112–142. T.A.W. Dwyer III, *Partial differential equations in Fischer-Fock spaces for the Hilbert-Schmidt holomorphy type*. Bull. Amer. Math. Soc. **77**(5) (1971), 725–739. T.W. Gamelin, *Analytic functions on Banach spaces*, in *Complex Function Theory* (Gauthier and Sabidussi, eds.), Kluwer, 1994, 187–223. K. Floret, *Natural norms on symmetric tensor products of normed spaces*. Note di Matematica **17** (1997), 153–188. M. Hervé, *Analyticity in Infinite Dimensional Spaces*, de Gruyter Stud. in Math., vol. 10, Walter de Gruyter, Berlin, New York, 1989. E. Hewitt and K.A. Ross, *Abstract Harmonic Analysis*, Vol. 2, Springer, 1994. O. Lopushansky, *Hardy type space associated with an infinite-dimensional unitary matrix group*. Abstr. Appl. Anal., ID 810735 (2013), 1–7. Yu. A. Neretin, *Hua type integrals over unitary groups and over projective limits of unitary groups*. Duke Math. J. **114**(2) (2002), 239–266. G. Olshanski, *The problem of harmonic analysis on the infinite-dimensional unitary group*. J. 
Funct. Anal. **205** (2003), 464–524. , *Hardy spaces in an infinite dimensional setting*, in: H.D. Doebner (ed.), *Lie Theory and Its Applications in Physics*, 3–27, World Sci. Publ., 1998. H. Petersson, *Hypercyclic convolution operators on entire functions of Hilbert-Schmidt holomorphy type*. Ann. Math. Blaise Pascal **8**(2) (2001), 107–114. D. Pickrell, *Measures on infinite-dimensional Grassmann manifolds*. J. Funct. Anal. **70** (1987), 323–356. D. Pinasco and I. Zalduendo, *Integral representations of holomorphic functions on Banach spaces*, J. Math. Anal. Appl. **308** (2005), 159–174. W. Rudin, *Function theory in the unit ball of $\mathbb{C}^n$*. Springer, 2008. S. Saitoh, *Integral Transforms, Reproducing Kernels and Their Applications*. Pitman Research Notes in Math. Ser. Vol. 369, Longman, 1997. [^1]: Faculty of Mathematics and Natural Sciences, Rzeszów University.
A. Bernicha$^1$, G. López Castro$^2$ and J. Pestieau$^1$\ **Abstract** Based on the S-matrix approach, we introduce a modified formula for the $\pi^{\pm}$ electromagnetic form factor which describes very well the experimental data in the energy region $2m_{\pi} \leq \sqrt{s} \leq 1.1$ GeV. Using the CVC hypothesis we predict $B(\tau^-\rightarrow\pi^-\pi^0\nu_\tau) = (24.75 \pm 0.38)\% $, in excellent agreement with recent experiments. PACS numbers: 13.35.Dx, 14.40.Cs, 11.30.Hv, 11.55.-m **I. Introduction.** The processes $e^+e^-\rightarrow\pi^+\pi^-$ and $\tau^-\rightarrow\pi^-\pi^0\nu_\tau$ provide a clean environment for a consistency check of the Conserved Vector Current (CVC) hypothesis \[1\]. Actually, the measurement of the $\pi^{\pm}$ electromagnetic form factor in $e^+e^-$ annihilation is used to predict \[2\] the dominant hadronic decay of the tau lepton, namely $\tau^-\rightarrow\pi^-\pi^0\nu_\tau$. The weak pion form factor involved in $\tau$ decay is obtained by removing the (model-dependent) I=0 contribution (arising from isospin violation and included *via* $\rho-\omega$ mixing) from the measured pion electromagnetic form factor. In a previous paper \[3\] we have applied the S-matrix approach to the $e^+e^-\rightarrow\pi^+\pi^-$ data of Ref. \[4\] and determined the pole parameters of the $\rho^0$ resonance. In particular, we have fitted the data of Ref. \[4\] by assuming a constant value for the strength of the $\rho-\omega$ mixing parameter and using different parametrizations to account for the non-resonant background. As a result, the pole position of the scattering amplitude was found \[3\] to be insensitive to the specific background chosen to fit the experimental data. The purpose of this Brief Report is two-fold. We first argue that the pole position in $e^+e^-\rightarrow\pi^+\pi^-$ is not modified by taking the $\rho-\omega$ mixing parameter as a function of the center-of-mass energy, as already suggested in recent papers \[5\]. 
Then we propose a new parametrization for the scattering amplitude of $\epem$, based on the S-matrix approach, which looks very similar to the Breit-Wigner parametrization with an energy-dependent width. This results in an improvement in the quality of the fits (with respect to Ref. \[3\]), while the pole position and $\ro$ mixing parameters remain unchanged (as they should be). Finally, we make use of CVC to predict the $\tpp$ branching ratio, which is found to be in excellent agreement with recent experimental measurements. **II. Energy-dependent $\ro$ mixing.** We start by giving a simple argument to show that the pole position would not be changed if we choose the $\ro$ mixing parameter to be $m^2_{\rho \omega}(s) \propto s$ (namely $m^2_{\rho\omega}(0) = 0$), where $\sqrt{s}$ is the total center-of-mass energy in $\epem$. Let us consider Eq. (7) of Ref. \[3\] and replace $y \rightarrow y's/s_{\omega}$, where $s_V = m_V^2 -im_V\Gamma_V$. This yields the following expression for Eq. (7) of Ref. \[3\]: $$\begin{aligned} F_{\pi}(s) &=& \frac{A}{s-s_{\rho}} \left ( 1 + \frac{y' s}{s_{\omega}} \frac{m_{\omega}^2}{s-s_{\omega}} \right) + B(s) \nonumber \\ &=& \frac{A'}{s-s_{\rho}} \left ( 1 + y^{''} \frac{m_{\omega}^2}{s-s_{\omega}} \right) + B(s),\end{aligned}$$ where $A$ and $B(s)$ denote the residue at the pole and non-resonant background terms, respectively. The second equality above follows from the approximations: $$\begin{aligned} A' &\equiv& A \left ( 1 + y'\frac{m_{\omega}^2}{s_{\omega}}\right ) \approx A(1+y'), \\ y^{''} &\equiv& \frac{y'}{1+ y'm_{\omega}^2/s_{\omega}} \approx \frac{y'}{1+y'} \end{aligned}$$ [*i.e.*]{} by neglecting small imaginary parts of order $y'\Gamma_{\omega}/m_{\omega} \approx 10^{-5}$ \[3\].
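The size of the neglected imaginary parts can be checked with a few lines of arithmetic. A minimal sketch in Python, assuming illustrative PDG-like values for $m_{\omega}$ and $\Gamma_{\omega}$ and a mixing strength $y'$ of the order of the fitted $y$ in Table 1 (these inputs are stand-ins, not the values used in the fit of Ref. \[3\]):

```python
# Numerical check of the approximation y'' ~ y'/(1+y'): the neglected
# imaginary parts are of order y'*Gamma_omega/m_omega, i.e. ~1e-5.
m_omega, Gamma_omega = 0.782, 0.0084               # GeV (illustrative values)
s_omega = m_omega**2 - 1j * m_omega * Gamma_omega  # s_V = m_V^2 - i m_V Gamma_V
y_prime = -1.91e-3                                 # order of the fitted y (Table 1)

y2_exact = y_prime / (1.0 + y_prime * m_omega**2 / s_omega)
y2_approx = y_prime / (1.0 + y_prime)

neglected = abs(y_prime) * Gamma_omega / m_omega   # size of dropped terms
print(neglected, abs(y2_exact - y2_approx))
```

Both quantities come out at or below the $10^{-5}$ level quoted above, so replacing $y^{''}$ by $y'/(1+y')$ is harmless at the present experimental precision.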
Thus, since introducing $m^2_{\rho\omega} \propto s$ is equivalent to a redefinition of the residue at the pole and of the $\ro$ mixing parameter, we conclude that the pole position would not be changed if we take a constant or an energy-dependent $\ro$ mixing parameter. **III. Electromagnetic pion form factor.** Next, we consider a new parametrization for the pion electromagnetic form factor. This parametrization is obtained by modifying the pole term in the following way: $$s-m_{\rho}^2 + im_{\rho}\Gamma_{\rho} \;\longrightarrow\; D(s) \equiv s-m_{\rho}^2 + im_{\rho}\Gamma_{\rho}\left( \frac{\Gamma_{\rho}(s)}{\Gamma_{\rho}} - i\,x(s)\right)\theta (\tilde{s}),$$ where $\theta (\tilde{s})$ is the step function, with argument $\tilde{s} =s -4 m_{\pi}^2$. Observe that if we choose $$x(s) = -m_{\rho}\,(\cdots),$$ then Eq. (2) becomes: $$D(s) = s-m_{\rho}^2 + m_{\rho}\Gamma_{\rho}\, x(s)\,\theta(s-4m_{\pi}^2) + im_{\rho}\, \Gamma_{\rho}(s),$$ which, when inserted in (1), looks very similar to a Breit-Wigner with an energy-dependent width, which we will choose to be: $$\Gamma_{\rho}(s) = \Gamma_{\rho} \left( \frac{s-4m_{\pi}^2}{m_{\rho}^2-4m_{\pi}^2} \right)^{3/2}\ \theta(s-4m_{\pi}^2),$$ with the obvious identification $\Gamma_{\rho} = \Gamma (m_{\rho}^2)$. Using Eq. (2) we are led to modified expressions for Eqs. (8), (9) and (15) of Ref. \[3\], namely: $$\begin{aligned} F_{\pi}^{(1)}(s) &=& \left ( -\ \frac{am_{\rho}^2}{D(s)} + b\right) \left( 1 + \frac{ym_{\omega}^2}{s-s_{\omega}} \right) \\ F_{\pi}^{(2)}(s) &=& -\ \frac{am_{\rho}^2}{D(s)}\left( 1 + \frac{ym_{\omega}^2}{s-s_{\omega}} \right) + b\\ F_{\pi}^{(4)}(s) &=& -\ \frac{am_{\rho}^2}{D(s)}\left( 1 + \frac{ym_{\omega}^2}{s-s_{\omega}} \right)\left[ 1 + b\left(\frac{s-m_{\rho}^2}{m_{\rho}^2} \right) \right]^{-1}.\end{aligned}$$ Using Eqs. (6-8), we have repeated the fits to the experimental data of Barkov [*et al.*]{} \[4\] in the energy region $2m_{\pi} \leq \sqrt{s} \leq 1.1\ {\rm GeV}$. As in Ref. \[3\], the free parameters of the fit are $m_{\rho},\ \Gamma_{\rho},\ a,\ b$ and $y$. The results of the best fits are shown in Table 1. From a straightforward comparison of Table 1 and the corresponding results in Ref. \[3\] (see particularly, Eqs.
(10), (11), (16) and Table I of that reference), we observe that the quality of the fits is very similar. Furthermore, the pole position, namely the numerical values of $m_{\rho}$ and $\Gamma_{\rho}$, and of the $\ro$ mixing parameter $y$, are rather insensitive to the new parametrizations (as they should be). The major effect of the new parametrizations is observed in the numerical values of $a$ (the residue at the pole) and $b$ (which describes the background). An interesting consequence of the results in Table 1 is an improvement in the value of $F_{\pi}(0)$, which should equal 1 (the charge of $\pi^+$). Indeed, from Eqs. (6-8) and Table 1 we obtain: $$\begin{aligned} F_{\pi}^{(1)}(0) &=& a+b \nonumber \\ &=& 0.997 \pm 0.015\ (0.962 \pm 0.020) \nonumber \\ F_{\pi}^{(2)}(0) &=& a+b \nonumber \\ &=& 0.997 \pm 0.015\ (0.960 \pm 0.017) \\ F_{\pi}^{(4)}(0) &=& \frac{a}{1-b} \nonumber \\ &=& 1.011 \pm 0.010\ (0.987 \pm 0.013) \nonumber\end{aligned}$$ where the corresponding values obtained in Ref. \[3\] are shown in brackets. An evident improvement is observed. Let us close the discussion on this new parametrization with a short comment: using $F_{\pi}^{(4)}(s)$ (with imaginary parts and $y$ set to zero) we are able to reproduce very well the data of Ref. \[6\] in the space-like region $-0.253\ {\rm GeV}^2 \leq s \leq -0.015\ {\rm GeV}^2$. **IV. Prediction for $\tpp$.** Finally, using the previous results on the pion electromagnetic form factor, we consider the decay rate for $\tpp$. As is well known \[2\], the CVC hypothesis allows one to predict the decay rate for $\tau^- \rightarrow (2n\pi)^- \nu_{\tau}$ in terms of the measured cross section in $e^+e^- \rightarrow (2n\pi)^0$. Since for the $\tpp$ case the kinematical range extends up to $\sqrt{s}=m_{\tau}$, let us point out that we have verified that our parametrizations for $F_{\pi}(s)$ reproduce very well the data of $\epem$ in the energy region from 1.1 GeV to $m_{\tau}$.
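Before turning to the rate, the $F_{\pi}(0)$ consistency check is easy to reproduce numerically. The sketch below evaluates $F_{\pi}^{(2)}$ with the Table 1 parameters of the $F_{\pi}^{(2)}$ fit and the energy-dependent width defined above; the dispersive $x(s)$ term of $D(s)$ is set to zero (a simplification for illustration), which does not affect the value at $s=0$ because both the $x(s)$ term and the width are switched off by their step functions below the two-pion threshold. The $\omega$ parameters are illustrative PDG-like values, not part of the fit:

```python
# Sketch of F_pi^{(2)} with the energy-dependent width Gamma_rho(s).
# The dispersive x(s) term of D(s) is omitted (illustrative simplification).
m_pi = 0.13957                         # GeV
m_rho, Gamma_rho = 0.75658, 0.14405    # GeV, Table 1 (F^(2) fit)
a, b, y = 1.237, -0.240, -1.91e-3      # Table 1 (F^(2) fit)
m_omega, Gamma_omega = 0.782, 0.0084   # GeV (illustrative values)
s_omega = m_omega**2 - 1j * m_omega * Gamma_omega

def width(s):
    """Energy-dependent rho width; equals Gamma_rho at s = m_rho**2."""
    if s <= 4 * m_pi**2:
        return 0.0
    return Gamma_rho * ((s - 4 * m_pi**2) / (m_rho**2 - 4 * m_pi**2)) ** 1.5

def F2(s):
    D = s - m_rho**2 + 1j * m_rho * width(s)   # x(s) term omitted here
    return -a * m_rho**2 / D * (1 + y * m_omega**2 / (s - s_omega)) + b

print(abs(F2(0.0)))   # should be close to 1, the pion charge
```

The printed modulus comes out within a few per mil of 1, consistent with the quoted $F_{\pi}^{(2)}(0) = 0.997 \pm 0.015$.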
The decay rate for $\tpp$ at the lowest order is given by \[2\]: $$\Gamma^0(\tpp) = \frac{G_F^2 |V_{ud}|^2m_{\tau}^3}{384 \pi^3} \int_{4m_{\pi}^2}^{m_{\tau}^2} ds\, \left( 1+\frac{2s}{m_{\tau}^2} \right) \left( 1-\frac{s}{m_{\tau}^2} \right)^2 \left(\frac{ s-4m_{\pi}^2}{s} \right)^{3/2} |F_{\pi}^{I=1}(s)|^2,$$ where $V_{ud}$ is the relevant Cabibbo-Kobayashi-Maskawa matrix element. In the above expression we have neglected isospin breaking in the pion masses. The form factor $F_{\pi}^{I=1}(s)$ in Eq. (10) is obtained from Eqs. (6-8) by removing the I=0 contribution due to $\ro$ mixing (namely, $y=0$). According to Ref. \[7\], after including the dominant short-distance electroweak radiative corrections the expression for the decay rate becomes: $$\Gamma(\tpp) = \left( 1 + \frac{2\alpha}{\pi}\,\ln\frac{m_Z}{m_{\tau}} \right) \Gamma^0 (\tpp).$$ We have not included the effects of long-distance electromagnetic radiative corrections, but we expect that they would not exceed 2.0%. In order to predict the branching ratio, we use Eqs. (6)-(8) with $y=0$, the results of Table 1 and the following values of fundamental parameters (Refs. \[7, 8\]): $$\begin{aligned} m_{\tau} &=& 1777.1 \pm 0.5 \ {\rm MeV} \\ G_F &=& 1.16639(2) \times 10^{-5}\ {\rm GeV}^{-2} \\ |V_{ud}| &=& 0.9750 \pm 0.0007.\end{aligned}$$ With the above inputs we obtain: $$B(\tpp) = \left\{ \begin{array}{ll} (24.66 \pm 0.26)\% & F_{\pi}^{(1)} \\ (24.62 \pm 0.26)\% & F_{\pi}^{(2)} \\ (24.96 \pm 0.32)\% & F_{\pi}^{(4)} \end{array} \right.$$ or, the simple average $$B(\tpp) = (24.75 \pm 0.38)\%,$$ which is in excellent agreement with recent experimental measurements and other theoretical calculations (see Table 2). Eq. (13) includes the errors (added in quadrature) coming from the fit to $\epem$ and the 1% error in the $\tau$ lifetime \[8\]: $\tau_{\tau} = (295.6 \pm 3.1)\cdot 10^{-15}\ {\rm s}$.
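As a rough cross-check, the rate integral can be evaluated numerically. The sketch below uses a simplified $F_{\pi}^{(2)}$ with $y=0$ and with the dispersive $x(s)$ term of $D(s)$ omitted, so the resulting branching ratio only indicates the order of magnitude; it is not a reproduction of the quoted $(24.62 \pm 0.26)\%$:

```python
import math

# Numerical sketch of the CVC prediction: integrate the lowest-order rate
# with a simplified I=1 form factor (y = 0; the x(s) term of D(s) omitted),
# apply the short-distance electroweak factor, and convert to a branching ratio.
m_pi, m_tau = 0.13957, 1.7771           # GeV
m_rho, Gamma_rho = 0.75658, 0.14405     # GeV (Table 1, F^(2) fit)
a, b = 1.237, -0.240                    # Table 1 (F^(2) fit)
GF, Vud = 1.16639e-5, 0.9750            # GeV^-2, CKM element
alpha, mZ = 1.0 / 137.036, 91.19        # short-distance EW factor inputs
tau_life, hbar = 295.6e-15, 6.5821e-25  # s, GeV*s

def F_I1_sq(s):
    w = Gamma_rho * ((s - 4 * m_pi**2) / (m_rho**2 - 4 * m_pi**2)) ** 1.5
    D = s - m_rho**2 + 1j * m_rho * w
    return abs(-a * m_rho**2 / D + b) ** 2

def integrand(s):
    kin = (1 + 2 * s / m_tau**2) * (1 - s / m_tau**2) ** 2
    return kin * ((s - 4 * m_pi**2) / s) ** 1.5 * F_I1_sq(s)

# composite Simpson rule on [4 m_pi^2, m_tau^2]
n, lo, hi = 2000, 4 * m_pi**2, m_tau**2
h = (hi - lo) / n
acc = integrand(lo) + integrand(hi)
for k in range(1, n):
    acc += (4 if k % 2 else 2) * integrand(lo + k * h)
acc *= h / 3

Gamma0 = GF**2 * Vud**2 * m_tau**3 / (384 * math.pi**3) * acc
S_EW = 1 + 2 * alpha / math.pi * math.log(mZ / m_tau)
B = S_EW * Gamma0 * tau_life / hbar
print(B)   # indicative only; the x(s) and rho-omega terms are omitted
```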
In summary, based on the S-matrix approach we have considered a modified parametrization for the $\pi^{\pm}$ electromagnetic form factor, which describes very well the experimental data of $\epem$ in the energy region from threshold to 1.1 GeV. The pole position of the S-matrix amplitude is not changed by this new parametrization. Using CVC, we have predicted the $\tpp$ branching ratio, which is found to be in excellent agreement with experiment. [99]{} S. S. Gershtein and Ya. B. Zeldovich, Sov. Phys. JETP [**2**]{}, 596 (1956); R. P. Feynman and M. Gell-Mann, Phys. Rev. [**109**]{}, 193 (1958). See for example: L. Okun, [*Leptons and Quarks*]{}, North Holland Pub. Co., Amsterdam (1982), Chapter 13; F. J. Gilman and S. H. Rhie, Phys. Rev. [**D31**]{}, 1066 (1985). A. Bernicha, G. López Castro and J. Pestieau, Phys. Rev. [**D50**]{}, 4454 (1994). L. M. Barkov [*et al.*]{}, Nucl. Phys. [**B256**]{}, 365 (1985). See for example: H. B. O’Connell, B. C. Pearce, A. W. Thomas and A. G. Williams, Phys. Lett. [**B354**]{}, 14 (1995) and references cited therein for earlier works. S. R. Amendolia [*et al.*]{}, Nucl. Phys. [**B277**]{}, 168 (1986); E. B. Dally [*et al.*]{}, Phys. Rev. Lett. [**48**]{}, 375 (1982). W. Marciano and A. Sirlin, Phys. Rev. Lett. [**71**]{}, 3629 (1993). L. Montanet [*et al.*]{}, [*Review of Particle Properties*]{}, Phys. Rev. [**D50**]{} Part I, (1994). J. Kühn and A. Santamaría, Z. Phys. [**C48**]{}, 443 (1990); W. Marciano in [*Proceedings of the 2nd. Workshop on $\tau$ Physics*]{}, Ed. K. K. Gan, World Scientific, Singapore (1993). R. J. Sobie, Z. Phys. [**C65**]{}, 79 (1995). A. Donnachie and A. B. Clegg, Phys. Rev. [**D51**]{}, 4979 (1995). J. Urheim [*et al.*]{}, CLEO Collab., e-print archive hep-ex/9408003. R. Akers [*et al.*]{}, OPAL Collab., Phys. Lett. [**B328**]{}, 207 (1994). TABLE CAPTIONS 1. Best fits to the pion electromagnetic form factor of Ref. \[4\], using Eqs. (6-8). 2. Summary of recent experimental measurements (Exp.)
and theoretical results (Th.) for the $\tpp$ branching ratio. The errors in the first entry arise from use of $\epem$ data, the $\tau$ lifetime and radiative correction effects \[9\], respectively.

Table 1:

  $F_{\pi}^{(1)}$: $m_{\rho} = 756.74 \pm 0.82$ MeV, $\Gamma_{\rho} = 143.78 \pm 1.16$ MeV, $a = 1.236 \pm 0.008$, $b = -0.239 \pm 0.013$, $y = (-1.91 \pm 0.15)\times 10^{-3}$, $\chi^{2}/{\rm d.o.f.} = 0.998$

  $F_{\pi}^{(2)}$: $m_{\rho} = 756.58 \pm 0.82$ MeV, $\Gamma_{\rho} = 144.05 \pm 1.17$ MeV, $a = 1.237 \pm 0.008$, $b = -0.240 \pm 0.013$, $y = (-1.91 \pm 0.15)\times 10^{-3}$, $\chi^{2}/{\rm d.o.f.} = 1.008$

  $F_{\pi}^{(4)}$: $m_{\rho} = 757.03 \pm 0.76$ MeV, $\Gamma_{\rho} = 141.15 \pm 1.18$ MeV, $a = 1.206 \pm 0.008$, $b = -0.193 \pm 0.009$, $y = (-1.86 \pm 0.15)\times 10^{-3}$, $\chi^{2}/{\rm d.o.f.} = 0.899$

Table 2:

  Th. \[9\]: $B(\tpp) = (24.58 \pm 0.93 \pm 0.27 \pm 0.50)\%$
  Th. \[10\]: $B(\tpp) = (24.60 \pm 1.40)\%$
  Th./Exp. \[11\]: $B(\tpp) = (24.01 \pm 0.47)\%$
  Exp. \[8\]: $B(\tpp) = (25.20 \pm 0.40)\%$
  Exp. \[12\]: $B(\tpp) = (25.36 \pm 0.44)\%$
  Exp. \[13\]: $B(\tpp) = (25.78 \pm 0.64)\%$
--- abstract: 'We present new Hubble Space Telescope (HST)-NIC3, near-infrared H-band photometry of globular clusters (GC) around NGC 4365 and NGC 1399 in combination with archival HST-WFPC2 and ACS optical data. We find that NGC 4365 has a number of globular clusters with bluer optical colors than expected for their red optical to near-infrared colors and an old age. The only known way to explain these colors is with a significant population of intermediate-age (2-8 Gyr) clusters in this elliptical galaxy. In contrast, NGC 1399 reveals no such population. Our result for NGC 1399 is in agreement with previous spectroscopic work that suggests that its clusters have a large metallicity spread and are nearly all old. In the literature, there are various results from spectroscopic studies of modest samples of NGC 4365 globular clusters. The spectroscopic data allow for either the presence or absence of a significant population of intermediate-age clusters, given the index uncertainties indicated by comparing objects in common between these studies and the few spectroscopic candidates with optical to near-IR colors indicative of intermediate ages. Our new near-IR data of the NGC 4365 GC system with much higher signal-to-noise agree well with earlier published photometry and both give strong evidence of a significant intermediate-age component. The agreement between the photometric and spectroscopic results for NGC 1399 and other systems lends further confidence to this conclusion, and to the effectiveness of the near-IR technique.' author: - 'Arunav Kundu, Stephen E. Zepf, Maren Hempel, David Morton, Keith M. Ashman, Thomas J. Maccarone, Markus Kissler-Patig, Thomas H.
Puzia, Enrico Vesperini' title: The Ages of Globular Clusters in NGC 4365 Revisited with Deep HST Observations --- Introduction ============ Globular clusters (GC) are invaluable probes of the major star formation episodes in the life of a galaxy because each individual GC has a specific age and metallicity which reflects the physical conditions at the epoch of its formation. Thus, the observed color and spectrum are much easier to interpret than the complex superposed populations seen in integrated light (e.g. Ashman & Zepf 1998). Despite the simple nature of the stellar population of a GC, determining both the age and metallicity of any unresolved population requires overcoming the well-known age-metallicity degeneracy that causes both increasing age and increasing metal content to have similar effects on optical colors and spectral features. One way to break this age-metallicity degeneracy is to combine optical and near-infrared colors, as the near-IR is mostly sensitive to the metallicity of the giant branch while optical colors are affected by both metallicity and age. Puzia et al. (2002) (hereafter P02) employed this technique on ground-based VLT K-band observations in combination with WFPC2 data to study two early-type galaxies, NGC 3115 and NGC 4365. P02 found that the globular cluster system of NGC 4365 has a significant intermediate-age (2-8 Gyr old) component, which has no counterpart in the predominantly old NGC 3115 clusters. The discovery of intermediate age GCs in a fairly typical elliptical such as NGC 4365 is a powerful illustration of the ability of GCs to probe major formation episodes of galaxies. Subsequent spectroscopy of a handful of bright GCs in these galaxies (Larsen et al. 2003, hereafter L03; Kuntschner et al. 2002), and studies of the cluster systems of several other galaxies with both optical to near-IR photometry (Hempel et al. 2003) and spectroscopy (Puzia et al.
2005) agree on the age and metallicity distribution determined by the two techniques. However, a recent spectroscopic analysis of a small sample of NGC 4365 clusters by Brodie et al. (2005) (hereafter B05) suggests that the previously photometrically and spectroscopically identified intermediate age GCs are instead an old population. Given the interest in determining whether there are intermediate age GCs in early-type galaxies and its implications for how galaxies form, it seems important to analyze independent data for NGC 4365. We have obtained deep H-band images of NGC 4365 using the NIC3 camera on board the HST in order to study the mass function of its GCs. We present a new, entirely HST-based study of the cluster system of NGC 4365. We compare our analysis to the aforementioned published studies, and a control sample in NGC 1399 using the exact same HST instruments, to comment on the constraints on the age distribution of the intermediate metallicity GC population. Observations ============ We obtained deep, dithered, H-band (F160W), NICMOS-NIC3 observations of 5568s each at three positions in NGC 4365 on 17$^{th}$ Nov, 2003, 15$^{th}$ Jun, 2004 and 17$^{th}$ Jun, 2004 for our HST program GO-9878. These observations coincided with the WF chips of archival WFPC2 V (F555W, 2200s) and I (F814W, 2300s) images obtained on 31$^{st}$ May 1996. The galaxy was observed in the g (F475W, 750s) and z (F850LP, 1120s) bands with the ACS on 6$^{th}$ Jun, 2003. NGC 1399 was imaged in the H band (F160W, 384s) with the NIC3 on 18$^{th}$ Dec, 1997, the B (F450W, 5200s) and I (F814W, 1800s) bands with the WFPC2 on 2$^{nd}$ Jun, 1996, and the g (F475W, 760s) and z (F850LP, 1130s) bands on 11$^{th}$ Sep, 2004 with the ACS. The NGC 4365 NIC3 observations are sub-pixel dithered at four positions. Using the drizzle algorithm (Fruchter & Hook 2002) we reconstructed high-resolution images, alleviating the effect of the undersampled 0.2” NIC3 pixels.
Importantly, the dithering placed the center of each GC at different locations with respect to the center of a pixel, thus reducing the effects of intrapixel sensitivity variations in the NIC3 array (Xu & Mobasher 2003). The NICMOS observations of NGC 1399 were obtained with NIC1 and NIC2 in focus. Since NIC3 does not share a common focus with these instruments the NIC3 images are out of focus. However, the instruments on the HST are well studied and characterized; hence we were able to extract valuable information from these data. We both drizzled the NGC 1399 NIC3 images and shifted and added them on the original scale. The photometry determined from each image was in excellent agreement. We chose to use the shifted images to minimize possible uncertainties due to centering issues in out of focus images. Interestingly, the out of focus nature of the NGC 1399 NIC3 image mitigates intrapixel sensitivity effects. The WFPC2 data for both galaxies were dithered by integer pixels. The images were shifted and added to remove cosmic rays and charge traps. After inspection we also analyzed the drizzled, geometric distortion corrected ACS images of both galaxies. Candidate GCs were identified in the WFPC2 and ACS images using the constant S/N detection technique described in Kundu et al. (1999). The lists, and the results described below for each set are in good agreement. We use the ACS selected candidates due to the higher S/N, and the slightly improved ability to distinguish between GCs and contaminating objects. Aperture photometry was performed in each image using zeropoints from the HST data handbook. Aperture corrections from small radii were measured from our data to account for the partially resolved GC profiles in HST images (Kundu et al. 1999). Foreground reddening corrections from Schlegel, Finkbeiner, & Davis (1998) were also applied. [*All*]{} photometry in this paper is reported in the Vegamag system. 
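The aperture-photometry chain described above (sky-subtracted aperture sum, aperture correction, foreground-extinction correction, and a Vegamag zeropoint) can be sketched as follows; the zeropoint, aperture-correction, and extinction values below are placeholders rather than the calibrations actually used:

```python
import math

# Minimal aperture-photometry sketch: sum counts inside a circular aperture,
# then apply an aperture correction, a foreground-extinction correction, and
# a Vegamag zeropoint. All calibration numbers here are placeholders.
def aperture_mag(image, x0, y0, radius, sky,
                 zeropoint=25.0, ap_corr=-0.1, extinction=0.02):
    flux = 0.0
    for iy, row in enumerate(image):
        for ix, val in enumerate(row):
            if (ix - x0)**2 + (iy - y0)**2 <= radius**2:
                flux += val - sky          # sky-subtracted aperture sum
    mag = zeropoint - 2.5 * math.log10(flux)
    return mag + ap_corr - extinction      # both corrections brighten the mag

# Synthetic check: a Gaussian source of known total flux on a flat sky.
size, x0, y0, sigma, total, sky = 64, 32.0, 32.0, 2.0, 1.0e4, 5.0
img = [[sky + total / (2 * math.pi * sigma**2)
        * math.exp(-((ix - x0)**2 + (iy - y0)**2) / (2 * sigma**2))
        for ix in range(size)] for iy in range(size)]
m = aperture_mag(img, x0, y0, radius=10.0, sky=sky)
print(m)
```

With a 25.0 zeropoint and a 10$^4$-count source, the recovered magnitude is 15.0 before corrections and 14.88 after the placeholder aperture and extinction terms.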
Where applicable we have transformed ABMAG magnitudes to Vegamag using zeropoint offsets from Sirianni et al. (2005). While a 0.5$''$ aperture was used for photometry in the NGC 4365 NIC3 images, the out-of-focus NIC3 observations of NGC 1399 lack a compact core, so a larger 5-pixel-radius (1$''$) aperture was used for aperture photometry. Aperture corrections were determined from TinyTim models (Krist 1995). Although the NICMOS focus history indicated that the Pupil Alignment Mechanism (PAM) position for best focus for the NGC 1399 NIC3 images should be near -13mm, comparison of point sources with TinyTim models revealed that a PAM position of -10mm provided the best fit to the data. Varying the center of an object within a pixel and allowing PAM positions between -5mm and -16mm changed the correction by 0.04 mag rms. This factor was added in quadrature to the photometric uncertainty. Results & Discussion ==================== In the rest of this analysis we study the GCs in the g-I, I-H plane because both galaxies have been observed in these filters with the same instruments, and this choice of filters provides the largest baseline for both optical and infrared colors. The conclusions of this study are unaffected by the choice of optical and infrared color baselines selected from the filter set available to us. Figure 1 plots the g-I vs I-H colors for 70 GCs in NGC 4365 and 11 GCs in NGC 1399 with photometric uncertainties less than 0.1 mag in each color. The magnitudes of the least luminous sources in the g, I and H filters plotted in Fig. 1 are 25.83$\pm$0.07, 23.40$\pm$0.05, and 21.43$\pm$0.07 in NGC 4365, and 23.62$\pm$0.02, 21.74$\pm$0.02, and 20.08$\pm$0.10 in NGC 1399, respectively. Fiducial lines of constant metallicity and constant age from the simple stellar population models of Bruzual & Charlot (2003) (hereafter BC03) are also plotted. It is apparent that there is a significant excess of GCs with blue g-I colors for a given I-H color in NGC 4365 as compared to NGC 1399.
Such colors can only be explained by the presence of an intermediate age population of GCs younger than $\approx$8 Gyrs. In contrast, the GCs in NGC 1399 appear to be primarily old. The relative paucity of GCs in NGC 1399 is because it represents a single out-of-focus NIC3 field. Although the lower S/N of the NGC 1399 NIC3 image causes a preferential selection of red, young and/or metal-rich GCs, most of the metal-rich GCs in NGC 1399 appear to be older than 10 Gyr despite this bias. This suggests that the overwhelming majority of GCs in this galaxy are old with a handful of possible young ones. This is completely consistent with the spectroscopic analyses of GCs in NGC 1399 by Kissler-Patig et al. (1998) and Forbes et al. (2001) who found that the majority of their samples of 18 and 10 GCs respectively are old, with a range of metallicities extending to roughly solar. In consonance with our results, each of these studies identified two possible/likely young GCs in their respective samples. The color-color plot of NGC 4365 GCs provides a striking contrast to NGC 1399 with a clear excess of GCs younger than $\sim$8 Gyrs, consistent with the conclusions of P02 and the spectroscopic follow-up by L03. Figure 2 compares our data with Anders & Fritze-v. Alvensleben (2003) and Maraston (2005) models. While the models differ in detail due to the choice of stellar track and calibration technique, the conclusion that NGC 4365 has a large population of intermediate age GCs and NGC 1399 does not is independent of the choice of model. Statistical Significance of the Age Distributions ------------------------------------------------- The direct determination of the epochs and efficiency of the major episodes of star formation from color-color plots is complicated by issues such as selection effects and photometric errors. We choose instead to apply the modelling technique of Hempel & Kissler-Patig (2004).
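In toy form, that kind of two-burst comparison looks like the sketch below: simulate samples for a grid of young-population fractions, build cumulative age distributions, and keep the fraction that minimizes a simple $\chi^{2}$. The ages, scatters, and sample sizes are illustrative, and the real analysis works with SSP colors rather than ages directly:

```python
import random

# Toy two-burst sketch: the old population is fixed at 13 Gyr, and we grid
# over the fraction of a 4 Gyr population, comparing cumulative age
# distributions with a simple chi^2. All numbers are illustrative.
random.seed(1)

def mock_sample(young_frac, n=200, sigma=1.0):
    """Draw n cluster ages from a two-burst mix with Gaussian scatter."""
    return [(4.0 if random.random() < young_frac else 13.0)
            + random.gauss(0.0, sigma) for _ in range(n)]

def cumulative_older_than(ages, grid):
    n = len(ages)
    return [sum(a > t for a in ages) / n for t in grid]

grid = [2 + 0.5 * i for i in range(25)]               # 2..14 Gyr
data = cumulative_older_than(mock_sample(0.6), grid)  # "observed" sample

best_frac, best_chi2 = None, float("inf")
for frac in [0.1 * k for k in range(11)]:
    model = cumulative_older_than(mock_sample(frac, n=5000), grid)
    chi2 = sum((d - m_) ** 2 for d, m_ in zip(data, model))
    if chi2 < best_chi2:
        best_frac, best_chi2 = frac, chi2
print(best_frac)
```

With the input young fraction set to 0.6, the grid search recovers a best-fit fraction close to that value, which is the sense in which Figure 4 constrains the intermediate-age fraction.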
In brief, we create input models with two populations of clusters using Monte Carlo simulations and stellar models. The cumulative age distribution of a range of input models is then compared to the data to find the distribution that best fits the data. Figure 3 plots the cumulative fraction of GCs that are older than a given age in each of the galaxies, based on BC03 models. This analysis is restricted to GCs with $[{\rm Fe/H}]\gtrsim-0.4$ dex to minimize the effects of incompleteness. Figure 3 shows that the NGC 1399 GCs are older on average, with a median age of $\approx$12 Gyr as compared to $\approx$5 Gyr for NGC 4365. Next, we fixed the age of the older population to 13 Gyr and conducted the simulations described in Hempel & Kissler-Patig (2004) for a two-burst model. Figure 4, which plots the reduced $\chi^{2}$ comparing the model with the data, shows that approximately 60% of the NGC 4365 metal-rich GCs in our field of view were formed about 4 Gyrs ago. The corresponding fraction for NGC 1399 is only about 20%, although given the selection biases in NGC 1399 this is likely an upper limit on the constraints. We note that only a handful of old, metal-poor GCs are observed in NGC 4365. This is not surprising since the optical color-magnitude diagram (Kundu & Whitmore 2001) suggests that GCs with colors of V-I$\approx$0.95 corresponding to the typical metal-poor peak are fainter than the other GCs in NGC 4365, as is expected for older clusters. We shall investigate the luminosity and mass functions of NGC 4365 GCs in a future paper. Comparison with Previous Observations ------------------------------------- As discussed above, the inferred age and metallicity distributions of our program galaxies are in good agreement with previous photometric and spectroscopic studies (P02, L03, Kissler-Patig et al. 1998, Forbes et al. 2001).
The recent spectroscopic analysis of NGC 4365 by B05, however, reaches a different conclusion, suggesting that the intermediate age GCs are actually old, intermediate-metallicity clusters. Unfortunately, the B05 study has very few GCs with known near-IR and optical colors. Specifically, of the large sample of NGC 4365 GCs with optical and near-IR colors there are 12 shown in Fig. 8 of B05 as having spectroscopic data. Of these, 9 have been observed by L03, providing reasonable overlap with the photometric data. B05 observed 3 GCs with known near-IR and optical colors, including two in L03. The remaining two of the 12 are an object with a given position more than 2$''$ from any candidate in P02 and thus an uncertain photometric counterpart, and one with unreliable colors due to likely cosmic ray events in the WFPC2 image. B05 observed a total of 6 objects with intermediate optical colors; the three above with near-IR colors, and three without the near-IR colors needed to allow an age estimate (see Fig. 1). A recent shallow near-IR imaging study by Larsen et al. (2005) observed additional B05 clusters and suggested a broad spread of GC colors consistent with P02 and this study. However, the authors express reservations about their own calibration. An important aspect of the data presented here is the much smaller error bars, which both clearly show that the GC colors indicate an age spread and are consistent with the shallower data sets. In Figure 5, we plot the g-I, I-H colors of the six matches with the L03 data and the subset of two sources observed by B05 that fall within our fields of view. We note that most of these GCs appear to be of intermediate age with moderate metallicity, in agreement with the results of L03. Given the small overlap with the B05 observations, model uncertainties, and one candidate that lies on the 8 Gyr isochrone and is consistent with an old age, no strong conclusions can be drawn from the B05 data.
We also note that B05 found significantly different Lick index values compared to L03 for the three GCs in common between the two studies. The up to $3\sigma$ difference between the indices of these objects in the age-sensitive Balmer lines indicates that the uncertainties in these index measurements, and therefore the error in the age and metallicity constraints in one or both spectroscopic studies, are underestimated. On the other hand, the photometric analyses of the NGC 4365 GCs agree unambiguously. Given the agreement of these and L03 for NGC 4365, and the agreement between the NGC 1399 spectroscopic and photometric results, we believe the preponderance of evidence points towards the presence of intermediate age GCs in NGC 4365. Determining the full spatial extent of this intermediate-age population will require deep near-IR imaging over a wider field. Conclusions =========== Globular clusters provide fossil records of the major star formation episodes in a galaxy. Thus, constraining the ages and metallicities of cluster sub-populations provides important insight into the process of galaxy formation and evolution. The near-IR and optical color photometric technique and spectroscopy measure the ages and metallicities of clusters by probing different physical phenomena and provide independent sanity checks. We have analyzed the cluster system in the inner regions of NGC 1399 and NGC 4365 and conclude that the metal-rich clusters in NGC 1399 are predominantly old while many of the corresponding clusters in NGC 4365 are of intermediate age. These results are in good agreement with previous spectroscopic and photometric studies of these galaxies and hence give us further confidence in the photometric technique. This research was supported by STScI grant HST-GO-09878.01-A and NASA-LTSA grants NAG5-11319 and NAG5-12975. Anders, P., & Fritze-v. Alvensleben 2003, A&A, 401, 1063 Ashman, K.M., & Zepf, S.E. 1998, “Globular Cluster Systems”, (Cambridge: Cambridge Univ.
Press) Brodie, J. P., Strader, J., Denicolo, G., Beasley, M. A., Cenarro, A. J., Larsen, S. S., Kuntschner, H., & Forbes, D. A. 2005, AJ, 129, 2643 Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000 Forbes, D. A., Beasley, M. A., Brodie, J. P., & Kissler-Patig, M. 2001, ApJ, 563, 143 Fruchter, A. S., & Hook, R. N. 2002, PASP, 114, 144 Hempel, M., Hilker, M., Kissler-Patig, M., Puzia, T. H., Minniti, D., & Goudfrooij, P. 2003, A&A, 405, 487 Hempel, M., & Kissler-Patig, M. 2004, A&A, 419, 863 Kissler-Patig, M., Brodie, J. P., Schroder, L. L., Forbes, D. A., Grillmair, C. J., & Huchra, J. P. 1998, AJ, 115, 105 Kundu, A., & Whitmore, B.C. 2001, AJ, 121, 2950 Kundu, A., Whitmore, B.C., Sparks, W.B., Macchetto, D., Zepf, S. E., & Ashman, K. M. 1999, ApJ, 513, 733 Kuntschner, H., Ziegler, B. L., Sharples, R. M., Worthey, G., & Fricke, K. J. 2002, A&A, 395, 761 Larsen, S. S., Brodie, J. P., Beasley, M. A., Forbes, D. A., Kissler-Patig, M., Kuntschner, H., & Puzia, T. H. 2003, ApJ, 585, 767 Larsen, S. S., Brodie, J. P., & Strader, J. 2005, A&A, in press Maraston, C. 2005, MNRAS, 362, 799 Puzia, T. H., Kissler-Patig, M., Thomas, D., Maraston, C., Saglia, R. P., Bender, R., Goudfrooij, P., & Hempel, M. 2005, A&A, 439, 997 Puzia, T. H., Zepf, S. E., Kissler-Patig, M., Hilker, M., Minniti, D., & Goudfrooij, P. 2002, A&A, 391, 453 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525 Sirianni, M. et al. 2005, PASP Xu, C., & Mobasher, B. 2003, NICMOS Instrument Science Report 2003-009
--- abstract: 'Further, we found that the $\nu$ dependency of the incompressible patterns is, in turn, destroyed by a large imposed current during the deep QH effect breakdown. These results demonstrate the ability of our method to image the microscopic transport properties of a topological two-dimensional system.' author: - 'T. Tomimatsu' - 'K. Hashimoto' - 'S. Taninaka' - 'S. Nomura' - 'Y. Hirayama' --- A two-dimensional electron system (2DES) subjected to strong magnetic fields forms a quantum Hall (QH) insulating phase, with the Fermi level lying in a gap between quantized Landau levels (LLs). This gapped region, the so-called incompressible region, prevents backscattering between the metallic, gapless (compressible) edge channels counter-propagating along the two edges of the 2DES [@chakraborty2013QHtextbook]. This is the key microscopic aspect of nondissipative chiral transport in the integer QH effect, which is characterized by a universal quantized Hall conductance protected by a topological invariant [@TopologicalQHThouless1982; @TopologicalQHHatsugai1993]. Topological phases are attracting renewed attention due to the recent discovery of exotic topological materials such as topological insulators [@TopoInsulatorKane2005; @TopoInsulator2Kane2005; @QSHEprediction2006; @QAHEexperiment2013], superconductors [@superMayoranaTheorem2008], and Weyl semimetals [@weyl2011]. The formation of incompressible and compressible regions in the QH regime originates from the interplay between Landau quantization and the Coulomb interaction [@CSG1992], which drives nonlinear screening [@wulfGerhardts1988nonlinear; @efros1993nonlinear]. Their spatial configuration depends on the potential landscape. Near the sample boundary, the edge confinement potential, accompanied by strong bending of the LLs, forms spatially alternating unscreened and screened regions due to Fermi-level pinning at the gap and at the LLs. These regions respectively result in alternating incompressible and compressible strips near the 2DES edge.
The innermost incompressible strip moves and spreads toward the bulk as the LL filling factor $\nu$ is reduced toward an integer from a higher $\nu$. The formation of the edge strips has been microscopically investigated using imaging techniques such as Hall-potential imaging [@AhlswedeWeis2001Hallpotential; @Weis2011], microwave impedance imaging [@lai2011MicrowaveImpedance], capacitance imaging [@suddards2012capacitance], and scanning gate imaging [@SGMwoodside; @SGMIhnQHE; @paradiso2012QPCprl; @pascherIhn2014QPC; @GhrapheneSGM; @SGM_HgTe; @SGMgrapheneDirac], the last of which has also been applied to graphene and to HgTe, a topological spin-Hall insulator. In the bulk incompressible region, the disorder potential plays an important role in giving rise to isolated compressible puddles that result in QH localization [@LocalizationReview; @InteractionTheoryIllani; @RudoNewJPhys2007]. Such puddles have been observed directly [@ilani2004SETprobe; @Ashioori2005Capacitance], and they were demonstrated to undergo a phase transition to a delocalized state with a scanning tunneling microscope [@Hashi2008QHT], which accounts for the transition from nondissipative to dissipative transport [@LocalizationReview; @InteractionTheoryIllani; @RudoNewJPhys2007]. Nevertheless, for a practical sample such as a Hall bar, microscopic pictures of the QH effect are not fully understood [@Klitzing2019essay]. For instance, where the nondissipative current flows microscopically is still debated [@SiddikiGerhardts2004]. To understand the transport properties inherent to the QH effect, it is important to elucidate the microscopic aspects of QH transport. Here, we present the ability of a novel scanning-gate method incorporating a nonequilibrium transport technique to image the evolution of the incompressible QH regions from the edge strip to bulk localization. To probe these regions, we used a powerful tool, the scanning gate microscope (SGM). ![(a) Schematic of the experimental setup of the SGM. $V_{\rm{x}}$ is recorded as the tip is scanned at $V_{\rm{tip}}$ with $I_{\rm{sd}}$ and $B$ respectively applied in the $x$ and $z$ directions.
(b) Schematic of inter-LL tunneling (marked by red arrows) between two LL subbands near the edge of the higher $\mu_{\rm{chem}}$ side under the nonequilibrium condition, which is derived from the deviation between the Fermi energies in the edge ($E_{\rm{f,edge}}$) and bulk ($E_{\rm{f}}$) compressible regions. The incompressible and compressible strips are indicated by “IS” and “CS,” respectively. “Edge” and “Bulk” indicate the edge strips and 2DES bulk region, respectively. (c) SGM image of a tip-induced $V_{\rm{x}}$ change ($\Delta V$) at $\nu =2.27$, $I_{\rm{sd}}= 3.1\ \mu$A, and $B = 4$ T; dashed lines denote the Hall bar edges. The line noise was removed by 2D Fourier filtering.[]{data-label="SGM"}](fig1_0311.pdf){width="1.0\linewidth"} To minimize global perturbation, the tip voltage was set to $V_{\rm{tip}} \sim 0.2$ V, corresponding to the value of the contact potential mismatch between the tip and the sample [@Supple]. To address the QH states and obtain a local signal without applying a large tip voltage, we incorporated nonequilibrium transport. We investigated a 2DES that was confined in a 20-nm-wide GaAs/Al$_{0.3}$Ga$_{0.7}$As quantum well located 165 nm beneath the surface. The wafer was processed into a 10-$\mu$m-wide Hall bar. The mobility of the 2DES was $\mu=130 \ \rm{m^{2}V^{-1}s^{-1}}$ at an electron density $n = 1.8\times10^{15}\rm\ m^{-2}$. Figure \[SGM\](b) depicts the alternating compressible and incompressible regions formed along an edge of the Hall bar. The local $\nu$ ($\nu_{\rm{L}}$) of an incompressible strip is pinned at an integer value, while the bulk $\nu$ is modified by sweeping $B$ or by changing $n_{\rm{s}}$ tuned by a back gate voltage ($V_{\rm{bg}}$). To achieve the nonequilibrium condition, we increased the source–drain current ($I_{\mathrm{sd}}$) until the Hall voltage deviated from the QH condition. The imposed Hall voltage drives inter-LL tunneling from the edge to the bulk through the innermost incompressible strip.
This tunneling produces a dissipative current [@Eaves1986; @panos2014SHM_QHBD], and thus a nonzero longitudinal resistance, which allows us to visualize the innermost incompressible strip. Figure \[SGM\](c) shows a typical SGM image obtained by capturing $\Delta V$ at $\nu=2.27$ ($B = 4$ T) under nonequilibrium conditions at $I_{\rm{sd}}=$ 3.1 $\mu$A. A distinct line-like pattern can be seen extending in the $x$ direction along a Hall bar edge (left dashed line), which corresponds to the side with the higher chemical potential ($\mu_{\rm{chem}}$) across the $y$ direction of the Hall bar. This $\mu_{\rm{chem}}$ dependency, confirmed by reversing the current direction [@Supple], can be explained by the fact that $\mu_{\rm{chem}}$ mainly drops at the higher-$\mu_{\rm{chem}}$ incompressible strip in a nonequilibrium condition [@panos2014SHM_QHBD], leading to a higher rate of inter-LL scattering [@HashiSNRM] and thus a higher SGM sensitivity with respect to the corresponding incompressible strip. To minimize the influence of $I_{\rm{sd}}$ on the incompressible patterns, we chose an $I_{\rm{sd}}$ at which the position of the strip shows no significant $I_{\rm{sd}}$ dependence in the entire measurement region of $\nu$. Otherwise, there is a deviation associated with QH breakdown, as discussed in the Supplementary Material [@Supple]. ![SGM measurements near $\nu = 2$ and 4. (a), (b) Representative $\Delta V$ images at different $\nu$ tuned by $n$ at (a) $B = 4$ T near $\nu=2$ and (b) $B = 2$ T near $\nu=4$. The scale bar is 4 $\mu$m. Dashed lines denote the Hall bar edges. The line noise was removed using 2D Fourier filtering. As $\nu$ decreases toward an integer value, the patterns are further enhanced, so the full scale of contrast is appropriately optimized for clarity. (c), (d) Position (dot) and width (bar) of the $\Delta V$ peak as a function of $\nu$ near (c) $\nu=2$ and (d) 4. The position and width are respectively determined by the distance from the Hall bar edge ($y_{\rm{k}}$) and the full width at half maximum ($W_{\rm{FWHM}}$).
These are extracted from the cross-sectional $\Delta V$ profile spatially averaged over the $x$ region ($\overline{\Delta V}$), as indicated in the inset in (c) \[obtained from the image for $\nu=2.10$ in (a)\]. Here, $\nu=2.0$–2.4 and 4.0–4.8 are selected within the same $n_{\rm{s}}$ range $1.93\times10^{15}$–$2.32\times10^{15} \rm{m^{-2}}$. The gray area denotes the incompressible regions determined by LSDA calculation for (c) $\nu_{\rm{L}}$=2 and (d) 4. []{data-label="nu2map"}](fig2_0307a.pdf){width="1.0\linewidth"} We examined the $\nu$ dependence of the SGM patterns. The measurements were performed at $I_{\rm{sd}}$, which was tuned to maintain an offset $V_{\rm{x}}$ ($V_{\rm{x}}\sim 1$ mV) at each $\nu$ (for details regarding the conditions, see the Supplementary Material [@Supple]). Figure \[nu2map\](a) displays representative SGM images captured near $\nu=2$, tuned with the gate-controlled $n_{\rm{s}}$ at constant $B$ ($B=4$ T). As $\nu$ decreases from 2.17, the line pattern shifts toward and widens into the bulk of the 2DES. The same tendency of the $\nu$-dependent patterns was also observed in the same area near $\nu=4$ at $B=2$ T, as shown in Fig. \[nu2map\](b). We extracted the positions ($y_{\rm{k}}$) and widths ($W_{\rm{FWHM}}$) of the line patterns, which were respectively defined as the first moment (for details, see the Supplementary Material [@Supple]) and the full width at half maximum in a $\Delta V$ profile after spatially averaging over the 8.5-$\mu$m range in the $x$ direction ($\overline{\Delta V}$), as shown in the inset of Fig. \[nu2map\](c). For a quantitative comparison of the observed $\Delta V$ peak positions, we performed a calculation in the Landau gauge based on the local spin-density approximation (LSDA) [@nomura2004optical; @nomura2015NanoLett] using a typical potential profile [@GuvenGerhardts2003] in the QH regime ($I_{\rm{sd}}=0$ A) (for details regarding the calculations, see the Supplementary Material [@Supple]).
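The two quantities used throughout this comparison, the first moment $y_{\rm{k}}$ and the full width at half maximum $W_{\rm{FWHM}}$ of the averaged profile $\overline{\Delta V}$, can be sketched as below. This is a minimal illustration with a synthetic Gaussian profile; the function name and the grid are ours, not the authors' analysis code.

```python
import numpy as np

def peak_position_and_width(y, dv):
    """Extract the peak position y_k (first moment) and the width W_FWHM
    (full width at half maximum) from an averaged profile dv(y)."""
    w = np.clip(dv, 0.0, None)            # keep the non-negative part of the peak
    y_k = np.sum(y * w) / np.sum(w)       # first moment of the profile
    above = np.where(w >= w.max() / 2.0)[0]
    w_fwhm = y[above[-1]] - y[above[0]]   # extent of the half-maximum region
    return y_k, w_fwhm
```

For a clean single peak both estimators agree with the analytic values; for noisy data the half-maximum crossing would typically be interpolated rather than taken at grid points.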
The $\nu$-dependent strip position and width, experimentally determined from $y_{\rm{k}}$ (dots) and $W_{\rm{FWHM}}$ (bars), are compared with the innermost QH incompressible region calculated by the LSDA for $\nu_{\rm{L}}=2$ in Fig. \[nu2map\](c) and $\nu_{\rm{L}}=4$ in Fig. \[nu2map\](d). We found good agreement between the experimental results and the LSDA calculation for both values of $\nu$. Additionally, a closer examination of the line pattern shows local fluctuation in the same region at both values of $\nu$, e.g., in the bottom half of the images taken at $\nu=2.17$ in Fig. \[nu2map\](a) and at $\nu=4.43$ in Fig. \[nu2map\](b). This implies that the edge incompressible strip meanders along the equipotential line disturbed by potential disorder. The same technique was further applied to the spin gap emerging at odd $\nu_{\rm{L}}$. Figure \[nu1map\](a) displays SGM images captured near $\nu=1$ at $B=8$ T. The $\nu_{\rm{L}}=1$ incompressible strip was observed as a straight line that moves from the higher-$\mu_{\rm{chem}}$ edge (at $\nu=1.090$) to the interior of the Hall bar (at $\nu=1.022$). As seen in Fig. \[nu1map\](b), the measured $y_{\rm{k}}$ shifts, and $W_{\rm{FWHM}}$ widens with decreasing $\nu$, which is again consistent with the incompressible region evaluated with the LSDA calculations that considered the exchange enhancement of the spin gap. ![SGM measurements near $\nu = 1$. (a) Representative $\Delta V$ images taken at different $\nu$ tuned by $n$ at $B = 8$ T. The scale bar is 4 $\mu$m. Dashed lines denote the Hall bar edges. The line noise was removed by 2D Fourier filtering. (b) Positions $y_{\rm{k}}$ (dots) and widths $W_{\rm{FWHM}}$ (bars) of the $\overline{\Delta V}$ profile as a function of $\nu$. Gray area: the calculated incompressible region for $\nu_{\rm{L}}=1$.
[]{data-label="nu1map"}](fig3_0614.pdf){width="0.9\linewidth"} To examine the incompressible bulk region, we focused on filling factors just above an integer (here, $\nu=1$), at which the incompressible region is expected to extend over the entire bulk, as predicted by the LSDA calculation and shown by the gray region in Fig. \[nu1map\](b). At $\nu=1.015$ and 1.008, we found closed-loop patterns in the incompressible region. The same tendency was also observed in a wider spatial region at a different $B$, namely $B=6$ T, as shown in Figs. \[CurrentInful\](a)–(c). In particular, Fig. \[CurrentInful\](c) shows distinct closed-loop patterns (around white crosses) over the entire Hall bar in the $x$ direction. The observed loop structure is attributed to an incompressible barrier encircling compressible puddles [@ilani2004SETprobe] where electrons accumulate to screen the potential valley, as depicted in Fig. \[CurrentInful\](f). The average distance between the structures was estimated to be about 3 $\mu$m, which is of the same order as the separation of the potential-disorder-related states (a few $\mu$m) observed in a similar modulation-doped quantum well [@Hayakawa2013]. ![Electrical-current evolution of the incompressible patterns at $B=6$ T. (a)–(c) Images taken at low $I_{\rm{sd}}$ (0.6–1.0 $\mu$A) for $\nu=1.05$, 1.01, and 1.00. (d), (e) Images taken at high $I_{\rm{sd}}$ (2.2–2.5 $\mu$A) for $\nu=1.05$ and 1.01. The scale bar is 4 $\mu$m; dashed lines denote the Hall bar edges. The line noise was removed by 2D Fourier filtering. (f) Schematic density profile along a red line across a closed-loop pattern in (c). (g) $V_{\rm{x}}$–$I_{\rm{sd}}$ curves for $\nu=1.05$, 1.01, and 1.00. Crosses mark the measurement conditions for (a)–(e). []{data-label="CurrentInful"}](fig4_1215.pdf){width="0.9\linewidth"} To explore how QH systems are microscopically broken by a larger imposed current, we examined the current-induced evolution of the patterns at currents up to 2–5 times higher than those used for the low-current images (Figs.
\[CurrentInful\](a)–(c)), as indicated by the $V_{\rm{x}}$–$I_{\rm{sd}}$ curves (Fig. \[CurrentInful\](g)). As shown in the Supplementary Movies, both the local incompressible patterns near the edge (Fig. \[CurrentInful\](a)) and in the bulk region (Fig. \[CurrentInful\](b)) gradually expand with increasing $I_{\rm{sd}}$ into the compressible region, eventually covering the entire region and exhibiting a dense filament pattern independent of $\nu$ (Figs. \[CurrentInful\](d)–(e)). The observed vanishing of the $\nu$ dependency in the patterns clearly indicates a breakdown of the $\nu$-dependent QH effect. Compared with the QH incompressible pattern enclosing the compressible puddle (Fig. \[CurrentInful\](c)), the observed filament pattern shows a wider distribution and corrugates at a shorter length scale. Notably, the observed filaments partially surround the positions marked by crosses (Fig. \[CurrentInful\](e)), which correspond to the positions (crosses in Fig. \[CurrentInful\](c)) of the disorder-induced QH compressible puddles. These observations indicate that the filament pattern is correlated with the potential disorder, whose potential slope may not be fully screened owing to the reduced compressibility induced by the heating effect [@screeningMachida1996; @screeningKato2009] in the dissipative QH breakdown regime [@Komiyama1996; @HashiSNRM]. This implies that inter-LL scattering arises along the potential disorder [@disorde-assistedBreakdownGuven2002] over the sample in the deep QH breakdown regime. In conclusion, using a nonequilibrium-transport-assisted SGM technique, we imaged the evolution of the incompressible regions responsible for the nondissipative transport in a Hall bar. In the deep QH breakdown regime, the observed $\nu$-dependent characteristics vanish and are unified into a disorder-related pattern, suggesting that microscopic breakdown arises along the potential disorder of the sample.
In our future research, we shall use this powerful method to gain a microscopic understanding of nondissipative transport, e.g., in the fractional QH effect and other topological edge-transport effects of topological insulators. Our method can probe local properties of topological protection, e.g., by imaging the backscattering sites from the helical edge channel to electron puddles [@backscatteringPaddle]. Examples include the suppression of the quantized conductance of the quantum spin Hall effect [@SGM_HgTe; @QSHE_TMD] and the nonzero longitudinal resistance of the quantum anomalous Hall effect [@QAHEbackscattering; @KawamuraQAHE2017], both of which can be caused by backscattering. We thank K. Muraki and NTT for supplying high-quality wafers, K. Sato and K. Nagase for sample preparation, M.H. Fauzi for helpful discussion, and Y. Takahashi for figure preparation. K.H. and T.T. acknowledge the JSPS for financial support: KAKENHI 17H02728 and 18K04874, respectively. Y.H. acknowledges support from the JSPS (KAKENHI 15H05867, 15K21727, and 18H01811). K.H. and Y.H. thank Tohoku University’s GP-Spin program for support.
--- abstract: 'The IceCube Neutrino Observatory is a 1-km$^{3}$ detector currently under construction at the South Pole. Searching for high energy neutrinos from unresolved astrophysical sources is one of the main analysis strategies used in the search for astrophysical neutrinos with the IceCube Neutrino Observatory. A hard energy spectrum of neutrinos from isotropically distributed astrophysical sources could contribute to form a detectable signal above the atmospheric neutrino background. A reliable method of estimating the energy of the neutrino-induced lepton is crucial for identifying astrophysical neutrinos. An analysis is underway using data from the half completed detector configuration taken during its 2008-2009 science run.' address: 'University of Wisconsin - Madison' author: - Sean Grullon for the IceCube Collaboration bibliography: - 'LLWIproceedingsGrullon.bib' title: Searching for High Energy Diffuse Astrophysical Muon Neutrinos with IceCube --- Neutrinos As Cosmic Messengers ============================== There are many objects in our universe that involve extremely high energy processes, such as the accretion of matter onto black holes at the centres of active galaxies, supernovae, and gamma-ray bursts, where an enormous amount of energy is released over time scales as short as a few seconds. Understanding these astrophysical objects and the underlying physics involves observing high energy radiation. The three particle messengers involved in high-energy astronomy are charged cosmic rays (protons and nuclei), gamma-rays, and neutrinos. Cosmic ray and gamma-ray astrophysics have been extremely successful fields; however, the nature of their sources is still not completely understood. Neutrinos may elucidate the fundamental connection between the sources of high energy cosmic rays and gamma-rays. Cosmic rays - high energy protons and nuclei - have been well studied with both space and ground based instruments.
Their major astronomical disadvantage is that they are charged particles and thus are deflected by magnetic fields, losing their directionality. High energy gamma-rays have been detected from many galactic and extragalactic objects, but their effectiveness as cosmic messengers over long distance scales is limited by their absorption by extragalactic background light. Neutrinos could provide a fundamental connection between cosmic rays and gamma-rays. If a gamma-ray source were found to be a neutrino source, then a hadronic-accelerator central engine might be simultaneously driving cosmic ray, gamma-ray, and neutrino production in one astrophysical object [@learnedmannheim:2000]. A next generation kilometer-scale neutrino observatory, IceCube [@performancepaper:2006], is currently under construction at the geographic South Pole. When construction is completed in January 2011, IceCube will consist of an in-ice cubic kilometer neutrino detector as well as a square kilometer cosmic ray air shower array at the surface called IceTop. The in-ice detector consists of photomultiplier tubes deployed in an array of strings. IceTop consists of an array of stations, each with two tanks filled with ice and four phototubes. Construction began at the South Pole during the austral summer 2004-05, with 1 in-ice string and 4 IceTop stations deployed [@performancepaper:2006]. When complete, IceCube will consist of 86 strings giving a total of 5160 phototubes, and 80 IceTop stations with 4 phototubes per station. The backgrounds in the search for a flux of high-energy astrophysical neutrinos are atmospheric muons and neutrinos from the interaction of cosmic rays in the Earth’s atmosphere [@honda:2006]. Atmospheric muons can be eliminated by looking for events moving upward through the detector. A fraction of the downgoing muon flux will be falsely reconstructed in the upward direction, but can be removed by tight requirements on the fitted track.
An event selection based on these tight requirements results in a rather pure sample of atmospheric neutrinos. This flux of atmospheric neutrinos seen in the IceCube array is the main background to an astrophysical search. Astrophysical Neutrino Point Source Search ========================================== Searching for resolved sources of astrophysical muon neutrinos is one of the main science goals of the IceCube neutrino observatory. A search for point sources of neutrinos is made by looking for an excess of events from a specific direction in the sky. After elimination of background atmospheric muons by using the Earth as a filter and making a high energy cut, the skymap of the arrival directions of the neutrino-induced muons is analyzed for evidence of astrophysical neutrinos. Astrophysical neutrinos are expected to show up as an excess over background, which is estimated from the average rate of events in the same declination band. The significance of an observation is found by comparing the number of events in the on-source bin to that predicted from the off-source band. A full maximum likelihood method is used to search for an excess. The method uses the measured event-by-event directional likelihood resolution as a parameter in the likelihood fit. The reconstructed event energy is also used in the fit to allow the fit to determine the slope of the energy spectrum of a neutrino source. Several types of searches are performed [@dumm:2009]. The most general is an all-sky search where every point across a fine grid in the sky is taken as a possible source position. The likelihood is used to find the best fit to the number of astrophysical neutrinos and the spectral shape. Other searches include looking for an excess amongst a pre-specified list of possible sources, or stacking a class of objects to see if the superposition of these objects gives a significant excess.
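As a minimal illustration of the on-source/off-source comparison described above, the chance probability that a background expectation (estimated from the off-source band) fluctuates up to the observed on-source count follows from Poisson statistics. This is only a sketch of the binned counting idea; the actual IceCube search uses an unbinned maximum likelihood with per-event angular resolution and energy.

```python
import math

def poisson_pvalue(n_on, mu_bg):
    """Chance probability P(N >= n_on) for N ~ Poisson(mu_bg):
    how often the background alone produces at least the on-source count."""
    p_below, term = 0.0, math.exp(-mu_bg)
    for k in range(n_on):            # accumulate P(N = 0 .. n_on - 1)
        p_below += term
        term *= mu_bg / (k + 1)      # Poisson recursion P(k+1) = P(k) * mu / (k+1)
    return 1.0 - p_below
```

A small p-value (e.g. observing 5 events on a background expectation of 1) would flag a grid point as interesting, subject to the trial factor from scanning the whole sky.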
The data from the half completed detector (40 strings) were analyzed to produce a sky map covering the full sky range from 0 to 180 degrees in zenith. After analysis, no evidence for sources was seen, as shown in Fig. \[skymap\]. The most significant spot in the sky had a significance consistent with a random fluctuation. ![Skymap for the IceCube detector in the 40 string configuration for one year of data taken during 2008. []{data-label="skymap"}](IC40_Fix_skymap_superfine_eventsoverlay.png) Diffuse Astrophysical Neutrino Flux Search ========================================== If individual sources of astrophysical neutrinos are unresolved, then we may still perform a search for astrophysical neutrinos by looking for an excess of events over the whole sky above the expected atmospheric neutrino background rate. A superposition of all neutrino sources in the universe would give rise to an extra-terrestrial flux that has a harder energy spectrum than that of atmospheric neutrinos. Since predictions for the astrophysical flux go as $dN/dE\sim E^{-2}$ [@learnedmannheim:2000], one looks for higher energy events in the detector to separate them from the steeper atmospheric neutrino spectrum that goes as $dN/dE\sim E^{-3.7}$ [@honda:2006]. In contrast to the point source search, the diffuse search requires a detailed understanding of the atmospheric neutrino background and detector sensitivity through Monte Carlo simulation in order to correctly interpret the observed neutrino events. Event Energy Reconstruction --------------------------- Since extra-terrestrial sources of neutrinos are expected to have harder energy spectra than the atmospheric neutrino backgrounds, a reliable method for reconstructing the energy of the event is crucial. Historically this measure was a simple one: counting the number of array photomultiplier tubes that detected light.
This was a reasonably powerful energy estimator; however, research continued on better algorithms to improve the correlation between true and reconstructed energy [@grulloninternalreport:2008]. The reconstruction algorithm used here models a muon with constant energy loss per unit length. The energy is adjusted to maximize the log-likelihood of the observed phototube charge densities given the expectation from the model. Analysis Method --------------- ![Methodology of a diffuse search, demonstrated with a random simulated data set drawn from respective simulation samples. The observed reconstructed energy distribution (left) is interpreted as allowed regions of the number of astrophysical and prompt neutrinos (right) using a parametrized maximum likelihood fit.[]{data-label="fitexample"}](DiscoverDistribution.png "fig:") ![Methodology of a diffuse search, demonstrated with a random simulated data set drawn from respective simulation samples. The observed reconstructed energy distribution (left) is interpreted as allowed regions of the number of astrophysical and prompt neutrinos (right) using a parametrized maximum likelihood fit.[]{data-label="fitexample"}](em2discover_2.png "fig:") After data quality cuts remove poorly reconstructed downgoing muons, the reconstructed energy distribution of the candidate neutrino events in the detector is analyzed. A likelihood framework fits contributions from conventional atmospheric neutrinos (resulting from pion and kaon decays), prompt atmospheric neutrinos (resulting from charmed meson decays), and an astrophysical signal flux to the data. The systematic uncertainties of the detector are treated as nuisance parameters in the likelihood fitting procedure. These nuisance parameters are allowed to float within their understood error ranges in the fit. The resulting confidence regions of the physics parameters are then examined to see if the fit favors a background-only hypothesis or a signal hypothesis.
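A toy version of this binned likelihood fit can be sketched as follows. Here the conventional and prompt templates are held fixed and only the astrophysical normalization is scanned; the real analysis additionally floats the prompt component and detector nuisance parameters, so the function names and the grid scan are our simplification, not the IceCube code.

```python
import numpy as np

def poisson_nll(n_obs, mu):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    mu = np.clip(mu, 1e-12, None)            # guard against log(0)
    return float(np.sum(mu - n_obs * np.log(mu)))

def fit_astro_norm(n_obs, conv, prompt, astro, scan=None):
    """Grid scan of the astrophysical normalization s for binned counts
    n_obs, given per-bin expectations conv + prompt + s * astro."""
    if scan is None:
        scan = np.linspace(0.0, 5.0, 501)
    nll = np.array([poisson_nll(n_obs, conv + prompt + s * astro) for s in scan])
    i = int(np.argmin(nll))
    return float(scan[i]), float(nll[i])
```

Profiling this likelihood over two parameters (prompt and astrophysical normalizations) yields confidence regions of the kind shown in Fig. \[fitexample\].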
An example is given in Fig. \[fitexample\]. Here, a random sample of events is drawn according to Poisson probabilities from the simulated data distribution. This sample is then analyzed as if it were real data. The confidence regions for the contributions of prompt and astrophysical neutrinos are also shown in Fig. \[fitexample\]. These confidence regions show the discovery potential of IceCube in the 40 string configuration for the case of two physics parameters: the number of prompt atmospheric neutrinos and the number of astrophysical neutrinos. At this time, to avoid possible biases in the analysis, the highest energy events in the dataset are kept hidden while the method is being developed and the sources of systematic uncertainty are understood. Characterizing how the response of the detector is affected by the optical properties of the South Pole ice is especially important for the systematics. The sensitivity of this analysis for the 40 string detector is shown in Fig. \[limits\]. ![Diffuse neutrino fluxes and experimental limits. The sensitivity of IC40 is just below the Waxman-Bahcall bound. References for the curves are given in [@becker:2007]. []{data-label="limits"}](AllFlavorFluxes_New.png) Outlook ======= The construction of a kilometer scale, high energy neutrino telescope is nearing completion at the South Pole. IceCube will hopefully lead to new discoveries about the nature of the high energy universe. Searches for point sources and diffuse fluxes of astrophysical neutrinos will either result in some of these discoveries or set very stringent limits on the underlying physics of high energy neutrinos in the cosmos. References ==========
--- abstract: | Unit propagation (also called Boolean Constraint Propagation) has been an important component of every modern CDCL SAT solver since the CDCL solver was developed. In general, unit propagation is implemented by scanning sequentially every clause over a linear watch-list. This paper presents a new unit propagation technique called core first unit propagation. The main idea is to prefer core clauses over non-core ones during unit propagation, trying to generate a shorter learnt clause. Here, a core clause is defined as one with literal block distance less than or equal to 7. Empirical results show that core first unit propagation improves the performance of the winner of the SAT Competition 2018, MapleLCMDistChronoBT. SAT solvers, unit propagation, Boolean Constraint Propagation author: - Jingchao Chen bibliography: - 'coreFirstIJCAI19.bib' title: Core First Unit Propagation --- Introduction ============ Since the GRASP solver was envisioned in 1996 [@GRASP], Conflict-Driven Clause Learning (CDCL) SAT solving has achieved great success in many fields. Unit propagation (also called Boolean Constraint Propagation) is not only an important component of every modern CDCL SAT solver, but also an important one of some proof checkers [@GRAT]. To the best of our knowledge, this component has not been studied in depth so far. This paper focuses on this problem. A CDCL SAT solver works on a CNF (Conjunctive Normal Form) formula, which is defined as a finite conjunction of clauses, and can also be denoted by a finite set of clauses. A clause is a disjunction of literals, also written as a set of literals; a literal is either a variable or the negation of a variable. A clause is said to be a unit clause if it consists only of literals assigned the value 0 (false) and one unassigned literal. BCP (Boolean Constraint Propagation) fixes the unassigned literal in a unit clause to the value 1 (true) to satisfy that clause.
This variable assignment is referred to as an implication. BCP carries out repeatedly the identification of unit clauses and the creation of the associated implications until either no more implications are found or a conflict (empty clause) is produced. It is generally accepted that BCP is implemented by scanning sequentially every clause over a linear watch-list. This implementation is called the standard BCP. By our empirical observation, we found that the standard BCP implementation is not efficient in some cases. Therefore, we decided to propose a new unit propagation technique called core first unit propagation. The basic idea of this technique is to prefer core clauses over non-core ones during unit propagation, trying to generate a shorter learnt clause. Here, a core clause is defined as one with literal block distance less than or equal to 7. This definition is consistent with that of Ref. [@LBD3tier]. Empirical results show that core first unit propagation improves the performance of the winner of the SAT Competition 2018, MapleLCMDistChronoBT [@CBT1; @CBT2]. Core First Unit Propagation =========================== The idea of CFUP (Core First Unit Propagation) is to classify clauses as core or non-core, and to prefer core clauses over non-core ones during unit propagation. A clause is core if it is a learnt clause and its LBD (Literal Block Distance) value is less than or equal to 7. LBD is defined as the number of distinct decision levels among the literals of a clause [@LBD]. Our core concept corresponds to the concept of non-local clauses in the CoMiniSatPS solver, which classifies learnt clauses into three categories [@LBD3tier]. References [@GRAT; @TreeRat] have similar concepts, but they are different from the core concept used in this paper: in [@GRAT; @TreeRat], core clauses refer to marked or visited ones, and have nothing to do with LBD. Our CFUP uses a single watchlist, not two separate watchlists.
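The standard BCP loop described above, repeatedly finding unit clauses and asserting their last unassigned literal until a fixpoint or a conflict, can be sketched as follows. This is a naive occurrence-list scan for illustration, not the watch-list implementation a real solver uses; names and data layout are ours.

```python
def bcp(clauses, trail):
    """Naive unit propagation over a CNF given as a list of literal lists
    (positive/negative ints).  `trail` is a set of literals assigned true.
    Returns (conflict_clause_or_None, trail)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in trail for l in clause):
                continue                        # clause already satisfied
            unassigned = [l for l in clause if -l not in trail]
            if not unassigned:
                return clause, trail            # every literal false: conflict
            if len(unassigned) == 1:
                trail.add(unassigned[0])        # unit clause: imply its literal
                changed = True
    return None, trail
```

The repeated full scans make this quadratic in the worst case, which is exactly the cost the two-watched-literal scheme avoids.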
We implement core-first selection by moving core clauses ahead of non-core clauses during unit propagation. When watchlists are built initially, core clauses are not placed in front of non-core clauses. Like the standard BCP, the goal of CFUP is to search for all unit clauses. This can be done by repeating the following process until either no more implications are found or a conflict (empty clause) is produced: remove the first unvisited literal $l$ from $T$; get new implications from clauses watched by $l$; and add the new implications to $T$, where $T$ is a trail stack of decision literals and implications. The core-priority strategy of CFUP is embodied in the update of the watchlists. Algorithm \[alg1\] shows CFUP. The pseudo-code of CFUP shown in Algorithm \[alg1\] assumes that a full literal watch scheme (a full occurrence list of all clauses) is used. If a two-literal watch scheme [@SATO] is used, the statement “Append $W[l]-C$ to the end of $C$” in Algorithm \[alg1\] can be modified as follows.
Modification for a two-literal watch scheme:

    $W[l]$: set of clauses watched by literal $l$
    $D := \emptyset$
    for each clause $W[l][k]$ whose watch is moved to another literal $s$,
            where $s$ is an unwatched and unassigned literal:
        $D := D \cup \{W[l][k]\}$
        $W[\overline{s}] := W[\overline{s}] \cup \{W[l][k]\}$
    Append $W[l]-C-D$ to the end of $C$

Algorithm \[alg1\] (CFUP):

    *T*: trail stack of decisions and implications
    $W[l]$: set of clauses watched by literal $l$
    $\beta := null$
    while $T$ contains an unvisited literal:
        $l := T[q]$, the first unvisited literal
        $C := \emptyset$, where $C$ is used to store core clauses
        for each clause $W[l][k]$:
            if $W[l][k]$ is unit:
                $u :=$ the unassigned literal of $W[l][k]$
                Push $u$ to the end of $T$
            if $W[l][k]$ is core: $C := C \cup \{W[l][k]\}$
            if $W[l][k]$ is falsified: $\beta := W[l][k]$; **break**
        Append $W[l]-C$ to the end of $C$
        $W[l] := C$
        if $\beta \neq null$ then return $\beta$
    return $null$

Algorithm \[alg2\] (CDCL combining CFUP and BCP):

    *T*: trail stack of decisions and implications
    $\gamma$: a learnt clause
    loop:
        if $No\_of\_conflict < \theta$: $conflict\_cls :=$ **CFUP**()
        else: $conflict\_cls :=$ **BCP**()
        if $conflict\_cls \neq null$:
            $No\_of\_conflict := No\_of\_conflict + 1$
            $(1uip, \gamma) :=$ **ConflictAnalysis**$(conflict\_cls)$
            if the conflict occurs at decision level 0: return UNSAT
            **Backtrack**(current decision level $- 1$)
            Push $1uip$ to $T$
        else if all variables are assigned: return SAT
        else: Decide and push the decision to $T$

Removing the statement “$C := C \cup \{W[l][k]\}$” in CFUP yields a standard BCP. In the real implementation, we do not use a separate list to store core clauses during unit propagation; instead, we do it by swapping two elements in $W[l]$. In detail, let $W[l][0..m]$ and $W[l][m+1..k-1]$ be the core and non-core clause zones, respectively. If $W[l][k]$ is a core clause, we swap $W[l][k]$ and $W[l][m+1]$; otherwise, we do nothing. A general CDCL solver has two watchlists: binary and non-binary. We adopt the core-priority strategy only on the non-binary watchlist. By our empirical observation, always adopting the core-priority strategy is not a good choice. A better policy is that when the number of conflicts is less than $2\times 10^6$, **CFUP** is called; otherwise, **BCP** is called. The high-level CDCL algorithm combining **CFUP** and **BCP** is shown in Algorithm \[alg2\].
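The swap-based zone maintenance just described can be sketched as a small partition pass. This is an illustrative sketch with a dict of LBD values standing in for the solver's clause database; the paper performs the swaps lazily while scanning $W[l]$ during propagation rather than in a separate pass.

```python
def promote_core(watchlist, lbd, threshold=7):
    """Move core clauses (LBD <= threshold) ahead of non-core ones by
    pairwise swaps, maintaining a core zone watchlist[0..m]."""
    m = -1                                  # index of the last core clause
    for k in range(len(watchlist)):
        if lbd[watchlist[k]] <= threshold:
            m += 1                          # grow the core zone by one slot
            watchlist[m], watchlist[k] = watchlist[k], watchlist[m]
    return watchlist
```

After the pass, scanning the watchlist front-to-back visits all core clauses before any non-core clause, which is the ordering CFUP relies on.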
CDCL, given in Algorithm \[alg2\], uses a loop to reach a status where either all the variables are assigned (SAT) or an empty clause is derived (UNSAT). Inside the loop, based on whether the number of conflicts has reached $\theta$, it decides to invoke either **CFUP** or **BCP**. Here **BCP** is a unit propagation without any priority strategy. If there is a conflict, **CFUP** or **BCP** returns a falsified conflicting clause. Otherwise, a new decision is taken and pushed to the trail stack. Conflict analysis learns a new 1UIP clause $\gamma$. CDCL asserts the unassigned 1UIP literal and pushes it to the trail stack. Empirical evaluation ==================== All experiments were conducted on the following platform: an Intel Core i5-4590 CPU with a speed of 3.3 GHz. The timeout for each solving run was set to 5000 seconds. We have added **CFUP** to MapleLCMDistChronoBT [@CBT1; @CBT2], the winner of the main track of the SAT Competition 2018 [@SAT18]. Table \[Tab\] briefly shows the runtime and solved instances of the default MapleLCMDistChronoBT vs. the best configuration in CFUP mode, $ \theta = 2 \times 10^6 $, as well as two neighboring configurations, $ \theta = 10^6 $ and $ \theta = 3 \times 10^6 $. As seen in Table \[Tab\], $ \theta = 2 \times 10^6 $ outperforms the default MapleLCMDistChronoBT in terms of both the number of solved instances and the runtime. It solves 5 more instances and is faster by 5682 seconds. The number of core clauses increases with the number of conflicts. When the number of core clauses is large, CFUP is identical to BCP, while its bookkeeping cost is higher than that of BCP, so $\theta$ should not be set too large. It is easy to see that CFUP has a certain advantage on satisfiable instances in some configurations.
  ------- -------- ---------- ------------------- ---------------------------- ----------------------------
                    Base       $ \theta = 10^6 $   $ \theta = 2 \times 10^6 $   $ \theta = 3 \times 10^6 $
  SAT     Solved    138        134                 142                          141
          Time      99104      78397               91136                        95723
  UNSAT   Solved    102        102                 103                          103
          Time      66845      70338               69131                        72962
  ALL     Solved    240        236                 245                          244
          Time      165949     148735              160267                       168685
  ------- -------- ---------- ------------------- ---------------------------- ----------------------------

  : Runtime (in seconds) and solved instances of MapleLCMDistChronoBT on SAT Competition 2018 instances []{data-label="Tab"}

![MapleLCMDistChronoBT on SAT[]{data-label="scaterFig1"}](satScatter.eps){height="6.8cm"}

![MapleLCMDistChronoBT on UNSAT[]{data-label="scaterFig2"}](unsatScatter.eps){height="6.8cm"}

Figures \[scaterFig1\] and \[scaterFig2\] show log-log scatter plots comparing the running times of the default MapleLCMDistChronoBT vs. the overall winner $ \theta = 2 \times 10^6 $ on satisfiable and unsatisfiable instances, respectively. Each point corresponds to one instance. A point on the line $y=5000$ (resp., $x=5000$) corresponds to an instance that was not solved by the default version (resp., by $ \theta = 2 \times 10^6 $). As shown in Figure \[scaterFig1\], the points above the diagonal outnumber those below it: in many cases, $ \theta = 2 \times 10^6 $ is faster than the default configuration. Among the instances shown in Figure \[scaterFig1\], the default configuration and the best configuration leave 12 and 8 instances unsolved, respectively; that is, the best configuration solves 4 more satisfiable instances than the default configuration. Figure \[scaterFig2\] demonstrates that although in almost all cases the speed of the best configuration is the same as that of the default configuration, the best configuration solves 1 more unsatisfiable instance than the default.

Conclusions
===========

Implementing CFUP is a trivial task.
It can be done by making a small modification to the BCP procedure of the solver. We have added CFUP to the main-track winner of the SAT Competition 2018, MapleLCMDistChronoBT. Empirical results show that CFUP improves the overall performance of the solver in some configurations. In theory, when analyzing a conflicting clause, using short-LBD clauses should be more beneficial than using long-LBD clauses; that is, replacing the standard BCP completely with CFUP should be the best choice. In practice, however, combining CFUP with the standard BCP is the better choice. The reason for this is well worth studying in future work.
---
abstract: 'We use transfer-matrix and finite-size scaling methods to investigate the location and properties of the multicritical point of two-dimensional Ising spin glasses on square, triangular and honeycomb lattices, with both binary and Gaussian disorder distributions. For square and triangular lattices with binary disorder, the estimated position of the multicritical point is in numerical agreement with recent conjectures regarding its exact location. For the remaining four cases, our results indicate disagreement with the respective versions of the conjecture, though by very small amounts, never exceeding $0.2\%$. Our results for: (i) the correlation-length exponent $\nu$ governing the ferro-paramagnetic transition; (ii) the critical domain-wall energy amplitude $\eta$; (iii) the conformal anomaly $c$; (iv) the finite-size susceptibility exponent $\gamma/\nu$; and (v) the set of multifractal exponents $\{ \eta_k \}$ associated with the moments of the probability distribution of spin-spin correlation functions at the multicritical point, are consistent with universality as regards lattice structure and disorder distribution, and in good agreement with existing estimates.'
author:
- 'S.L.A.'
bibliography:
- 'biblio.bib'
title: 'On locations and properties of the multicritical point of Gaussian and $\pm J$ Ising spin glasses'
---

INTRODUCTION {#intro}
============

In this paper we investigate quenched random-bond Ising spin-$1/2$ models on regular two-dimensional lattices, namely square \[SQ\], triangular \[T\], and honeycomb \[HC\]. For suitably low concentrations of antiferromagnetic bonds, it is known that such systems exhibit ferromagnetic order at low temperatures. We consider only nearest-neighbor couplings $J_{ij}$, with strengths extracted from identical, independent probability distribution functions (PDFs).
We specialize to the following two forms for the latter: $$\begin{aligned} P(J_{ij})= p\,\delta (J_{ij}-J_0)+ (1-p)\,\delta (J_{ij}+J_0)\ \qquad (\pm J)\ ; \nonumber \\ P(J_{ij})= \frac{1}{\sqrt{2\pi}\sigma}\,\exp\left(-\frac{(J_{ij}-J_0)^2}{2\sigma^2}\right)\quad \ {\rm (Gaussian)}\ .\quad \label{eq:1}\end{aligned}$$ Our units are such that $J_0 \equiv 1$ in the former case, and $\sigma \equiv 1$ in the latter. A critical line on the $T - p$ ($\pm J$), or $T - J_0$ (Gaussian), plane separates paramagnetic and ferromagnetic phases; a spin-glass phase for comparable amounts of plus and minus couplings is absent here, on account of the systems under consideration being two-dimensional. For general space dimensionality $d \geq 2$ there is a second line of interest on the temperature-disorder plane, along which the internal energy has a simple analytic expression, and several exact results have been derived, known as the [*Nishimori line*]{} (NL) [@nish81; @nish01]. The shape of the NL is known exactly, and given by $$\begin{aligned} e^{-2/T} = \frac{1-p}{p}\qquad\ (p> \frac{1}{2}) \qquad (\pm J)\ ; \nonumber \\ T = \frac {1}{J_0}\qquad\qquad {\rm (Gaussian)}\ . \label{eq:2}\end{aligned}$$ The intersection of the ferro-paramagnetic boundary with the NL is a multicritical point[@ldh88], the [*Nishimori point*]{} (NP). A conjecture regarding the possibly exact location of the NP has been put forward, which invokes the effects of duality and gauge symmetry arguments on the replicated partition function of quenched random $Z_q$ models [@nn02; @mnn03; @tn04]. With further extensions to consider non self-dual lattices [@tsn05; @no06], numerically exact predictions have been produced, for the $Z_2$ (Ising) model, for all lattices and interaction distributions considered here. Versions of the conjecture adapted for hierarchical lattices have been considered as well [@hb05]. 
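For later reference, the $\pm J$ branch of Eq. (\[eq:2\]) inverts to $T_{\rm NL}(p) = 2/\ln\big(p/(1-p)\big)$; a minimal numerical sketch (the function name is ours):

```python
import math

def nishimori_T(p):
    """Temperature on the Nishimori line for the +-J model, Eq. (2):
    exp(-2/T) = (1-p)/p, i.e. T = 2 / ln(p/(1-p))."""
    assert 0.5 < p < 1.0          # ferromagnetic side of the NL
    return 2.0 / math.log(p / (1.0 - p))

# The Gaussian branch of Eq. (2) is simply T = 1/J0 (in units with sigma = 1).
```

As expected, $T_{\rm NL}$ decreases as the concentration $p$ of ferromagnetic bonds grows, since less thermal disorder is needed to match stronger bond disorder.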
Locations of the NP predicted by the conjecture generally agree very well with results obtained by other means. However, the remaining discrepancies provide compelling evidence that, at least in some cases, the conjecture may not be exact. First, on the SQ lattice, several very accurate numerical estimates for the $\pm J$ coupling distribution place the conjectured location [@nn02], $p_c=0.889972\dots$, outside the corresponding error bars (though it differs from the central value typically by less than $0.1\%$). One has: $p_c=0.8906(2)$ (Refs. ), $0.8907(2)$ (Ref. ), $0.89081(7)$ (Ref. ). For a Gaussian distribution, the conjecture gives [@nn02; @mnn03] $J_{0c}=1.021770$, while Ref.  finds $J_{0c}=1.02098(4)$. Second, it has been shown that the exact renormalization-group solution for three pairs of mutually dual hierarchical lattices disagrees with the pertinent form of the conjecture, by up to $2\%$ [@hb05]. Very recently, these issues have been addressed via the proposal of an improved conjecture, first applied to hierarchical lattices [@onb08], and later extended to regular ones [@ohzeki08]. Broadly, this corresponds to considering duality properties applied to a (usually small) cluster of sites on the lattice under examination [@onb08; @ohzeki08], as opposed to the original conjecture which deals only with the partition function of a single bond (the principal Boltzmann factor) [@nn02]. The improved conjecture predicts the location of the NP to be well within the error bars of recent numerical work for the SQ $\pm J$ case: an average over four slightly differing implementations gives $p_c=0.89079(6)$, though so far disagreement persists for the Gaussian distribution, as the improved estimate is $J_{0c}=1.021564$ [@ohzeki08]. For hierarchical lattices, the gap between conjecture and exact renormalization-group solutions has essentially been bridged by the new approach [@onb08]. 
Existing numerical results for T and HC lattices ($\pm J$ distribution only) [@tsn05; @dq06] broadly agree with an early form of the original conjecture, applicable to pairs of dual lattices [@tsn05]: with the binary entropy $$H(p) \equiv -p\,\log_2 p - (1-p)\,\log_2 (1-p)\ , \label{eq:h(p)}$$ it is predicted that, for a pair of mutually-dual lattices 1 and 2, $$H_{12} \equiv H(p_{1c}) + H(p_{2c}) =1\ . \label{eq:earlyconj}$$ Ref.  finds $p_c=0.835(5)$ and $0.930(5)$, respectively for T and HC, which implies $0.981 < H_{12} < 1.042$; these estimates were refined in Ref.  to $p_c=0.8355(5)$ \[T\] and $0.9325(5)$ \[HC\], giving $H_{12}= 1.002(3)$. Further developments [@no06; @ohzeki08] enabled the production of pairs of individual predictions (always obeying Eq. (\[eq:earlyconj\]), with a suitably-adapted form of Eq. (\[eq:h(p)\]) for the Gaussian case). In the framework of the original conjecture, these are: $p_c=0.835806$ \[T\] and $0.932704$ \[HC\] ($\pm J$) [@no06]; $J_{0c}=0.798174$ \[T\] and $1.270615$ \[HC\] (Gaussian) [@ohzeki08]. For the improved conjecture ($\pm J$ only), two slightly differing implementations give the pairs: $p_c=0.835956$ \[T\] and $0.932611$ \[HC\]; $p_c=0.835985$ \[T\] and $0.932593$ \[HC\] [@ohzeki08]. Here, we numerically estimate the location and critical properties of the NP on the T and HC lattices. For the $\pm J$ case, we refine the results given in Ref. , checking our data against the more stringent predictions of Refs. ; for the Gaussian distribution, we are not aware of any existing results, apart from those given in Ref.  for the conjectured location of the NP. For completeness, and to provide consistency checks of our methods, we revisit the SQ lattice problem, investigating both distributions. 
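The sum rule of Eq. (\[eq:earlyconj\]) is easy to check numerically against the conjectured $\pm J$ values quoted above; for the self-dual SQ lattice it reduces to $2H(p_c)=1$. A minimal sketch:

```python
import math

def H(p):
    """Binary entropy of Eq. (h(p)), in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

# Conjectured +-J locations of the NP quoted in the text (original conjecture):
p_T, p_HC, p_SQ = 0.835806, 0.932704, 0.889972
```

Evaluating $H(p_{\rm T}) + H(p_{\rm HC})$ with these inputs reproduces Eq. (\[eq:earlyconj\]) to within a few parts in $10^4$, as it should for a mutually-dual pair.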
We apply numerical transfer-matrix (TM) methods to the spin–$1/2$ random-bond Ising model, on strips of SQ, T, and HC lattices, of widths $4 \leq N \leq 14$ sites (SQ), $4 \leq N \leq 13$ sites (T) and $4 \leq N \leq 14$ sites (even values only, HC). We take long strips, usually of length $M=2 \times 10^6$ columns (pairs of columns for HC, because two iterations of the TM are needed to restore periodicity). For each of the quantities evaluated here, averages $\langle {\cal Q} \rangle$ are taken over, and fluctuations $\langle \Delta {\cal Q} \rangle_{\rm rms}$ calculated among, $N_s$ independently-generated samples, each of length $M$. As discussed extensively elsewhere [@dqrbs96], the sample-to-sample fluctuations $\langle \Delta{\cal Q} \rangle_{\rm rms}$ vary with $M^{-1/2}$, and are essentially $N_s$–independent, provided that $N_s$ is not very small. The averaged values $\langle {\cal Q} \rangle$ themselves still fluctuate slightly upon varying $N_s$, but the corresponding fluctuations $\Delta \langle {\cal Q} \rangle$ die down with increasing $N_s$. We found that, for $M$ as above, making $N_s=10$ already gives $\Delta \langle {\cal Q} \rangle / \langle \Delta {\cal Q} \rangle_{\rm rms} \lesssim 0.1$, thus this constitutes an adequate compromise between accuracy and CPU time expense. Typical upper bounds for $\langle \Delta {\cal Q} \rangle_{\rm rms}/\langle {\cal Q} \rangle$ are $10^{-4}$ for free energies, $10^{-3}$ for domain-wall energies (see Section \[sec:2\] below for definitions). We scanned suitable intervals of $p$ or $J_0$ along the NL, spanning conjectured and (when available) numerically-calculated positions of the NP, as shown in Table \[tpar\]. 
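The $M^{-1/2}$ decay of sample-to-sample fluctuations quoted above is central-limit behavior of averages over the strip length; a toy illustration, with synthetic i.i.d. data standing in for the TM output (function and parameter names are ours):

```python
import random

def rms_fluctuation(M, N_s=200, seed=0):
    """RMS fluctuation among N_s 'samples', each an average over M i.i.d. draws.

    Mimics the sample-to-sample fluctuations <Delta Q>_rms of strip averages:
    for fixed N_s, the result should scale as M**-0.5."""
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(M)) / M for _ in range(N_s)]
    mu = sum(means) / N_s
    return (sum((x - mu) ** 2 for x in means) / N_s) ** 0.5
```

Quadrupling the "strip length" $M$ by a factor of 16 should halve the fluctuation twice, i.e. reduce it roughly fourfold, while being essentially $N_s$-independent.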
For a given lattice and interaction distribution, we took samples at $N_p=N_p (N)$ equally-spaced positions for each lattice width $N$, generally starting with $N_p \geq 18$ for small $N$, and decreasing to $N_p=9$ for $N \geq 8$, giving the totals denoted by ${\cal N}_p = \sum_N N_p(N)$ in Table \[tpar\].

  ------------- ---------------------------- --------------
  Type           $\Delta p$, $\Delta J_{0}$   ${\cal N}_p$
  SQ, $\pm J$    $0.8868$ – $0.8948$          $123$
  SQ, G          $1.00125$ – $1.03875$        $140$
  T, $\pm J$     $0.830$ – $0.842$            $126$
  HC, $\pm J$    $0.9266$ – $0.9386$          $86$
  T, G           $0.7794$ – $0.8169$          $134$
  HC, G          $1.254$ – $1.287$            $94$
  ------------- ---------------------------- --------------

  : \[tpar\] Intervals $\Delta p$, $\Delta J_0$ scanned along the NL in our calculations, for lattices and coupling distributions \[binary ($\pm J$) and Gaussian (G)\] specified in column 1. ${\cal N}_p$ gives the total number of pairs $(p,N)$ or $(J_0,N)$ at which quantities of interest were calculated. See text.

The Mersenne Twister random-number generator [@mt] was used in all calculations described below. In all calculations pertaining to the $\pm J$ disorder distribution, a canonical ensemble was used, i.e., for a given nominal concentration $p$ of positive bonds, these were drawn from a reservoir initially containing $\alpha_i pNM$ units ($\alpha_i=2,3,3$ respectively for $i=$ SQ, T, HC). This way, one ensures that fluctuations in calculated quantities are considerably smaller than if a grand-canonical implementation were used [@ahw98; @php06]. In Sec. \[sec:2\], domain-wall energies are computed, and their finite-size scaling allows us to estimate both the location of the NP along the NL, and the correlation-length index $y_t \equiv 1/\nu$, which governs the spread of ferromagnetic correlations upon crossing the ferro-paramagnetic phase boundary. The conformal anomaly, or central charge, is evaluated in Sec. \[sec:2a\]. In Sec.
\[sec:susc\], uniform susceptibilities are calculated, and the associated exponent ratio $\gamma/\nu$ is evaluated (for Gaussian coupling distributions only). In Sec. \[sec:cf\], we specialize to T and HC lattices, with Gaussian disorder distributions, and investigate the moments of assorted orders of the probability distributions of spin-spin correlation functions. Finally, in Sec. \[conc\], concluding remarks are made.

Domain-wall scaling {#sec:2}
===================

For a strip of width $L$, in lattice parameter units, of a two-dimensional spin system, the domain-wall free energy $\sigma_L$ is the free energy per unit length, in units of $T$, of a seam along the full length of the strip. For Ising spins, $\sigma_L =f^A_L-f^P_L$, with $f^P_L$ ($f^A_L$) being the corresponding free energy for a strip with periodic (antiperiodic) boundary conditions across. Within a TM description of disordered systems, $\sigma_L = -\ln (\Lambda_0^A / \Lambda_0^P)$ where $\ln \Lambda_0^P$, $\ln \Lambda_0^A$ are the largest Lyapunov exponents of the TM, respectively with periodic and antiperiodic boundary conditions across. The duality between correlation length $\xi$ and interface tension $\sigma$ is well-established [@watson] for pure two-dimensional systems, and carries over to disordered cases. In a finite-size scaling (FSS) context [@barber], this means that $\sigma_L$ must scale with $1/L$ at criticality, a fact which has been used in previous studies of disordered systems [@mm84], including investigations of the NP [@hpp01; @dq06; @php06]. From conformal invariance [@car84] one has, at the critical point: $$L\,\sigma_L = \pi\eta\ , \label{eq:sigeta}$$ where, for pure systems, $\eta$ is the same exponent which characterizes the decay of spin-spin correlations. In the presence of disorder, however, the scaling indices of the disorder correlator (i.e., the interfacial tension) differ from those of its dual, the order correlator (namely, spin-spin correlations) [@mc02b].
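As a self-contained illustration of Eq. (\[eq:sigeta\]), one can compute $N\sigma_N/\pi$ for the *pure* Ising model on a square strip by exact diagonalization of the row-to-row transfer matrix; this toy check (not the disordered TM used in this work) should approach the pure-Ising value $\eta=1/4$ as $N$ grows:

```python
import math
import numpy as np

def scaled_dw_energy(N, K):
    """(N/pi)*(ln Lambda_0^P - ln Lambda_0^A) for a pure Ising strip of width N,
    isotropic coupling K, with periodic (P) / antiperiodic (A) transverse BC."""
    size = 2 ** N
    # all 2^N spin configurations of one row, as rows of +-1
    S = np.array([[1 if (s >> i) & 1 else -1 for i in range(N)]
                  for s in range(size)], dtype=float)
    inter = S @ S.T                       # sum_i s_i s'_i between adjacent rows
    logs = []
    for seam in (+1.0, -1.0):             # P sector, then A sector (one flipped bond per row)
        intra = (S[:, :-1] * S[:, 1:]).sum(axis=1) + seam * S[:, -1] * S[:, 0]
        half = 0.5 * K * intra            # split intra-row energy symmetrically
        T = np.exp(K * inter + half[:, None] + half[None, :])
        logs.append(math.log(np.linalg.eigvalsh(T)[-1]))
    return N * (logs[0] - logs[1]) / math.pi

K_c = 0.5 * math.log(1.0 + math.sqrt(2.0))   # exact critical coupling, square lattice
```

At $K_c$ the returned values converge towards $1/4$ with increasing $N$, the pure-system analogue of the universal amplitude discussed in the text.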
Nevertheless, the constraints of conformal invariance still hold, thus the amplitude of the domain wall energy remains a [*bona fide*]{} universal quantity [@mc02b]. For the NP, recent estimates on the SQ lattice ($\pm J$ couplings) give $\eta=0.691(2)$ [@hpp01; @mc02a; @mc02b]. We have calculated $\Lambda_0^P$, $\Lambda_0^A$ for strips of SQ, T and HC lattices. Recalling that both $L$ in Eq. (\[eq:sigeta\]) and the correlation length $\xi$ (of which the surface tension is the dual) are actual physical distances in lattice parameter units [@pf84; @bww90; @bn93; @dq00], one finds (see Ref. ) that, in terms of the number of sites $N$ across the strip, the appropriate expressions for the scaled domain-wall energy are of the form: $\eta_N =\eta_N(T,z) =\zeta_i\,(N/\pi)\left(\ln\Lambda_0^P (T,z) -\ln \Lambda_0^A (T,z)\right)$, with $\zeta_i=1,\, 2/\sqrt{3},\, \sqrt{3}/2$ respectively for $i=$ SQ, T, HC, and where $z=p$ ($\pm J$) or $J_0$ (Gaussian). At $(T_c,z_c)$ one must have $\lim_{N\to\infty} \eta_N=\eta$, the latter being a universal quantity. Close to the multicritical NP, the scaling directions are respectively the NL itself, and the temperature axis [@ldh88; @ldh89; @has08]. Therefore (neglecting corrections to scaling), along the NL the single relevant variable corresponds to $z-z_c$. According to finite-size scaling [@barber], the curves of scaled domain-wall energy calculated for different values of $N, T, z$ along the NL should then coincide when plotted against $x \equiv N^{1/\nu}\,(z-z_c)$. Bearing in mind that corrections to scaling may be present [@php06; @has08], we allow for their effect from the start. Thus, we write [@has08]: $$\eta_N=f [ N^{1/\nu}\,(z-z_c)]+ N^{-\omega} g [N^{1/\nu}\,(z-z_c) ]\ , \label{eq:fss}$$ where $\omega >0$ is the exponent associated with the leading irrelevant operator. Close enough to the NP the scaling functions in Eq. (\[eq:fss\]) should be amenable to Taylor expansions.
One has: $$\eta_N=\eta+ \sum_{j=1}^{j_m} a_j\,(z-z_c)^j\,N^{j/\nu}+ N^{-\omega} \sum_{k=0}^{k_m} b_k\,(z-z_c)^k\,N^{k/\nu}\ . \label{eq:fss2}$$ We adjusted our TM data to Eq. (\[eq:fss2\]) by means of multiparametric nonlinear least-squares fits. The goodness of fit was measured by the (weighted) $\chi^2$ per degree of freedom ($\chi^2_{\rm \ d.o.f.}$). We tested several assumptions on $k_m$, $j_m$, $\omega$, via their effect on: (i) the resulting $\chi^2_{\rm \ d.o.f.}$; (ii) the stability of the final estimates for $z_c$, $\eta$, and $1/\nu$; and (iii) the broad compatibility of estimates for $\eta$ and $1/\nu$ with existing results for assorted two-dimensional lattices and coupling distributions (under the reasonable assumption of universality, which is however provisional, and must be weighed against the bulk of available evidence). We found that:

\(1) a parabolic form, $j_m=2$, is adequate for the description of the broad features of the data, similarly to conclusions from the Monte-Carlo study of Ref. ;

\(2) neglecting corrections to scaling (all $b_k \equiv 0$) generally gave a $\chi^2_{\rm \ d.o.f.}$ at least one order of magnitude larger than if such corrections are incorporated;

\(3) fixing $k_m=0$ and allowing $\omega$ to vary gave a final estimate $\omega \sim 0.1-0.2$, which is too low to qualify as a [*bona fide*]{} correction-to-scaling exponent; the same happens if one allows $k_m \geq 1$ with a variable $\omega$;

\(4) for fixed $\omega$, using $k_m=1$ reduces the $\chi^2_{\rm \ d.o.f.}$ by a factor of 2–3 compared with making $k_m=0$, while no noticeable improvement is forthcoming from allowing $k_m >1$, again in line with Ref. ;

\(5) for fixed $\omega$ between $1$ and $2$, results for $\eta$ and $1/\nu$ are in fair accord with point (iii) above; also, for this range of $\omega$, $\chi^2_{\rm \ d.o.f.}$ is minimized, at $\simeq 0.1-0.2$, compared to any alternative combination of fixed and variable parameters described in this paragraph.
The coexistence of these facts indicates that, within the assumed scenario of describing corrections to scaling via a single (effective) exponent, the range of $\omega$ just quoted is the one that optimizes a universality-consistent picture. Thus, we kept $j_m=2$, $k_m=1$, allowing $1 \lesssim \omega \lesssim 2$ in what follows. Results for $\omega=1.5$ are shown in Table \[tpc\].

  ------------- ---------------------------- ----------------- ----------- ------------ -------------------------
  Type           conj.                        $p_c$, $J_{0c}$   $1/\nu$     $\eta$       $\chi^2_{\rm \ d.o.f.}$
  SQ, $\pm J$    $0.889972$ / $0.89079(6)$    $0.89061(6)$      $0.64(2)$   $0.689(2)$   $15/116$
  SQ, G          $1.021770$ / $1.021564$      $1.0193(3)$       $0.65(3)$   $0.680(2)$   $28/133$
  T, $\pm J$     $0.835806$ / $0.83597(2)$    $0.83583(6)$      $0.65(2)$   $0.691(2)$   $18/119$
  HC, $\pm J$    $0.932704$ / $0.93260(1)$    $0.93297(5)$      $0.65(1)$   $0.702(2)$   $15.5/79$
  T, G           $0.798174$                   $0.7971(2)$       $0.66(2)$   $0.689(2)$   $17/127$
  HC, G          $1.270615$                   $1.2689(3)$       $0.64(3)$   $0.690(2)$   $11/87$
  ------------- ---------------------------- ----------------- ----------- ------------ -------------------------

  : \[tpc\] TM estimates of critical quantities $z_c$ ($z=p$, $J_0$), $1/\nu$, and $\eta$ for lattices and coupling distributions \[binary ($\pm J$) and Gaussian (G)\] specified in column 1. Column 2 gives conjectured values of $z_c$; where two values are shown, the first comes from the original conjecture and the second from the improved conjecture. All fits used $\omega=1.5$ (fixed), see Eq. () and text.

Since the error bars quoted in the Table only reflect uncertainties intrinsic to the fitting procedure, we now illustrate (see Table \[tpc2\] below) the quantitative effects of relaxing some of the assumptions specified above.
This is especially important as regards $z_c$, whose calculated fractional uncertainty is one to two orders of magnitude smaller than those for $1/\nu$, $\eta$. Additional checks on the robustness of such narrow error bars are therefore in order. For instance, considering the T, $\pm J$ case, fixing $\omega =1$, $2$ gives respectively $p_c= 0.83611(8)$, $0.83565(6)$, with $\chi^2_{\rm \ d.o.f.}$ varying by less than $10\%$ against its value for $\omega=1.5$. Overall, it seems that a realistic error bar should at least include the fitted values of $z_c$ obtained for $\omega =1$ and $2$. Table \[tpc2\] shows such estimates, denoted by $z_c^{\rm ave}$, where the associated uncertainties reflect the spread between these extreme values (their own intrinsic uncertainties generally being somewhat smaller, see above and Table ). A remarkable exception is the SQ, G case, for which the estimate of $J_{0c}$ is virtually unchanged as $\omega$ varies in the range described. This instance is also an exception in that the amplitude of the correction term $b_0$ (column 3 of the Table) is much smaller than for all other cases; consequently, neither $J_{0c}$ nor the $\chi^2$ (resp. columns 4 and 5) change appreciably when corrections to scaling are ignored. The latter is not true for any of the other cases studied. 
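For fixed $z_c$, $1/\nu$ and $\omega$, Eq. (\[eq:fss2\]) is linear in the remaining parameters $(\eta, a_j, b_k)$, so the core of such a fit can be sketched with ordinary least squares; in the actual analysis $z_c$ and $1/\nu$ are themselves fitted, which makes the problem nonlinear. A toy version with synthetic, noise-free data (the function name is ours):

```python
import numpy as np

def fit_eta(N, z, eta_N, z_c, nu_inv, omega, j_m=2, k_m=1):
    """Linear least-squares fit of Eq. (fss2) with z_c, 1/nu, omega held fixed:
    eta_N = eta + sum_j a_j x^j + N^-omega * sum_k b_k x^k,  x = N^(1/nu)(z - z_c).
    Returns the parameter vector (eta, a_1..a_jm, b_0..b_km)."""
    N = np.asarray(N, float)
    z = np.asarray(z, float)
    x = N ** nu_inv * (z - z_c)
    cols = ([np.ones_like(x)]
            + [x ** j for j in range(1, j_m + 1)]
            + [N ** (-omega) * x ** k for k in range(0, k_m + 1)])
    A = np.column_stack(cols)
    return np.linalg.lstsq(A, np.asarray(eta_N, float), rcond=None)[0]
```

With exact synthetic input generated from known parameters, the fit recovers them to machine precision, which is a useful sanity check before running it on noisy TM data.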
  ------------- ----------------- ------------- ------------- ------------- --------------------------------
  Type           $z_c^{\rm ave}$   $b_0$         $b_1$         $z_c^{(0)}$   $\chi^{2\,(0)}_{\rm \ d.o.f.}$
  SQ, $\pm J$    $0.89065(20)$     $-0.126(3)$   $-3.7(4)$     $0.8898(1)$   $416/118$
  SQ, G          $1.0193(4)$       $0.009(3)$    $-0.35(18)$   $1.0195(1)$   $30/135$
  T, $\pm J$     $0.83588(23)$     $-0.145(3)$   $-1.4(4)$     $0.8348(1)$   $520/121$
  HC, $\pm J$    $0.93300(15)$     $-0.142(3)$   $-2.5(4)$     $0.9322(1)$   $499/81$
  T, G           $0.7972(6)$       $-0.106(3)$   $-0.31(18)$   $0.7948(3)$   $163/129$
  HC, G          $1.2691(10)$      $-0.152(4)$   $-0.64(15)$   $1.2635(9)$   $325/89$
  ------------- ----------------- ------------- ------------- ------------- --------------------------------

  : \[tpc2\] For lattices and coupling distributions \[binary ($\pm J$) and Gaussian (G)\] specified in column 1, column 2 gives critical quantities $z_c^{\rm ave}$ ($z=p$, $J_0$), averaged over values from fits for $\omega=1$ and $2$ (see text). Coefficients $b_0$, $b_1$ (see Eq. ()) fitted for $\omega=1.5$; the index $(0)$ for the last two columns denotes quantities obtained from fits where corrections to scaling were neglected.

Our assessment of the estimates quoted in Table \[tpc2\] for the location of the NP is as follows. For SQ, $\pm J$ our results are in agreement with the improved conjecture [@ohzeki08], and with numerical data from Refs. . For T, $\pm J$ our range of estimates is roughly consistent with the conjecture, both in its original [@no06] and improved [@ohzeki08] versions. It is also at the upper limit of the early estimate $p_c=0.8355(5)$ [@dq06]. For all remaining cases, our numerical data indicate that the conjecture fails to hold, albeit by rather small amounts, $0.2\%$ at most. For the $\pm J$ distribution, our results for both SQ and HC indicate that the conjectured position of the NP lies in the paramagnetic phase (for SQ, this is true only for the original conjecture).
On the other hand, for the Gaussian distribution and all three lattices, according to our estimates the conjecture places the NP slightly inside the ordered phase. For HC, $\pm J$ the result in Table \[tpc2\] is again at the upper end of the range given in Ref. , $p_c=0.9325(5)$. Note also that our estimate for SQ, G lies farther from the conjecture than the numerical value given in Ref. , namely $J_{0c}=1.02098(4)$ (thus, this latter also places the conjectured location of the NP inside the ordered phase). The above estimates of $p_c$ and $J_{0c}$ for T and HC lattices, when plugged into Eq. (\[eq:earlyconj\]), using Eq. (\[eq:h(p)\]) and its counterpart for Gaussian distributions [@ohzeki08], result in: $$\begin{aligned} H(p_{1c}) + H(p_{2c}) =0.9986(12)\qquad\ \ (\pm J)\ ; \nonumber \\ H(J_{0c1}) + H(J_{0c2}) =1.0014(10)\quad ({\rm Gaussian})\ , \label{eq:hest}\end{aligned}$$ both narrowly missing the conjecture of Eq. (\[eq:earlyconj\]). As regards the correlation-length exponent and the critical amplitude $\eta$, we found that, for each lattice and coupling distribution, the error bars quoted in Table \[tpc\] are wide enough to accommodate the variations in central estimates, both when one sweeps $\omega$ between $1$ and $2$ as above, and when $z_c$ is varied between the limits established in Table \[tpc2\]. No evidence emerges from the data which justifies challenging our earlier assumption of universality. From unweighted averages over the respective columns of Table \[tpc\], we quote $\nu=1.53(4)$, $\eta=0.690(6)$. These are to be compared to the recent results $\nu=1.50(3)$ (SQ, Ref. ), $1.48(3)$ (SQ, Ref. ), $1.49(2)$ (T and HC, Ref. ), $1.527(35)$ (SQ, Ref. ), all for $\pm J$ distributions; see also $\nu=1.50(3)$ (SQ, Ref.), Gaussian. For the critical amplitude, we recall (all for $\pm J$): $\eta=0.691(2)$ [@hpp01; @mc02a; @mc02b] (SQ); $0.674(11)$ (T), $0.678(15)$ (HC), both from Ref.  . 
The overall quality of our scaling plots is illustrated in Figures \[fig:dwtsc\] and \[fig:dwhcsc\]. We chose to display data for T and HC, Gaussian distributions, because for these there are fewer data available in the literature. As the last column of Table \[tpc\] shows, the $\chi^2_{\rm d.o.f.}$ remains very much in the same neighborhood for all cases studied.

Central charge {#sec:2a}
==============

We used the free-energy data generated in Section \[sec:2\] also to estimate the conformal anomaly, or central charge $c$, at the NP. This is evaluated via the finite-size scaling of the free energy on a strip with periodic boundary conditions across [@bcn86], $$f(T_{c},N)=f(T_{c},\infty)+\frac{\pi c}{6N^{2}}+ \frac{d}{N^{4}}+{\cal O}\left(\frac{1}{N^6}\right) \label{eq:cc}$$ where $f(T_{c},\infty)=\lim_{L\rightarrow \infty}\,f(T_{c},L)\,$ is a regular term which corresponds to the bulk system free energy. For disordered systems, Eq. (\[eq:cc\]) is expected to hold when the configurationally-averaged free energy is considered, with $c$ taking the meaning of an *effective* conformal anomaly [@lc87; @jc98; @php06]. By writing only even powers of $N^{-1}$ in Eq. (\[eq:cc\]), it is assumed that only analytic corrections come up [@car86]. While this is true, e.g., for pure Ising systems, a counterexample is the three-state Potts ferromagnet for which free-energy corrections in $N^{-2\omega_0}$, $N^{-3\omega_0}\ \dots$, $\omega_0=4/5$, are present [@nienhuis82; @dq00]. Although not much is known about the operator structure at the NP, existing central charge estimates in this case have been derived via Eq. (\[eq:cc\]) so far with fairly consistent results, namely $c=0.464(4)$ [@hpp01; @php06], $0.46(1)$ [@ldq06]. We shall return to this point at the end of this Section. We have evaluated free energies at the predicted locations of the NP given in Table \[tpc2\], both at the central estimates and at either end of the respective error bars.
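Since Eq. (\[eq:cc\]) is linear in $(f(T_c,\infty), c, d)$, the central-charge fit can be sketched in a few lines; a toy version with synthetic data (the function name is ours):

```python
import math
import numpy as np

def fit_central_charge(N, f):
    """Linear least-squares fit of Eq. (cc): f(N) = f_inf + pi*c/(6 N^2) + d/N^4.
    Returns (f_inf, c, d)."""
    N = np.asarray(N, dtype=float)
    A = np.column_stack([np.ones_like(N),          # bulk term f_inf
                         math.pi / (6.0 * N ** 2), # universal 1/N^2 term, coefficient c
                         N ** -4.0])               # non-universal 1/N^4 term, coefficient d
    return tuple(np.linalg.lstsq(A, np.asarray(f, dtype=float), rcond=None)[0])
```

Dropping the third column reproduces the $d \equiv 0$ variant discussed below; for noise-free input the three parameters are recovered essentially exactly.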
We found that such values can be calculated with sufficient accuracy, via interpolation of those already computed at the sets of equally-spaced points used originally in Section \[sec:2\]. Results for the central charge are displayed in Table \[tcc\], where error bars for all quantities mostly reflect uncertainties intrinsic to the fitting procedure itself, as our estimates are rather stable along the predicted intervals of location of the NP. Indeed, it is expected [@php06] that at criticality the calculated conformal anomaly passes through a maximum as a function of position along the NL.

  ------------- ------------- ------------ -------------------
  Type           $c$           $d$          $c\,[d \equiv 0]$
  SQ, $\pm J$    $0.463(3)$    $0.13(1)$    $0.478(2)$
  SQ, G          $0.461(4)$    $0.14(3)$    $0.476(2)$
  T, $\pm J$     $0.459(3)$    $0.01(1)$    $0.461(1)$
  HC, $\pm J$    $0.457(5)$    $0.02(2)$    $0.462(2)$
  T, G           $0.454(4)$    $0.06(3)$    $0.461(1)$
  HC, G          $0.468(15)$   $-0.05(6)$   $0.459(5)$
  ------------- ------------- ------------ -------------------

  : \[tcc\] Conformal anomaly $c$ and non-universal higher-order coefficient $d$, from fits of critical free-energy data to Eq. (). Last column gives fitted values of $c$ under the assumption that $d \equiv 0$ in Eq. ().

In line with earlier findings [@ldq06], one sees that for SQ and both coupling distributions, ignoring the fourth-order term in Eq. (\[eq:cc\]) shifts the final estimate of $c$ by some $4$–$5$ error bars, away from the expected universal value $\sim 0.46$. On the other hand, for T and HC the fitted $d$ is much closer to zero than for SQ; furthermore, for these latter lattices, results obtained by fixing $d=0$ appear generally more consistent with universality, and with less spread, than those found with $d$ kept as a free parameter.
Overall, we interpret the above results as indicating that: (i) there is no evidence for universality breakdown as regards the conformal anomaly; taking this as true, (ii) there appears to be no unusual (non-analytic) free-energy scaling correction $N^{-\omega_0}$ with $2 < \omega_0 < 4$; and (iii) it is possible that the fourth-order term is $d \equiv 0$ for T and HC, similarly to the case of pure Ising systems [@dq00].

Uniform susceptibilities {#sec:susc}
========================

We calculated uniform zero-field susceptibilities along the NL, for SQ, T and HC lattices and only for Gaussian distributions, as done in previous investigations for $\pm J$ [@dq06; @sbl3]. For the finite differences used in numerical differentiation, we used a field step $\delta h=10^{-4}$ in units of $T$. We swept the same respective intervals of $J_0$ quoted in Table \[tpar\]. Finite-size scaling arguments [@barber] suggest a form $$\chi_N = N^{\gamma/\nu}\,f [\,N^{1/\nu}(J_0-J_{0c})\,]\ , \label{eq:chisc}$$ where $\chi_N$ is the finite-size susceptibility, and $\gamma$ is the susceptibility exponent. In order to reduce the number of fitting parameters, we kept $1/\nu$ and $J_{0c}$ fixed at their central estimates obtained in Sec. \[sec:2\], and allowed $\gamma/\nu$ to vary. Again, we took corrections to scaling into account. Following Ref. , we write: $$\begin{aligned} \ln \chi =\frac{\gamma}{\nu} \ln N + \sum_{j=1}^{j_m} a_j\,(J_0-J_{0c})^j\,N^{j/\nu}+ \nonumber \\ + N^{-\omega} \sum_{k=0}^{k_m} b_k\,(J_0-J_{0c})^k\,N^{k/\nu}\ . \label{eq:fss3}\end{aligned}$$ Similarly to Section \[sec:2\] above, we found that choosing $j_m=2$, $k_m=1$ enables one to obtain good fits to numerical data, with $\chi^2_{\rm d.o.f.} \simeq 0.1$–$0.2$. The consequences of keeping $\omega$ as a free parameter or, on the other hand, fixing its value during the fitting procedure, can be seen in Table \[tchi\].
  ------- -------------------- -------------- --------------------------- -------------------------- ---------------------------------------
  Type    $\omega^{\rm fit}$   $\gamma/\nu$   $\chi^{2}_{\rm \ d.o.f.}$   $(\gamma/\nu)^{\rm ave}$   $\chi^{2\, {\rm ave}}_{\rm \ d.o.f.}$
  ------- -------------------- -------------- --------------------------- -------------------------- ---------------------------------------
  SQ, G   $1.3(3)$             $1.79(2)$      $0.127$                     $1.793(6)$                 $0.129$
  T, G    $0.7(3)$             $1.81(1)$      $0.118$                     $1.814(6)$                 $0.128$
  HC, G   $0.4(6)$             $1.79(4)$      $0.17$                      $1.804(7)$                 $0.18$
  ------- -------------------- -------------- --------------------------- -------------------------- ---------------------------------------

  : \[tchi\] For the zero-field susceptibility and lattices as specified in column 1 (all with Gaussian coupling distributions), columns 2, 3, 4 give the leading correction-to-scaling exponent as fitted, and the corresponding $\gamma/\nu$ and $\chi^{2}_{\rm \ d.o.f.}$; columns 5, 6 give the two latter quantities, now obtained by keeping $\omega$ fixed during the fitting procedure, and averaging over the resulting values for $\omega=1$ and $2$ (see text).

While the fitted value of $\omega$ for SQ looks acceptable, the same cannot be said of that for HC, as the associated error bar allows even slightly negative values (the result for T being half-way between the other two). Also, by keeping $\omega$ as a free parameter, one gets an error bar for $\gamma/\nu$ that is at least twice that obtained if $\omega$ is kept fixed between $1$ and $2$, without any noticeable improvement in the $\chi^{2}_{\rm \ d.o.f.}$. On the other hand, using fixed $\omega$ above this latter range results in a slow but steady loss of quality: for example, for the T lattice, $\omega=4$ gives $\chi^{2}_{\rm \ d.o.f.} \simeq 0.2$. Thus, although the idea of allowing $\omega$ to vary freely seems, in principle, the correct thing to do, the results in this particular case do not appear to be obviously more reliable than those averaged over fixed $\omega$ between $1$ and $2$. We then decided to use the latter as our main reference.
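To make the fitting procedure concrete, the sketch below sets up Eq. (\[eq:fss3\]) with $j_m=2$, $k_m=1$ and fits it with `scipy.optimize.curve_fit`. The strip widths, coupling grid, central values of $1/\nu$ and $J_{0c}$, and noise level are illustrative assumptions on synthetic data, not our transfer-matrix results:

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_chi(X, g_over_nu, a1, a2, b0, b1, omega, inv_nu=0.654, J0c=1.0193):
    """Eq. (fss3) with j_m = 2, k_m = 1: leading power law in N plus
    analytic terms in t = (J0 - J0c) N^{1/nu} and a correction ~ N^{-omega}.
    1/nu and J0c are held fixed at illustrative central values."""
    N, J0 = X
    t = (J0 - J0c) * N**inv_nu
    return (g_over_nu * np.log(N) + a1 * t + a2 * t**2
            + N**(-omega) * (b0 + b1 * t))

# synthetic data loosely mimicking the strip widths and couplings swept here
rng = np.random.default_rng(0)
N = np.repeat(np.arange(4.0, 15.0), 5)
J0 = np.tile(np.linspace(1.01, 1.03, 5), 11)
y = ln_chi((N, J0), 1.80, 0.5, 0.1, 0.3, 0.05, 1.5)
y += rng.normal(scale=1e-3, size=y.size)

# only the six parameters covered by p0 are fitted; curve_fit leaves the
# keyword defaults (1/nu, J0c) untouched, as done in the text
popt, _ = curve_fit(ln_chi, (N, J0), y, p0=[1.8, 0.4, 0.1, 0.2, 0.0, 1.2])
print(popt[0])  # fitted gamma/nu; should land near the input value 1.80
```

Keeping $\omega$ fixed instead amounts to moving it out of the fitted parameter list, e.g. giving it a keyword default as done for $1/\nu$ and $J_{0c}$ above.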
Taking an unweighted average over the three estimates for $(\gamma/\nu)^{\rm ave}$ gives the final value $\gamma/\nu=1.804(16)$. This is to be compared to the following (all for $\pm J$ distributions): $1.80(2)$ [@sbl3] \[SQ\]; $1.795(20)$ \[T\] and $1.80(4)$ \[HC\], both from Ref. ; $1.80$–$1.82$ [@php06] \[SQ\].

Correlation functions {#sec:cf}
=====================

Our study of correlation functions is based on previous work for SQ [@dqrbs03], and for T and HC lattices [@dq06] ($\pm J$ only). In this Section, we specialize to the Gaussian distribution, for T and HC lattices only. On the NL, the moments of the PDF for the correlation function between Ising spins $\sigma_i$, $\sigma_j$ are pairwise equal [@nish81; @nish01; @nish86; @nish02]: $$[\,\langle \sigma_i \sigma_j\rangle^{2\ell -1}] = [\,\langle \sigma_i \sigma_j \rangle^{2\ell} ]\ , \label{eq:momsc}$$ where angled brackets indicate thermal averages, square brackets stand for configurational averages over disorder, and $\ell = 1, 2, \dots$. At the NP, conformal invariance [@cardy87] is expected to hold, provided suitable averages over disorder are considered [@ludwig90; @dq95; @dq97; @hpp01; @mc02a; @mc02b; @dqrbs03; @dq06]. On a strip of width $L$ of a square lattice, with periodic boundary conditions across, the disorder-averaged $k$-th moment of the correlation-function PDF between spins located respectively at the origin and at $(x,y)$ behaves at criticality as: $$[\,\langle \sigma_i \sigma_j\rangle^{k}] \sim z^{-\eta_k}\ ,\ z \equiv [\,\sinh^2 (\pi x/L)+ \sin^2 (\pi y/L)\,]^{1/2}\ . \label{eq:conf-inv}$$ For the T and HC lattices, the same is true, provided that the actual, i.e., geometric site coordinates along the strip are used in Eq. (\[eq:conf-inv\]). Details are given in Ref. . Note that Eq. (\[eq:momsc\]) implies $\eta_{\,2\ell-1}=\eta_{\,2\ell}$. As in earlier work [@dqrbs03; @dq06], we concentrate on short-distance correlations, i.e., those for which the argument $z$ is strongly influenced by $y$.
Such a setup is especially convenient in order to probe the angular dependence predicted in Eq. (\[eq:conf-inv\]), which constitutes a rather stringent test of conformal-invariance properties. Following Refs. , we extract the decay-of-correlations exponents $\eta_{\,k}$ via least-squares fits of our data to the form $m_{\,k} \sim z^{-\eta_{\,k}}$. We also consider the exponent $\eta_0$ which characterizes the zeroth-order moment of the correlation-function distribution [@ldq06], i.e., it gives the typical, or most probable, value of this quantity (see, e.g., Ref.  and references therein). One has, in the bulk, $$G_0 (R) \equiv \exp\left[ \ln \langle \sigma_{0}\sigma_{R} \rangle \right]_{\rm av} \sim R^{-\eta_0}\ . \label{eq:eta0}$$ Calculations on strips of the $\pm J$ SQ lattice, at the early conjectured location of the NP [@nn02], gave the estimate $\eta_0=0.194(1)$ [@ldq06]. As seen in earlier work [@dqrbs03], for strip widths $N=10$ or thereabouts, finite-width effects are already mostly subsumed in the explicit $L$ (i.e., $N$) dependence of Eq. (\[eq:conf-inv\]). However, some detectable (albeit tiny) variations in the calculated values of averaged moments of the correlation function PDF may still persist upon varying $N$. These are of course minimized at the critical point, where the bulk correlation length diverges. We have calculated correlation functions for $N \leq 12$, for values of $J_0$ within the error bars given for the location of the NP in Table \[tpc2\]. We have seen that along these intervals of $J_0$, the trend followed by such averaged moments upon varying $N$ is as follows: for T, it cannot be distinguished from stability within error bars, while for HC it is slightly downward (of the order of one error bar from $N=10$ to $N=12$). For fixed $N$ and $J_0$, one error bar associated with intrinsic fluctuations is $\lesssim 1\%$.
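For reference, the conformal coordinate $z$ of Eq. (\[eq:conf-inv\]) and the log-log extraction of $\eta_{\,k}$ can be sketched as follows. The spin-pair positions and moment values below are synthetic stand-ins (exact power-law data with a known exponent), not our strip data:

```python
import numpy as np

def z_coord(x, y, L):
    """Conformal distance of Eq. (conf-inv) on a strip of width L with
    periodic boundary conditions across."""
    return np.sqrt(np.sinh(np.pi * x / L)**2 + np.sin(np.pi * y / L)**2)

# synthetic first moments decaying exactly as z^{-eta_1}, with eta_1 = 0.18,
# for a few short-distance pairs on an L = 10 strip (illustrative values)
L = 10
x = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
y = np.array([0.0, 3.0, 0.0, 3.0, 0.0, 3.0])
z = z_coord(x, y, L)
m1 = z**(-0.18)

# least-squares fit of log m_1 against log z: the slope is -eta_1
slope, intercept = np.polyfit(np.log(z), np.log(m1), 1)
print(-slope)  # recovers 0.18 (up to floating point) for noiseless data
```

With real data the moments carry statistical noise, so the fitted slope comes with an error bar; the angular dependence is probed by mixing pairs with different $(x,y)$ at comparable $z$, as described above.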
  ----- ------------ ------------ -------------- -------------- --------------------
  $k$   T (G)        HC (G)       T ($\pm J$)    HC ($\pm J$)   SQ
  ----- ------------ ------------ -------------- -------------- --------------------
  $0$   $0.185(3)$   $0.184(3)$   –              –              $0.194(1)$
  $1$   $0.178(2)$   $0.178(2)$   $0.181(1)$     $0.181(1)$     $0.1854(17)$
                                                                $0.1854(19)$
                                                                $0.183(3)$
                                                                $0.1848(3)$
                                                                $0.1818(2)$ \[G\]
                                                                $0.180(5)$
  $3$   $0.250(2)$   $0.252(2)$   $0.251(1)$     $0.252(1)$     $0.2556(20)$
                                                                $0.2561(26)$
                                                                $0.253(3)$
                                                                $0.2552(9)$
                                                                $0.2559(2)$ \[G\] [@php06]
  $5$   $0.296(2)$   $0.300(5)$   $0.297(2)$     $0.296(2)$     $0.300(2)$
                                                                $0.3015(30)$
                                                                $0.3004(13)$
                                                                $0.3041(2)$ \[G\]
  $7$   $0.331(4)$   $0.336(6)$   $0.330(2)$     $0.329(3)$     $0.334(3)$
                                                                $0.3354(34)$
                                                                $0.3341(16)$
                                                                $0.3402(2)$ \[G\]
  ----- ------------ ------------ -------------- -------------- --------------------

  : \[teta\] Estimates of exponents $\eta_{\,k}$, from least-squares fits of averaged moments of correlation-function distributions to the form $m_{k} \sim z^{-\eta_{\,k}}$, with $z$ defined in Eq. (). For columns 2 and 3 (this work), central estimates and error bars reflect averages between results for $N=10$ and $12$, as well as variations from scanning $J_0$ along the error bars for the locations of the NP given in Table . Columns 4, 5, 6 quote existing data for comparison. For SQ, all results are for the $\pm J$ coupling distribution, unless otherwise noted.

Table \[teta\] gives numerical results of the fits for $k=0$ and odd $k \geq 1$ (we have also calculated even moments for $k \geq 2$ and checked that Eq. (\[eq:momsc\]) holds).
One sees that T and HC estimates are quite consistent with each other for all $k$. On the other hand, for $k=0$, $1$, and $3$ they fall slightly below their existing counterparts, given in columns 4, 5, and 6 of the Table. For $k=5$ and $7$, as a consequence of generally wider error bars, all estimates are broadly compatible with one another. Physically, obtaining (via least-squares fits) a smaller \[larger\] than expected value for the decay-of-correlations exponent would indicate that it is being evaluated inside the ordered \[paramagnetic\] phase, instead of right at the critical point. Applying these ideas to the $k=0$ case, we recall that the result of Ref.  for SQ, $\pm J$ was calculated at the originally conjectured position of the NP [@nn02]. By now, it seems well established that this point is in the disordered phase (see Table \[tpc\]). Therefore, the value of Ref.  should be taken as an upper bound, which is obeyed by our present estimates. For $k=1$ and $3$, one might use the same argument as above to argue that the result of Ref.  is too large, as it was calculated at the same point as that of Ref. . On the other hand, this cannot be said of the additional estimates quoted in the Table, all of which are also larger than ours (though in some cases the respective error bars overlap, or at least touch each other). Using the reasoning described above, one would infer that for T and HC with Gaussian distribution, the ranges of locations for the NP given in Table \[tpc2\] are in fact both inside the ordered phase. Since these latter, in their turn, put the conjectured NP position also inside the ordered phase, the final conclusion would be that the actual location of the NP differs from the conjecture by an amount larger than predicted by domain-wall (DW) scaling: $J_{0c}^{\,\rm real} < J_{0c}^{\,\rm DW} < J_{0c}^{\,\rm conj.}$. The slight downward trend against increasing $N$, reported above for HC, would be consistent with this scenario. 
However, we have not seen a similar trend for T. One should note also that all the discrepancies remarked upon are rather small: the single worst case, as regards central estimates, is that of the present result $\eta_1=0.178$ against $\eta_1=0.1854$ [@hpp01; @dqrbs03], amounting to $4\%$, or $\simeq 3.5$ times the respective error bar. Given that the quoted values (especially those for the associated uncertainties) are likely to depend on details of the respective fitting procedures, the resulting picture looks mixed. In conclusion, existing evidence does not seem strong enough to state that our estimates from Sec. \[sec:2\] for the location of the NP on T and HC lattices (Gaussian distribution) are definitely inside the ordered phase.

Discussion and Conclusions {#conc}
==========================

We have used domain-wall scaling techniques in Sec. \[sec:2\] to determine the location of the Nishimori point of Ising spin glasses. In the analysis of our data we allowed for the existence of corrections to scaling, see Eqs. (\[eq:fss\]) and (\[eq:fss2\]). Results for the SQ lattice, $\pm J$ distribution, show that such corrections play a crucial role in the finite-size scaling of domain-wall energies. Indeed, when they are taken into account, the estimated position of the NP is $p_c=0.89065(20)$, in excellent agreement with recent and very accurate numerical work [@php06; @has08]. One can see from the last two columns of Table \[tpc2\] that, if corrections to scaling are ignored, the value of $p_c$ which minimizes the $\chi^2_{\rm d.o.f.}$ (though at a level $\sim 30$ times that obtained when corrections are incorporated) is instead $p_c=0.8898(1)$, very close to the original conjecture and incompatible with the above-mentioned body of numerical evidence. In retrospect, one sees that the domain-wall scaling result of Ref. 
for this case, namely $p_c=0.8900(5)$, essentially suffers from the effect of ignoring corrections to scaling (though even so it still picks out the correct exponent, $1/\nu =1.45(8)$ [@dq06]). Going over to SQ, Gaussian: domain-wall scaling for strips of widths $4 \leq N \leq 14$ sites gives $J_{0c}=1.0193(4)$, lower than both the conjecture (original and improved) and the result of Ref. , $J_{0c}=1.02098(4)$. In that Reference, the mapping of the spin problem to a network model (described in Ref. ) enabled the authors to reach significantly larger lattice sizes than here. The result just quoted was obtained by extrapolation of $11 \leq N \leq 24$ crossing-point data, without explicit account of corrections to scaling (which, as those authors show, do produce a trend reversal around $N = 8$, and are expected to have negligible effect for the large widths used in the extrapolation). It may be that our own data fail to incorporate an underlying trend which only comes about for larger systems. Nevertheless, the stability of our results for this particular case is remarkable, as pointed out in the initial discussion of Table \[tpc2\]. Our results for T and HC, $\pm J$ distribution, are marginally compatible with, but more accurate than, the earlier ones of Ref. ; though for T they are also broadly compatible with the conjecture in both original and improved versions, for HC our estimate in Table \[tpc2\] lies at least two error bars away from the conjecture. For T and HC, Gaussian distribution, in both cases the discrepancy between our results and the conjecture is again of the order of two error bars. Consequently, as shown in Eq. (\[eq:hest\]), we predict the duality-based conjecture of Eq. (\[eq:earlyconj\]) to be narrowly missed, for both $\pm J$ and Gaussian cases, though on opposite sides of the hypothesized equality. As regards the exponent $\nu$ and critical amplitude $\eta$ (see Eq.
(\[eq:sigeta\])), which are also estimated via domain-wall scaling, we have found no evidence of nonuniversal (lattice- or disorder-distribution-dependent) behavior. Therefore, from unweighted averages over all six cases studied, we quote $\nu=1.53(4)$, $\eta=0.690(6)$. Both are in very good agreement with existing numerical results (see the end of Sec. \[sec:2\] for detailed comparisons). The conformal anomaly values calculated in Sec. \[sec:2a\] are in good agreement among themselves and with previous estimates [@hpp01; @php06; @ldq06]. Our fits for the non-universal coefficient $d$ of the fourth-order correction to the critical free energy suggest that $d \equiv 0$ for T and HC lattices (while definitely $d \neq 0$ for SQ). This would be similar to the lattice-dependent structure of corrections for pure Ising systems [@dq00]. An unweighted average of values from Table \[tcc\] (using results of fits with $d \neq 0$ for SQ, and with $d \equiv 0$ for T and HC) gives $c=0.461(5)$. In Sec. \[sec:susc\] we evaluated uniform zero-field susceptibilities, by direct numerical differentiation of the free energy with respect to an external field. Only Gaussian distributions were considered, for SQ, T, and HC. Though our results show some lattice-dependent spread, the error bars for $(\gamma/\nu)^{\rm ave}$ still overlap in pairs. It is known that susceptibility calculations are prone to larger fluctuations than, e.g., domain-wall energy ones [@php06]. In the Monte Carlo simulations of Ref. , this effect was reduced by considering the quantity $\chi/\xi^2$ (where $\xi$ is the finite-size correlation length), which behaves more smoothly than $\chi$ on its own. Our final estimate (averaged over results for all three lattices), $\gamma/\nu=1.804(16)$, compares favorably (albeit only marginally) with the corresponding one from Ref. , $\gamma/\nu=1.820(5)$. Finally, in Sec.
\[sec:cf\] we applied conformal-invariance concepts to the statistics of spin-spin correlation functions, extracting the associated multifractal scaling exponents [@hpp01; @mc02a; @mc02b; @php06]. We only examined T and HC lattices, for Gaussian coupling distributions. The overall picture summarized in Table \[teta\] points towards universality of the exponents $\{ \eta_k \}$, though some small discrepancies remain. The case $k=1$ is especially relevant, on account of its connection with the uniform susceptibility via the scaling relation $\gamma/\nu=2-\eta_1$. While our result $\eta_1=0.178(2)$ is somewhat lower than existing data from direct calculations of correlation functions, it gives $\gamma/\nu=1.822(2)$ when inserted in the scaling relation. This agrees very well with the above-quoted estimate [@has08], $\gamma/\nu=1.820(5)$. In summary, we have produced estimates of the location of the NP on SQ, T and HC lattices, and for $\pm J$ and Gaussian coupling distributions. Though these are consistent with existing conjectures for SQ and T (both $\pm J$), they appear to exclude the respective conjectured values for the remaining cases. However, the discrepancies are very small, amounting to $0.2\%$ in the worst case (SQ, Gaussian). Furthermore, we have assessed several critical quantities (amplitudes and exponents), and found an overall picture consistent with universality as regards lattice structure and disorder distribution. The author thanks Hidetoshi Nishimori, Marco Picco, Andrea Pelissetto, Francesco Parisen Toldin, and Masayuki Ohzeki for many enlightening discussions. This research was partially supported by the Brazilian agencies CNPq (Grant No. 30.6302/2006-3), FAPERJ (Grants Nos. E26–100.604/2007 and E26–110.300/2007), CAPES, and Instituto do Milênio de Nanotecnologia–CNPq. [^1]: average over four values from improved conjecture, Ref.  [^2]: average over two values from improved conjecture, Ref.
Concerns continue to grow about the prevalence of misinformation on social media platforms [@Lazer-fake-news-2018; @vosoughi2018spread], including during the recent COVID-19 pandemic [@yang2020prevalence]. These types of content often exploit people’s tendency to prefer pro-attitudinal information [@hart2009feeling], which can be exacerbated by platform content recommendations [@BakshyAdamic; @chen2020neutral]. In this paper, we explore a possible algorithmic approach to mitigate the spread of misinformation and promote content with higher journalistic standards online. Social media platform recommendation algorithms frequently amplify bias in human consumption decisions. Though the information diets of Americans are less slanted in practice than many assume, the people who consume the most political news are most affected by the tendency toward selective exposure [@guess18]. As a result, the news audience is far more polarized than the public as a whole [@guessnd2; @flaxman2016filter]. Although the prevalence of so-called “fake news” online is rather limited and concentrated among relatively narrow audiences [@allcott2017social; @guess18; @grinberg2019fake; @guess2019less; @allen2020evaluating; @guess2020exposure], content that generally appeals to these tendencies — which includes low-quality or false news — may generate high levels of readership or engagement [@vosoughi2018spread], prompting algorithms that seek to maximize engagement to distribute such content more widely. Prior research indicates that existing recommendation algorithms tend to promote items that have already achieved popularity [@goel2010anatomy; @Nikolov2018biases]. This bias may have several effects on the consumption of low-quality and false news. First, sorting the news by engagement can exacerbate polarization by increasing in-group bias and discouraging consumption among outgroup members [@shmargad2020sorting].
Second, it may contribute to information cascades, amplifying differences in rankings from small variations or random fluctuations and degrading the overall quality of information consumed by users [@Salganik854; @hogg2015disentangling; @ciampaglia2018how; @Germano:2019:FSC:3308558.3313693; @Macyeaax0754]. Third, exposure to engagement metrics makes users more likely to share and less likely to fact-check highly engaging content from low-credibility sources, increasing vulnerability to misinformation [@Fakey2020]. Finally, popularity bias in recommendation systems can create *socio-algorithmic vulnerabilities* to threats such as automated amplifiers, which exploit algorithmic content rankings to spread low-quality and inflammatory content to like-minded audiences [@shao2018spread; @stella2018bots]. Given the speed and scale of social media, assessing directly the quality of every piece of content or the behavior of each user is infeasible. Online platforms are instead seeking to include signals about news quality in their content recommendation algorithms. A vast literature examines how to assess the credibility of online sources [@gupta2014tweetcred; @cho2015survey] and the reputations of individual online users [@golbeck2005computing; @Adler:2007:CRS:1242572.1242608], which could in principle bypass the problem of checking each individual piece of content. Unfortunately, many of these methods are hard to scale to large groups and/or depend upon context-specific information about the type of content being generated (e.g., wikis). As a result, they are not easily applied to news content recommendations on social media platforms. Another approach is to try to evaluate the quality of websites directly [@zhang2018structured], but scaling such an approach would likely be costly and cause lags in the evaluation of novel content. 
Similarly, while crowdsourced website evaluations have been shown to be generally reliable in distinguishing between high- and low-quality news sources [@Pennycook2521], such signals are vulnerable to manipulation as well as to delays in evaluating new sources. Building on the literature about the benefits of diversity at the group level [@hong2004groups; @shi2019wisdom], we instead propose using the partisan diversity of the audience of a news source as a signal of its quality. This approach has two key advantages. First, audience partisan diversity can be computed at scale given that information about the partisanship of users is available or can be inferred in a reliable manner. Second, because diversity is a property of the audience and not of its level of engagement, it is less susceptible to manipulation if one can detect inauthentic partisan accounts [@varol2017online; @Yang2019botometer; @Yang2020botometer-lite]. These two conditions (inferring partisanship reliably and preventing abuse by automated amplification/deception) could easily be met by the major social media platforms, which routinely collect a wealth of signals about their users and their authenticity. We evaluate the merits of our proposed approach using data from two sources: a comprehensive data set of web traffic history from 6,890 Americans, collected along with surveys of self-reported partisan information from respondents in the YouGov Pulse survey panel, and a data set of 3,765 news source reliability scores of web domains compiled by trained experts in journalism and provided by NewsGuard [@newsguard]. We first establish that domain pageviews are not associated with overall news reliability, highlighting the potential problem with algorithmic recommendation systems that rely on popularity and related metrics of engagement. We next define measures of audience partisan diversity and show that these measures correlate with news reliability better than popularity does.
Finally, we study the effect of incorporating audience partisan diversity into algorithmic ranking decisions. When we create a variant of the standard collaborative filtering algorithm that explicitly takes audience partisan diversity into account, our new algorithm provides more trustworthy recommendations than the standard approach with only a small loss of relevance. These results demonstrate that diversity in audience partisanship can serve as a useful signal of news reliability at the domain level, a finding that has important implications for the design of content recommendation algorithms used by online platforms. Although the news recommendation technologies deployed by platforms are more sophisticated than the approach tested here, our results highlight a fundamental weakness of algorithmic ranking methods that prioritize content that generates engagement and suggest a new metric that could help improve the reliability of the recommendations that are provided to users.

\[sec:results\]

Popularity does not predict news reliability {#popularity-does-not-predict-news-reliability .unnumbered}
--------------------------------------------

![Relationship between audience size and news reliability by domain (Pearson $r=0.05$, two-sided $p=0.12$). Reliability scores provided by NewsGuard [@newsguard].[]{data-label="fig:traffic_vs_quality"}](figs/fig1_traffic_vs_quality.png){width=".5\textwidth"}

To motivate our study, we first demonstrate that the popular news content that algorithmic recommendations often highlight is not necessarily reliable. To do so, we examine the relationship between audience size and news reliability in the YouGov Pulse data. Due to skew in audience size among domains, we use a logarithmic scale for the size. Fig. \[fig:traffic\_vs\_quality\] shows that the amount of traffic that a website attracts is not associated with its news reliability, which we measure using NewsGuard scores (see Methods \[sec:data\]).
We do find a significant association if we consider websites with predominantly Democratic audiences (Pearson $r=0.08$, two-sided $p=0.02$) separately from those with predominantly Republican audiences (Pearson $r=-0.06$, two-sided $p=0.34$), but the strength of association between the two variables is very weak overall (Pearson $r=0.05$, two-sided $p=0.12$).

Audience partisan diversity is a signal for high-reliability news {#audience-partisan-diversity-is-a-signal-for-high-reliability-news .unnumbered}
-----------------------------------------------------------------

In contrast, we observe that sites with greater audience partisan diversity tend to have higher NewsGuard scores, while those with lower levels of diversity, and correspondingly more homogeneous partisan audiences, tend to have lower reliability scores. Fig. \[fig:average\_slant\_vs\_variance1\] shows how NewsGuard scores vary with both mean audience partisanship and variance in audience partisanship. The latter is our primary measure of audience partisan diversity at the website level (see Methods \[sec:ad\]).

![Average audience partisanship versus variance. Left panel: user level. Right panel: pageview level. Domains for which we have NewsGuard reliability scores [@newsguard] are shaded in blue (where darker shades equal lower scores). Domains with no available score are plotted in gray.[]{data-label="fig:average_slant_vs_variance1"}](figs/fig2_slant_vs_variance_pane_ng_unweighted.png){width="\textwidth"}

As Fig. \[fig:average\_slant\_vs\_variance1\] indicates, unreliable websites with very low NewsGuard scores are concentrated in the tails of the distribution, where partisanship is most extreme and audience partisan diversity is, by necessity, very low. This relationship is not symmetrical: low-reliability websites (whose markers are darker shades of blue in the figure) are especially concentrated in the right tail, which corresponds to websites with largely Republican audiences. The data in Fig.
\[fig:average\_slant\_vs\_variance1\] also suggests that the reliability of a website may be associated not just with the variance of the distribution of audience partisanship slants, but also with its mean. To account for this, we compute the coefficient of partial correlation between NewsGuard reliability scores and the variance of audience partisanship given the mean audience partisanship of each website. Compared with popularity, we find a stronger (and significant) correlation regardless of whether mean partisanship and audience partisan diversity are calculated by weighting individual audience members equally (user level, left panel: partial correlation $r = 0.38$, two-sided $p < 10^{-4}$) or by how often they visited a given site (pageview level, right panel: partial correlation $r = 0.22$, two-sided $p < 10^{-4}$). We study the diversity–reliability relationship in more detail in Fig. \[fig:average\_slant\_vs\_variance2\], which differentiates between websites with audiences that are mostly Republican and those with audiences that are mostly Democratic. Consistent with what we report above, Fig. \[fig:average\_slant\_vs\_variance2\] shows that audience partisan diversity is positively associated with news reliability. Again, this relationship holds both when individual audience members are weighted equally (user level, left panel) and when they are weighted by their number of accesses (pageview level, right panel), though the association is stronger at the user level (standardized OLS coefficient: $\beta = 0.28\,(0.02)$ at user level; $\beta = 0.17\,(0.03)$ at pageview level). 
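The partial-correlation computation can be sketched generically as follows: regress out mean audience partisanship from both the reliability scores and the diversity measure, then correlate the residuals. The data below are our own toy simulation (a made-up linear relationship), not the YouGov/NewsGuard data:

```python
import numpy as np

def partial_corr(a, b, control):
    """Partial correlation of a and b given a control variable: Pearson
    correlation of the residuals after regressing each on the control."""
    X = np.column_stack([np.ones_like(control), control])
    def residualize(v):
        beta, *_ = np.linalg.lstsq(X, v, rcond=None)
        return v - X @ beta
    return np.corrcoef(residualize(a), residualize(b))[0, 1]

# toy domain-level data: reliability rises with audience diversity and
# also depends on mean slant (illustrative coefficients only)
rng = np.random.default_rng(42)
mean_slant = rng.uniform(-1.0, 1.0, 500)
diversity = rng.uniform(0.0, 1.0, 500)        # e.g. variance of slants
reliability = 60 + 25 * diversity + 8 * mean_slant + rng.normal(0, 5, 500)

r = partial_corr(reliability, diversity, mean_slant)
print(r)  # positive: diversity predicts reliability net of mean slant
```

The same residualization can be run at the user level or the pageview level simply by changing how `mean_slant` and `diversity` are aggregated per domain.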
In addition, we find that the relationship is stronger for sites whose average visitor identifies as a Republican (standardized OLS coefficient of Republican domains: $\beta = 0.41\,(0.05)$ at user level; $\beta = 0.30\,(0.07)$ at pageview level) versus those whose average visitor identifies as a Democrat (standardized OLS coefficient of Democrat domains: $\beta = 0.16\,(0.03)$ at user level; $\beta = 0.04\,(0.03)$ at pageview level), which is consistent with Fig. \[fig:average\_slant\_vs\_variance1\]. Full regression tables can be found in the Supplementary Materials.

![Relationship between audience partisan diversity and news reliability for websites whose average visitor is a Democrat or a Republican. Left panel: variance computed at user level. Right panel: variance computed at pageview level. News reliability scores from NewsGuard [@newsguard].[]{data-label="fig:average_slant_vs_variance2"}](figs/fig3_variance_quality_slant.png){width="\textwidth"}

Of course, variance in audience partisanship is not the only possible way to define audience partisan diversity; alternative definitions can be used (e.g., entropy; see Methods \[sec:ad\]). As a robustness check, we therefore consider a range of alternative definitions of audience partisan diversity and obtain results that are qualitatively similar to the ones presented here, though results are strongest for variance (see Supplementary Materials).

Audience partisan diversity produces trustworthy, relevant recommendations {#audience-partisan-diversity-produces-trustworthy-relevant-recommendations .unnumbered}
--------------------------------------------------------------------------

To understand the potential effects of incorporating audience partisan diversity into algorithmic recommendations, we next consider how recommendations from a standard user-based collaborative filtering (CF) algorithm [@resnick1994grouplens; @konstan1997grouplens] change if we include audience partisan diversity as an additional signal.
We call this modified version of the algorithm CF+D, which stands for Collaborative Filtering + Diversity (see Methods \[sec:cf+d\] for formal definition). In classic CF, users are presented with recommendations drawn from a set of items (in this case, web domains) that have been “rated” highly by those other users whose tastes are most similar to theirs. Lacking explicit data about how a user would “rate” a given web domain, we use a quantity derived from the number of user pageviews to a domain (based on TF-IDF; see also Methods \[sec:cf+d\]) as the rating. To evaluate our method, we follow a standard supervised learning workflow. We first divide web traffic data for each user in the YouGov Pulse panel into training and testing sets by domain (see Methods \[sec:supervised\]). We then compute similarities in traffic patterns between users for all domains in the training set (not just news websites) and use the computed similarities to predict the aforementioned domain-level pageviews metric on the test set. The domains that receive the highest predicted ratings (i.e., expected TF-IDF-transformed pageviews) are then selected as recommendations. Note that if a user has not visited a domain, then the number of visits for that domain will be zero. In general, due to the long tail in user interests [@goel2010anatomy], we cannot infer that the user has a negative preference toward a website just because they have not visited it. The user may simply be unaware of the site. We therefore follow standard practice in the machine learning literature in only evaluating recommendations for content for which we have ratings (i.e., visits in the test set), though in practice actual newsfeed algorithms rank items from a broader set of inputs, which typically includes content the user may not have seen (for example, content shared by friends [@BakshyAdamic]). 
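As a rough sketch of this workflow, the code below builds TF-IDF-style ratings from a toy user-by-domain pageview matrix and predicts a missing rating with classic user-based CF. The exact TF-IDF variant and similarity measure are specified in the paper's Methods, so the formulation here (plain TF-IDF with cosine similarity) and all numbers are illustrative assumptions:

```python
import numpy as np

def tfidf_ratings(views):
    """Turn a user-by-domain pageview matrix into ratings. This is plain
    TF-IDF, an illustrative stand-in for the paper's variant: TF is the
    share of a user's pageviews going to a domain, and IDF down-weights
    domains visited by nearly everyone."""
    tf = views / np.maximum(views.sum(axis=1, keepdims=True), 1)
    df = np.maximum((views > 0).sum(axis=0), 1)  # users visiting each domain
    idf = np.log(views.shape[0] / df)
    return tf * idf

def predict_rating(R, user, domain, n_neighbors=2):
    """Classic user-based CF: similarity-weighted average of the
    neighbors' ratings (cosine similarity here for brevity; the paper
    uses Kendall or Pearson correlations between users)."""
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0
    sims = sorted(((cos(R[user], R[u]), u)
                   for u in range(R.shape[0]) if u != user), reverse=True)
    sims = sims[:n_neighbors]
    den = sum(abs(s) for s, _ in sims)
    return sum(s * R[u, domain] for s, u in sims) / den if den else 0.0

# toy matrix: 3 users x 4 domains (illustrative numbers only)
views = np.array([[10, 0, 2, 0],
                  [ 8, 1, 3, 2],
                  [ 0, 5, 0, 9]], dtype=float)
R = tfidf_ratings(views)
pred = predict_rating(R, user=0, domain=3)  # domain 3 unvisited by user 0
print(pred)  # positive: driven by the similar user 1, who visited it
```

The CF+D variant would additionally fold an audience-partisan-diversity term into each domain's predicted rating, per the formal definition in Methods.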
To produce recommendations for a given user, we consider all the domains visited by the user in the test set for which ratings are available from one or more respondents in a neighborhood of most similar users (domains with no neighborhood rating are discarded since neither CF nor CF+D can make a prediction for them, see Methods \[sec:cf+d\]) and for which we have a NewsGuard score (i.e., a reliability score). We then rank those domains by their rating computed using either CF or CF+D. This process produces a ranked list of news domains and reliability scores from both the standard CF algorithm and the modified CF+D algorithm, which incorporates the audience partisan diversity signal. We evaluate these lists using two different measures of trustworthiness, computed for the top $k$ domains in each list: either the mean score (a number in the 0–100 range), or the proportion of domains with a score of 60 or higher, which NewsGuard classifies as indicating that a site “generally adheres to basic standards of credibility and transparency” [@newsguard] (see Methods \[sec:trust\]). ![Trustworthiness of recommended domains by length of ranked list $k$. Left: Trustworthiness based on scores from NewsGuard [@newsguard]. Right: proportion of domains labeled as ‘trustworthy’, also by NewsGuard. Actual visits $v$ are normalized using TF-IDF (see Methods \[sec:cf+d\]). Each bin represents the average computed on the top-$k$ recommendations for all users in the YouGov panel with $\ge k$ recommendations in their test sets. Bars represent the standard error of the mean. The values of $k$ are capped so that each bin has $\ge 100$ users in it (see Supplementary Materials for plot with all values of $k$). In this figure, both CF and CF+D compute the similarity between users using the Kendall $\tau$ correlation coefficient (see Methods \[sec:cf+d\]).
We obtain qualitatively similar results using the Pearson correlation coefficient (see Supplementary Materials).[]{data-label="fig:trust"}](fig4_left_tw-k-score-kendall.png "fig:"){width="49.50000%"} ![](fig4_right_tw-k-binary-kendall.png "fig:"){width="49.50000%"} By varying the number of top domains $k$, we can evaluate how trustworthiness changes as the length of the list of recommendations increases. In Fig. \[fig:trust\] we plot the trustworthiness of the recommended domains as a function of $k$. We restrict values of $k$ to 1–28, the values for which there are at least $100$ users in each bin (plots spanning the full range are available in Supplementary Materials). Each panel compares the average trustworthiness of domains ranked by CF and CF+D with, as a baseline, the trustworthiness of websites users visited in the test set ranked by their TF-IDF-transformed number of visits (i.e., pageviews).
This baseline captures the trustworthiness of the websites the users actually visited after adjusting for the fact that more popular websites tend to attract more visits in general. We observe in Fig. \[fig:trust\] that the trustworthiness of recommendations produced by CF+D is significantly better than both standard CF recommendations and baseline statistics from user behavior. In particular, CF produces less trustworthy rankings than the user visit baseline (for small values of $k$ the difference is within the margin of error), while CF+D produces rankings that are more trustworthy than both CF and the baseline across different levels of $k$. These results suggest that audience partisan diversity can provide a valuable signal to improve the reliability of algorithmic recommendations. Of course, the above exercise would be meaningless if our proposed algorithm recommended websites that do not interest users. Because CF+D alters the set of recommended domains to prioritize those visited by more diverse partisan audiences, it may be suggesting sources that offer counter-attitudinal information or that users do not find relevant. In this sense, CF+D could represent an audience-based analogue of the topic diversification strategy from the recommender systems literature [@ziegler2005improving]. If so, a loss of predictive ability would be expected. ![Accuracy of domain recommendations by length of ranked list $k$. Left: Precision (proportion of correctly ranked sites) by length of ranked list $k$ (higher is better). Right: RMSE (root mean squared error) of predicted pageviews for top $k$ ranked domains by length of ranked list $k$ (lower is better). Each bin represents the average computed on the top-$k$ recommendations of all users with $\ge k$ recommendations in their test sets. Bars represent the standard error of the mean. The values of $k$ are capped so that each bin has $\ge 100$ users in it (see Supplementary Materials for plot with all values of $k$). 
In this figure, both CF and CF+D compute the similarity between users using the Kendall $\tau$ correlation coefficient (see Methods \[sec:cf+d\]). We obtain qualitatively similar results using the Pearson correlation coefficient (see Supplementary Materials).[]{data-label="fig:precision"}](fig5_left_pr-k-kendall.png "fig:"){width="49.50000%"} ![](fig5_right_rmse-k-kendall.png "fig:"){width="49.50000%"} Fig. \[fig:precision\] compares the accuracy of CF+D in predicting user visits to domains in the test set with that of CF. To evaluate accuracy, we compute two metrics: the fraction of correctly predicted domains (precision) and root mean squared error (RMSE), both as a function of the number of recommended domains $k$ (see Methods \[sec:eval\] for definitions). Note that precision increases with $k$ by definition because we are comparing an increasingly large set of recommendations with a list of fixed size.
Because each bin averages over users with at least $k$ domains in their test set, when $k$ reaches the maximum size of the recommendation list we can make, the precision necessarily becomes 100%. The plots in Fig. \[fig:precision\] do not reach this level — they include only bins with at least $100$ users in them — but trend upward with $k$. (In the Supplementary Materials, we show plots that include results for all values of $k$.) Our results are generally encouraging. In both cases, precision is low and RMSE is high for low values of $k$, but error levels start to stabilize around $k = 10$, which suggests that making correct recommendations for shorter lists (i.e., $k < 10$) is more challenging than for longer ones. Moreover, comparing CF+D with CF, we see that though accuracy declines slightly for CF+D relative to CF, the difference is not statistically significant except at small values of $k$, suggesting that CF+D is still capable of producing relevant recommendations.

Audience partisan diversity mitigates selective exposure to misinformation {#audience-partisan-diversity-mitigates-selective-exposure-to-misinformation .unnumbered}
--------------------------------------------------------------------------

The results above demonstrate that incorporating audience partisan diversity can increase the trustworthiness of recommended domains while still providing users with relevant recommendations. However, we know that exposure to unreliable news outlets varies dramatically across the population. For instance, exposure to untrustworthy content is highly concentrated among a narrow subset of highly active news consumers with heavily slanted information diets [@grinberg2019fake; @guess2020exposure].
We therefore take advantage of the survey and behavioral data available on participants in the Pulse panel to consider how CF+D effects vary by individual partisanship (self-reported via survey), behavioral measures such as volume of news consumption activity and information diet slant, and contextual factors that are relevant to algorithm performance such as similarity with other users. In this section, we again produce recommendations using either CF or CF+D and measure their difference in trustworthiness with respect to a baseline based on user visits (specifically the ranking by TF-IDF-normalized number of visits $v$; see Methods \[sec:cf+d\]). However, we analyze the results differently than those reported above. Rather than considering recommendations for lists of varying length $k$, we create recommendations for different subgroups based on the factors of interest and compare how the effects of the CF+D approach vary between those groups. To facilitate comparisons in performance between subgroups that do not depend on list length $k$, we define a new metric to summarize the overall trustworthiness of the ranked lists obtained with CF and CF+D over all possible values of $k$. Since users tend to pay less attention to items ranked lower in the list [@10.1145/3130332.3130334], it is reasonable to assume that lower-ranked items ought to contribute less to the overall trustworthiness of a given ranking. Let us consider a universe of domains $\mathcal D$ as the set of items to rank. 
Inspired by prior approaches on stochastic processes based on ranking [@fortunato2006scale], we consider a discounting method that posits that the probability of selecting domain $d\in\mathcal D$ from a given ranked recommendation list decays as a power law of its rank in the list: $$\label{eq:ranking} \Pr\left\{X = d\right\} = \frac{r_d^{-\alpha}}{\sum_{h}r_h^{-\alpha}}$$ where $X\in\mathcal D$ is a random variable denoting the probabilistic outcome of the selection from the ranked list, $r_d \in \mathbb N$ is the rank of a generic $d\in\mathcal D$, and $\alpha \ge 0$ is the exponent of power-law decay (when $\alpha = 0$, all domains are equally likely; when $\alpha > 0$, top-ranked domains are more likely to be selected). Let us now consider probabilistic selections from two different rankings, represented by random variables $X$ and $X'$, where $X$ is the random variable of the ranking produced by one of the two recommendation algorithms (either CF or CF+D) and $X'$ is the selection from the baseline ranking based on user visits. Using Eq. \[eq:ranking\], we compute the expected change in trustworthiness $Q$ from switching the selection from $X'$ to $X$, $$\label{eq:change} \Delta Q = \mathsf{E}\left[Q(X)\right] - \mathsf{E}\left[Q(X')\right]$$ where the expectations of $Q(X)$ and $Q(X')$ are taken with regard to the respective rankings (see Methods \[sec:rankingmodel\]). A value of $\Delta Q > 0$ indicates that algorithmic recommendations are more trustworthy than what users actually accessed. If $\Delta Q < 0$, the trustworthiness of a ranked list is lower than the baseline from user visits. (To ensure that the results below are not affected by the discounting method we employ, we report qualitatively similar results obtained without any discounting for a selection of values of $k$ in the Supplementary Materials.) Applying Eq. 
\[eq:change\], we find that CF+D substantially increases trustworthiness for users who tend to visit sources that lean conservative (Fig. \[fig:tw\_vs\](a)) and for those who have the most polarized information diets (in either direction; see Fig. \[fig:tw\_vs\](c)), two segments of users who are especially likely to be exposed to unreliable information [@allcott2017social; @grinberg2019fake; @guess2020exposure]. In both cases, CF+D achieves the greatest improvement among the groups where CF reduces the trustworthiness of recommendations the most, which highlights the pitfalls of algorithmic recommendations for vulnerable audiences and the benefits of prioritizing sources with diverse audiences in making recommendations to those users. Note that although the YouGov sample includes self-reported information on both party ID and partisanship, we stratify only on party ID (Fig. \[fig:tw\_vs\](b)); stratifying on partisanship would be circular, since CF+D is defined in terms of it. In Figs \[fig:tw\_vs\](a) and \[fig:tw\_vs\](c) we instead stratify on an external measure of news diet slant (calculated from a large sample of social media users; see Methods \[sec:stratification\]). ![Effect of CF and CF+D (versus actual visits baseline) on trustworthiness by user characteristics and behavior. (a) Ideological slant of visited domains (terciles using scores from @BakshyAdamic [@BakshyAdamic]). (b) Self-reported party ID from YouGov Pulse responses as measured on a 7-point scale (1–3: Democrats including people who lean Democrat but do not identify as Democrats, 4: Independents, 5–7: Republicans including people who lean Republican but do not identify as Republicans). (c) Absolute slant of visited domains (terciles using scores from @BakshyAdamic). (d) Total online activity (TF-IDF-transformed pageviews; terciles). (e) Distinct number of domains visited (terciles).
(f) Average user-user similarity with nearest $n=10$ neighbors in training set (terciles). (g) Trustworthiness of domains visited by users (in training set; terciles). Bars represent the standard error of the mean of each stratum. Change in trustworthiness $\Delta Q$ based on scores from NewsGuard [@newsguard].[]{data-label="fig:tw_vs"}](figs/fig6_tw_vs_all_discounted.png){width="\textwidth"} We also observe that CF+D has strong positive effects for users who identify as Republicans or lean Republican (Fig. \[fig:tw\_vs\](b)) and for those who are the most active news consumers in terms of both total consumption (Fig. \[fig:tw\_vs\](d)) and number of distinct sources (Fig. \[fig:tw\_vs\](e)). Furthermore, since the two recommendation schemes considered here (CF and CF+D) are predicated on identifying similar users according to their tastes and behaviors, we also segment the users of the YouGov sample according to the degree of similarity with their nearest neighbors (identified based on Kendall’s rank correlation coefficient between user vectors; see Methods \[sec:cf+d\]). Stratifying on the average of nearest neighbor similarities, we find that CF+D results in improvements for the users whose browsing behavior is most similar to others in their neighborhood and who might thus be most at risk of “echo chamber” effects (Fig. \[fig:tw\_vs\](f)). Finally, when we group users by the trustworthiness of the domains they visit, we find that the greatest improvements from the CF+D algorithm occur for users who are exposed to the least trustworthy information (Fig. \[fig:tw\_vs\](g)). By contrast, the standard CF algorithm often recommends websites that are less trustworthy than those that respondents actually visit ($\Delta Q < 0$).

Methods {#methods .unnumbered}
=======

Data {#sec:data}
----

Our analysis combines two sources of data.
The first is the NewsGuard News Website Reliability Index [@newsguard], a list of web domain reliability ratings compiled by a team of professional journalists and news editors. The data that we licensed for research purposes includes scores of 3,765 web domains on a 100-point scale based on a number of journalistic criteria such as editorial responsibility, accountability, and financial transparency.[^1] NewsGuard categorizes web domains into four main groups: “Green” domains, which have a score of 60 or more points and are considered reliable; “Red” domains, which score less than 60 points and are considered unreliable; “Satire” domains, which should not be regarded as news sources regardless of their score; and “Platform” domains like Facebook or YouTube that primarily host content generated by users. The mean reliability score for domains in the data is 69.6; the distribution of scores is shown in Fig. \[fig:ng\_distribution\]. ![Distribution of NewsGuard scores ($N = 3{,}726$) by trustworthiness rating. Domains that score below 60 points (i.e., untrustworthy) on the rubric used by NewsGuard [@newsguard] are shown in white. Those that score 60 or above are shown in green. The bin width is 5; the bin containing score 60 also includes a few domains with lower scores. The dashed line indicates the average score in the data.[]{data-label="fig:ng_distribution"}](figs/fig7_ng_distribution.png "fig:"){width=".75\textwidth"}\ The second data source is the YouGov Pulse panel, a sample of U.S.-based Internet users whose web traffic was collected in anonymized form with their prior consent. This traffic data was collected during seven periods between October 2016 and March 2019 (see Table \[tab:pulse\_resp\]). A total of [6,890]{} participants provided data. In addition to their web traffic logs, participants reported their partisanship on a seven-point scale in online surveys. We perform a number of pre-processing steps on this data. 
We combine all waves into a single sample. We pool web traffic for each domain that received thirty or more unique visitors. Finally, we use the self-reported partisanship of the visitors to estimate mean audience partisanship and audience partisan diversity, using the measures described next and evaluated in the Supplementary Materials.

  **Start**       **End**           **Respondents**   **Domains**   **Pageviews**
  --------------- ----------------- ----------------- ------------- ---------------
  Oct. 7, 2016    Nov. 14, 2016     3,251             158,706       26,715,631
  Oct. 25, 2017   Nov. 21, 2017     2,100             104,513       14,247,987
  Jun. 11, 2018   Jul. 31, 2018     1,718             108,953       15,212,281
  Jul. 12, 2018   Aug. 2, 2018      2,000             74,469        9,395,659
  Oct. 5, 2018    Nov. 5, 2018      3,332             98,850        19,288,382
  Nov. 12, 2018   Jan. 16, 2019     4,907             117,510       21,093,638
  Jan. 24, 2019   Mar. 11, 2019     2,000             113,700       27,482,462

  : Data collection waves of the YouGov Pulse panel.[]{data-label="tab:pulse_resp"}

Definition of audience partisan diversity {#sec:ad}
-----------------------------------------

To measure audience partisan diversity, first define $N_j$ as the count of participants who visited a web domain and reported their political affiliation to be equal to $j$ for $j=1,\ldots,7$ (where 1 $=$ strong Democrat and 7 $=$ strong Republican). The total number of participants who visited the domain is thus $N = \sum_j N_j$, and the fraction of participants with a partisanship value of $j$ is $p_j = N_j / N$. Denote the partisanship of the $i$-th individual as $s_i$.
We calculate the following metrics to measure audience partisan diversity:

Variance
:   $\sigma^2 = N^{-1}\sum_i(s_i - \overline{s})^2$, where $\overline{s}$ is the average partisanship;

Shannon’s entropy
:   $S = - \sum_j p(j) \log p(j)$, where $p(j)$ is estimated in the following three different ways: (*i*) $p(j) = p_j$ (maximum likelihood); (*ii*) $p(j) = \frac{N_j + \alpha}{N + 7\alpha}$ (mean of the posterior distribution of a Dirichlet prior with $\alpha = 1$); and (*iii*) the method of @nemenman2001entropy [@nemenman2001entropy], which uses a mixture of Dirichlet priors (NSB prior);

Complementary Maximum Probability
:   $1 - \max_j \left\{p_j\right\}$;

Complementary Gini
:   $1 - G$, where $G$ is the Gini coefficient of the count distribution $\left\{N_j\right\}_{j=1\ldots 7}$.

The above metrics all capture the idea that the partisan diversity of the audience of a web domain should be reflected in the distribution of its traffic across different partisan groups. Each weighs the contribution of each individual person who visits the domain equally; they can thus be regarded as user-level measures of audience partisan diversity. However, the volume and content of web browsing activity is highly heterogeneous across internet users [@montgomery2001identifying; @guessnd2], with different users recording different numbers of pageviews to the same website. To account for this imbalance, we also compute the weighted variants of the above audience partisan diversity metrics where, instead of treating all visitors equally, each individual visitor is weighted by the number of pageviews they made to any given domain. As a robustness check, we compare the strength of association of each of these metrics to news reliability in the Supplementary Materials. We find that all variants correlate with news reliability, but the relationship is strongest for variance.
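As a concrete illustration, the user-level variants of these metrics can be computed directly from the counts $N_j$. The sketch below is not the authors' code: it implements variance, maximum-likelihood entropy, complementary maximum probability, and complementary Gini, with the Gini coefficient computed via the mean-absolute-difference formula (an assumed convention).

```python
import math

def diversity_metrics(counts):
    """User-level diversity metrics from counts N_j of visitors reporting
    partisanship j = 1..7 (1 = strong Democrat, 7 = strong Republican)."""
    n_total = sum(counts)
    p = [c / n_total for c in counts]
    mean = sum((j + 1) * pj for j, pj in enumerate(p))
    variance = sum(pj * ((j + 1) - mean) ** 2 for j, pj in enumerate(p))
    entropy_ml = -sum(pj * math.log(pj) for pj in p if pj > 0)
    comp_max_prob = 1.0 - max(p)
    # Gini via the mean-absolute-difference formula (an assumed convention).
    gini = (sum(abs(a - b) for a in counts for b in counts)
            / (2 * len(counts) * n_total))
    return {"variance": variance, "entropy_ml": entropy_ml,
            "comp_max_prob": comp_max_prob, "comp_gini": 1.0 - gini}
```

A domain visited only by strong Democrats scores zero on all these metrics, while a uniform audience maximizes them (entropy reaches $\log 7$).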
Incorporating audience partisan diversity into collaborative filtering recommendations {#sec:cf+d}
--------------------------------------------------------------------------------------

In general, a recommendation algorithm takes a set of users $\mathcal U$ and a set of items $\mathcal D$ and learns a function $f: \mathcal U \times \mathcal D \rightarrow \mathbb{R}$ that assigns a real value to each user-item pair $\left(u,d\right)$ representing the interest of user $u$ in item $d$. This value denotes the estimated rating that user $u$ will give to item $d$. In the context of the present study, $\mathcal D$ is a set of news sources identified by their web domains (e.g., `nytimes.com`, `wsj.com`), so from now on we will refer to $d\in\mathcal D$ interchangeably as either a web domain or a generic item. Collaborative filtering is a classic recommendation algorithm in which some ratings are provided as input and unknown ratings are predicted based on those known input ratings. In particular, the *user-based* CF algorithm, which we employ here, seeks to provide the best recommendations for users by learning from others with similar preferences. CF therefore requires a user-domain matrix where each entry is either known or needs to be predicted by the algorithm. Once the ratings are predicted, the algorithm creates a ranked list of domains for each user that are sorted in descending order by their predicted ratings. To test the standard CF algorithm and our modified CF+D algorithm, we first construct a user-domain matrix $V$ from the YouGov Pulse panel. The YouGov Pulse dataset does not provide user ratings of domains, so we instead count the number of times $\pi_{u,d}\in \mathbb Z^+$ a user $u$ has visited a domain $d$ (i.e., pageviews) and use this variable as a proxy [@10.1145/3130332.3130334].
Because this quantity is known to follow a very skewed distribution, we compute the rating as the TF-IDF of the pageview counts: $$\label{eq:rating} v_{u,d} = \frac{\pi_{u,d}}{\sum_{h} \pi_{u,h}}\log\left(\frac{\pi}{\sum_u \pi_{u,d}}\right)$$ where $\pi = \sum_u\sum_d \pi_{u,d}$ is the total number of visits. Note that if a user has never visited a particular domain, then $v_{u,d} = 0$. Therefore, if we arrange all the ratings into a user-domain matrix $V\in\mathbb R^{|\mathcal U|\times|\mathcal D|}$, such that $(V)_{u,d} = v_{u,d}$, we will obtain a sparse matrix. The goal of any recommendation task is to complete the user-domain matrix by predicting the missing ratings, which in turn allows us to recommend new web domains to users that they may not have seen. In this case, however, we lack data on completely unseen domains. To test the validity of our methods, we therefore follow the customary practice in machine learning of setting aside some data to be used purely for testing (see Methods \[sec:supervised\]). Having defined $V$, the next step of the algorithm is to estimate the similarity between each pair of users. To do so, we use either the Pearson correlation coefficient or the Kendall rank correlation of their *user vectors*, i.e., their corresponding row vectors in $V$ (zeroes included). For example, if $\tau(\cdot, \cdot)\in \left[-1,1\right]$ denotes the Kendall rank correlation coefficient between two sets of observations, then the corresponding coefficient of similarity between $u\in\mathcal U$ and $u'\in\mathcal U$ can be defined as: $$\mathrm{sim}(u, u') = \frac{\tau(V_u, V_{u'}) + 1}{2} \label{eq:sim}$$ where $V_u,V_{u'} \in\mathbb R^{1\times|\mathcal D|}$ are the row vectors of $u$ and $u'$, respectively. A similar definition can be used for Pearson’s correlation coefficient in place of $\tau$. These similarity coefficients are in turn used to calculate the predicted ratings.
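The rating transform of Eq. \[eq:rating\] and the similarity map of Eq. \[eq:sim\] can be sketched as follows. This is a minimal illustration, not the authors' code; the naive $O(n^2)$ Kendall tau is included only to keep the sketch self-contained (a real analysis would likely use `scipy.stats.kendalltau`).

```python
import math

def tfidf_ratings(pageviews):
    """Rating v_{u,d}: TF-IDF transform of raw pageview counts pi_{u,d}."""
    total = sum(sum(row.values()) for row in pageviews.values())
    col = {}  # total visits to each domain across all users
    for row in pageviews.values():
        for d, c in row.items():
            col[d] = col.get(d, 0) + c
    return {u: {d: (c / sum(row.values())) * math.log(total / col[d])
                for d, c in row.items()}
            for u, row in pageviews.items()}

def kendall_tau(x, y):
    """Naive O(n^2) Kendall tau-b; ties matter because user vectors
    contain many zero ratings."""
    conc = disc = only_x = only_y = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue          # tied in both: drops out of both denominators
            if dx == 0:
                only_x += 1       # tied in x only
            elif dy == 0:
                only_y += 1       # tied in y only
            elif dx * dy > 0:
                conc += 1
            else:
                disc += 1
    denom = math.sqrt((conc + disc + only_y) * (conc + disc + only_x))
    return (conc - disc) / denom if denom else 0.0

def similarity(vu, vu2):
    """Map tau from [-1, 1] into [0, 1], as in the similarity equation."""
    return (kendall_tau(vu, vu2) + 1) / 2
```

A user vector compared with itself yields similarity 1, and a fully reversed ranking yields 0, matching the $[0,1]$ range of Eq. \[eq:sim\].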
In the standard user-based CF, the predicted rating of a user $u$ for a domain $d$ is calculated as: $$\label{eq:cf} \hat{v}^{\rm CF}_{u,d} = \bar{v}_u + \frac{\sum_{u' \in N_{u_d}}\mathrm{sim}(u,u')(v_{u',d}-\bar{v}_{u'})}{\sum_{u' \in N_{u_d}}\mathrm{sim}(u,u')}$$ where $N_{u_d} \subseteq \mathcal U$ is the set of the $n=10$ most similar users to $u$ who have also rated $d$ (i.e., the neighbors of $u$), $v_{u',d}$ is the observed rating (computed with Eq. \[eq:rating\]) that neighboring user $u'$ has given to domain $d$, $\bar{v}_u$ and $\bar{v}_{u'}$ are the average ratings of $u$ and $u'$ across all domains they visited, respectively, and $\mathrm{sim}(u,u')$ is the similarity coefficient (computed with Eq. \[eq:sim\]) between users $u$ and $u'$ based on either the Pearson or the Kendall correlation coefficient. Having defined the standard CF in Eq. \[eq:cf\], we now define our variant CF+D, which incorporates audience partisan diversity of domain $d\in\mathcal D$ as a re-ranking signal in the following way: $$\label{eq:cf+d} \hat{v}^{\rm CF+D}_{u,d} = \hat{v}^{\rm CF}_{u,d}+ g\left(\delta_d\right)$$ where $g\left(\delta_d\right)$ is the re-ranking term of domain $d$, obtained by plugging the audience partisan diversity $\delta_d$ (for example, we use the variance of the distribution of self-reported partisan slants of its visitors, $\delta_d = \sigma^2_d$) into a standard logistic function: $$\label{eq:logistic} g(\delta) = \frac{a}{1 + \exp\big(-\left(\delta - t\right)\,/\,\psi\big)}.$$ In Eq. \[eq:logistic\], parameters $a$, $\psi$, and $t$ generalize the upper asymptote, inverse growth rate, and location of the standard logistic function, respectively. 
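A toy implementation of the CF prediction (Eq. \[eq:cf\]) and the diversity re-ranking of CF+D (Eqs. \[eq:cf+d\]–\[eq:logistic\]) might look like the following. The dictionary-based data layout and the tiny example values are illustrative assumptions, not the authors' code; the default $t = 4.25$ follows the empirical mean diversity reported in this section.

```python
import math

def logistic_boost(delta, a=1.0, psi=1.0, t=4.25):
    """Re-ranking term g(delta); t is the mean audience partisan diversity."""
    return a / (1.0 + math.exp(-(delta - t) / psi))

def predict_cf(u, d, ratings, sims, n=10):
    """User-based CF prediction for user u and domain d.

    ratings: {user: {domain: observed rating v}}
    sims: {(u, u'): similarity in [0, 1]}
    Returns None when no neighbor has rated d (no prediction possible).
    """
    raters = [x for x in ratings if x != u and d in ratings[x]]
    nbrs = sorted(raters, key=lambda x: sims[(u, x)], reverse=True)[:n]
    if not nbrs:
        return None
    mean = lambda x: sum(ratings[x].values()) / len(ratings[x])
    num = sum(sims[(u, x)] * (ratings[x][d] - mean(x)) for x in nbrs)
    den = sum(sims[(u, x)] for x in nbrs)
    return mean(u) + num / den

def predict_cf_d(u, d, ratings, sims, diversity, n=10):
    """CF+D: the CF prediction plus the diversity re-ranking term."""
    base = predict_cf(u, d, ratings, sims, n)
    return None if base is None else base + logistic_boost(diversity[d])
```

Note that a domain whose diversity equals the mean ($\delta_d = t$) receives a boost of exactly $a/2$, so CF+D shifts ratings most for domains far from the average diversity.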
For the results reported in this study, we empirically estimate the location as $t = \bar{\delta}$, the average audience partisan diversity across all domains, which corresponds to the value of $\bar{\delta} = 4.25$ since we measure diversity as the variance of the distribution of self-reported partisan slants. For the remaining parameters, we choose $a = 1$, $\psi = 1$. As a robustness check, we re-ran all analyses with a larger value of $a$ and obtained qualitatively similar results (available upon reasonable request).

Supervised learning evaluation workflow {#sec:supervised}
---------------------------------------

To evaluate both recommendation algorithms, we follow a standard supervised learning workflow. We use precision and root mean squared error (RMSE), two standard metrics used to measure the relevance and accuracy of predicted ratings in supervised learning settings. We define these two metrics elsewhere (see Methods \[sec:eval\]). Here, we instead describe the workflow we followed to evaluate the recommendation methods. Since our approach is based on supervision, we need to designate some of the user ratings (i.e., the number of visits to each domain, which are computed using Eq. \[eq:rating\]) as ground truth to compute performance metrics. For each user, we randomly split the domains they visited into a training set (70%) and a testing set (30%). This splitting varies by user: the same domain could be included in the training set of a user and in the testing set of another. Then, given any two users, their training set ratings are used to compute user-user similarities using Eq. \[eq:sim\] (which is based on Kendall’s rank correlation coefficient; a similar formula can be defined using Pearson’s correlation). If, in computing user-user similarities with Eq. \[eq:sim\], a domain is present for one user but not the other, the missing rating is assumed to be zero regardless of whether the domain is present in testing or not.
This assumption, which follows standard practice in collaborative filtering algorithms, ensures that there is no leaking of information between the test and training sets. Finally, using either Eq. \[eq:cf\] or Eq. \[eq:cf+d\], we predict ratings for domains in the test set and compare them with the TF-IDF of the actual visit counts in the data.

Trustworthiness metrics {#sec:trust}
-----------------------

In addition to standard metrics of accuracy (precision and RMSE; see Methods \[sec:eval\]), we define a new metric called *trustworthiness* to measure the news reliability of the recommended domains. It is calculated using the NewsGuard scores in two ways: either using the numerical scores or the set of binary indicators for whether a site meets or exceeds the threshold score of 60 defined by NewsGuard as indicating that a site is generally trustworthy [@newsguard]. Let $d_1,d_2,\dots,d_k$ be a ranked list of domains. Using numerical scores, the trustworthiness is the average: $$\label{eq:tw} \frac{1}{k}\sum\limits_{r=1}^{k}Q(d_{r})$$ where $Q(d) \in \left[0, 100\right]$ denotes the NewsGuard reliability score of $d\in\mathcal D$. If instead we use the binary indicator of trustworthiness provided by NewsGuard, then the trustworthiness of domains in a list is defined as the fraction of domains that meet or exceed the threshold score. Note that, unlike precision and RMSE, the trustworthiness of a list of recommendations does not use information on the actual ratings $v_{u,d}$. Instead, using Eq. \[eq:tw\], we compute the trustworthiness of the domains in the test set ranked in decreasing order of user visits $v_{u,d}$. We then compare the trustworthiness of the rankings obtained with either CF or CF+D against the trustworthiness of this baseline.

Accuracy metrics {#sec:eval}
----------------

Given a user $u$, let us consider a set $\mathcal D$ of web domains for which $|\mathcal D| = D$.
For each domain $d \in \mathcal D$, we have three pieces of information: the two predicted ratings $\hat{v}^{\mathrm{CF}}_{u,d}$ and $\hat{v}^{\mathrm{CF+D}}_{u,d}$ produced by CF and CF+D and the actual rating $v_{u, d}$ (defined elsewhere; see Methods \[sec:cf+d\]). In the following, we omit the user subscript $u$, which is fixed throughout, and the CF/CF+D superscript when it is clear from context. Let us consider a given recommendation method (either CF or CF+D) and denote by $r(d)$ and $r'(d)$ the ranks of $d$ when the domains are sorted in decreasing order of predicted and actual ratings, respectively. Given a recommendation list length $0 < k \le D$, let us define the set of predicted domains as: $$P_k = \{d \in \mathcal D : r(d) \le k \}$$ and the set of actual domains as: $$A_k = \{d \in \mathcal D : r'(d) \le k \}.$$ Then the *precision* for a given value of $k$ is given by the fraction of correctly predicted domains: $$\mathrm{Precision} = \frac{|P_k \cap A_k|}{|P_k|}.$$ Similarly, the *root mean squared error* for a given value of $k$ between the two ranked lists of ratings is computed as: $$\mathrm{RMSE} = \sqrt{\frac{1}{k} \sum_{r = 1}^{k} \left(\hat{v}_{\rho(r)} - v_{\rho'(r)}\right) ^ 2 }$$ where $\rho:[D]\to\mathcal D$ (respectively, $\rho'$) is the inverse function of $r(\cdot)$ (respectively, $r'(\cdot)$); that is, the function that maps ranks back to their domain by the recommendation method (respectively, by actual visits). Note that, in the summation, $\rho(r)$ and $\rho'(r)$ do not generally refer to the same web domain: the averaging is over the two ranked lists of ratings, not over the set of domains in common between the two lists.

Discounting via ranking {#sec:rankingmodel}
-----------------------

To measure the effect of CF+D on the trustworthiness of rankings, we must select a particular list length $k$. Although Fig.
\[fig:trust\] shows improvements for all values of $k$, one potential problem when stratifying on different groups of users is that the results could depend on the particular choice of $k$. To avoid dependence on $k$, we consider a probabilistic model of a hypothetical user visiting web domains from a ranked list of recommendations (Eq. \[eq:ranking\]) and define overall trustworthiness as the expected value of the trustworthiness of domains selected from that list (i.e., discounted by probability of selection). This procedure allows us to compute, for any given user, the effect of a recommendation method (either CF or CF+D) simply as the difference between its expected trustworthiness and the trustworthiness of the ranking obtained by sorting the domains visited by the user in decreasing order of pageviews (see Eq. \[eq:change\]). In practice, to compute Eq. \[eq:change\], let $d_1,d_2,\dots,d_k$ and $d'_1,d'_2,\dots,d'_k$ be two ranked lists of domains, $d_r,d'_r\in\mathcal D~\forall r=1,\ldots,k$, generated by a recommendation algorithm and by actual user pageviews, respectively, and let us denote with $Q(d)$ the NewsGuard reliability score of $d\in\mathcal D$ (see Methods \[sec:trust\]). Recall that Eq. \[eq:ranking\] specifies the probability of selecting a given domain $d\in\mathcal D$ from a particular ranked list as a function of its rank. Even though any pair of equally-ranked domains will be different across these two lists (that is, $d_r \neq d_r'$ in general), their probability will be the same because Eq. \[eq:ranking\] only depends on $r$. We can thus calculate the expected improvement in trustworthiness as: $$\Delta Q = \sum_{r=1}^{k} P(r) \left(Q\left(d_{r}\right) - Q\left(d'_{r}\right)\right) \label{eq:expected}$$ where $P(r)$ is the probability of selecting a domain with rank $r$ from Eq. \[eq:ranking\], which we computed setting $\alpha = 1$.
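The computation of Eq. \[eq:expected\] can be sketched as follows. Since Eq. \[eq:ranking\] is not reproduced in this excerpt, the code assumes a common rank-discounting form, $P(r) \propto r^{-\alpha}$ normalized over the top-$k$ ranks; the score values below are hypothetical.

```python
# Sketch of the expected trustworthiness improvement Delta-Q (Eq. expected).
# Assumption: the ranking model of Eq. (ranking) is taken here to be
# P(r) proportional to r**(-alpha), normalized over ranks 1..k.

def rank_selection_probs(k, alpha=1.0):
    weights = [r ** (-alpha) for r in range(1, k + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def expected_delta_q(rec_scores, base_scores, alpha=1.0):
    """Rank-discounted difference of reliability scores between a
    recommended list and the baseline list of actually visited domains."""
    probs = rank_selection_probs(len(rec_scores), alpha)
    return sum(p * (q - q0) for p, q, q0 in zip(probs, rec_scores, base_scores))

# Hypothetical Q scores of the domains at ranks 1..3 in each list.
dq = expected_delta_q([90, 80, 70], [60, 60, 60])
print(dq)  # (6*30 + 3*20 + 2*10) / 11, since P = [6/11, 3/11, 2/11] for k = 3
```

With $\alpha = 1$, improvements at the top of the list dominate $\Delta Q$, which is the intended discounting behavior.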
Stratification analysis {#sec:stratification} ----------------------- Recall that we use the self-reported partisanship of respondents in the YouGov Pulse panel as the basis for our diversity signal (see Methods \[sec:ad\]). To avoid circular reasoning when stratifying on the same source of data, Fig. \[fig:tw\_vs\](a) and Fig. \[fig:tw\_vs\](c) group these users according to the slant of their actual news consumption, which may not necessarily reflect their self-reported partisanship (e.g., a self-reported Democrat might access mostly conservative-leaning websites). We determined this latter metric using an external classification originally proposed by @BakshyAdamic [@BakshyAdamic], who estimated the slant of 500 web domains focused on hard news topics. In practice, @BakshyAdamic based their classification on how hard news from those domains was shared on Facebook by users who self-identified as liberal or conservative in their profile. For almost all domains, @BakshyAdamic reported a value $s\in [-1,1]$ with a value of $s = +1$ for domains that are shared almost exclusively by conservatives, and a value of $s = -1$ for those shared almost exclusively by liberals. (These values could technically vary over $[-2,2]$ but only 1% of domains fell outside $[-1,1]$ using the measurement approach described by @BakshyAdamic [@BakshyAdamic].) In Fig. \[fig:tw\_vs\](c), respondents are grouped according to the absolute slant $\left| s \right|$ of the visited domains where a value of $\left|s \right| = 0$ denotes domains with a perfectly centrist slant and a value of $\left|s \right| = 1$ indicates domains with extreme liberal or conservative slants (i.e., they are almost exclusively shared by one group and not the other). Author Contributions {#author-contributions .unnumbered} ==================== All authors designed the research. S.B. and S.Y. performed data analysis. All authors wrote, reviewed, and approved the manuscript.
Ethics Statement {#ethics-statement .unnumbered} ================ This study was reviewed by the IRB under protocols \#HUM00161944 (University of Michigan) and \#STUDY000433 (University of South Florida). Code and Data Availability {#sec:datastmt .unnumbered} ========================== Data necessary to reproduce the findings in the manuscript are available, in aggregated and anonymized format, at <https://github.com/glciampaglia/InfoDiversity/> along with the associated source code. To reproduce the findings in the Supplementary Materials, additional code and data are available upon reasonable request. The raw data that support the findings of this study are available from NewsGuard Technology, Inc. but restrictions apply to the availability of these data, which were used under license for the current study and thus cannot be made publicly available. Data are however available from the authors upon reasonable request subject to licensing from NewsGuard. The data used in this study were current as of November 12, 2019 and do not reflect NewsGuard’s regular updates of the data. Acknowledgements {#acknowledgements .unnumbered} ================ We thank NewsGuard for licensing the data and acknowledge Andrew Guess and Jason Reifler, Nyhan’s coauthors on the research project that generated the web traffic data used in this study. This work was supported in part by the National Science Foundation under a collaborative award (NSF Grant Nos. 1915833 and 1949077). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. [^1]: These data were current as of November 12, 2019 and do not reflect subsequent updates; see the Code and Data Availability section for more information.
--- abstract: 'A modified zero forcing decoder (MZF) for Multi-Input Multi-Output (MIMO) systems in case of ill-conditioned channels is proposed. The proposed decoder provides significant performance improvement compared to the traditional zero forcing for ill-conditioned channels. The proposed decoder is the result of reformulating the $QR$ decomposition of the channel matrix by neglecting the elements which represent the correlation in the channel. By combining the traditional ZF with the MZF decoders, a hybrid decoder can be formed that alternates between the traditional ZF and the proposed MZF based on the channel condition. We will illustrate through simulations the significant improvement in performance with little change in complexity over the traditional implementation of the zero forcing decoders.' author: - | Ibrahim Al-Nahhal$^{(1)}$, Masoud Alghoniemy$^{(2)}$, Adel B. Abd El-Rahman$^{(3)}$, and Zen Kawasaki$^{(4)}$\ $^{(1)},^{(3)}$Department of Electronics and Communications Engineering, Egypt-Japan University of Science and Technology,\ Alexandria, 21934 Egypt (e-mail: {Ibrahim.al-nahhal},{adel.bedair}@ejust.edu.eg).\ $^{(2)}$Department of Electrical Engineering, University of Alexandria, 21544 Egypt (e-mail: alghoniemy@alexu.edu.eg).\ $^{(4)}$Department of Information and Communication Technology, Graduate School of Engineering, Osaka University, Suita,\ Osaka, Japan (e-mail: zen@comm.eng.osaka-u.ac.jp). title: 'Modified Zero Forcing Decoder for Ill-conditioned Channels' --- MIMO systems, sphere decoder, zero forcing. Introduction ============ In multi-input multi-output (MIMO) systems with additive Gaussian noise, the Maximum-likelihood (ML) decoder is the optimum receiver. However, due to the high complexity of the ML decoders, lattice-based decoding algorithms such as the sphere decoding algorithms have been proposed [@key-1]. 
Although the sphere decoder (SD) achieves near-ML performance, its complexity is highly dependent on the received signal to noise ratio (SNR). In order to achieve ML performance at low SNR, the SD algorithm requires computationally intensive pre-processing operations that involve iterative column vector ordering [@key-2]. On the other hand, the zero-forcing decoder has a very low complexity at the expense of poor performance due to its inherent noise enhancement. In this paper, we propose a modified zero forcing decoder which provides significant performance improvement by limiting the inherent noise enhancement. The proposed algorithm performs a simple reformulation of the $QR$ factorization of the channel by considering the diagonal elements of the upper-triangular matrix $R$, which represent the well-conditioned part of the channel. Consider the complex-valued baseband MIMO model in Rayleigh fading channels with $M$ transmit and $N$ receive antennas. The $N\times1$ received signal vector $\dot{y}$ is given by [@key-2] $$\dot{y}=\dot{H}\dot{x}+\dot{w}$$ where the transmitted signal vector $\dot{x}\in Z_{C}^{M}$ has elements drawn from a q-QAM constellation, $Z_{C}$ is the set of complex integers, and the $N\times M$ channel matrix $\dot{H}$ has elements $h_{ij}$ representing the Rayleigh complex fading gain from transmitter $j$ to receiver $i$, with $h_{ij}\sim CN(0,1)$. In this paper, it is assumed that the channel realization is known to the receiver through preamble and/or pilot signals, and $N\ge M$. The $N\times1$ complex noise vector $\dot{w}$ has independent complex Gaussian elements with variance $\sigma^{2}$ per dimension.
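A minimal, self-contained sketch of simulating the baseband model $(1)$ is given below, together with the standard complex-to-real stacking used for the real-valued model introduced in the next section. The antenna sizes, 4-QAM symbols, and noise scale are illustrative choices, not values from the paper.

```python
import random

random.seed(1)

def randn_c():
    # One CN(0, 1) sample: independent real/imaginary parts, variance 1/2 each.
    return complex(random.gauss(0, 0.5 ** 0.5), random.gauss(0, 0.5 ** 0.5))

M, N = 2, 2
H = [[randn_c() for _ in range(M)] for _ in range(N)]   # Rayleigh fading gains
x = [complex(1, 1), complex(-1, 1)]                     # 4-QAM symbols
w = [0.1 * randn_c() for _ in range(N)]                 # complex Gaussian noise
y = [sum(H[i][j] * x[j] for j in range(M)) + w[i] for i in range(N)]

def to_real(H, x, y, w):
    """Complex-to-real mapping: H -> [[Re H, -Im H], [Im H, Re H]] and
    stacked real/imaginary parts for the vectors, so y = Hx + w still holds."""
    N, M = len(H), len(H[0])
    Hr = [[H[i][j].real for j in range(M)] + [-H[i][j].imag for j in range(M)]
          for i in range(N)] + \
         [[H[i][j].imag for j in range(M)] + [H[i][j].real for j in range(M)]
          for i in range(N)]
    xr = [v.real for v in x] + [v.imag for v in x]
    yr = [v.real for v in y] + [v.imag for v in y]
    wr = [v.real for v in w] + [v.imag for v in w]
    return Hr, xr, yr, wr

Hr, xr, yr, wr = to_real(H, x, y, w)
```

The doubled real system reproduces the complex model exactly, which is why decoders can be developed on the real model without loss of generality.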
Throughout the paper, we will consider the real model of $(1)$ $$y=Hx+w\label{rmodel}$$ where $m=2M$, $n=2N$, $y=[R(\dot{y})\,\,I(\dot{y})]^{T}$ $\in R^{n}$, $x=[R(\dot{x})\,\,I(\dot{x})]^{T}\in Z^{m}$, $w=[R(\dot{w})\,\,I(\dot{w})]^{T}\in R^{n}$, and $H=\left(\begin{array}{cc} R(\dot{H}) & -I(\dot{H})\\ I(\dot{H}) & R(\dot{H}) \end{array}\right)\in R^{n\times m}$, where $R(.)$ and $I(.)$ are the real and imaginary parts, respectively. The ML solution, $\hat{X}_{ML}$, that minimizes the 2-norm of the residual error is found by solving the following integer least-squares problem $$\hat{X}_{ML}=arg\,\underset{x\in\Lambda}{min}\left\Vert y-Hx\right\Vert ^{2}\label{ML}$$ where $\Lambda$ is the lattice whose points represent all possible codewords at the transmitter and $Z^{m}$ is the set of integers of dimension $m$. It should be noted that (\[ML\]) is, in general, NP-hard [@key-4]. Traditional Decoders ==================== In this section, we provide a brief overview of the sphere decoder (SD) and the zero forcing decoders. Sphere Decoder -------------- The sphere decoder (SD) reduces the search complexity by limiting the search space inside a hyper-sphere of radius $\rho$ centered at the received vector $y$ [@key-4; @key-5]. In particular, the solution should satisfy $$\left\Vert y-Hx\right\Vert ^{2}<\rho^{2}\label{radius}$$ The sphere decoder transforms the closest-point search problem into a tree-search problem by factorizing the channel matrix $H=QR$, where $Q$ is an $n\times m$ unitary matrix which represents the orthonormal bases of the channel $H$, and $R$ is an upper triangular matrix of size $m\times m$ that represents the correlation in the channel. The energy of the residual error can be written recursively as [@key-4] $$\left\Vert y-Hx\right\Vert ^{2}=\left\Vert y-QRx\right\Vert ^{2}=\left\Vert Q^{*}y-Rx\right\Vert ^{2}<\rho^{2}\label{SD}$$ The SD traverses the tree and computes the path metric for each node in the tree.
Any branch that has a path metric exceeding the pre-defined pruning constraint $\rho$ will be discarded. Thus, only a subset of the tree is visited and the complexity is reduced. Zero Forcing ------------- The zero forcing (ZF) decoder provides low complexity through decorrelating the channel by directly inverting the channel matrix $H$ [@key-4; @key-9]. In particular, for non-square MIMO systems, multiplying both sides of (2) by the pseudo-inverse of the channel, the residual error can be written as $$w_{ZF}=H^{+}w$$ where $H^{+}=(H^{*}H)^{-1}H^{*}$ is the pseudo-inverse of the channel. It is clear that if the channel is ill-conditioned, then noise enhancement is significant. The ZF solution, $\hat{X}_{ZF}$, minimizes the 2-norm of the residual error. In particular [@key-4], $$\hat{X}_{ZF}=\lfloor H^{+}y\rceil$$ where $\lfloor.\rceil$ is a slicing operation. The Modified Zero Forcing ========================= In order to reduce the noise enhancement which is a result of ill-conditioned channels, the MZF only considers the diagonal elements of the upper-triangular matrix $R$, in the $QR$ factorization of the channel matrix, $H=QR$. It should be noted that since $Q$ is an orthogonal matrix, then $cond(H)=cond(R)$ [@key-12]. In particular, let $r_{ij}$ be the $ij^{th}$ element of $R$, $j\ge i$; and let $R=\hat{R}R_{D}$ where $\hat{R}$ is a unit upper triangular matrix with elements $\hat{r}_{ij}={r_{ij}}/{r_{jj}},\,\,j>i$, $\hat{r}_{ii}=1$ and $R_{D}$ is a diagonal matrix whose diagonal elements are $r_{ii}$. It should be noted that the strict upper-diagonal elements $\hat{r}_{ij}$ represent the correlation elements which are responsible for the ill-conditioning of the channel. On the other hand, the diagonal elements $r_{ii}$ represent the energy of the channel. In order to see this, figure $1$ illustrates the effect of decomposing the channel matrix $H$ on the condition number.
In particular, the horizontal axis represents the condition number of the full channel matrix $H$, and the vertical axis represents the condition number of the factored matrices, $\hat{R},R_{D}$. The blue curves represent the condition number of $R_{D}$ while the red curves represent the condition number of $\hat{R}$ for channel matrices of size $2\times2$ and $4\times4$. It is clear that even for a highly ill-conditioned channel matrix, the condition number of $R_{D}$ remains low. It should be noted that figure 1 is generated by averaging $10,000$ runs. Using the above results, the MZF decoder solves $$\hat{X}_{MZF}=\lfloor R_{D}^{-1}Q^{H}y\rceil$$ ![The effect of decomposition of the channel matrix H on the condition number.](fig\lyxdot 1) Hybrid Decoder ============== Based on the previous findings, a hybrid decoder (HD) can be formed by alternating between the MZF and the ZF decoders based on the channel condition number. In particular, the MZF decoder provides performance improvement in case of ill-conditioned channels due to considering only the well conditioned elements of the channel. On the other hand, in case of well-conditioned channels, the ZF decoder can be used without loss of performance. Hence, the hybrid decoder alternates between the traditional ZF decoder and the MZF according to the state of the channel and can be described by the following pseudo-code. 1. Compute the condition number of the channel $cond(H)$. 2. Set a threshold $\gamma > 1$. 3. If $cond(H)<\gamma$, then $\hat{X}_{ZF}=\lfloor H^{+}y\rceil$. 4. If $cond(H)>\gamma$, then $\hat{X}_{MZF}=\lfloor R_{D}^{-1}Q^{H}y\rceil$. Complexity Analysis =================== The complexity of the proposed algorithm can be measured by computing the number of floating point operations (flops) consumed for execution; which also can be converted into the execution time. It should be noted that $(7)$ is solved using the $QR$ factorization algorithm [@key-13].
Thus $(7)$ can be rewritten as $$R\hat{X}_{ZF}=Q^{H}y$$ which can be solved using the back-substitution method at a cost of [@key-13]: $$flps_{ZF}\approx2nm^{2}-\frac{2}{3}m^{3}$$ Similarly, for the MZF decoder, $$R_{D}\hat{X}_{MZF}=Q^{H}y$$ The cost of this algorithm is the same as that of eq. $(10)$ without the back substitution. Since the cost of back substitution is $m^{2}$, the number of flops of the MZF algorithm is: $$flps_{MZF}\approx flps_{ZF}-m^{2}$$ In the HD algorithm, the number of flops exceeds the average of $flps_{ZF}$ and $flps_{MZF}$ by the flops consumed in computing the channel condition number. Simulation Results ================== For comparison purposes, the performance of the proposed decoders, MZF and HD, is compared to the SD and the ZF for uncoded systems. It is assumed that the transmitted power is independent of the number of transmit antennas, $M$, and equal to the average symbol energy. We have assumed that the channel is a Rayleigh fading channel. Let $P$ be the percentage of ill-conditioned channels in the runs. Figures 2, 3, and 4 illustrate the performance of the traditional and proposed decoders for $N=M=2$, $16$-QAM for well-conditioned channels $P=0\%$, ill-conditioned channels with $cond(H)=10^{3}$ for $P=100\%$ and $P=50\%$ versus different values of signal to noise ratio. Figure 5 illustrates the performance for different values of channel condition numbers for $SNR=15dB$. As is clear from the figures, the SD is superior while the ZF and the MZF decoders are inferior. It should be noted that in the well-conditioned channel case, as illustrated in figure 2, the MZF decoder performance has an error floor. This is a natural byproduct of neglecting the channel component $\hat{R}$, as expected. Thus, the HD typically follows the ZF because it acts as a ZF in well-conditioned channels, $P=0\%$, as indicated before.
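To make the decoders being compared concrete, the ZF/MZF/hybrid logic can be sketched end-to-end for the real-valued model $(2)$. This is a simplified illustration: the Gram-Schmidt QR and nearest-integer slicer are stand-ins for the paper's implementation, and $cond(H)$ is assumed to be supplied by the caller.

```python
def qr_gs(H):
    """Classical Gram-Schmidt QR of a full-column-rank n x m matrix."""
    n, m = len(H), len(H[0])
    Q = [[0.0] * m for _ in range(n)]
    R = [[0.0] * m for _ in range(m)]
    for j in range(m):
        v = [H[i][j] for i in range(n)]
        for i in range(j):
            R[i][j] = sum(Q[k][i] * H[k][j] for k in range(n))
            v = [v[k] - R[i][j] * Q[k][i] for k in range(n)]
        R[j][j] = sum(c * c for c in v) ** 0.5
        for k in range(n):
            Q[k][j] = v[k] / R[j][j]
    return Q, R

def hybrid_decode(H, y, cond_H, gamma=100.0):
    """ZF via back substitution on R x = Q^T y when cond(H) < gamma (Eq. 10),
    otherwise MZF using only the diagonal R_D (Eq. 8)."""
    Q, R = qr_gs(H)
    n, m = len(H), len(H[0])
    z = [sum(Q[k][i] * y[k] for k in range(n)) for i in range(m)]  # Q^T y
    x = [0.0] * m
    if cond_H < gamma:                        # traditional ZF
        for i in range(m - 1, -1, -1):
            x[i] = (z[i] - sum(R[i][j] * x[j] for j in range(i + 1, m))) / R[i][i]
    else:                                     # modified ZF: drop off-diagonal R
        for i in range(m):
            x[i] = z[i] / R[i][i]
    return [round(v) for v in x]              # slicing to the integer lattice
```

For example, with $H = [[2, 1], [0, 1]]$ and $y = [4.1, 1.9]$ (true $x = [1, 2]$ plus noise), the ZF branch recovers $[1, 2]$, while forcing the MZF branch returns $[2, 2]$: dropping $\hat{R}$ costs accuracy on a well-conditioned channel, consistent with the error floor discussed above.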
Similarly, for the ill-conditioned channels, $P=100\%$, illustrated in figure 3, the performance of the ZF decoder produces an error floor. This is expected due to the noise enhancement inherent in the ZF decoder [@key-12]. Figure 4 shows that as the percentage of ill-conditioned channels increases, the performance of HD is very close to SD performance especially at low SNR. Also, as shown in figures 5 and 6, the performance of the HD approaches the SD performance with the increase of the channel condition number, especially at low SNR as shown in figure 6. The complexities of the SD, ZF, MZF, and HD are measured by the execution time in finding the solution. In particular, figure 7 illustrates the complexity for the $2\times2$ MIMO with $16$-QAM modulation as a function of the SNR. It is clear that the SD has high complexity while the MZF and the ZF decoders have low complexity. As is clear from $(12)$, the complexity of the MZF decoder is less than that of the ZF decoder by a small margin. ![Performance of 2$\times$2 MIMO 16-QAM well-conditioned channels (P = 0%)](1-0ill_ber) ![Performance of 2$\times$2 MIMO 16-QAM ill-conditioned channels with cond(H) = $10^{3}$ (P = 100%)](2-1ill_ber) ![Performance of 2$\times$2 MIMO 16-QAM (P = 50%)](3-2ill_ber) ![Channel performance varies with a channel condition number for 2$\times$2 MIMO 16-QAM at SNR = 15dB](5-15ber_condition) ![Channel performance varies with a channel condition number for 2$\times$2 MIMO 16-QAM at SNR = 5dB](6-5ber_condition) ![Average Execution Time per Bit for 2$\times$2 MIMO 16-QAM](7-0ill_comp) Conclusions =========== A Hybrid MIMO decoder that is based on neglecting the cross correlation elements of the channel correlation matrix, $R$, in ill-conditioned channels and acting as ZF in well-conditioned channels has been proposed. The proposed decoder has better performance in ill-conditioned channels than the corresponding ZF decoder.
The complexity of the proposed decoder is as low as that of the ZF decoder. [1]{} Ramin Shariat-Yazdi and Tad Kwasniewski, “Configurable K-best MIMO Detector Architecture,” ISCCSP 2008, Malta, 12-14 March 2008. Chung-An Shen and Ahmed M. Eltawil, “An Adaptive Reduced Complexity K-Best Decoding Algorithm with Early Termination,” IEEE CCNC 2010 proceedings. Y. Cho et al., MIMO-OFDM Wireless Communications with MATLAB, chapter 11, John Wiley & Sons (Asia) Pte Ltd, 2010. Yi Hsuan Wu, Yu Ting Liu, Hsiu-Chi Chang, Yen-Chin Liao, and Hsie-Chia Chang, “Early-Pruned K-Best Sphere Decoding Algorithm Based on Radius Constraints,” IEEE Communications Society, ICC 2008. N. Sathish Kumar and K. R. Shankar Kumar, “Performance analysis and comparison of m x n zero forcing and MMSE equalizer based receiver for mimo wireless channel,” Songklanakarin J. Sci. Univ. 33 (3), 335-340, May - Jun. 2011. Michael T. Heath, “Scientific Computing: An Introductory Survey,” chapter 2, ISBN-10: 0071244891, 2001. Lloyd N. Trefethen, David Bau, “Numerical Linear Algebra,” lecture 11, 1997.
--- abstract: 'We have studied the ferromagnetic Kondo lattice model (FKLM) with an Anderson impurity on finite chains with numerical techniques. We are particularly interested in the metallic ferromagnetic phase of the FKLM. This model could describe either a quantum dot coupled to one-dimensional ferromagnetic leads made with manganites or a substitutional transition metal impurity in a MnO chain. We determined the region in parameter space where the impurity is empty, half-filled or doubly-occupied and hence where it is magnetic or nonmagnetic. The most important result is that we found, for a wide range of impurity parameters and electron densities where the impurity is magnetic, a singlet phase located between two saturated ferromagnetic phases which correspond approximately to the empty and double-occupied impurity states. Transport properties behave in general as expected as a function of the impurity occupancy and they provide a test for a recently developed numerical approach to compute the conductance. The results obtained could be in principle reproduced experimentally in already existent related nanoscopic devices or in impurity doped MnO nanotubes.' author: - 'S. Costamagna and J. A. Riera' title: 'Magnetic and transport properties of the one-dimensional ferromagnetic Kondo lattice model with an impurity' --- Introduction {#intro} ============ Manganese oxides, such as La$_{1-x}$Ca$_{x}$MnO$_3$, commonly referred to as manganites, have attracted an intensive theoretical and experimental effort,[@lamno; @dagorep] mainly due to their property of colossal magnetoresistance[@colossal] and its consequent applications to magnetic recording devices. General applications of the ferromagnetic (FM) metallic phase of manganites belong to the field of spintronics[@spinrev; @ox-spin] where the spin of the electrons is exploited in addition to its charge. 
A simple spintronics device which is relevant for the present study is formed by a quantum dot (QD),[@hanson] a nanometer-scale box, connected to two FM leads.[@pasupathy] This device can act as a spin valve[@martinek; @gatorie] or a spin filter. Ferromagnetic metals (Co, Pd-Ni) or diluted magnetic semiconductors, such as GaMnAs, are employed as leads. Alternatively, manganites are also used as FM leads in spintronics[@hueso; @cottet] because of their high polarization. Manganites are usually described by the ferromagnetic Kondo lattice model (FKLM) in which the conduction sites represent the orbitals $e_g$ and the localized spins the orbitals $t_{2g}$.[@lamno] The QD will be described as a single Anderson impurity. The spin valve with manganites as leads corresponds then to a FKLM with an Anderson impurity, which is the model we will study in the present work. Moreover, we will consider this model in a one-dimensional (1D) space. One should keep in mind also that the magnetoresistance of manganites is usually applied in multilayer heterostructures FM/M/FM or FM/I/FM (M: metal, I: insulator) which can be considered as 1D systems in the direction perpendicular to the interface.[@hetero] This model, in addition to its application to a wide class of devices, could describe other more conventional condensed matter systems, such as a transition-metal ion (e.g., Cu) replacing Mn in a manganese oxide chain.[@cromo] It is well-known that a single impurity could lead to interesting and important local or short-range effects in magnetic systems in low dimensions.[@martins] These effects can in turn modify the long-range physics of such systems for a finite density of impurities. The main purpose of this work is to search in the parameter space of the model for phases in which the saturated FM is reduced to a partially polarized FM, or even to a nonmagnetic state, upon the introduction of an Anderson impurity.
This problem would be the analog for a ferromagnetic chain of the effect that causes a magnetic impurity in a paramagnetic metallic chain, that is, the paradigmatic Kondo effect.[@hewson] We would like to emphasize that finite size effects due to our finite-cluster calculations could be relevant both to describe mesoscopic devices and to capture local or short-range features caused by an impurity in a Mn-O chain. For completeness, since the model studied may be applied to electronic devices, we will compute the conductance through the QD but clearly the study of transport properties is not the main motivation for the present work. In any case, even though the physics found for most of the parameter space corresponds to the saturated FM phase and hence transport properties can be recovered by a spinless fermion model, the impurity-FKLM is an interesting testing ground for the quite recent numerical techniques we will employ for this study. ![(Color online) Picture of model (\[fklm-QD\]). []{data-label="fig1"}](fig1.eps){width="43.00000%"} Model and methods {#model} ================= Hence, in this article we will study a one-dimensional FKLM with an Anderson impurity located in the center of the chain (see Fig. \[fig1\]). Then, the model is defined by the Hamiltonian: $$\begin{aligned} {\cal H}_0 = &-& t_0 \sum_{i>0,i<-1,\sigma} (c^{\dagger}_{i+1 \sigma} c_{i \sigma} + H.c. ) - J_H \sum_{l\neq 0} {\bf S}_{l}\cdot {\bf s}_{l} \nonumber\\ &-& t' \sum_{\sigma} (c^{\dagger}_{-1 \sigma} c_{0 \sigma} + c^{\dagger}_{1 \sigma} c_{0 \sigma} + H.c. ) \nonumber\\ &+& U n_{0,\uparrow} n_{0,\downarrow} + \epsilon n_{0} \label{fklm-QD}\end{aligned}$$ where the notation is standard. The Anderson impurity or “QD", with parameters $U, \epsilon$, is located at site “0" and is connected to the rest of the system with a hopping $t'$. The “leads" ($i\neq 0$) correspond to the FKLM with the Hund’s rule exchange coupling $J_H > 0$. 
$S_l$ is the spin operator for the localized spin-1/2 orbital and $s_l$ is the one for the conduction electron at site $l$ ($l\neq 0$). $t_0=1$ is adopted as unit of energy, and we take $t'=0.4$ throughout. Model (\[fklm-QD\]) will be termed “QD-FKL" model. The pure single-orbital FKLM or Kubo-Ohata model[@lamno] has been extensively studied, particularly using numerical techniques[@rierahallberg] and its phase diagram for various spatial dimensions and values $S$ of the localized spins has been determined.[@dagotto] Even in the simplest case of 1D and spin-1/2 localized spins, the model reproduces qualitatively the main features of manganites. In the following we will work in the metallic FM phase of the FKLM, typically, the density of conduction electrons $n \leq 0.6$, $J_H=20$ (all coupling constants are expressed in units of $t_0$). The on-site potential $\epsilon$ and Coulomb repulsion $U$ are the main variables whose effects we want to study. In a heterostructure $\epsilon$ would be fixed by chemistry but in a spin valve it would correspond to the gate voltage which can be varied at will. In order to detect any departure from the fully polarized FM state it is essential to work in the subspace of total $S^z=0$ (1/2) for even (odd) number of electrons. We denote with $L$ the total length of the system including the impurity site. Open boundary conditions (OBC) were adopted in the lattices studied except otherwise stated. Small clusters with $L$ up to 12 will be studied using exact diagonalization (ED) with the Lanczos algorithm. Larger clusters will be solved using density matrix-renormalization group (DMRG).[@dmrgrev] For calculations in the subspace of maximum total z-component of the spin, $S^z$, i.e., saturated ferromagnetism, we used completely independent ED and DMRG codes for the spinless fermion model. The ED and DMRG codes for the FKLM were thoroughly checked, in the first place by reproducing results in the literature. 
In the second place, by comparing results obtained by both techniques in small clusters and finally, by comparing results for large chains between FKLM and the spinless fermion model in the case of maximum $S^z$. Here we would like to stress the fact that convergence to the ground state, both with ED and in the diagonalization of the superblock Hamiltonian at each iteration of DMRG, is extremely slow. This is already known for ED studies of the Hubbard model with very large $U/t$ and small doping, that is in the proximity of the Nagaoka phase. DMRG studies for the Kondo lattice model (KLM), with both ferro- and antiferro- magnetic exchange coupling, have been in general restricted to smaller chains than for the Hubbard model. In fact, even in 1D, the DMRG treatment of KLM has the level of difficulty of an interacting system on a two-leg ladder. The convergence is even worse for the FKLM where previous studies have been limited to $L\approx 36$ with a discarded weight of order $10^{-5}$.[@garcia] Last but not least, the presence of impurities makes the convergence more difficult particularly for DMRG. In our calculations, with $M\approx 400$ retained states, the truncation error is negligible ($\approx 10^{-14}$) for $L\approx 20$ in the regions close to saturated ferromagnetism but it increases to $\approx 10^{-10}$ in the nonmagnetic region. In the case of the spinless fermion model, for $L\approx 20$ and $M\approx 400$, the precision in energy is at least 12 digits. In the parameter regions where total spin $S$ takes its maximum possible value $S_{max}$, the energy in the $S^z=0$ subspace reproduces the value obtained in the $S^z=S_{max}$ subspace using the spinless fermion model within at least 9 digits. In any case, the limited precision within DMRG depends essentially on the lack of convergence in the diagonalization of the Hamiltonian.
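As one example of the spinless-fermion cross-checks mentioned above: in the saturated FM state the conduction electrons reduce to free spinless fermions, whose ground-state energy on an open chain follows from the analytic single-particle spectrum. A minimal sketch for pure tight-binding leads (impurity and $t'$ term omitted, which is an assumption of this illustration):

```python
import math

def spinless_ground_energy(L, n_e, t0=1.0):
    """Ground-state energy of n_e free spinless fermions on an open chain of
    L sites: fill the lowest levels E_k = -2 t0 cos(k pi / (L + 1)), k=1..L."""
    levels = sorted(-2.0 * t0 * math.cos(k * math.pi / (L + 1))
                    for k in range(1, L + 1))
    return sum(levels[:n_e])

print(spinless_ground_energy(4, 2))  # -sqrt(5) for L = 4, n_e = 2
```

Reference values of this kind make it easy to validate an ED or DMRG code in the maximum-$S^z$ sector to many digits, as described in the text.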
The conductance will be estimated by a numerical setup[@alhassanieh; @schmitteckert] in which a small bias voltage is applied to the left (L) and right (R) leads, with $\Delta V=V_R-V_L$, ($V_R=-V_L$), at time $t=0$. The current $J(t)$ induced by this voltage on each bond connecting the QD to the leads is computed with the time evolution formalism within either ED or DMRG.[@schollwock] This numerical setup is equivalent to the systems which were treated analytically using the Keldysh Green functions formalism.[@wingreen] These out-of-equilibrium results for an interacting QD contain as particular cases those for noninteracting systems described by the Landauer formula.[@datta] These analytical results, both for interacting and noninteracting QDs, were recovered using this numerical setup and time-dependent DMRG.[@cazalilla; @alhassanieh; @schmitteckert] The advantage of this numerical procedure is that it can be extended with no formal limitations to study the case of [*interacting leads*]{}[@qdhub] as will be done in the present work. In principle, one could adopt as a measure of the conductance the maximum of $J(t)/\Delta V$. It has been shown that this recipe provides correct results for the conductance when the maximum of $J(t)$ corresponds to a “plateau" which appears in high-precision calculations using the “adaptive" time-dependent-DMRG on large clusters.[@alhassanieh] In the following we adopt the less precise “static" algorithm[@cazalilla] which still gives qualitatively correct results, particularly if relatively small clusters are considered, but is much faster than the “adaptive" scheme, thus allowing us to explore a wider range of couplings and densities. In the case of ED, the time evolution is exactly computed in the full Hilbert space of the system.
The time-evolution of the ground state is given by $|\Psi(t+\tau)\rangle = e^{-i{\cal H}\tau}|\Psi(t)\rangle$, where ${\cal H} = {\cal H}_0 + V_L N_L + V_R N_R$, $N_L$, $N_R$ are the electron occupancies of the left and right leads respectively, and $|\Psi(t=0)\rangle=|\Psi_0\rangle$, ${\cal H}_0 |\Psi_0\rangle = E_0 |\Psi_0\rangle$. $|\Psi(t+\tau)\rangle $ was computed using the Krylov algorithm.[@manmana] All the results reported below correspond to $\Delta V=0.01$, $\tau=0.1$. The dynamical impurity magnetic susceptibility and dynamical magnetic structure factor (defined for convenience in Section \[awaysymmetric\]) are computed within ED and DMRG using the standard continued fraction formalism. In the case of DMRG we again choose the “static" formulation which although less precise is enough to determine the presence or absence of a peak at the bottom of the spectrum. We would like to stress that the most important results reported in this article correspond to static properties, that is, ground state energies and spin-spin correlations, where the precision of DMRG is maximal. For all quantities studied, the results obtained with ED are precise to machine precision. ![(Color online) (a) QD occupancy as a function of $\epsilon$, at several electron densities $n$. (b) $\Delta E=E(S^z)-E(0)$, versus $S^z$, $n=0.5$. (c) $J(t)/\Delta V$ for $n=0.4$, $\epsilon=-4.5$, $-5.5$ and $-12$ (full curves). Sinusoidal fits are shown with dashed lines. Curves for $\epsilon=-4.5$ and $-12$ have been multiplied by 2 and 20 respectively. (d) Conductance, as a function of $\epsilon$, for the same electron densities as in (a). $L=10$, $J_H=20$, $t'=0.4$, $\epsilon=-U/2$.[]{data-label="fig2"}](fig2.eps){width="43.00000%"} Results at the symmetric point {#symmetric} ============================== Let us start by analyzing results for the QD-FKL model in the $L=10$ cluster obtained by ED. We consider in the first place the case of the “symmetric point", $\epsilon=-U/2$.
The symmetric point of an Anderson impurity is, in principle, the obvious place to look for a magnetic impurity placed in a noninteracting chain. However, this is not the case for the present model. It can be seen in Fig. \[fig2\](a) that the QD or impurity occupancy, and hence its actual magnetic or nonmagnetic character, experiences a sharp crossover as a function of the on-site potential of the impurity. For values of $\epsilon$ larger than a crossover value $\epsilon^*$, $n_{QD}\approx 0$, while for $\epsilon < \epsilon^*$, $n_{QD}\approx 1$. $\epsilon^*$ may be defined as the value of $\epsilon$ at which $n_{QD}=0.5$. This crossover can be understood by examining two variational states in the atomic limit: one, with energy $E_1=-J_H n_e/4$ ($n_e$ is the number of conduction electrons), in which all electrons are located on the leads and ferromagnetically aligned with the localized spins, and the other, with energy $E_2=-J_H (n_e-1)/4+\epsilon$, in which one electron has been moved from the leads to the QD. The crossover between these variational states occurs at $\epsilon^*_{var}=-J_H/4$, quite close to $\epsilon^*$ as shown in Fig. \[fig2\](a). The dependence of $\epsilon^*$ on $n$ is mainly due to the kinetic energy, which can easily be computed within the spinless-fermion model to which the FKLM reduces in the saturated FM state, i.e. when the total spin $S=S_{max}=S^z_{max}=n_e/2$. In this case, neglecting the term with $t'$, $\epsilon^*_{spinless}$ equals the single-particle energy at the top of the band, which increases with $n$ and is exactly zero at $n=0.5$. The connection between the two models implies that $\epsilon_{spinless}=\epsilon+J_H/4$. It is important to notice at this point that although the pure system is in the saturated FM state for the densities studied, for some impurity parameters the impurity may drive the system into partially polarized FM states with total spin $S<S_{max}$. In fact, as shown in Fig.
\[fig2\](b) for $n=0.5$, there is a region of $\epsilon$, close to $\epsilon^*$, where $S=0$ ($1/2$) for even (odd) $n_e$. We have observed this nonmagnetic state for other chain lengths and densities. Although the difference in energy between states with different $S^z$ is very small, these results strongly suggest that there are regions in parameter space where the impurity causes a breakdown of the fully saturated FM state. This possibility will be thoroughly examined in the next Section. Let us discuss in detail how the conductance $G$ is determined. Fig. \[fig2\](c) shows $J(t)/\Delta V$ ($J(t)$ is the average of the current on the two bonds connecting the QD to the leads), which presents the typical oscillatory behavior. This oscillatory behavior follows from the expansion of $e^{-i{\cal H}\tau}$ in eigenvectors of ${\cal H}$, which for small $\Delta V$ are adiabatically related to those of ${\cal H}_0$. We would like to emphasize that the results depicted in Fig. \[fig2\](c) are exact, i.e., no truncation of the Hilbert space was performed. We then fit each curve with a sinusoid and adopt its amplitude as $G$. In this small cluster, but also for $L=20$, a single sinusoid gives a reasonable fit of $J(t)/\Delta V$ in most of the cases studied, particularly near $\epsilon^*$. Although this procedure is not very precise, it gives qualitatively correct results, as we discuss in the following. Results for the conductance are shown in Fig. \[fig2\](d) for various densities as a function of $\epsilon$. $G$ differs from zero only at the crossover between the region of empty QD ($\epsilon\gg\epsilon^*$) and the region of half-filled QD ($\epsilon\ll\epsilon^*$), and it has a sharp peak at $\epsilon^*$ with a width approximately equal to the bandwidth of a tight-binding model on the leads, $4 t_0$. This behavior is what one would expect for the spinless fermion model.
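The fitting recipe just described, fit $J(t)/\Delta V$ with a sinusoid and take its amplitude as $G$, can be sketched as follows; `scipy.optimize.curve_fit` here stands in for whatever fitting routine was actually used, and the units of $G$ are left unspecified:

```python
import numpy as np
from scipy.optimize import curve_fit

def conductance_from_current(t, J, dV):
    """Fit J(t)/dV with A*sin(w*t + phi) and return the amplitude |A| as G."""
    y = np.asarray(J) / dV
    A0 = 0.5 * (y.max() - y.min())
    t_peak = t[np.argmax(np.abs(y))]              # crude frequency guess
    w0 = np.pi / (2.0 * t_peak) if t_peak > 0 else 1.0
    model = lambda t, A, w, phi: A * np.sin(w * t + phi)
    (A, w, phi), _ = curve_fit(model, t, y, p0=[A0, w0, 0.0])
    return abs(A)
```

For synthetic data of the form $J(t)=G\,\Delta V\sin(\omega t)$ the amplitude $G$ is recovered directly, which is the idealized situation the single-sinusoid fit assumes.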
The determination of the variation of the maximum conductance with density $n$, which would require calculations on a finer mesh in $\epsilon$, is beyond the scope of the present study.

![(Color online) (a) QD occupancy and (c) conductance, as a function of $\epsilon$, for several electron densities $n$ indicated on the plot. (b) $J(t)/\Delta V$ for $n=0.4$, $\epsilon=-5$, $-5.5$ and $-7$ (full curves). Fits to a sine function are shown with dashed lines. Results for $L=20$, $J_H=20$, $t'=0.4$, $\epsilon=-U/2$. (d) Spin-spin correlations along the conduction chain, for the pure FKLM (stars) and in the presence of an impurity for various values of $\epsilon$ indicated on the plot, $L=19$, $n=0.421$. The reference site is located at the center of the chain and the normalization $\langle S^z_0 S^z_0 \rangle =1$ was adopted.[]{data-label="fig3"}](fig3.eps){width="43.00000%"}

Let us now discuss results for $L=20$, obtained with DMRG, also at the symmetric point. In Fig. \[fig3\](a), it can be seen that, as for $L=10$, the impurity occupancy $n_{QD}$ experiences a sudden change as a function of the on-site potential $\epsilon$. This crossover is located approximately at $\epsilon^*=-J_H/4$, as argued before, even inside the incommensurate (IC) phase (but not strictly at $n=1$); this is expected since the variational states are independent of the underlying FM or IC order. As in the $L=10$ cluster, the location of this crossover shifts to larger values of $\epsilon$ as the density is increased. At the IC-FM crossover for $J_H=20$, $n\approx 0.55$, $\epsilon^*$ experiences a somewhat larger increase and then remains relatively unchanged up to half-filling. In the IC region, of course, the kinetic energy is no longer approximated by the spinless fermion model. In fact, it is easy to realize that the kinetic energy versus $\epsilon$ follows an [*opposite*]{} behavior as the IC-FM border is crossed. The computation of the conductance follows the steps previously outlined.
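As a numerical aside, the spinless-fermion estimate of the crossover position invoked above can be made concrete with a short sketch; an open tight-binding chain is assumed, the $t'$ term is neglected, and the relation $\epsilon^* = \epsilon^*_{spinless} - J_H/4$ from the text is used:

```python
import numpy as np

def epsilon_star_estimate(L, n_e, t0=1.0, J_H=20.0):
    """Crossover estimate from the spinless-fermion reduction of the
    saturated-FM FKLM (t' term neglected): eps*_spinless is the highest
    occupied single-particle level of an open tight-binding chain, and
    eps* = eps*_spinless - J_H/4."""
    k = np.arange(1, L + 1) * np.pi / (L + 1)   # open-chain momenta
    levels = np.sort(-2.0 * t0 * np.cos(k))
    eps_spinless = levels[n_e - 1]              # top of the Fermi sea
    return eps_spinless - J_H / 4.0
```

For a long chain at $n=0.5$ this reproduces $\epsilon^*\approx -J_H/4=-5$, while lower densities give more negative crossover values, consistent with the shift of $\epsilon^*$ with density described above.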
In Fig. \[fig3\](b), $J(t)/\Delta V$ is shown for $n=0.4$ and several values of $\epsilon$. It can be seen that, in spite of the approximate nature of the computation of the time evolution in a truncated Hilbert space, $J(t)$ is clearly well fitted by a single sinusoid, particularly close to $\epsilon^*$, and these oscillations behave as a function of $\epsilon$ in a similar way as found earlier for the $L=10$ chain. In Fig. \[fig3\](c) we show the resulting $G$ as a function of $\epsilon$ for several densities. As for the smaller lattice, $L=10$, the conductance differs from zero only for $\epsilon \approx \epsilon^*$, with a peak at $\epsilon^*$.[@note2] In spite of the extremely slow convergence of the DMRG calculations, by keeping 450 states we were able to find that $E(S^z=0)< E(S^z=S_{max})$ for $\epsilon=-8$, thus suggesting for $L=19$ a behavior similar to that shown in Fig. \[fig2\](c) for $L=10$. Further indications of this behavior can be obtained by examining the $z$-component of the spin-spin correlations $\langle S^z_j S^z_0\rangle$, where the reference site “0" is the center of the chain and $j$ labels conduction sites. Due to the large value of $J_H$, the correlations between the impurity site and the localized spins have the same qualitative behavior, so in this and the following section we will only consider the correlations between the impurity and the conduction sites. These correlations, shown in Fig. \[fig3\](d) for $n=0.421$, clearly depart from those of the pure system as $|\epsilon|$ is increased. This indicates that the ground state computed by DMRG is a mixture of states very close in energy which depart from the saturated FM state. This behavior of $\langle S^z_j S^z_0\rangle$ with $\epsilon$, which follows the same trend as for the $L=10$ chain, thus suggests that also for the $L=19$ chain the ground state has $S < S_{max}$.
Notice also that in this low-$S$ region, $\langle S^z_j S^z_0\rangle$ shows no trace of antiferromagnetic order.

![(Color online) (a) Variational states for model Eq. (\[fklm-QD\]) in the $U$-$\epsilon$ plane. Vertical thick lines indicate approximately the region with minimum $S$ for some values of $U$, $n=0.4$, discussed in the text. (b) QD occupancy and (c) conductance as a function of $\epsilon$, for $U=4$, and various electron fillings $n$. Results for the $L=10$ chain except where otherwise stated. []{data-label="fig4"}](fig4.eps){width="40.00000%"}

Breakdown of the ferromagnetic state away from the symmetric point {#awaysymmetric}
==================================================================

Let us discuss the consequences of these results for devices in which the leads are made of manganites. Since $J_H/t_0$ in manganites has been estimated to be of order 10 or larger,[@lamno; @dagorep] a value of $U^* = -2 \epsilon^* \geq 5$ in the QD (see Fig. \[fig4\](a)) would be required for $n_{QD}\sim 1$. This value of $U$ is somewhat larger than that of the materials typically employed in QDs, such as semiconductors or carbon nanotubes. The device would then have to be operated away from the symmetric point. More importantly, a smaller $U$ could imply a larger effective coupling between the impurity and the conduction sites, assuming that, to lowest approximation, the relation $J_{eff} \approx t'^2/U$ remains valid for the present model when $U > t'$. In support of this hypothesis, we have observed that the spin-spin correlation between the impurity and its nearest-neighbor site becomes more negative with decreasing $U$ at fixed $\epsilon$. By working with a larger $J_{eff}$ we could then expect to be better able to detect the nonmagnetic phase suggested by the results of the previous section.
For these reasons, in the following we adopt a moderate value of the on-site Coulomb repulsion, $U=4$, and study the properties of the model for variable $\epsilon$, i.e. following a vertical line in Fig. \[fig4\](a).

![(Color online) (a) $\Delta E =E(S^z_{max})-E(S^z_{min})$, as a function of $\epsilon$, for $U=4$, and various electron fillings $n$. Results for the $L=10$ chain except where otherwise stated. (b) Upper (solid line) and lower (dashed line) boundaries of the region where the ground state has $S = 0$ ($1/2$), as a function of $L$, for $U=4$ and $n=0.4$. (c) $\langle (S^z)^2 \rangle$ (normalized to 1) at the impurity site (full symbols) and total spin normalized to $S_{max}$ (open symbols), as a function of $\epsilon$, for $U=4$. Results for $L=11$, $n=0.364$ (circles), $L=19$, $n=0.421$ (diamonds), and $L=20$, $n=0.4$ (triangles).[]{data-label="fig5"}](fig5.eps){width="43.00000%"}

The electron occupancy at the QD, shown in Fig. \[fig4\](b) for $L=10$, now presents three regions where $n_{QD}$ is approximately 0, 1, and 2 as $\epsilon$ decreases. The crossovers among these regions are located near the variational estimates $\epsilon^*_{0,1} \approx -J_H/4$ and $\epsilon^*_{1,2} \approx -J_H/4 -U$, which are shown in Fig. \[fig4\](a). As can be seen in Fig. \[fig4\](c), the conductance $G$, consistently with the results shown in Figs. \[fig2\](d) and \[fig3\](c) at the symmetric point, presents sharp peaks at the crossovers between regions with different $n_{QD}$, i.e. where $n_{QD}\approx 0.5$ or $1.5$. Notice that between the peaks $G$ has a larger value for $n=0.4$ than for $n=0.3$, where it is essentially zero. Figure \[fig5\] contains the most important result of our work. In Fig. \[fig5\](a) we plot $\Delta E =E(S^z_{max})-E(S^z_{min})$, where $S_{min}=S^z_{min}=0$ ($1/2$) for even (odd) $n_e$, as a function of $\epsilon$ in the $L=10$ chain.
It can be clearly seen that the ground-state $S$ is smaller than $S_{max}$ for the various densities considered when $\epsilon \leq \epsilon^*_{0,1}$. In this case, $\Delta E$ is quite large, and for $L=20$ we were able to obtain results very close to those for $L=10$, as shown for $n=0.4$, suggesting that this feature is at least not an artifact of the small chain. More interesting is the fact that inside the $n_{QD}=1$ region there is an interval in $\epsilon$, which depends on the density, where $S=S_{min}$, as shown in Fig. \[fig5\](b) for $n=0.4$ as a function of $1/L$. This state appears between two saturated FM states: $S=S_{max}-1$ below the lower boundary line, because of the double occupancy of the QD, and $S=S_{max}$ above the upper boundary line, where the QD is empty. The error bars quoted in the plot correspond to the grid adopted in the $\epsilon$ axis. For the largest chain studied, $L=30$, we have only computed the energies in the $S^z=S_{max}$, $S_{max}-1$, $S_{max}-2$ and $S_{max}-3$ subspaces. The decreasing precision as $L$ increases prevents us from taking a finer grid close to the crossover between different regions, and hence the error bars are larger. An extrapolation to the bulk limit would not be reliable with these error bars. For $L=10$, $n=0.3$, the region in $\epsilon$ where $S=1/2$ shrinks to $[-9.3,-8.6]$. It is tempting to relate this smaller interval for $n=0.3$, with respect to the one for $n=0.4$, to the behavior of $G$ noticed above, but further analysis would be needed to confirm this possibility. In Fig. \[fig5\](c) we show that the presence of a state with $S=S_{min}$ corresponds to a strong magnetic character of the impurity, with $\langle (S^z)^2 \rangle$ approaching its maximum value of one (in the normalization adopted). This feature has been observed for all lattices, densities, and values of $U$ studied. The intervals in $\epsilon$ where $S=S_{min}$ for various values of $U$, $n=0.4$, $L=10$, are shown in Fig. \[fig4\](a) with thick lines.
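Although the text concludes that a bulk-limit extrapolation is not reliable with the present error bars, the standard procedure would be a linear fit of the phase boundary in $1/L$; the following sketch uses invented sample numbers, not data from the paper:

```python
import numpy as np

# Hypothetical boundary values eps*(L) at several chain lengths; the
# numbers below are invented for illustration only.
Ls  = np.array([10, 12, 16, 20, 30])
eps = np.array([-8.9, -8.7, -8.5, -8.4, -8.3])

x = 1.0 / Ls
slope, intercept = np.polyfit(x, eps, 1)   # linear fit in 1/L
print(f"bulk-limit estimate (1/L -> 0): {intercept:.3f}")
```

The intercept at $1/L\to 0$ is the bulk-limit estimate; with error bars as large as those discussed above, the fitted intercept would carry an uncertainty comparable to the width of the interval itself, which is why no extrapolation is attempted in the text.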
![(Color online) Spin-spin correlations from the impurity site along the conduction chain for several values of $\epsilon$ in the regions where (a) $S=S_{max}$, (b) $S=0$, and (c) $S=S_{max}-1$. The normalization $\langle S^z_0 S^z_0 \rangle =1$ was adopted. (d) Static structure factor along the conduction chain for various values of $\epsilon$. Results for the $L=11$ chain, PBC, $U=4$, $n=0.364$, except where otherwise stated. Results for $L=19$ correspond to $n=0.421$, and those for $L=25$ to $n=0.4$. []{data-label="fig6"}](fig6.eps){width="43.00000%"}

The behavior of the magnetic character of the impurity affects in turn all the magnetic properties of the system. Let us first examine the spin-spin correlations between the impurity site and the remaining sites along the conduction chain. Results for the $L=11$ chain with periodic boundary conditions (PBC), $n=0.364$, are shown in Figs. \[fig6\](a), (b) and (c) for the three regions where the spin of the ground state is $S_{max}$, 0 and $S_{max}-1$, respectively. We also include in this figure results for $L=19$, $n=0.421$, and $L=25$, $n=0.4$, OBC, which show the same behavior. It should be noticed that, in addition to the expected larger magnitude of these correlations in the $S=0$ region with respect to the other two regions, there are also qualitative changes. To detect these qualitative differences we have computed the static structure factor along the conduction chain: $$\begin{aligned} \chi(q) = \frac{1}{L} \sum_{l,j} \langle S^z_l S^z_j\rangle e^{i q (l-j)} \label{formfact}\end{aligned}$$ where $j,l$ label the conduction-chain sites, and $q=(2\pi/L)m$, $m=0,\ldots,L-1$. Results in which site $j$ is restricted to the impurity site, that is, just the Fourier transform of the correlations shown in Figs. \[fig6\](a), (b) and (c), are essentially the same as those obtained using Eq. (\[formfact\]). As can be seen in Fig.
\[fig6\](d), $\chi(q)$ has the typical shape of FM order in the subspace of total $S^z=0$ in the regions with $S=S_{max}$ and $S_{max}-1$, while it presents a peak at the smallest nonzero momentum in the $S=0$ region. As observed earlier, in Fig. \[fig3\](d), there are no traces of AF order in this region. Further information about the changes in magnetic properties caused by the impurity can be obtained from the dynamical impurity susceptibility, defined as: $$\begin{aligned} S_{imp}(\omega) = \sum_{n} |\langle \Psi_n | S^z_{imp} | \Psi_0 \rangle |^2 \delta(\omega -(E_n-E_0)) \label{dyn-rs}\end{aligned}$$ where the notation is standard. Since the contribution from the remaining sites on the conduction chain is negligible, $S_{imp}(\omega)$ is essentially equal to the total dynamical susceptibility once the contribution from the localized spins has been subtracted. In all results below, the peaks have been broadened with a width $\delta=0.1$.

![(Color online) Dynamical impurity susceptibility $S_{imp}(\omega)$ vs $\omega$ for $U=4$, $n=0.4$, and values of $\epsilon$ in the region where (a) $S=S_{max}$, (b) $S=0$, and (c) $S=S_{max}-1$. Energies of the dominant peaks in the dynamical structure factor $\chi(q,\omega)$, (d) in the high- and (e) in the low-$\omega$ regions as defined in the text, for various values of $\epsilon$. The size of the symbols is proportional to the intensity of each peak. Results for the $L=11$ chain except where otherwise stated. []{data-label="fig7"}](fig7.eps){width="43.00000%"}

In Figs. \[fig7\](a), (b) and (c) we show $S_{imp}(\omega)$ in the low-$\omega$ part of the spectrum for various values of $\epsilon$ in the regions $S=S_{max}$, 0 and $S_{max}-1$, respectively, for $L=11$, $n=0.4$ and $L=19$, $n=0.421$. Again, in addition to the expected difference in amplitude, a clear qualitative difference in behavior is noticeable.
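Quantities of the form of Eq. (\[dyn-rs\]) obtained from full ED can be evaluated directly from the eigenpairs, with each $\delta$ function replaced by a Lorentzian of width $\delta=0.1$ as stated above; a generic sketch, for any Hermitian operator `O` given in matrix form:

```python
import numpy as np

def dynamical_response(evals, evecs, O, delta=0.1, omegas=None):
    """S(w) = sum_n |<n|O|0>|^2 delta(w - (E_n - E_0)), with each delta
    function broadened into a Lorentzian of half-width `delta`.
    `evals`, `evecs`: full-ED eigenpairs (eigenvectors as columns)."""
    E0, psi0 = evals[0], evecs[:, 0]
    weights = np.abs(evecs.conj().T @ (O @ psi0)) ** 2
    poles = evals - E0
    if omegas is None:
        omegas = np.linspace(0.0, poles.max() + 1.0, 400)
    lor = (delta / np.pi) / ((omegas[:, None] - poles[None, :]) ** 2 + delta ** 2)
    return omegas, (weights[None, :] * lor).sum(axis=1)
```

This full-spectrum evaluation is only feasible for small Hilbert spaces; for larger systems the continued-fraction (Lanczos) formalism mentioned in the text yields the same broadened spectra without computing all eigenpairs.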
In this part of the spectrum, $S_{imp}(\omega)$ presents small peaks at finite $\omega$ in the $S=S_{max}$ and $S_{max}-1$ regions, while in the $S=0$ region it shows a strong peak close to $\omega=0$. Results for $L=25$, $n=0.4$, $\epsilon=-8$ are indistinguishable from those for $L=19$. The distinct behavior caused by the impurity can also be detected in the dynamical structure factor on the conduction chain, defined as: $$\begin{aligned} \chi(q,\omega) = \sum_{n} |\langle \Psi_n | S^z_{q} | \Psi_0 \rangle |^2 \delta(\omega -(E_n-E_0)) \label{dyn-mom}\end{aligned}$$ where $S^z_{q}=(1/L)\sum_j S^z_j \exp{(ijq)}$, and the sum over $j$ extends over the conduction chain. Figures \[fig7\](d) and (e) show the energy and intensity of the centroids of the main peaks in the high- and low-$\omega$ parts of the spectrum for values of $\epsilon$ in the three regions of total spin $S$ discussed above. In the $S=S_{max}$ and $S_{max}-1$ regions the peaks with the largest weight are those with energy close to $J_H$, which correspond to the magnon excitation. This behavior is strikingly different from that in the $S=0$ region, where the peaks with the largest weight form a dispersionless band at the bottom of the spectrum. This band is reminiscent of the one found in gapped spin systems upon doping with nonmagnetic impurities.[@martins] In the present case, these low-energy peaks may correspond to magnetic excitations living in a “cloud" surrounding the impurity, which can be observed in real space in Fig. \[fig6\](b).

Discussion and conclusions {#conclusions}
==========================

We have applied both well-established and recently developed numerical approaches to study the FKLM with an Anderson impurity in the FM phase on finite chains. We found that the magnetic or nonmagnetic character of the impurity is determined by a relationship between the impurity parameters and the Hund’s-rule exchange coupling of the manganite.
As expected, transport occurs at the crossovers between the empty, half-filled and filled QD regions. The most important result of this article is the presence of an intermediate singlet phase between the fully saturated phases with $S_{max}$ (empty impurity) and $S_{max}-1$ (doubly occupied impurity). This problem bears some resemblance to the problem of the existence of an intermediate phase between two ordered states in the frustrated Heisenberg model on the square lattice. That intermediate phase had been predicted by ED studies on small clusters.[@j1j2inter] The alternative view was a first-order quantum phase transition between the two ordered phases. Only recently has this controversy been settled in favour of the existence of the intermediate state,[@j1j2fin] but the nature of this phase is still a subject of active research.[@j1j2nuevo] Of course the physics involved in the two problems is completely different, but, by analogy, in our problem we could consider the possibility of a first-order transition between two FM states instead of the proposed intermediate nonmagnetic phase. Comparing the models involved in these two problems, the FKLM is much more difficult to analyze by numerical techniques than the frustrated Heisenberg model, since the size of the Hilbert space is much larger for a given cluster, in addition to the convergence problems discussed in Section \[model\]. These difficulties prevent us from performing an extrapolation to decide whether this intermediate phase really exists in the bulk limit or whether it is just how the transition between the $S_{max}$ and $S_{max}-1$ phases manifests itself in finite systems. In any case, the possible existence of this intermediate state is in principle interesting and important, and it deserves further study. In this sense, there are three issues that should be considered.
The first is that this intermediate phase could be stabilized by some modifications of the model, for example by including a Heisenberg interaction between the localized spins, or by replacing the spin-1/2 localized spins with higher-spin ones, which are also more realistic for manganites. The second issue is that even if this intermediate phase has a finite range around an impurity, a [*finite density*]{} of impurities could lead to a macroscopic feature. The situation here is analogous to the presence of nonmagnetic impurities in the gapped systems mentioned above.[@martins] In those systems a single impurity locally attracts a spinon, and a finite density of impurities drives the system to long-range AF order. These two issues are currently under study.[@largo] The third issue we would like to consider is related to the relevance of the present model to devices of [*finite*]{} dimensions. In such mesoscopic systems, as discussed in introductory textbooks,[@datta] many physical properties differ, due to the finite size, from those found in bulk systems. It is therefore relevant for these devices to capture short-range effects. Finally, we would like to provide a qualitative scenario to help in understanding this nonmagnetic state. Let us assume that the system is in a low-$S^z$ state. In the region where the impurity is empty or doubly occupied, the two “leads" are relatively disconnected, and each would be in a ferromagnetic state, with the spins of one lead polarized in one direction and those of the other in the opposite direction. Of course this state is degenerate with the one with reversed polarizations. Now, when the impurity is singly occupied, not only would it have a definite magnetic character, but it would also allow one electron to cross from one lead to the other, where it would then have a “wrong" spin.
Such magnons would then decrease the total energy, both through the gain in kinetic energy and through the decrease in magnetic energy due to an effective AF interaction with the impurity. Of course, this gain in energy would not occur for the fully polarized system, so it would drive the system to lower $S$ and presumably to $S_{min}$.[@note3] It is interesting to notice that this scenario would then imply an enhancement of transport in the system, which could be relevant for the devices mentioned in the Introduction. In summary, we present a prediction for the magnetic state of the FKLM doped with a magnetic impurity. This prediction could be experimentally verified in Cu-doped manganite nanotubes. These results could also in principle be reproduced experimentally in spin valves where manganites are used as ferromagnetic leads. We hope the present results will encourage theoretical studies to further characterize this proposed intermediate phase and to explore its presence in more realistic models for manganites.

We thank E. Dagotto, A. Dobry, C. J. Gazza, M. E. Torio, and S. Yunoki for useful discussions.

- M. B. Salamon and M. Jaime, Rev. Mod. Phys. [**73**]{}, 583 (2001); T. Kaplan and S. Mahanti (eds.), [*Physics of Manganites*]{} (Kluwer Academic/Plenum Publishers, New York, 1999).
- E. Dagotto, T. Hotta, and A. Moreo, Phys. Rep. [**344**]{}, 1 (2001).
- A. P. Ramirez, J. Phys.: Condens. Matter [**9**]{}, 8171 (1997), and references therein.
- I. Žutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. [**76**]{}, 323 (2004).
- M. Bibes and A. Barthélémy, IEEE Trans. Electron Devices [**54**]{}, 1003 (2007).
- R. Hanson, L. P. Kouwenhoven, J. R. Petta, S. Tarucha, and L. M. K. Vandersypen, Rev. Mod. Phys. [**79**]{}, 1217 (2007).
- A. N. Pasupathy, R. C. Bialczak, J. Martinek, J. E. Grose, L. A. K. Donev, P. L. McEuen, and D. C. Ralph, Science [**306**]{}, 86 (2004).
- J. Martinek, M. Sindel, L. Borda, J. Barnas, J. König, G. Schön, and J. von Delft, Phys. Rev. Lett. [**91**]{}, 247202 (2003).
- C. J. Gazza, M. E. Torio, and J. A. Riera, Phys. Rev. B [**73**]{}, 193108 (2006).
- L. E. Hueso, J. M. Pruneda, V. Ferrari, G. Burnell, J. P. Valdes-Herrera, B. D. Simons, P. B. Littlewood, E. Artacho, A. Fert, and N. D. Mathur, Nature [**445**]{}, 410 (2007).
- A. Cottet, T. Kontos, S. Sahoo, H. T. Man, M.-S. Choi, W. Belzig, C. Bruder, A. Morpurgo, and C. Schoenenberger, Semicond. Sci. Technol. [**21**]{}, S78 (2006).
- C. H. Ahn, A. Bhattacharya, M. Di Ventra, J. N. Eckstein, C. D. Frisbie, M. E. Gershenson, A. M. Goldman, I. H. Inoue, J. Mannhart, A. J. Millis, A. F. Morpurgo, D. Natelson, and J.-M. Triscone, Rev. Mod. Phys. [**78**]{}, 1185 (2006).
- Cr-doped manganites have already been studied, but the Cr ions are not described as an Anderson impurity. See, e.g., T. Kimura, Y. Tokura, R. Kumai, Y. Okimoto, and Y. Tomioka, J. Appl. Phys. [**89**]{}, 6857 (2001).
- G. B. Martins, E. Dagotto, and J. A. Riera, Phys. Rev. B [**54**]{}, 16032 (1996); G. B. Martins, M. Laukamp, J. Riera, and E. Dagotto, Phys. Rev. Lett. [**78**]{}, 3563 (1997).
- A. C. Hewson, [*The Kondo Problem to Heavy Fermions*]{} (Cambridge University Press, Cambridge, UK, 1993).
- J. Riera, K. Hallberg, and E. Dagotto, Phys. Rev. Lett. [**79**]{}, 713 (1997).
- E. Dagotto, S. Yunoki, A. L. Malvezzi, A. Moreo, J. Hu, S. Capponi, D. Poilblanc, and N. Furukawa, Phys. Rev. B [**58**]{}, 6414 (1998).
- U. Schollwöck, Rev. Mod. Phys. [**77**]{}, 259 (2005).
- D. J. García, K. Hallberg, B. Alascio, and M. Avignon, Phys. Rev. Lett. [**93**]{}, 177204 (2004).
- K. A. Al-Hassanieh, A. E. Feiguin, J. A. Riera, C. A. Büsser, and E. Dagotto, Phys. Rev. B [**73**]{}, 195304 (2006).
- P. Schmitteckert, Phys. Rev. B [**70**]{}, 121302 (2004).
- U. Schollwöck, J. Phys. Soc. Jpn. [**74**]{} (Suppl.), 246 (2005), and references therein.
- Y. Meir and N. S. Wingreen, Phys. Rev. Lett. [**68**]{}, 2512 (1992); N. S. Wingreen, A.-P. Jauho, and Y. Meir, Phys. Rev. B [**48**]{}, 8487 (1993).
- S. Datta, [*Electronic Transport in Mesoscopic Systems*]{} (Cambridge University Press, Cambridge, 1995).
- M. A. Cazalilla and J. B. Marston, Phys. Rev. Lett. [**88**]{}, 256403 (2002).
- S. Costamagna, C. J. Gazza, M. E. Torio, and J. A. Riera, Phys. Rev. B [**74**]{}, 195103 (2006).
- S. R. Manmana, A. Muramatsu, and R. M. Noack, AIP Conf. Proc. [**789**]{}, 269 (2005).
- For chains with $L$ odd, we obtain that the conductance maximum is located at values of $\epsilon$ lower (higher) than $\epsilon^*$ for an even (odd) number of conduction electrons. However, the results for $L$ odd converge to those for $L$ even as the chain length is increased.
- J. E. Hirsch and S. Tang, Phys. Rev. B [**39**]{}, 2887 (1989); M. P. Gelfand, R. R. P. Singh, and D. A. Huse, Phys. Rev. B [**40**]{}, 10801 (1989).
- R. R. P. Singh, Z. Weihong, C. J. Hamer, and J. Oitmaa, Phys. Rev. B [**60**]{}, 7278 (1999); S. Kurata, C. Sasaki, and K. Kawasaki, Phys. Rev. B [**63**]{}, 024412 (2000).
- V. Lante and A. Parola, Phys. Rev. B [**73**]{}, 094427 (2006).
- S. Costamagna and J. A. Riera, in preparation.
- Preliminary calculations show that the magnetic energy decreases more strongly than the kinetic energy increases as $S^z$ is reduced.
--- abstract: |
At the CHEP03 conference we launched the Physics Analysis eXpert (PAX), a C++ toolkit released for use in advanced high energy physics (HEP) analyses. This toolkit allows one to define a level of abstraction beyond detector reconstruction by providing a general, persistent container model for HEP events. Physics objects such as particles, vertices and collisions can easily be stored, accessed and manipulated. Bookkeeping of relations between these objects (decay trees, vertex and collision separation, etc.), including deep copies, is fully provided by the relation management. The event container and its associated objects represent a uniform interface for algorithms and facilitate the parallel development and evaluation of different physics interpretations of individual events. So-called analysis factories, which actively identify and distinguish different physics processes and study systematic uncertainties, can easily be realized with the PAX toolkit. PAX is officially released to the experiments at the Tevatron and the LHC. Being explored by a growing user community, it is applied in a number of complex physics analyses, two of which are presented here. We report its successful application in studies of $t\bar t$ production at the Tevatron and Higgs searches in the channel $t \bar tH$ at the LHC, and give a short outlook on further developments.
author: - | Steffen Kappler, Martin Erdmann, Ulrich Felzmann, Dominic Hirschbühl,\ Matthias Kirsch, Günter Quast, Alexander Schmidt and Joanna Weng[^1][^2][^3][^4]
title: | The PAX Toolkit and its Applications\ at Tevatron and LHC
---

particle physics analysis, reconstruction of complex events, event container model, C++ toolkit

Introduction
============

Analyses at modern collider experiments enter a new dimension of event complexity. At the LHC, for instance, physics events will consist of the final-state products of the $\mathrm{O}(20)$ collisions taking place during each readout cycle.
In addition, a number of physics questions are studied in channels with complex event topologies and configuration ambiguities occurring during event analysis.

![a) Associated Higgs production in the channel ${t\bar tH}$ with ${H{\rightarrow}b\bar b}$ and $t\bar t {\rightarrow}WW\,b\bar b {\rightarrow}qq'\, \mu \bar \nu_\mu\, b\bar b$.   b) The visible reconstructed partons of this channel.[]{data-label="ttHFeyn.eps"}](ttHFeyn.eps){width="3.25in"}

One item in the long list of examples is a channel of $t$-quark associated Higgs production, ${t\bar tH}$ with ${H{\rightarrow}b\bar b}$ (see [Fig.\[ttHFeyn.eps\]]{}.a). The event topology of four $b$-jets, two light-quark jets, an isolated muon, missing energy and possible additional jets from initial-state radiation (ISR) and final-state radiation (FSR) imposes the highest demands on detectors and reconstruction algorithms. In addition, non-trivial ambiguities must be resolved during event analysis. Even if all final-state products could be reconstructed perfectly (as illustrated in [Fig.\[ttHFeyn.eps\]]{}.b) and no ISR or FSR effects occurred, at least 24 different configurations would be possible. Finite jet resolutions, the limited efficiency and purity of the $b$-tagging, as well as the presence of additional jets complicate ambiguity resolution and signal identification. This task can be approached with a likelihood method based on characteristic event variables, in which each possible event configuration is developed individually and rated with the likelihood function; the most probable of all interpretations is finally selected. Such an approach can be implemented with object-oriented code and suggests the use of a class collection that provides event containers for the reconstructed objects (muons, jets, missing energy, vertices, collisions, etc.) and handles relations between the individual objects (for instance, vertex relations for particle decays).
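The likelihood-based ambiguity resolution sketched above can be illustrated, independently of PAX, with a toy that enumerates which jet pair forms the hadronic $W$ and keeps the most probable assignment; the single Gaussian likelihood term and its parameters are invented for this sketch (a full analysis would combine many such terms, including $b$-tagging information):

```python
import itertools
import numpy as np

# Toy illustration (not PAX code): rank jet-parton assignments of a
# ttH-like final state by a Gaussian likelihood term on the dijet mass.
M_W, S_W = 80.4, 10.0      # GeV: assumed mean and resolution of the W term

def inv_mass(p4s):
    """Invariant mass of a set of four-vectors given as (E, px, py, pz)."""
    e, px, py, pz = np.sum(p4s, axis=0)
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def best_interpretation(jets):
    """Enumerate all jet pairs as hadronic-W candidates and return the
    most probable configuration with its log-likelihood."""
    best = (None, -np.inf)
    for i, j in itertools.combinations(range(len(jets)), 2):
        m = inv_mass([jets[i], jets[j]])
        logL = -0.5 * ((m - M_W) / S_W) ** 2   # one likelihood term
        if logL > best[1]:
            best = ((i, j), logL)
    return best
```

Developing every configuration in parallel and selecting the best-rated one is exactly the pattern that motivates automated deep copies of event containers in the toolkit described below.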
Due to the large number of ambiguities occurring during the reconstruction of ${t\bar tH}$ events, these classes are required to offer automated copy functionality for containers, objects and corresponding relations. The application of a *generalized event container* comes with a number of desirable side-effects. If used to define an abstraction interface between the output of event generator, simulation or reconstruction software and the physics analysis code, the latter is protected from changes in the underlying software packages to a large extent. This reduces code maintenance and increases code lucidity. In addition, unnecessary duplication of the analysis code can be avoided: the influence of detector effects (studied by direct comparison of the results at generator, simulation and real-data level) can be investigated ad hoc, i.e. with the same analysis source code. Analysis factories, in which a number of analyses are executed at the same runtime, identifying and distinguishing different physics processes or studying systematic uncertainties, can easily be realized when using common physics objects and a common event container model in each of the analyses. Analysis environments based on a well-defined, generalized event container also provide a basis for efficient team work. Collaboration in (and supervision of) groups of students is facilitated, and knowledge transfer between subsequent generations of analysts as well as between different experiments is fostered. In this article, we present the Physics Analysis eXpert (PAX), a C++ toolkit for particle physics analysis that provides such a generalized event container together with various built-on functionalities. The PAX class structure ======================= The PAX kernel, introduced in the year 2002 [@PAX02] and released at the CHEP03 conference in 2003 [@PAX03], is currently available as version 2.00. 
For the convenience of connecting to existing software packages, PAX is realized in the C++ programming language [@CPPSTL]. It provides additional functionality on top of the vector algebra of the widely used libraries CLHEP [@CLHEP] or ROOT [@ROOT].[^5] The PAX container model as well as file I/O are based on the C++ Standard Template Library (STL) [@CPPSTL]. The PAX toolkit provides three types of generalized physics objects: - [particles (or reconstructed objects), i.e. Lorentz-vectors, represented by the class [[*PaxFourVector*]{}]{},]{} - [vertices, i.e. three-vectors, represented by the class [[*PaxVertex*]{}]{},]{} - [and collisions, represented by the class [[*PaxCollision*]{}]{}.]{} These objects are able to establish relations, and can be stored and managed in event containers, represented by the [[*PaxEventInterpret*]{}]{} class. Physics objects --------------- ![The [[*PaxFourVector*]{}]{} class extends the basic functionalities of the [[*PaxLorentzVector*]{}]{} in order to represent particles in HEP decays.[]{data-label="PaxFourVector.eps"}](PaxFourVector.eps){width="3.25in"} ![The [[*PaxVertex*]{}]{} class extends the basic functionalities of the [[*PaxThreeVector*]{}]{} in order to represent vertices in HEP particle decays.[]{data-label="PaxVertex.eps"}](PaxVertex.eps){width="3.25in"} ![The [[*PaxCollision*]{}]{} class represents collisions in bunch crossings at high luminosity colliders. Besides storage of general properties, the [[*PaxCollision*]{}]{} allows the user to establish and manage relations to [[*PaxVertex*]{}]{} and [[*PaxFourVector*]{}]{} objects.[]{data-label="PaxCollision.eps"}](PaxCollision.eps){width="3.25in"} The [[*PaxFourVector*]{}]{} class (see [Fig.\[PaxFourVector.eps\]]{}) represents particles or reconstructed objects (such as muons, electrons, missing energy, jets etc.). It inherits its basic Lorentz-vector characteristics from the well-known libraries CLHEP or ROOT. 
Commonly needed additional properties such as particle-id, status, charge etc. can be stored in data members. Specific information (such as b-tags, jet cone sizes or energy corrections, for instance) can be stored in the so-called user records. User records are collections of string-double pairs, meant to hold object information complementary to data members. All PAX physics objects own user records (instances of the class [[*PaxUserRecord*]{}]{}) and provide methods for quick access to individual user record entries. Each instance of a PAX physics object carries a unique integer key (the so-called [[*PaxId*]{}]{}) and a string name (the so-called [[*PaxName*]{}]{}). An integer workflag facilitates tagging of individual objects. Print methods are provided to allow monitoring of object state and established relations on various verbosity levels. Copy constructors are provided to perform deep copies of PAX physics objects. The [[*PaxVertex*]{}]{} class, sketched in [Fig.\[PaxVertex.eps\]]{}, represents the spatial point of decays in particle reactions. Thus, in analogy with the [[*PaxFourVector*]{}]{}, it obtains its basic three-vector characteristics also from the CLHEP or ROOT package. The [[*PaxCollision*]{}]{} class (see [Fig.\[PaxCollision.eps\]]{}) allows the separation of collisions in multicollision events, as they occur at high-rate hadron colliders. It provides the relation management necessary to associate [[*PaxVertex*]{}]{} and [[*PaxFourVector*]{}]{} objects with different collisions in the event. Access to primordial C++ classes {#sectionExpClassRel} -------------------------------- Each PAX physics object can record pointers to an arbitrary number of instances of arbitrary C++ classes. This way, the user can keep track of the data origin within the detector reconstruction software, for instance. Access to the pointers is possible at the same runtime during any later stage of the analysis. 
A typical use case is the need to re-fit a track, which requires access to the hits in the tracking chamber. The PAX object that represents this track, i.e. a [[*PaxFourVector*]{}]{} instance, provides the two template methods [*addPointer$<$Type$>$(name, ID, pointer)*]{} and [*findPointer$<$Type$>$(name, ID)*]{}. The argument [*name*]{} is supposed to correspond to the C++ class name, e.g. [*Type*]{}, the argument [*ID*]{} is a unique integer identifier for the referenced instance of the C++ class [*Type*]{}, and the third argument is a pointer to this instance. The underlying mechanism is sketched in [Fig.\[PaxExperimentClass.eps\]]{}. The class template [*PaxExperiment$<$Type$>$*]{} provides storage, access and cloning of the pointer of type [*Type*]{}. Its base class [[*PaxExperimentClass*]{}]{} is used as the interface to the PAX classes, which are enabled to store and access the pointer through the C++ `dynamic_cast` operator. When copying a PAX physics object, all pointers are copied as well by making use of the [*clone()*]{} method. ![The classes [[*PaxExperimentClass*]{}]{} and [*PaxExperiment$<$Type$>$*]{} provide recording of arbitrary pointers with PAX objects.[]{data-label="PaxExperimentClass.eps"}](PaxExperimentClass.eps){width="3.25in"} Relation management ------------------- The principal duty of the PAX relation management is handling of decay trees. The manager is based on the Mediator design pattern, described in detail in reference [@Mediator]. In this design all relations are kept locally (i.e. every object knows about all of its directly related objects), so that global relation directories can be avoided. 
![The PAX classes for relation management inherit from the class [[*PaxRelationManager*]{}]{}.[]{data-label="PaxRelMgr.eps"}](PaxRelMgr.eps){width="3.25in"} For PAX physics objects, this means that each [[*PaxCollision*]{}]{} object owns relation managers (see [Fig.\[PaxRelMgr.eps\]]{}) that carry pointers to the related [[*PaxVertex*]{}]{} and [[*PaxFourVector*]{}]{} objects. At the same time, the [[*PaxVertex*]{}]{} objects hold pointers to their related [[*PaxCollision*]{}]{}s as well as to their incoming and outgoing [[*PaxFourVector*]{}]{}s. By the same token, [[*PaxFourVector*]{}]{}s know about their related [[*PaxCollision*]{}]{}s and about their begin and end [[*PaxVertex*]{}]{} objects. With this functionality, PAX allows the user to store complete multicollision events from parton to stable particle level, including four-momenta and spatial vertex information. In addition, the PAX relation management is used to record analysis histories: each object, which is copied via copy constructors, keeps pointers to its original instances. This way the user may always go back and ask for original properties of objects which might have changed during the development of the analysis. A powerful feature, implemented by means of the relation management, is the so-called locking mechanism. It is implemented to enable the user to exclude parts of decay trees from the analysis (e.g. excluding a lepton from a jet finding algorithm). If one particle or vertex is locked, all the objects down the decay tree (and the history) will be locked, too. Locking and unlocking are realized by setting or removing the lock-flag owned by each PAX physics object. Maps & object containers ------------------------ The PAX kernel provides the base classes [[*PaxMap$<$key, item$>$*]{}]{} and [[*PaxMultiMap$<$key, item$>$*]{}]{}, which inherit from the STL classes [*map$<$key, item$>$*]{} and [*multimap$<$key, item$>$*]{}, respectively. 
The explicit inheritance has been chosen so that existing STL objects and methods can be used with these PAX classes. This way, iteration over PAX maps can be performed by using either the PAX iterator classes ([[*PaxIterator*]{}]{}, [[*PaxMapIterator*]{}]{}, [[*PaxMultiMapIterator*]{}]{}) or the commonly known STL iterators. All PAX classes which serve as containers are based on the class [[*PaxMap*]{}]{} (see [Fig.\[PaxContainers.eps\]]{}). ![The PAX container classes inherit from the class [[*PaxMap*]{}]{}.[]{data-label="PaxContainers.eps"}](PaxContainers.eps){width="3.25in"} Event container --------------- ![The [[*PaxEventInterpret*]{}]{} class represents the generalized container for complete HEP events. It stores and handles multiple collisions, vertices and particles as well as event specific information in the user records.[]{data-label="PaxEventInterpret.eps"}](PaxEventInterpret.eps){width="3.25in"} The [[*PaxEventInterpret*]{}]{} class, illustrated in [Fig.\[PaxEventInterpret.eps\]]{}, is the generalized event container provided by PAX. By incorporating the previously described functionalities, it is capable of holding the complete information of one multicollision event with decay trees, spatial vertex information, four-momenta as well as additional reconstruction data in the user records. Physics objects (i.e. instances of the classes [[*PaxFourVector*]{}]{}, [[*PaxVertex*]{}]{} and [[*PaxCollision*]{}]{}) can be added or created with the [[*PaxEventInterpret*]{}]{}[*::add()*]{} and [[*PaxEventInterpret*]{}]{}[*::create()*]{} methods. Depending on the object type, a pair of [[*PaxId*]{}]{} and pointer to the individual object is stored in one of three maps ([[*PaxFourVectorMap*]{}]{}, [[*PaxVertexMap*]{}]{} or [[*PaxCollisionMap*]{}]{}). Access to these maps as well as direct access to the physics objects is provided via methods such as [[*PaxEventInterpret*]{}]{}[*::getFourVectors()*]{} and [[*PaxEventInterpret*]{}]{}[*::findFourVector()*]{}. 
When a [[*PaxEventInterpret*]{}]{} instance is deleted, all contained physics objects are deleted, too. The [[*PaxEventInterpret*]{}]{} class is so named because it is intended to represent a distinct interpretation of an event configuration (e.g. connecting particles to the decay tree according to one out of a number of hypotheses, applying different jet energy corrections, etc.). To facilitate the development of numerous parallel or subsequent event interpretations, the [[*PaxEventInterpret*]{}]{} class features a copy constructor, which provides a deep copy of the event container with all data members, physics objects, and their (redirected) relations. PAX file I/O {#sectionPaxIoFile} ------------ The PAX toolkit offers a file I/O scheme for persistent storage of the event container, based on STL streams. It allows the user to write the contents of [[*PaxEventInterpret*]{}]{} instances with all contained physics objects[^6] as well as their relations to PAX data files. When restoring the data from file, an empty [[*PaxEventInterpret*]{}]{} instance is filled with the stored data and objects and all object relations are reproduced. The PAX data file format provides multi-version and multi-platform compatibility. It is built from a hierarchy of binary data chunks: the top level unit is an event, which consists of an arbitrary number of event interpretations. The event interpretation chunk consists of data members, user records as well as chunks for each of the contained physics objects. Each chunk carries header information (one byte for unit type and four bytes for data amount information) and the actual binary data. This allows file structure checks and fast positioning. Therefore, the user can quickly skip arbitrary numbers of events in PAX data files without having to sequentially read and discard them. PAX also provides the possibility to write event units to strings (and to restore the [[*PaxEventInterpret*]{}]{} instances from those strings). 
This way, the user can store PAX objects to any data format supporting strings or binary data fields (like databases or experiment specific data formats). Accessories and interfaces -------------------------- As a complement to the PAX kernel, we released two accessory packages for reading standard event generator file formats. The [*PaxTuple*]{} package provides transfer of decay trees stored in the HEPEVT or ROOT Ntuple data formats into [[*PaxEventInterpret*]{}]{} containers. Accordingly, the [*PaxHepMC*]{} package gives access to HepMC files. In addition, interfaces developed and posted by PAX users that fill PAX objects with specific data of HEP experiments are available via the PAX web page [@PAXWWW]. Software development procedure ------------------------------ The PAX kernel and its officially supported accessories are coded and maintained by a core group of currently six developers at CERN and the Aachen and Karlsruhe universities. New developments and code modifications pass a certification procedure and are discussed and adopted in regular video meetings. As a guideline, new developments focus on aspects of performance improvement and on user feedback. New releases are to be backward compatible. Version management of the software project is handled with a web-browsable Concurrent Versions System (CVS) repository [@CVS][@PAXCVS]. Availability, documentation and support --------------------------------------- The continuously updated PAX web page [@PAXWWW] provides download of the various versions of the PAX kernel and accessories (based on the aforementioned web-browsable CVS repository). It also provides the PAX Users Guide [@PAXGuide], a comprehensive text documentation of the PAX toolkit, as well as class reference and fast navigator pages for download or online use. The web page also offers access to mailing lists, in which PAX users are informed about new developments and in which technical issues of PAX analyses can be discussed. 
How PAX physics analyses can be structured ========================================== ![One possible realization of a physics analysis with PAX; a dedicated, experiment-specific class for filling the PAX containers represents the interface between detector reconstruction software and PAX-based physics analysis. The PAX persistency scheme is used to store the data to PAX data files for later use.[]{data-label="PAXAnaI.eps"}](PAXAnaI.eps){width="3.25in"} ![a) Exchangeability of the filling class allows PAX physics analyses to be applied to various input sources, e.g. to Monte Carlo event generator data. b) The use of PAX data files allows fast analysis of the reconstruction data decoupled from the experiment-specific environment.[]{data-label="PAXAnaII.eps"}](PAXAnaII.eps){width="3.25in"} To exploit the features offered by the PAX toolkit, physics analyses might be realized, for instance, according to the example structure illustrated in [Fig.\[PAXAnaI.eps\]]{}. There, a dedicated, experiment-specific interface class for filling the PAX containers (i.e. [[*PaxEventInterpret*]{}]{} instances) represents the interface between detector reconstruction software and PAX-based physics analysis. Once all relevant information is filled, the analysis code is called, and the PAX objects (as obtained by the filling class or at any subsequent stage of the event analysis) can be stored persistently to PAX data files for later use. Analysis results might be further processed with help of the ROOT package. With an analysis consistently formulated with PAX objects, the filling class can be exchanged easily, and the identical analysis code can be applied, for instance, directly to the output of a Monte Carlo event generator or a fast simulation software, see [Fig.\[PAXAnaII.eps\]]{}.a. 
Furthermore, the use of PAX data files, which provide the distilled experimental event information, allows fast analysis of the reconstruction data decoupled from the experiment-specific software and data storage environment, see [Fig.\[PAXAnaII.eps\]]{}.b. Implementation of PAX into experiment specific software environments ==================================================================== PAX has been made available within the software environments of the experiments CDF, D0[^7] (both Tevatron) and CMS (LHC). Following the same principles, the integration of PAX into the latter is described as a general example. The PAX toolkit is provided by the CMS software environment as an external package [@PAXAFS], enabling the physicists inside the CMS collaboration to use PAX without having to care about installation or setup of the package. An extensive example analysis for the use of PAX with the detector reconstruction software ORCA [@ORCA] is included in the CMS CVS repository [@PAXExPaxAnalysis]. In this example, the (ambiguous) reconstruction of the partonic process of the decay $W{\rightarrow}\mu \bar \nu_\mu$ is carried out by using reconstructed muons and missing transverse energy. The missing information about the longitudinal component of the neutrino momentum is obtained with a $W$-mass constraint, which yields (up to) two analytical solutions, and thus two possible event interpretations. Subsequently, both interpretations are developed in two separate [[*PaxEventInterpret*]{}]{} instances, and a number of example histograms is filled. The class design of this example analysis is based on the structure described in the previous section, including interface classes for filling [[*PaxEventInterpret*]{}]{} containers with the reconstructed objects of ORCA. To facilitate the start-up for new PAX users, a tutorial video for this example plus supplementary material can be found in the CMS section of the PAX web page [@PAXTutorial]. 
PAX physics analyses for Tevatron and LHC ========================================= Provided for the software environments of the CDF, D0 and CMS experiments, PAX is being explored by a growing user community. In the following, two successful applications of PAX in complex physics analyses are presented. A PAX-based $t \bar t$ analysis for CDF --------------------------------------- ![The channel $t \bar t$ on parton level (a) and the visible reconstructed partons of this channel (b).[]{data-label="ttFeyn.eps"}](ttFeyn.eps){width="3.25in"} ![ Verification of the $t$-quark reconstruction in generated $t \bar t$ events. The full histograms show reconstructed properties of the event interpretation which reproduces the partonic $t \bar t$ state best. Further information results from the selection procedure using reconstructed quantities of the event only: the symbols represent the selected event interpretation, the dashed histogram summarizes the other possible interpretations. a) Reconstructed mass of the $t$-quark with a subsequent leptonic $W$-decay. b) Angular distribution of the $W$-boson in the rest frame of the $t$-quark. c) Angular distribution of the charged lepton in the rest frame of the $W$-boson. (For this study, the HERWIG Monte Carlo generator [@HERWIG] and CDF detector simulation [@CDFMC] have been used.)[]{data-label="ttResults.eps"}](ttResults.eps){width="3.25in"} In this section, an analysis of top-antitop-quark events ($t \bar t$ events) with the CDF experiment at the Tevatron is described [@DHDiss]. As illustrated in [Fig.\[ttFeyn.eps\]]{}, the electron-plus-jet decay channel poses combinatorial tasks similar to those of the aforementioned ${t\bar tH}$ channel. In this $t \bar t$ study, an analysis factory based on the PAX event interpretation concept is used to perform complete reconstruction of the partonic scattering process and to optimize the separation of signal and background processes. 
The partonic process of the decay $\bar t {\rightarrow}W \bar b {\rightarrow}e \bar \nu_e \bar b$ is reconstructed as follows. First, the W-boson decaying into electron and neutrino is reconstructed. From the W-mass constraint two possible solutions can be deduced for the longitudinal neutrino momentum. This results in two event interpretations for the W-boson. Combining each of those with one of the jets leads to the interpretations for the $t$-quark (with different kinematics and reconstructed masses). The remaining part of the process, i.e. $t {\rightarrow}Wb {\rightarrow}q\bar q'b$, is reconstructed from three of the remaining jets. Consequently, in a four jet $t\bar t$ event, 24 interpretations can be constructed. The most likely $t\bar t$ event interpretation is selected by first demanding non-zero b-probability for one of the jets of one of the $t$-quark candidates. Finally, one of these solutions is selected by evaluating the most likely event interpretation based on kinematic properties, the reconstructed mass of the W boson decaying to $q\bar q'$, and the mass difference of the two reconstructed $t$-quarks. The resulting example plots are shown in [Fig.\[ttResults.eps\]]{}. A PAX-based ${t\bar tH}$ analysis for CMS ----------------------------------------- ![Reconstructed Higgs mass in the channel ${t\bar tH}$ with ${H{\rightarrow}b\bar b}$ on generator (a) and full simulation level (b). The gray shaded area corresponds to the combinatorial background, i.e. to those events, in which a wrong ${H{\rightarrow}b\bar b}$ configuration was selected. 
(For this study, the PYTHIA Monte Carlo generator [@PYTHIA] and CMS detector simulation [@CMSMC] have been used.)[]{data-label="ttHResults.eps"}](ttHResults.eps){width="3.25in"} The channel of associated Higgs production, ${t\bar tH}$ with ${H{\rightarrow}b\bar b}$, by means of which the requirements on a particle physics analysis toolkit have been motivated in the introduction of this article, is studied in the CMS experiment at the LHC [@HiggsDiscovPot][@SKDiss], for instance. The most recent of these studies makes use of the PAX event interpretation concept to develop possible event interpretations in a manner similar to the one described in the previous CDF example. After development of all interpretations, a likelihood function is used to select the most probable one by rating the different configurations on the basis of kinematic variables and masses of the two $t$-quarks and their decay products. [Fig.\[ttHResults.eps\]]{} illustrates the performance of this method in simulations with and without detector effects. Note that [Fig.\[ttHResults.eps\]]{}.a and [Fig.\[ttHResults.eps\]]{}.b have been produced with identical analysis code, by simply exchanging the interface classes (compare [Fig.\[PAXAnaI.eps\]]{} and [Fig.\[PAXAnaII.eps\]]{}). In this way, a good measure for how detector and reconstruction methods influence the results can directly be obtained – with almost no analysis code duplication. Conclusions =========== The PAX toolkit is designed to assist physicists at modern collider experiments in the analysis of complex scattering processes. PAX provides a generalized HEP event container with three types of physics objects (particles, vertices and collisions), relation management and a file I/O scheme. The PAX event container is capable of storing the complete information of multicollision events (including decay trees with spatial vertex information, four-momenta as well as additional reconstruction data). 
An automated copy functionality for the event container allows the user to consistently duplicate event containers with physics objects and relations. The PAX file I/O scheme can be used to write (and read) complete event containers to (from) disk file; this offers an easy realization of distilled experiment data streams. By structuring physics analyses based on PAX objects, the identical source code can be applied to various data levels. This adds a desirable aspect of flexibility to the software-side of particle physics analysis. PAX is available within the software environments of experiments at Tevatron and LHC, where it is applied in a number of physics analyses. Two thereof are outlined in this article, demonstrating typical use cases and successful applications of the PAX toolkit. Evident advantages arising from the usage of the PAX toolkit are avoidance of code duplication, increased code lucidity, unified data model and nomenclature, and therefore more efficient team work in the complex physics analyses at modern HEP experiments. Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank Rene Brun, Anne-Sylvie Giolo-Nicollerat, Christopher Jung, Yves Kemp, Klaus Rabbertz, Jens Rehn, Sven Schalla, Patrick Schemitz, Thorsten Walter, and Christian Weiser for helpful contributions and feedback. [1]{} M. Erdmann et al., *Physics Analysis Expert*. Proceedings of the 14th Topical Conference on Hadron Collider Physics, HCP2002, Karlsruhe, Germany, 2002. M. Erdmann, D. Hirschbühl, C. Jung, S. Kappler, Y. Kemp, M. Kirsch et al., *Physics Analysis Expert PAX: First Applications*, physics/0306085, 2003. B. Stroustrup, *The C++ Programming Language*, Addison Wesley, ISBN 0-201-88954-2, 1997. Documentation online available: http://proj-clhep.web.cern.ch/proj-clhep/ R. Brun et al., *ROOT, an object oriented data analysis framework*, Proceedings of the 23rd CERN School of Computing, Marathon (Greece), 2000. E. 
Gamma et al., *Design Patterns*, Addison-Wesley, ISBN 0-201-63361-2, 1994. M. Erdmann, S. Kappler, M. Kirsch, A. Schmidt, *PAX – Physics Analysis eXpert*, online documentation and support: http://cern.ch/pax Documentation online available: http://www.cvshome.org/ CVS repository online available:\ http://isscvs.cern.ch/cgi-bin/viewcvs-all.cgi/?cvsroot=pax M. Erdmann, S. Kappler, M. Kirsch, A. Schmidt, *PAX Users Guide*,\ online available: http://cern.ch/pax Package available on AFS: //afs/cern.ch/cms/external/pax/ Documentation online available: http://cmsdoc.cern.ch/orca/ CVS repository online available: http://cmsdoc.cern.ch/swdev/viewcvs/ viewcvs.cgi/ORCA/Examples/ExPaxAnalysis/?cvsroot=ORCA M. Erdmann, S. Kappler, A. Schmidt, *PAX Tutorial*,\ online available: http://cern.ch/pax D. Hirschbühl, PhD thesis in preparation at Karlsruhe university. G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson et al., *HERWIG 6*, JHEP 01 (2001) 010, hep-ph/0011363, 2001. E. Gerchtein, M. Paulini, *CDF detector simulation framework and performance*, Proceedings of the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla (CA), USA, March 2003, physics/0306031, 2003. S. Abdullin, S. Banerjee, L. Bellucci, C. Charlot, D. Denegri, M. Dittmar et al., *Summary of the CMS Potential for the Higgs Boson Discovery*, CERN, CMS NOTE 2003/033, 2003. S. Kappler, *Higgs Search Studies in the Channel ${t\bar tH}$ with the CMS Detector at the LHC*, PhD thesis at Karlsruhe university, IEKP-KA 2004/17, part I, 2004. T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna, E. Norrbin, *High-Energy-Physics Event Generation with PYTHIA 6.1"*, hep-ph/0010017, 2001. Documentation online available: http://cmsdoc.cern.ch/cmsim/cmsim.html [^1]: Manuscript submitted to IEEE Trans. Nucl. Sci., November 15, 2004, revised July 22, 2005. [^2]: S. Kappler, M. Erdmann and M. Kirsch are with III. Physikalisches Institut A, RWTH Aachen, Germany. [^3]: U. 
Felzmann, D. Hirschbühl, G. Quast, A. Schmidt and J. Weng are with Institut für Experimentelle Kernphysik, Universität Karlsruhe (TH), Germany. [^4]: Contact: steffen.kappler@cern.ch [^5]: At compile-time, the user can choose between the vector algebra packages of CLHEP [@CLHEP] (default) or ROOT [@ROOT]. Depending on a compiler switch, the two type definitions [[*PaxLorentzVector*]{}]{} and [[*PaxThreeVector*]{}]{} are set to [*HepLorentzVector*]{} and [*Hep3Vector*]{} of CLHEP or to [*TLorentzVector*]{} and [*TVector3*]{} of ROOT. [^6]: For obvious reasons, pointers recorded with PAX physics objects by means of the [[*PaxExperimentClass*]{}]{}  functionality (as described in section \[sectionExpClassRel\]) are not stored to disk. [^7]: Interfaces to the D0 software are available as $\beta$-version since April 2005.
--- abstract: 'We report major advances in the research program initiated in “Moment-Based Evidence for Simple Rational-Valued Hilbert-Schmidt Generic $2 \times 2$ Separability Probabilities” ([*J. Phys. A*]{}, [**45**]{}, 095305 \[2012\]). A highly succinct separability probability function $P(\alpha)$ is put forth, yielding for generic (9-dimensional) two-rebit systems, $P(\frac{1}{2}) = \frac{29}{64}$, (15-dimensional) two-qubit systems, $P(1) = \frac{8}{33}$ and (27-dimensional) two-quater(nionic)bit systems, $P(2)=\frac{26}{323}$. This particular form of $P(\alpha)$ was obtained by Qing-Hu Hou by applying Zeilberger’s algorithm (“creative telescoping”) to a fully equivalent–but considerably more complicated–expression containing six $_{7}F_{6}$ hypergeometric functions (all with argument $\frac{27}{64} =(\frac{3}{4})^3$). That hypergeometric form itself had been obtained using systematic, high-accuracy probability-distribution-reconstruction computations. These employed 7,501 determinantal moments of partially transposed $4 \times 4$ density matrices, parameterized by $\alpha = \frac{1}{2}, 1, \frac{3}{2}, 2,\ldots,32$. From these computations, exact rational-valued separability probabilities were discernible. The (integral/half-integral) sequences of 32 rational values, then, served as input to the Mathematica FindSequenceFunction command, from which the initially obtained hypergeometric form of $P(\alpha)$ emerged.' author: - 'Paul B. Slater' bibliography: - 'Concise100.bib' title: 'A Concise Formula for Generalized Two-Qubit Hilbert-Schmidt Separability Probabilities' --- Introduction ============ Our study will be devoted to addressing the fundamental quantum-information-theoretic problem, first apparently, explicitly discussed by [Ż]{}yczkowski, Horodecki, Lewenstein and Sanpera (ZHSL) [@ZHSL] in their highly-cited 1998 paper, “Volume of the set of separable states” [@ZHSL]. 
They gave “three main reasons of importance”–philosophical, practical and physical–for examining such problems (cf. [@singh]). Specifically, we will address the problem raised in [@ZHSL] of what proportion (that is, “separability probability”) of quantum states are separable/disentangled [@RFWerner]. We endow the (generalized two-qubit) states, to which we confine our attention here, with the Hilbert-Schmidt (Euclidean/flat) metric and its accompanying measure [@szHS; @ingemarkarol]. It is certainly also of interest to study the problem posed by ZHSL in alternative–but perhaps analytically even more challenging–settings, in particular that of the Bures (minimal monotone) metric/measure [@szBures; @ingemarkarol; @slaterC; @slaterJGP; @osipov; @ye; @BuresHilbert]. We do report an apparent resolution of the ZHSL separability-probability problem in the generalized two-qubit Hilbert-Schmidt context, in terms of the titular “concise formula”, which we will denote by $P(\alpha)$. Though we still lack a fully rigorous argument for its validity, the formula strongly appears to fulfill the indicated role, while manifesting important mathematical (random matrix theory [@dumitriu; @tomsovic],…) and physical (quantum entanglement [@tomsovic; @ZHSL; @ingemarkarol]) properties. 
Thus, we have $$\label{Hou1} P(\alpha) =\sum_{i=0}^\infty f(\alpha+i),$$ where $$\label{Hou2} f(\alpha) = P(\alpha)-P(\alpha +1) = \frac{ q(\alpha) 2^{-4 \alpha -6} \Gamma{(3 \alpha +\frac{5}{2})} \Gamma{(5 \alpha +2)}}{3 \Gamma{(\alpha +1)} \Gamma{(2 \alpha +3)} \Gamma{(5 \alpha +\frac{13}{2})}},$$ and $$\label{Hou3} q(\alpha) = 185000 \alpha ^5+779750 \alpha ^4+1289125 \alpha ^3+1042015 \alpha ^2+410694 \alpha +63000 =$$ $$\alpha \bigg(5 \alpha \Big(25 \alpha \big(2 \alpha (740 \alpha +3119)+10313\big)+208403\Big)+410694\bigg)+63000.$$ A reader, equipped with any standard contemporary mathematical software package (Maple, Mathematica, Matlab,…), can readily verify (to arbitrarily high precision \[hundreds/thousands of digits\]) that, quite remarkably (but not yet formally proven [@mathoverflow]), $P(0)=1,P(\frac{1}{2})=\frac{29}{64},P(1)=\frac{8}{33}$ and $P(2) =\frac{26}{323}$ (Figs. \[fig:HypergeometricFormula1\] and \[fig:HouGraph\]). In terms of the physical implications of the formula, we find compelling evidence that $P(\alpha)$ yields the separability probability [@ZHSL]–with respect to Hilbert-Schmidt measure–of generalized two-qubit states, where, in particular $\alpha =0, \frac{1}{2}, 1, 2$ correspond to classical, rebit, qubit and quater(nionic)bit states, respectively. We will indicate below the multistep procedure by which the particular concise form of $P(\alpha)$ presented above was obtained. This process depended upon, first, the derivation [@MomentBased] of (hypergeometric-based) formulas for the moments of probability distributions over the determinants of partially transposed density matrices, followed by the estimation (using a certain Legendre-polynomial-based probability-distribution-reconstruction procedure [@Provost]) from those moments of cumulative (over the separability interval) probabilities. 
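The high-precision verification just mentioned can also be sketched outside the packages named above; for instance, in Python with the arbitrary-precision mpmath library (our choice here, purely for illustration), eqs. (\[Hou1\])-(\[Hou3\]) translate directly, with the infinite sum truncated (its terms decay rapidly, so a few hundred suffice for the precision used):

```python
from mpmath import mp, mpf, gamma

mp.dps = 60  # work with 60 significant digits

def q(a):
    # The quintic polynomial of eq. (Hou3)
    return (185000*a**5 + 779750*a**4 + 1289125*a**3
            + 1042015*a**2 + 410694*a + 63000)

def f(a):
    # The summand of eq. (Hou2): f(alpha) = P(alpha) - P(alpha + 1)
    return (q(a) * mpf(2)**(-4*a - 6) * gamma(3*a + mpf(5)/2)
            * gamma(5*a + 2)) / (3 * gamma(a + 1) * gamma(2*a + 3)
                                 * gamma(5*a + mpf(13)/2))

def P(alpha, terms=400):
    # Truncation of the infinite sum in eq. (Hou1); the terms shrink
    # geometrically, so 400 of them far exceed the working precision.
    a = mpf(alpha)
    return sum(f(a + i) for i in range(terms))

print(P('0.5'))  # ~ 29/64 = 0.453125 (two-rebit)
print(P(1))      # ~ 8/33            (two-qubit)
print(P(2))      # ~ 26/323          (two-quaterbit)
```

Such a sketch reproduces the conjectured rational values to dozens of digits, though it of course does not constitute a formal proof.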
Then, $\alpha$-parameterized sequences of these cumulative probabilities were analyzed to extract the underlying structure captured by $P(\alpha)$. This initially took a relatively complicated hypergeometric form (Fig. \[fig:HypergeometricFormula1\]), from which the concise formula above was subsequently derived (Figs. \[fig:HouProg\] and \[fig:HouProg2\]) by Qing-Hu Hou using Zeilberger’s algorithm [@doron]. Background ---------- The underpinning, predecessor paper [@MomentBased]–addressing the relatively long-standing $2 \times 2$ separability probability question [@ZHSL; @slaterJGP; @slaterqip; @slaterA; @slaterC; @slaterPRA; @slaterPRA2; @slaterJGP2; @pbsCanosa; @slater833] (cf. [@sz1; @sz2; @ye])–consisted largely of two sets of analyses. The first set was concerned with establishing formulas for the bivariate determinantal product moments $\left\langle \left\vert \rho^{PT}\right\vert ^{n}\left\vert \rho\right\vert ^{k}\right\rangle ,k,n=0,1,2,3,\ldots,$ with respect to Hilbert-Schmidt (Euclidean/flat) measure [@ingemarkarol sec. 14.3] [@szHS], of generic (9-dimensional) two-rebit and (15-dimensional) two-qubit density matrices ($\rho$). Here $\rho^{PT}$ denotes the partial transpose of the $4 \times 4$ density matrix $\rho$. Nonnegativity of the determinant $|\rho^{PT}|$ is both a necessary and sufficient condition for separability in this $2 \times 2$ setting [@augusiak]. In the second set of primary analyses in [@MomentBased], the [*univariate*]{} determinantal moments $\left\langle \left\vert \rho^{PT}\right\vert ^{n} \right\rangle$ and $\left\langle \left( \left\vert \rho^{PT}\right\vert \left\vert \rho\right\vert \right)^n \right\rangle$, induced using the bivariate formulas, served as input to a Legendre-polynomial-based probability distribution reconstruction algorithm of Provost [@Provost sec. 2] (cf. [@vericat]). This yielded estimates of the desired separability probabilities. 
(The reconstructed probability distributions based on $|\rho^{PT}|$ are defined over the interval $|\rho^{PT}| \in [-\frac{1}{16},\frac{1}{256}]$, while the associated separability probabilities are the cumulative probabilities of these distributions over the nonnegative subinterval $|\rho^{PT}| \in [0,\frac{1}{256}]$. We note that for the fully mixed (classical) state, $|\rho^{PT}| = \frac{1}{256}$, while for a maximally entangled state, such as a Bell state, $|\rho^{PT}| = -\frac{1}{16}$, thus, delimiting the range of $|\rho^{PT}|$.) A highly-intriguing aspect of the (not yet rigorously established) determinantal moment formulas obtained (by C. Dunkl) in [@MomentBased App.D.4] was that both the two-rebit ($\alpha = \frac{1}{2}$) and two-qubit ($\alpha = 1$) cases could be encompassed by a [*single*]{} formula, with a Dyson-index-like parameter $\alpha$ [@MatrixModels] serving to distinguish the two cases. Additionally, the results of the formula for $\alpha=2$ and $n=1$ and 2 have recently been confirmed computationally by Dunkl using the “Moore determinant” (quasideterminant) [@Moore; @Gelfand] of $4 \times 4$ quaternionic density matrices. (However, tentative efforts of ours to verify the $\alpha=4$ \[conjecturally, [*octonionic*]{} [@LiaoWangLi], problematical\] case, have not proved successful.) When the probability-distribution-reconstruction algorithm [@Provost] was applied in [@MomentBased] to the two-rebit case ($\alpha=\frac{1}{2}$), employing the first 3,310 moments of $|\rho^{PT}|$, a (lower-bound) estimate that was 0.999955 times as large as $\frac{29}{64} \approx 0.453120$ was obtained (cf. [@advances p. 6]). Analogously, in the two-qubit case ($\alpha =1$), using 2,415 moments, an estimate that was 0.999997066 times as large as $\frac{8}{33} \approx 0.242424$ was derived. 
This constitutes an appealingly simple rational value that had previously been conjectured in a quite different (non-moment-based) form of analysis, in which “separability functions” had been the main tool employed [@slater833]. (Note, however, that the two-rebit separability probability conjecture of $\frac{8}{17}$, somewhat secondarily advanced in [@slater833], has now been discarded in favor of $\frac{29}{64}$.) Let us note, supportively, that in an extensive Monte Carlo analysis, Zhou, Chern, Fei and Joynt obtained an estimate for this two-qubit separability probability of $0.2424 \pm 0.0002$ [@joynt eq. (B7)]. Additionally, in the very same context, Fonseca-Romero, Rinc[ó]{}n and Viviescas report a compatible statistic of $24\%$ [@Fonseca-Romero sec. VIII]. Further, the determinantal moment formulas advanced in [@MomentBased] were then applied with $\alpha$ set equal to 2. This appears–as the indicated recent (Moore determinant) computations of Dunkl show–to correspond to the generic 27-dimensional set of quaternionic density matrices [@andai; @adler]. Quite remarkably, a separability probability estimate, based on 2,325 moments, that was 0.999999987 times as large as $\frac{26}{323} \approx 0.0804954$ was found. Outline of Present Study ======================== In the present study, we extend these three (individually-conducted) moment-based analyses in a more systematic, thorough manner, [*jointly*]{} embracing the sixty-four integral and half-integral values $\alpha =\frac{1}{2}, 1, \frac{3}{2}, 2,\ldots, 32$. We do this by accelerating, for our specific purposes, the Mathematica probability-distribution-reconstruction program of Provost [@Provost], in a number of ways. Most significantly, we make use of the three-term recurrence relations for the Legendre polynomials. Doing so obviates the need to compute each successive higher-degree Legendre polynomial [*ab initio*]{}. 
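As an illustration of that speedup (a minimal Python sketch, not the accelerated Mathematica program actually used), the three-term (Bonnet) recurrence delivers all required Legendre-polynomial values at a point in a single linear pass:

```python
def legendre_values(x, nmax):
    """Values of the Legendre polynomials P_0(x), ..., P_nmax(x),
    computed via the Bonnet three-term recurrence
        (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
    so that no polynomial is ever constructed ab initio."""
    if nmax == 0:
        return [1.0]
    vals = [1.0, float(x)]
    for n in range(1, nmax):
        vals.append(((2*n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals
```

With the moments in hand, a reconstruction step then needs only these pointwise values, a substantial saving when degrees run into the thousands, as with the 7,501 moments employed here.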
In this manner, we were able to obtain–using exact computer arithmetic throughout–“generalized” separability probability estimates based on 7,501 moments for $\alpha = \frac{1}{2}, 1, \frac{3}{2},\ldots,32$. In Fig. \[fig:ListPlotLogEstimates\] we plot the logarithms of the resultant sixty-four separability probability estimates (cf. [@MomentBased Fig. 8]), which fall close to the line $-0.9464181889 \alpha$. ![\[fig:ListPlotLogEstimates\]Logarithms of generalized separability probability estimates, based on 7,501 Hilbert-Schmidt moments of $|\rho^{PT}|$, as a function of the Dyson-index-like parameter $\alpha$](EnlargedFitPlot.pdf) In Fig. \[fig:EnlargedResiduals\] we show the residuals from this linear fit. ![\[fig:EnlargedResiduals\]Residuals from linear fit to logarithms of generalized separability probability estimates](EnlargedResiduals.pdf) In Fig. \[fig:HypergeometricFormula1\] we present a hypergeometric-function-based formula, together with striking supporting evidence for it, that appears to succeed in uncovering the functional relation ($P(\alpha)$) underlying the entirety of these sixty-four generalized separability probabilities. ![\[fig:HypergeometricFormula1\]Hypergeometric formula $P(\alpha)$ for Hilbert-Schmidt generic $2 \times 2$ [*generalized*]{} separability probabilities and evidence that it reproduces the basic three (real \[$\alpha = \frac{1}{2}$\], complex \[$\alpha = 1$\] and quaternionic \[$\alpha = 2$\]) conjectures of $\frac{29}{64}, \frac{8}{33}$ and $\frac{26}{323}$](HypergeometricFormula1.pdf) Further, in (\[strikingresults\]), and the immediately preceding text, we list a number of remarkable values yielded by this hypergeometric formula for values of $\alpha$ other than the basic sixty-four (half-integral and integral) values from which we have started. Then, we are able to report–with the assistance of Qing-Hu Hou–a striking condensation of the lengthy expression presented in Fig. 
\[fig:HypergeometricFormula1\], that is, the titular “concise formula” (eqs. (\[Hou1\])-(\[Hou3\])). Some additional computational results of interest are presented in the Appendix. New Results =========== The three basic (rebit, qubit, quaterbit) conjectures revisited {#threebasics} --------------------------------------------------------------- ### $\alpha=\frac{1}{2}$–the two-rebit case In [@MomentBased], a lower-bound estimate of the two-rebit separability probability was obtained, with the use of the first 3,310 moments of $|\rho^{PT}|$. It was 0.999955 times as large as $\frac{29}{64} \approx 0.453120$. With the indicated use, now, of 7,501 moments, the figure increases to 0.999989567. This outcome, thus, fortifies our previous conjecture. ### $\alpha=1$–the two-qubit case In [@MomentBased], a lower-bound estimate of the two-qubit separability probability was obtained, with the use of the first 2,415 moments of $|\rho^{PT}|$, that was 0.999997066 times as large as $\frac{8}{33} \approx 0.242424$ (cf. [@joynt eq. (B7)]). Employing 7,501 moments, this figure increases to 0.99999986. ### $\alpha=2$–the quaternionic case In [@MomentBased], a lower-bound estimate of the (presumptive) quaternionic separability probability was obtained that was 0.999999987 times as large as $\frac{26}{323} \approx 0.0804954$, using the first 2,325 moments of $|\rho^{PT}|$. Based on 7,501 moments, this figure increases, quite remarkably still, to 0.999999999936. Generalized separability probability hypergeometric formula ----------------------------------------------------------- A principal motivation in undertaking the analyses reported here–in addition, to further scrutinizing the three specific conjectures reported in [@MomentBased]–was to uncover the functional relation underlying the curve in Fig. \[fig:ListPlotLogEstimates\] (and/or its original non-logarithmic counterpart). 
Preliminarily, let us note that the [*zeroth*]{}-order approximation (being independent of the particular value of $\alpha$) provided by the Provost Legendre-polynomial-based probability-distribution-reconstruction algorithm is simply the [*uniform*]{} distribution over the interval $|\rho^{PT}| \in [-\frac{1}{16},\frac{1}{256}]$. The corresponding zeroth-order separability probability estimate is the cumulative probability of this distribution over the nonnegative subinterval $[0,\frac{1}{256}]$, that is, $ \frac{1}{256}/(\frac{1}{16} +\frac{1}{256}) =\frac{1}{17} \approx 0.0588235$. So, it certainly appears that speedier convergence (sec. \[threebasics\]) of the algorithm occurs for separability probabilities, the true values of which are initially close to $\frac{1}{17}$ (such as $\frac{26}{323} \approx 0.0804954$ in the quaternionic case). Convergence also markedly increases as $\alpha$ increases. It appeared, numerically, that the generalized separability probabilities for integral and half-integral values of $\alpha$ were rational values (not only $\frac{29}{64}, \frac{8}{33}, \frac{26}{323}$, for the three specific values $\alpha = \frac{1}{2}, 1, 2$ of original focus). With various computational tools and search strategies based upon emerging mathematical properties, we were able to advance additional, seemingly plausible conjectures as to the exact values for $\alpha=3, 4, \ldots,32$, as well. (We inserted many of our high-precision numerical estimates into the search box on the Wolfram Alpha website–which then indicated likely candidates for corresponding rational values.) We fed this sequence of thirty-two conjectured rational numbers into the FindSequenceFunction command of Mathematica. (This command “attempts to find a simple function that yields the sequence $a_i$ when given successive integer arguments,” but apparently can succeed with rational arguments, as well.) 
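The rational-value identification step can be mimicked in a few lines of Python (an illustrative sketch; we in fact used the Wolfram Alpha website, as noted above). The estimate value below is illustrative, chosen to match $\frac{8}{33}$ to machine precision:

```python
from fractions import Fraction

# A high-accuracy numerical estimate of the two-qubit separability
# probability (illustrative value, agreeing with 8/33 to double precision).
estimate = 0.24242424242424243

# Best rational approximation under a modest denominator bound.
guess = Fraction(estimate).limit_denominator(1000)
print(guess)  # prints 8/33

# The two-rebit value 29/64 is dyadic, so a double even stores it exactly.
assert Fraction(0.453125) == Fraction(29, 64)
```

Of course, such recovery is only trustworthy when the numerical estimate carries far more accurate digits than the candidate fraction requires, as was the case with our high-precision computations.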
To our considerable satisfaction, this produced a generating formula (incorporating a diversity of hypergeometric functions of the $_{p}F_{p-1}$ type, $p=7,\ldots,11$, [*all*]{} with argument $z =\frac{27}{64}= (\frac{3}{4})^3$) for the sequence (cf. [@FussCatalan eq. (11)]). (Let us note that $z^{-\frac{1}{2}} = \sqrt{\frac{64}{27}}$ is the “residual entropy for square ice” [@finch p. 412] (cf. [@ckksr eqs. (27), (28)]). An analogous appearance of $\frac{27}{64}$ occurs in a hypergeometric \[“Ramanujan-like”\] summation for $\frac{16 \pi^2}{3}$ of J. Guillera [@guillera]. In a private communication, he remarked that the value $z =\frac{27}{64}$ appears to occur frequently in hypergeometric identities, and that this appears to have some modular or modular-like origin. In fact, the Mathematica command succeeds using only the first twenty-eight conjectured rational numbers, but no fewer–so it seems fortunate that our computations were so extensive.) However, the formula produced by the Mathematica command was quite cumbersome in nature (extending over several pages of output). With its use, nevertheless, we were able to convincingly generate rational values for [*half*]{}-integral $\alpha$ (including the two-rebit $\frac{29}{64}$ conjecture), also fitting our corresponding half-integral thirty-two numerical estimates exceedingly well. (Let us strongly emphasize that the hypergeometric-based formula was initially generated using [*only*]{} the integral values of $\alpha$. The process was fully reversible, and we could first employ the half-integral results to generate the formula–which then seemingly perfectly fitted the integral values.) At this point, for illustrative purposes, let us list the first ten half-integral and ten integral rational values (generalized separability probabilities), along with their approximate numerical values. 
$$\begin{array}{cc} \begin{array}{cccc} \text{$\alpha $ =} & \frac{1}{2} & \frac{29}{64} & 0.453125 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 1 & \frac{8}{33} & 0.242424 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{3}{2} & \frac{36061}{262144} & 0.137562 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 2 & \frac{26}{323} & 0.0804954 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{5}{2} & \frac{51548569}{1073741824} & 0.0480083 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 3 & \frac{2999}{103385} & 0.0290081 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{7}{2} & \frac{38911229297}{2199023255552} & 0.0176948 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 4 & \frac{44482}{4091349} & 0.0108722 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{9}{2} & \frac{60515043681347}{9007199254740992} & 0.00671852 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 5 & \frac{89514}{21460999} & 0.00417101 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{11}{2} & \frac{71925602948804923}{27670116110564327424} & 0.0025994 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 6 & \frac{179808469}{110638410169} & 0.00162519 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{13}{2} & \frac{3387374833367307236269}{3324546003940230230441984} & 0.0010189 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 7 & \frac{191151001}{298529164591} & 0.000640309 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{15}{2} & \frac{124792688228667229196729}{309485009821345068724781056} & 0.000403227 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 8 & \frac{1331199762}{5232880523393} & 0.000254391 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{17}{2} & \frac{407557367133399293946182513}{2535301200456458802993406410752} & 0.000160753 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 9 & 
\frac{74195568677}{729345064647247} & 0.000101729 \\ \end{array} \\ \begin{array}{cccc} \text{$\alpha $ =} & \frac{19}{2} & \frac{1338799759394288468677657208071}{20769187434139310514121985316880384} & 0.0000644609 \\ \end{array} & \begin{array}{cccc} \text{$\alpha $ =} & 10 & \frac{730710456538}{17868447453498669} & 0.0000408939 \\ \end{array} \\ \end{array}$$ To simplify the cumbersome (several-page) output yielded by the Mathematica FindSequenceFunction command, we employed certain of the “contiguous rules” for hypergeometric functions listed by C. Krattenthaler in his package HYP [@ck] (cf. [@bytev]). Multiple applications of the rules C14 and C18 there, together with certain gamma function simplifications suggested by C. Dunkl, led to the rather more compact formula displayed in Fig. \[fig:HypergeometricFormula1\]. This formula incorporates a six-member family ($k =1,\ldots,6$) of $_7F_6$ hypergeometric functions, differing only in the first upper index $k$, $$\label{family} \, _7F_6\left(k,\alpha +\frac{2}{5},\alpha +\frac{3}{5},\alpha +\frac{4}{5},\alpha +\frac{5}{6},\alpha +\frac{7}{6},\alpha +\frac{6}{5};\alpha +\frac{13}{10},\alpha +\frac{3}{2},\alpha +\frac{17}{10},\alpha +\frac{19}{10},\alpha +2,\alpha +\frac{21}{10};\frac{27}{64}\right) .$$ (The reader will note interesting sequences of upper and lower parameters (cf. [@zudilin]).) We are only able to, in general, evaluate the formula numerically, but then to arbitrarily high (hundreds, if not thousand-digit) precision, giving us strong confidence–despite the lack yet of a formal proof (cf. [@mathoverflow])–in the validity of the [*exact*]{} generalized separability probabilities ($\frac{29}{64}, \frac{8}{33}, \frac{26}{323}$, …), that we advance. ### Additional interesting values yielded by the hypergeometric formula Let us now apply the formula (Fig. \[fig:HypergeometricFormula1\]) to values of $\alpha$ other than the initial sixty-four studied. 
For $\alpha = 0$, the formula yields–as would be expected–the “classical separability probability” of 1. Further, proceeding in a purely formal manner (since there appears to be no corresponding genuine probability distribution over $[-\frac{1}{16},\frac{1}{256}]$), for the [*negative*]{} value $\alpha = - \frac{1}{2}$, the formula yields $\frac{2}{3}$. For $\alpha =-\frac{1}{4}$, it gives -2. Remarkably still, for $\alpha = \frac{1}{4}$, the result is clearly (to one thousand decimal places) equal to $2-\frac{34}{21 \text{agm}\left(1,\sqrt{2}\right)} = 2-\frac{17 \Gamma \left(\frac{1}{4}\right)^2}{21 \sqrt{2} \pi ^{3/2}} \approx 0.6486993992$, where the arithmetic-geometric mean of 1 and $\sqrt{2}$ is indicated. (The reciprocal of this mean is Gauss’s constant.) For $\alpha = \frac{3}{4}$, the result equals $2-\frac{9689 \Gamma \left(\frac{3}{4}\right)}{4420 \sqrt{\pi } \Gamma \left(\frac{5}{4}\right)} \approx 0.3279684732$, while for $\alpha=-\frac{3}{4}$, we have $\frac{128}{21 \text{agm}\left(1,\sqrt{2}\right)}+2 =2+\frac{32 \sqrt{2} \Gamma \left(\frac{1}{4}\right)^2}{21 \pi ^{3/2}} \approx 7.087249321$. For $\alpha=\frac{2}{3}$, the outcome is $2-\frac{288927 \Gamma \left(\frac{1}{3}\right)^3}{344080 \pi ^2} \approx 0.36424897456$. 
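The equivalence of the arithmetic-geometric-mean and gamma-function forms just given can be confirmed numerically to high precision; here is a brief sketch using Python's mpmath library (an illustrative aside, not part of the original derivation):

```python
from mpmath import mp, agm, gamma, pi, sqrt, mpf

mp.dps = 50  # 50 significant digits

# P(1/4) in its two stated closed forms
agm_form   = 2 - 34 / (21 * agm(1, sqrt(2)))
gamma_form = 2 - 17 * gamma(mpf(1)/4)**2 / (21 * sqrt(2) * pi**(mpf(3)/2))
print(agm_form)  # ~ 0.6486993992

# P(-3/4), likewise in both stated forms
agm_neg   = 2 + 128 / (21 * agm(1, sqrt(2)))
gamma_neg = 2 + 32 * sqrt(2) * gamma(mpf(1)/4)**2 / (21 * pi**(mpf(3)/2))
```

The agreement of the pairs of expressions reflects the classical identity $\text{agm}(1,\sqrt{2}) = 2 \sqrt{2} \pi^{3/2}/\Gamma(\frac{1}{4})^2$, the reciprocal of the left-hand side being Gauss's constant.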
Results are presented in the table $$\label{strikingresults} \left( \begin{array}{ccc} \alpha & P(\alpha ) & \text{value} \\ -\frac{3}{4} & 2+\frac{32 \sqrt{2} \Gamma \left(\frac{1}{4}\right)^2}{21 \pi ^{3/2}} & 7.08725 \\ -\frac{2}{3} & 2-\frac{8 \pi }{\sqrt{3} \Gamma \left(\frac{1}{3}\right)^3} & 1.24527 \\ -\frac{1}{2} & \frac{2}{3} & 0.666667 \\ -\frac{1}{3} & 2+\frac{3 \Gamma \left(\frac{1}{3}\right)^3}{4 \pi ^2} & 3.461 \\ -\frac{1}{4} & 2 & 2 \\ \frac{1}{4} & 2-\frac{17 \Gamma \left(\frac{1}{4}\right)^2}{21 \sqrt{2} \pi ^{3/2}} & 0.648699 \\ \frac{1}{3} & 2-\frac{459 \sqrt{3} \pi }{91 \Gamma \left(\frac{1}{3}\right)^3} & 0.572443 \\ \frac{2}{3} & 2-\frac{288927 \Gamma \left(\frac{1}{3}\right)^3}{344080 \pi ^2} & 0.364249 \\ \frac{3}{4} & 2-\frac{9689 \Gamma \left(\frac{3}{4}\right)}{4420 \sqrt{\pi } \Gamma \left(\frac{5}{4}\right)} & 0.327968 \\ \end{array} \right) .$$ (Let us note that the term $\frac{3 \Gamma \left(\frac{1}{3}\right)^3}{4 \pi ^2} \approx 1.46099848$ present in the result for $\alpha =-\frac{1}{3}$ is “Baxter’s four-coloring constant” for a triangular lattice [@finch p. 413].) Also, for $\alpha=-1$, we have $\frac{2}{5}$. For $\alpha=-\frac{3}{2}$, the result is $\frac{2}{3}$. Concise reformulation of $_{7}F_{6}$ hypergeometric expression (Fig. \[fig:HypergeometricFormula1\]) ==================================================================================================== ![\[fig:HouGraph\]Generalized two-qubit separability probability function $P(\alpha)$, with $P(0) =1, P(\frac{1}{2}) =\frac{29}{64}, P(1)=\frac{8}{33}, P(2)= \frac{26}{323}$ for generic classical four-level ($\alpha =0$), two-rebit ($\alpha =\frac{1}{2}$), two-qubit ($\alpha =1$) and two-quaterbit ($\alpha =2$) systems, respectively.](HouGraph.pdf) We had previously ourselves been unable to find an equivalent form of $P(\alpha)$ with fewer than six hypergeometric functions (Fig. \[fig:HypergeometricFormula1\]). 
Qing-Hu Hou of the Center for Combinatorics of Nankai University, however, was able to obtain the remarkably succinct and clearly correct results (\[Hou1\])-(\[Hou3\])–which he communicated to us in a few e-mail messages. (Accompanying them were two Maple worksheets indicating his calculations \[Figs. \[fig:HouProg\] and \[fig:HouProg2\]\].) Hou, first, observed that the hypergeometric-based formula for $P(\alpha)$ could be expressed as an infinite summation. Letting $P_l(\alpha)$ be the $l$-th such summand, application of Zeilberger’s algorithm [@doron] (a method for producing combinatorial identities, also known as “creative telescoping”) yielded that $$\label{referee1} P_l(\alpha) -P_l(\alpha+1) =-P_{l+1}(\alpha) + P_l(\alpha) .$$ (The package APCI–available at http://www.combinatorics.net.cn/homepage/hou/–was employed. In a different quantum-information context, Datta employed the algorithm to ascertain that no closed form exists for a certain series, “retarding” the evaluation of the “ratio of the negativity of random pure states to the maximal negativity for Haar-distributed states of $n$ qubits” [@Datta App. A, Table I].) Summing over $l$ from 0 to $\infty$, Hou found that $$\label{referee2} P(\alpha) -P(\alpha+1)=P_0(\alpha).$$ Letting $f(\alpha) =P_0(\alpha)$, the concise summation formula (\[Hou1\]) is obtained. (C. Krattenthaler indicated \[Krattenthaler, private communication\] that these results might equally well be derived without recourse to Zeilberger’s algorithm. Also, a referee expressed puzzlement at the peculiar \[redundant\] form of eq. (\[referee1\]). This appears to be an artifact arising from the particular manner in which the algorithm is applied in the proving of hypergeometric identities.) ![\[fig:HouProg\]First Maple worksheet of Hou used in deriving concise form of hypergeometric formula (Fig. 
\[fig:HypergeometricFormula1\])](HouProg.pdf) ![\[fig:HouProg2\]Second Maple worksheet of Hou used in deriving concise form of hypergeometric formula (Fig. \[fig:HypergeometricFormula1\])](HouProg2.pdf) We certainly need to indicate, however, that if we do explicitly perform the infinite summation indicated in (\[Hou1\]), then we revert to a (“nonconcise”) form of $P(\alpha)$, again containing six hypergeometric functions. Further, it appears that we can only evaluate (\[Hou1\]) numerically–but then easily to hundreds and even thousands of digits of precision–giving us extremely high confidence in the specific rational-valued Hilbert-Schmidt separability probabilities advanced. Concluding Remarks ================== There remain the important problems of formally verifying the formulas for $P(\alpha)$ (as well as the underlying determinantal moment formulas for $|\rho^{PT}|$, …, in [@MomentBased], employed in the probability-distribution reconstruction process), and achieving a better understanding of what these results convey regarding the geometry of quantum states [@ingemarkarol; @avron; @avron2]. Further, questions of the asymptotic behavior of the formula ($\alpha \rightarrow \infty$) and of possible Bures metric [@szBures; @ingemarkarol; @slaterJGP; @slaterqip; @slaterC] counterparts to it, are under investigation [@BuresHilbert]. We are presently engaged in attempting to determine further properties–in addition to the cumulative (separability) probabilities over $[0,\frac{1}{256}]$ obtained from the titular concise formula (eqs. (\[Hou1\])-(\[Hou3\]))–of the probability distributions of $|\rho^{PT}|$ over $[-\frac{1}{16},\frac{1}{256}]$, as a function of the Dyson-index-like parameter $\alpha$. 
As one such finding, it appears that the $y$-intercept (at which $|\rho^{PT}|=0$, that is, the separability-entanglement boundary) in the presumed quaternionic case ($\alpha=2$) is $\frac{7425}{34} = \frac{3^3 \times 5^2 \times 11}{2 \times 17} \approx 218.382$ [@slaterSuddenDeath]. (The Legendre-polynomial-based probability-distribution reconstruction algorithm of Provost [@Provost] yielded an estimate 0.99999999742 times as large as $\frac{7425}{34}$, when implemented with 10,000 moments. Based also on 10,000 moments–but with inferior convergence properties–the two-qubit \[$\alpha =1$\] and two-rebit \[$\alpha = \frac{1}{2}$\] $y$-intercepts were estimated as 389.995 (conjecturally equal to $390 = 2 \cdot 3 \cdot 5 \cdot 13$) and 502.964, respectively [@slaterSuddenDeath].) The foundational paper of [Ż]{}yczkowski, Horodecki, Sanpera and Lewenstein, “Volume of the set of separable states” [@ZHSL] (cf. [@singh]), did ask for [*volumes*]{}, not specifically [*probabilities*]{}. At least, for the two-rebit, two-qubit and two-quaterbit cases, $\alpha =\frac{1}{2}, 1$ and $2$, we can readily, using the Hilbert-Schmidt volume formulas of Andai [@andai Thms. 1-3] (cf. [@szHS; @ingemarkarol]), convert the corresponding separability probabilities to the separable volumes $\frac{29 \pi ^4}{61931520} =\frac{29 \pi^4}{2^{16} \cdot 3^3 \cdot 5 \cdot 7}$, $\frac{\pi ^6}{449513064000} = \frac{\pi^6}{2^6 \cdot 3^6 \cdot 5^3 \cdot 7^2 \cdot 11^2 \cdot 13}$ and $\frac{\pi ^{12}}{3914156909371803494400000} = \frac{\pi^{12}}{2^{14} \cdot 3^{10} \cdot 5^5 \cdot 7^3 \cdot 11^2 \cdot 13 \cdot 17^2 \cdot 19^2 \cdot 23}$, respectively. The determination of separable volumes–as opposed to probabilities–for other values of $\alpha$ than these fundamental three appears to be rather problematical, however. 
Let us also note the relevance of the study of Szarek, Bengtsson and [Ż]{}yczkowski [@sbz], in which they show that the convex set of separable mixed states of the $2 \times 2$ system is a body of constant height. Theorem 2 of that paper, in conjunction with the results here, allows one, it would seem, to immediately deduce that the separability probabilities of the generic minimally-degenerate/boundary 8-, 14-, and 26-dimensional two-rebit, two-qubit, and two-quaterbit states are one-half (that is, $\frac{29}{128}, \frac{4}{33}$ and $\frac{13}{323}$) the separability probabilities of their generic non-degenerate counterparts. Appendix–Exact values of derivatives of $P(\alpha)$ =================================================== Succeeding derivatives at $\alpha =0$ -------------------------------------- The first derivative of $P(\alpha)$ evaluated at (the classical case) $\alpha =0$ is -2, while the second derivative is $40 - 20 \zeta{(2)} = 40 -\frac{10 \pi^2}{3} \approx 7.10132$. (The third derivative was computed as -43.7454236566749417600.) First derivatives at $\alpha =1, 2 \ldots$, [*et al*]{} ------------------------------------------------------- The first derivative of $P(\alpha)$ at $\alpha = -\frac{1}{2}$ is $-\frac{80}{3}$ and at $\alpha = \frac{1}{2}$ is $\frac{1}{384} (917-984 \log (2)) \approx 0.611831$, and -2 at $\alpha=0$, as previously mentioned. We have also been able to determine rational values of $P'(\alpha)$ for $\alpha =1, 2, \ldots, 97$. We list the first seven of these. (The Mathematica command FindSequenceFunction, however, did not succeed in this instance in generating an underlying function for this sequence of 97 rational numbers–although, of course, one can be directly obtained from our explicit form of $P(\alpha)$.) 
$$\label{derivativeresults} \left( \begin{array}{cc} \alpha & P'(\alpha) \\ 1 & -\frac{130577}{457380} \approx -0.285489\\ 2 & -\frac{3177826243}{37595998440} \approx -0.0845257\\ 3 & -\frac{3598754002551529}{124409677632540300} \approx -0.0289266 \\ 4 & -\frac{943222153906869801499}{89625168823088671652880} \approx -0.0105241\\ 5 & -\frac{7745868905935978063871447}{1956135029605259737354520400} \approx -0.00395978 \\ 6 & -\frac{163704960709243940550573265691777}{107569184582725029279135417408286275} \approx -0.00152186 \\ 7 & -\frac{124555275071579876642057723808475761407}{209867628485254931732709294271962333917400} \approx -0.000593494 \\ \end{array} \right) .$$ I would like to express appreciation to the Kavli Institute for Theoretical Physics (KITP) for computational support in this research, and to Christian Krattenthaler, Charles F. Dunkl, Michael Trott and Jorge Santos for their expert advice, as well as to Qing-Hu Hou for his insights and permission to present his Maple worksheets. Further, I thank a number of referees/editors for their constructive suggestions.
--- author: - | Diego Marin[^1]\ with contributions by\ Pangea Association[^2] $\qquad\qquad$ Fabrizio Coppola[^3],\ Marcello Colozzo[^4] $\qquad\qquad$ Istituto Scientia[^5] title: | Arrangement Field Theory\ beyond Strings and Loop Gravity --- Abstracts ========= This paper collects all contributions to the arrangement field theory (AFT), together with a philosophical introduction by Dr. Fabrizio Coppola. AFT is a unifying theory which describes gravitational, gauge and fermionic fields as elements in the supersymmetric extension of the Lie algebra $Sp(12,\mathbf{C})$. **Paper number 1** We introduce the concept of “non-ordered space-time” and formulate a quaternionic field theory over such a generalized non-ordered space. The imposition of an order over a non-ordered space appears to spontaneously generate gravity, which is revealed as a fictitious force. The same process gives rise to gauge fields that are compatible with those of the Standard Model. We suggest a common origin for gravity and gauge fields from a unique entity called the “arrangement matrix” ($M$) and propose to quantize all fields by quantizing $M$. Finally we give a proposal for the explanation of black hole entropy and the area law inside this paradigm. **Paper number 2** In this work we apply the formalism developed in the previous paper (“The arrangement field theory”) to describe the content of the Standard Model plus gravity. The resulting scheme finds an analogue in supersymmetric theories, but now all quarks and leptons take the role of gauginos for $Sp(12,\mathbf{C})$ gauge fields. Moreover we discover a triality between *Arrangement Field Theory*, *String Theory* and *Loop Quantum Gravity*, which appear as different manifestations of the same theory. Finally we show that three families of fields arise naturally, and we discover a new road toward unification of gravity with gauge and matter fields. **Paper number 3** We show how antigravity effects emerge from arrangement field theory. 
AFT is a proposal for a unifying theory which joins gravity with gauge fields by using the Lie group $Sp(12,\mathbf{C})$. The details of the theory are presented in papers number 1 and number 2. The philosophy of arrangement field theory {#intro} ========================================== Classical Physics \[classicalphy\] ---------------------------------- In classical physics, space and time are fundamental entities, providing a preordained structure in which interactions between physical objects can occur. In short, space and time are "absolute". Moreover, the physical properties of a body or system are supposed to be objective and independent from a possible observation. In this paradigm, reality exists independently of classical measurements and is not significantly influenced by measurements, unless these are particularly "invasive". But even in such cases, it is assumed that the observed systems had their own pre-existing characteristics. These were obvious and implicit tenets in classical physics, which influenced the whole of science, aimed at being purely objective. Space and time according to philosophers \[philosophers\] --------------------------------------------------------- Despite the rapid and successful development of classical physics and science in general, firmly based on the fixed concepts of space and time, between the late 17th century and the early 19th century respectable philosophers such as Locke, Hume, Leibniz, Kant and Schopenhauer conceptualized space and time not as objective and universal entities, but as concepts defined by our own intellect, aimed at interpreting the external reality perceived by our senses. This idea was radically different from the founding conception of classical physics, based on full objectivity, and appeared quite extravagant to several scientists at that time. Nevertheless Kant, who had a scientific background, exposed his conception in a profound and rational way.
In 1781 Kant distinguished two main activities of the conscious mind [@kant]: "analytic propositions" and "synthetic propositions". In an oversimplified interpretation, "analytic propositions" are the elements of rational, logical reasoning, in which thoughts proceed by deduction, starting from known facts and finding consequences which, anyway, were implicit in the premises and only had to be made explicit by reasoning. "Synthetic propositions", instead, are new, non-deducible pieces of information, coming from perceptions and sensations. For instance we cannot deduce whether an apple is sweet, or a radiator is hot, but we must check that through our senses. Kant also proposed a distinction between "a priori" propositions, meaning "in advance", i.e. "before" an experience is performed, and "a posteriori" propositions, meaning "after" an experience. According to Kant, all analytic propositions are "a priori". A trivial example is given by any sum, such as 4 + 7 = 11. This analytic proposition is true "a priori": the result is already 11 before we make the calculation. Kant states that no analytic proposition can be "a posteriori". Synthetic propositions, on the other side, are generally "a posteriori", since perceptions come from experience. Now, an interesting question remains: may "a priori" synthetic propositions exist? Kant answers that they do actually exist. Certain "categories" that the human mind applies to events, such as the principle of "cause and effect", are "a priori". In fact we perceive events and relate them to each other according to a category, "causality", which, according to Kant, already exists in our intellect. Kant states that "space" and "time" are also "a priori" synthetic forms. Even if space, time and causality are related to experience, Kant does not consider them as inherent to the objective phenomena, but as subjective tools (even if they manifest themselves as universal) that our intellect uses to "order" the experiences.
After Kant's definitions, anyway, classical, mechanistic science continued to achieve extraordinary results. However, in the early twentieth century, physics started to face unexpected problems and contradictions, which forced scientists to formulate new principles and accept radical changes. Relativistic physics \[relativistic\] ------------------------------------- In 1632 Galileo had intuited and enunciated the "principle of relativity", stating that the laws of physics are the same in every inertial frame of reference [@galileo1]. Later developments of physics, including several discoveries in optics and electromagnetism, suggested instead that a privileged, steady, fundamental frame of reference should exist. This issue especially afflicted electromagnetism, which was an excellent theory but included certain unsolved inconsistencies. In 1905 Einstein solved the whole problem, starting again from Galileo's principle of relativity and applying it to the new knowledge of electromagnetism and optics, thus developing an original, consistent theory, "special relativity" [@einstein1]. His theory also accounted for the results of the Michelson-Morley experiment [@mmorley], conducted in 1887, which had demonstrated that the speed of light does not follow the classical laws of velocity addition. Einstein solved all the inconsistencies by proposing that the speed of light $c$ is independent from the motion of the emitting body. The universal constant $c$ became an insurmountable speed limit in physics. Einstein's theory also implied new, counter-intuitive ideas: for example, time flows differently in different inertial reference systems, and the perception of space also depends on the frame of reference of the observer. In light of such new discoveries, Kant's ideas do not seem so extravagant anymore.
Space and time lose their absolute characteristics if considered independently from each other but, adequately considered as components (coordinates) of four-dimensional points, remain "absolute" ("invariant") in a single entity, "space-time" or "chronotope", ruled by a generalized geometrical entity including time as the fourth coordinate. In 1908 such a four-dimensional structure was perfected and named "Minkowski space" [@minko]. In 1916 Einstein expanded the principle of relativity to non-inertial reference frames, thus defining the new theory of "general relativity", in which the four-dimensional geometry is curved by the presence of the masses [@einstein3]. Hence, even the (linear) Minkowski space had to be considered as an approximation, valid only in small regions of the (curved) universe. In this perspective, "gravitational forces" find their natural explanation in geometrical terms, based on a specific concept of metric. This approach also affected the interpretation of the principle of cause and effect, to the point that Einstein, in paragraph $a2$, wrote: "The law of causality has not the significance of a statement as to the world of experience, except when observable facts ultimately appear as causes and effects" [@einstein3]. Kant had exposed this "extravagant" idea a long time before [@kant]. In this paper we suggest a new step in the direction of "relativization" (so to say), by questioning the absolute ordering of the space-time points, which we believe is an imposition made by our intellect, rather than a proper quality of Nature. Such a conjecture might open new unexpected perspectives for understanding the fundamental fields of physics, as we are going to see. Quantum limitation of objectivity \[limitation\] ------------------------------------------------ In 1900 Planck had proposed "quantization" of energy to explain the electromagnetic emission of a "black body" [@planck].
In 1905 quantization of energy was also applied by Einstein to explain the "photoelectric effect" [@einstein2]. The several discoveries that clarified the structure of the atom from 1905 to the 1930s included Rutherford's experiment [@rutherford] in 1911, and the consequent Bohr model [@bohr1] in 1913. Bohr started from the results of Rutherford's experiment and imposed quantization on the angular momentum of electrons, instead of quantizing energy directly. As a consequence, energy also turned out to be quantized, and the calculated levels were in excellent agreement with the experimental values. The agreement was nearly perfect in the case of hydrogen, the simplest atom in Nature. In the case of more complex and heavier chemical elements, the mathematical frame was more difficult and the results were less precise. To solve these problems, the complete theory of Quantum Mechanics (QM) was gradually developed (mainly by the "Copenhagen school" directed by Bohr himself during the 1920s), which turned out to be intuitively abstruse, offering no image of the motion of the electrons around the atomic nucleus. While QM was being developed, it began to emerge that the experiments inevitably influenced the observed systems. Bohr, Heisenberg and other physicists of the "Copenhagen school" suspected that the physical properties of quantum systems could no longer be assumed to be completely predefined and ontologically independent from observation. In the first version of the "Copenhagen interpretation" they assumed that the free will of the conscious observers played a decisive role in the collapse of a quantum state into an eigenstate [@heisenberg1]. This appeared as an unacceptable extravagance to many physicists, including Einstein, because of the unexpected restrictions that the supposed objectivity of the universe had to suffer as a consequence of the new theory.
Quantum states evolve deterministically according to the Schrödinger equation [@schr], formulated in 1926, but remain devoid of certain characteristics, which can be revealed ("objectivated") only when the quantum state collapses into an "eigenstate" of the measured physical quantity. This is the main reason why physical quantities in QM are called "observables". QM "works fine" only if it is accepted that such hidden properties are not objectively defined before the measurement and are partly created by the observation itself, when the state is reduced to an eigenstate. The eigenvalues calculated according to QM are in excellent agreement with the possible outcomes given by experiments, even though the theory cannot predict which eigenvalue will come out: only the respective probabilities can be calculated, as pointed out by Born [@born] in 1926. This led in 1927 to Heisenberg's "uncertainty principle" [@heisenberg2], which put an end to the absolute determinism that was implicit in classical physics. QM thus introduced a margin of "uncertainty", in which Nature may reserve a small room for Her non-predictable "caprice" or "willingness", according to Jordan [@jordan], a view subsequently [@heisenberg1] accepted by Pauli, Wigner, Eddington, and von Neumann [@von], and years later by Wheeler [@wheeler], Stapp [@stapp1], and other physicists. For example, Stapp in 1982 defined human mental activity as "creative", because it only partially undergoes the course of causal mechanisms, having a margin for free choices [@stapp1]. Another important consequence concerns the act of measurement, after which the subsequent course of the physical system under observation is unavoidably modified by the measurement itself, so that observations inevitably imprint different directions to events.
In 1932 von Neumann, after reordering and formalizing QM into a consistent theory, stated that a distinctive element was necessary to trigger the quantum "collapse" or "reduction", and declared that the consciousness of an observer could be such an element, distinctive enough from the usual physical quantities [@von]. In 2001 Stapp consistently explained this concept in detailed and clear terms [@stapp2]. In 1935 the discussion about the interpretation of QM faced the problem introduced by the Einstein, Podolsky and Rosen (EPR) paradox [@epr], [@bridge], which later, in 1951, was better defined by Bohm [@bohm]. In this well-known thought experiment, two particles in "quantum entanglement", though far away from each other, produce instant, non-local influences, in contradiction with the upper limit set by relativity at the speed of light: Einstein, Podolsky and Rosen considered this absurd and impossible. Nevertheless, the experimental version that was defined by Bell's theorem [@bell] in 1964, and implemented in 1982 by Aspect et al. [@aspect], confirmed the existence of non-local influences due to the entanglement. Thus, a conflict seems to exist between special relativity (which does not allow non-local influences) and QM (which includes and reveals such influences). The subsequent theories have not been able to solve such a dissonance in a convincing way. The conjecture exposed in this paper, however, may offer a new framework where such a conflict can finally be overcome. Fabrizio Coppola, Istituto Scientia The arrangement field theory (AFT) ================================== Introduction to formalism {#sec:1} ------------------------- The arrangement field paradigm describes the universe by means of a graph (i.e. an ensemble of vertices and edges). However there is a considerable difference between this framework and the usual modeling with spin-foams or spin-networks. The existence of an edge which connects two vertices is in fact probabilistic.
In this way we consider the vertices as fundamental physical quantities, while the edges become dynamic fields. In section \[reciprocal\] we introduce the concept of non-ordered space-time, i.e. an ensemble of vertices without any information on their mutual positions. In section \[arrmatrix\] we define the "arrangement matrix" ($M$), which is a matricial field whose entries define the probability amplitudes for the existence of edges. The arrangement matrix regulates the order of the vertices in space-time, determining the topology of space-time itself. In the same section we extend the concept of derivative to such a non-ordered space-time. In section \[ord\] we define a simple "toy-action" for a quaternionic field in a non-ordered space-time. We show how the imposition of an arrangement on such a space-time automatically generates a metric $h$ which is strictly determined by $M$. In section \[local\] we find a low energy limit under which the "toy-action" becomes a local action after the arrangement imposition. In section \[spin\] we show that a new interpretation of the nature of spin arises spontaneously from our framework. In the same section, the role of the "arrangement matrix" is compared to the role of an external observer. In section \[symmetry\] we anticipate some unpublished results regarding the use of our framework to describe all Standard Model interactions. In section \[entropy\] we apply a second quantization to the "arrangement matrix", turning it into an operator which creates or annihilates edges. We show how this process can give a new interpretation to black hole entropy and the area law. We infer that the quantization of $M$ automatically quantizes $h$, apparently without renormalization problems.
A non-ordered universe \[unordered space\] ------------------------------------------ ### Reciprocal relationship between space-time points \[reciprocal\] Every Euclidean $4$-dimensional space can be approximated by a graph $\L$, that is a collection of vertices connected by edges of length $\Delta$. We recover the continuous space in the limit $\Delta \ra 0$. Moreover we can pass from the Euclidean space to the Lorentzian space-time by extending holomorphically any function in the fourth coordinate $x_4 \ra ix_4$ [@minko]. In non-commutative geometry, one can assume that a first vertex is connected to a second without the second being connected to the first. This means that connections between vertices are made by two oppositely oriented edges, which we can represent by a couple of arrows. We assume the vertices as fundamental quantities. Then we can select which couples of vertices are connected by edges; different choices of couples generate different graphs, which in the limit $\Delta \ra 0$ correspond to different spaces. Our fundamental assumption is that the existence of an edge follows a probabilistic law, like any other quantity in QM. We draw any pair of vertices, denoted by $v_{1}$ and $v_{2}$, and connect them by a couple of arrows oriented in opposite directions. Before proceeding, we extend the common definition of probability amplitude. Usually this is a complex number, whose square module represents a probability and so is less than or equal to one. We define instead the probability amplitude as an element in the division ring of quaternionic numbers, commonly indicated with $\mathbf{H}$. Its square module again represents a probability and so is less than or equal to one. A quaternion $q$ has the form $q = a+ib+jc+kd$ with $a,b,c,d \in \mathbf{R}$, $i^2 = j^2 = k^2 = -1$ and $ij = -ji = k$, $jk = -kj =i$, $ki = -ik =j$. We write a quaternionic number near the arrow which moves from $v_1$ to $v_2$.
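The quaternion algebra above is easy to make concrete. A minimal sketch (Python chosen for illustration; the class name and layout are ours, not the paper's) encoding the Hamilton product derived from the relations $i^2 = j^2 = k^2 = -1$, $ij = -ji = k$, $jk = -kj = i$, $ki = -ik = j$, together with the square module used for amplitudes:

```python
# Minimal sketch of the quaternion algebra H used for edge amplitudes:
# q = a + ib + jc + kd.

class Quaternion:
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, o):
        # Hamilton product, expanded from the unit relations above.
        return Quaternion(
            self.a*o.a - self.b*o.b - self.c*o.c - self.d*o.d,
            self.a*o.b + self.b*o.a + self.c*o.d - self.d*o.c,
            self.a*o.c - self.b*o.d + self.c*o.a + self.d*o.b,
            self.a*o.d + self.b*o.c - self.c*o.b + self.d*o.a,
        )

    def norm2(self):
        # Square module; an admissible amplitude requires norm2() <= 1.
        return self.a**2 + self.b**2 + self.c**2 + self.d**2

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

ij = i * j   # should reproduce k
ji = j * i   # should reproduce -k
```

Note the non-commutativity ($ij \neq ji$), which will matter later when quaternionic factors are reordered.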
It corresponds to the probability amplitude for the existence of an edge which connects $v_1$ with $v_2$. We do the same thing for the other arrow, writing the probability amplitude for the existence of an edge which connects $v_2$ with $v_1$ \[geometry\]. ![image](fig1.jpg){width="40.00000%"} A non-drawn arrow corresponds to an arrow with number $0$. In principle, for every pair of vertices there exists a couple of arrows which connect them, possibly with label $0$. We can describe our universe by means of vertices connected by couples of arrows, with a quaternionic number next to each arrow, as shown in figure \[eq: network\], below. ![We can describe our universe by means of vertices connected by couples of arrows, with a quaternionic number next to each arrow.[]{data-label="eq: network"}](fig2.jpg){width="70.00000%"} What we are building is another variation of Penrose's spin-network model [@spinnet] or the spin-foam models [@spinfoam1], [@spinfoam2] in Loop Quantum Gravity [@loopgravity], which generalize Feynman diagrams. ### The Matrix relating couples of points \[arrmatrix\] Given a spin-network, like the one in figure \[eq: network\], we can move from the picture to the "arrangement matrix" $M$, which is a simple table constructed as follows. We enumerate all the vertices in the graph at our will, provided we enumerate all of them. Typically we think of indexing the vertices by the usual sequence of integers $1,2,3,4,5,\ldots$ Then we create a matrix whose rows and columns are enumerated in the same way as the vertices in the graph. Then we look at the vertices $v_{i}$ and $v_{j}$: in the entry $(i,j)$ we report the number situated near the arrow which moves from $v_i$ to $v_j$. Similarly, in the entry $(j,i)$ we report the number written near the opposite arrow. Remember that an absent arrow is an arrow with number $0$, and consider for the moment $|M^{ij}|\leq 1$ for every $ij$.
![image](fig3.jpg){width="30.00000%"} \[figuranuova\] In principle, we can imagine an entry $M_{ij} \neq M_{ji}$, even with $|M_{ij}|^2 \neq |M_{ji}|^2$. This means that $v_i$ may be connected to $v_j$ even if $v_j$ is not connected to $v_i$. In that case, a non-commutative geometry is involved. The probability amplitude that $v_i$ and $v_j$ are mutually connected (we could talk about a "classical" connection) is: $$Cl.ampl. \propto M_{ij} M_{ji}$$ The probability amplitude for the vertex $v_i$ to be classically connected with any other vertex (hence not isolated) is: $$Cl.ampl. \propto \sum_j M_{ij} M_{ji} = (M \cdot M)_{ii}$$ We can imagine our table with elements $M_{ij}$ as a machine which "creates" junctions between vertices, by connecting them to each other or closing a single vertex onto itself through a loop. The loops are obviously represented by the diagonal elements of the matrix, of the form $(i,i)$. Now let's ask ourselves: is it necessary to know where the vertices are located? Let's look at the Standard Model action: it is given by a sum (or more properly, an integral), over *all* the points of the universe, of locally defined terms. Any term is defined on a single point. Since the terms are separated - a term for each point - and we integrate all of them, we do not need to know where the points physically are. However, there are terms which are not strictly local, i.e. those containing the derivative operator $\partial$. The operator $\partial$, acting on a field $\varphi$ in the point $v_{j}$, calculates the difference between the value of $\varphi$ in a point "immediately after" $v_{j}$ and the value of $\varphi$ "immediately before" $v_{j}$. In the discretized theory, the integral over points becomes a sum over the vertices of the graph. Similarly, the derivative becomes a finite difference. Hence, for terms containing $\partial$, we need a clear definition of "before" and "after", that is, an arrangement of the vertices, as defined by the matrix $M$.
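The two amplitude formulas can be checked on a toy matrix. In this sketch complex numpy entries stand in for the quaternionic amplitudes (an assumption made to stay within plain numpy), and the matrix values are arbitrary illustrations:

```python
import numpy as np

# Toy arrangement matrix for 4 vertices; complex entries stand in for
# the quaternionic amplitudes of the text.  Values are illustrative.
M = np.array([
    [0.0, 0.6, 0.0, 0.3j],
    [0.5, 0.0, 0.7, 0.0],
    [0.0, 0.2, 0.0, 0.4],
    [0.1, 0.0, 0.8, 0.0],
])

def mutual_amplitude(M, i, j):
    # Amplitude (up to normalization) that v_i and v_j are mutually
    # ("classically") connected: M_ij * M_ji.
    return M[i, j] * M[j, i]

# Amplitude that v_i is classically connected to *some* vertex:
# sum_j M_ij M_ji = (M . M)_ii, i.e. the diagonal of M @ M.
diag = np.diag(M @ M)
```

For the first row, `diag[0]` collects $M_{01}M_{10} + M_{03}M_{30} = 0.6\cdot 0.5 + 0.3i\cdot 0.1$, matching the formula $(M\cdot M)_{ii}$.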
We consider a scalar field but don't represent it with the usual function (or distribution) $\varphi\left( x\right)$. Instead we denote it with a column of elements (an array) where each element is the value of the field in a specific vertex of the graph. For example (with only 7 vertices): $$\varphi=\left( \begin{array} [c]{c}\varphi\left( p_{0}\right) \\ \varphi\left( p_{1}\right) \\ \varphi\left( p_{2}\right) \\ \varphi\left( p_{3}\right) \\ \varphi\left( p_{4}\right) \\ \varphi\left( p_{5}\right) \\ \varphi\left( p_{6}\right) \end{array} \right)$$ For simplicity, we start with a one-dimensional graph: it's easy to see how the derivative operator is proportional to an antisymmetric matrix $\tilde{M}$ whose elements are different from zero only immediately above the diagonal (where they equal $+1$), and immediately below (where they equal $-1$). We can see this, for example, in a "toy-graph" formed by only $12$ separated vertices (figure \[cerchio\]). The argument remains true while increasing the number of vertices. $$\! \partial\varphi \! = \!\fr 1 {2\Delta} \!\left( \begin{array} [c]{cccccccccccc}0 & +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1\\ -1 & 0 & +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -1 & 0 & +1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 0 & +1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & +1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & +1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & +1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & +1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & +1\\ +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \end{array} \right) \!\!\!
\left( \begin{array} [c]{c}\varphi\left( 0\right) \\ \varphi\left( 1\right) \\ \varphi\left( 2\right) \\ \varphi\left( 3\right) \\ \varphi\left( 4\right) \\ \varphi\left( 5\right) \\ \varphi\left( 6\right) \\ \varphi\left( 7\right) \\ \varphi\left( 8\right) \\ \varphi\left( 9\right) \\ \varphi\left( 10\right) \\ \varphi\left( 11\right) \end{array} \right)$$ $$= \fr 1 {2\Delta} \left( \begin{array} [c]{c}\varphi\left( 1\right) -\varphi\left( 11\right)\\ \varphi\left( 2\right) -\varphi\left( 0\right)\\ \varphi\left( 3\right) -\varphi\left( 1\right)\\ \varphi\left( 4\right) -\varphi\left( 2\right)\\ \varphi\left( 5\right) -\varphi\left( 3\right)\\ \varphi\left( 6\right) -\varphi\left( 4\right)\\ \varphi\left( 7\right) -\varphi\left( 5\right)\\ \varphi\left( 8\right) -\varphi\left( 6\right)\\ \varphi\left( 9\right) -\varphi\left( 7\right)\\ \varphi\left( 10\right) -\varphi\left( 8\right)\\ \varphi\left( 11\right) -\varphi\left( 9\right)\\ \varphi\left( 0\right) -\varphi\left( 10\right) \end{array} \right)$$ \[eq: matrix\_dev\] $$\pt$$ ![[A simple graph with $12$ vertices which approximates a circular one-dimensional space.]{}[]{data-label="cerchio"}](fig4.jpg){width="40.00000%"} $\Delta$ is the length of the graph edges. In the continuous limit $\Delta \ra 0$ (which occurs in Hausdorff spaces, where the matricial product turns into a convolution), we obtain $$\begin{aligned} \partial\varphi(x) &=& \lim_{\Delta \ra 0} \fr 1 {2\Delta} \int \tilde{M}(x,y)\, \varphi(y)\, dy \\ &=& \lim_{\Delta \ra 0} \fr 1 {2\Delta} \int \left[ \delta(y-x-\Delta) - \delta(y-x+\Delta) \right] \varphi(y)\, dy \\ &=& \lim_{\Delta \ra 0} \fr {\varphi(x+\Delta)-\varphi(x-\Delta)} {2\Delta} = \varphi'(x) .\end{aligned}$$ In this way our definition is consistent with the usual definition of derivative. While increasing the number of points, a $(-1)$ still remains in the upper right corner of the matrix, and a $(+1)$ in the lower left corner as well. To remove those two non-null terms, it is sufficient to make them unnecessary, by imposing boundary conditions that make the field null in the first and in the last point. In fact we can describe an open universe (a straight line in one dimension) starting from a closed universe (a circle) and letting the radius tend to infinity. Hence we see that the conditions of null field in the first and in the last point become the traditional boundary conditions for the Standard Model fields. Note that in spaces with more than one dimension, a derivative matrix $\tilde{M}_\mu$ assumes the form (\[eq: matrix\_dev\]) only if we number the vertices progressively along the coordinate $\mu$. However, two different numberings can always be related by a permutation of the vertices.
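The finite-difference matrix above and its continuum behaviour can be verified numerically. A small sketch (numpy; the variable names are ours) building the antisymmetric $\tilde{M}$ of the 12-vertex circular graph and checking that $\fr 1 {2\Delta}\tilde{M}\varphi$ approximates the derivative of a sample field:

```python
import numpy as np

n = 12
# Antisymmetric derivative matrix of the 12-vertex circular graph:
# +1 just above the diagonal, -1 just below, with the corner entries
# (-1 top right, +1 bottom left) closing the circle.
Mtilde = np.zeros((n, n))
for a in range(n):
    Mtilde[a, (a + 1) % n] = +1.0
    Mtilde[a, (a - 1) % n] = -1.0

delta = 2 * np.pi / n            # edge length on a unit circle
x = delta * np.arange(n)
phi = np.sin(x)                  # sample field on the vertices

# Central finite difference: row a gives (phi(a+1) - phi(a-1)) / (2*delta),
# which approximates cos(x) with O(delta^2) error.
dphi = (Mtilde @ phi) / (2 * delta)
```

Increasing `n` shrinks the error, mirroring the continuum limit $\Delta \ra 0$ discussed in the text.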
A quaternionic field action in a non-ordered space-time \[ord\] --------------------------------------------------------------- For any graph $\L$ we define its *associated non-ordered space* $\mathbf{S}_\Lambda$ as the ensemble of all its vertices. The graph includes vertices plus edges (ordered connections between vertices), while the *associated non-ordered space* contains only vertices. In some sense, $\S$ doesn't know where any vertex is. Consider a *numbering function* $\pi$, that is, any bijection from $X \subset\mathbf{N}$ to the non-ordered space: $$\pi : X \ra \S, \qquad i \mapsto v_i = \pi(i) .$$ In this way, every vertex $v_i$ in $\S$ is in one-to-one correspondence with an integer $i \in X\subset\mathbf{N}$. This means that the ensemble of vertices has to be at most countable. We consider a generic invertible matrix $M$ and interpret any entry $M^{ij}$ of $M$ as the probability amplitude for the existence in $\L$ of an edge which connects $\pi(i)$ with $\pi(j)$. Remember that a couple of vertices can be connected by at most two oriented edges with different orientations. $M^{ij}$ defines the probability amplitude for the edge which moves from $\pi(i)$ to $\pi(j)$, while $M^{ji}$ defines the probability amplitude for the edge which moves from $\pi(j)$ to $\pi(i)$. Note that in four dimensions we have to number the vertices by elements $(i,j,k,l)$ in $\mathbf{N}^4$ before taking the limit $\Delta \ra 0$. In this way $\sum_{(i,j,k,l)}\Delta^4$ becomes $\int dx^0 dx^1 dx^2 dx^3$. If, as we have suggested, the vertices have already been numbered with elements of $\mathbf{N}$, we can change the numbering by using the natural bijection $\vartheta$ between $\mathbf{N}$ and $\mathbf{N}^4$, with $(i,j,k,l) = \vartheta (a)$, $(i,j,k,l) \in \mathbf{N}^4$ and $a \in \mathbf{N}$.
Given any skew hermitian matrix $A_\mu$, with entries in $\mathbf{H}$, and a skew hermitian matrix $\tilde{M}_\mu$, which assumes the form (\[eq: matrix\_dev\]) when the vertices are numbered along the coordinate $\mu$, their associated covariant derivative is $$\nabla_{\mu}=\tilde{M}_{\mu}+ A_{\mu}.$$ We indicate with $n$ the number of elements inside $X \subset \mathbf{N}$. Given a normal matrix $\hat{M}$ and four covariant derivatives $\nabla_\mu$ ($\mu = 0,1,2,3$) with dimensions $n \times n$, an *arrangement* for $\hat{M}$ is a quadruplet of couples $(\hat{D}^\mu,\hat{U})$, with $\hat{D}^\mu$ diagonal and $\hat{U}$ hyperunitary, such that $$\hat{M} = \sum_\mu \hat{U}\hat{D}^\mu \nabla_\mu \hat{U}^{\dag}.$$ We require that the covariant derivative be form-invariant under the action of a transformation $V \in U(n,\mathbf{H})$ which acts both on $\tilde{M}_\mu$ and $A_\mu$. We expand $V\nabla_{\mu}V^{\dag}$: $$\begin{aligned} V\nabla_{\mu}V^{\dag} & = V\left( \tilde{M}_{\mu}+A_{\mu}\right) V^{\dag}\\ & = \underset{=1}{\underbrace{VV^{\dag}}}\tilde{M}_{\mu}+V\left[ \tilde {M}_{\mu},V^{\dag}\right] +VA_{\mu}V^{\dag}\nonumber .\end{aligned}$$ Setting $$A_{\mu}^{\prime}= V\left[ \tilde{M}_{\mu},V^{\dag}\right] +VA_{\mu}V^{\dag},$$ we obtain $$V\nabla_{\mu}V^{\dag}=\tilde{M}_{\mu}+A_{\mu}^{\prime}\overset{def}{=}\nabla_{\mu}^{\prime} \label{eq: der_A}$$ which means $$V\nabla_{\mu}[A]V^{\dag} = \nabla_\mu [A'].$$ Hence the transformation law for the matrix $A_{\mu}$ is as we expect: $$A_{\mu}\rightarrow {A'}_\mu = V\left[ \tilde{M}_{\mu},V^{\dag}\right] +VA_{\mu}V^{\dag} . \label{eq: trasforma_A}$$ We observe that (\[eq: trasforma\_A\]) preserves the skew-hermiticity of $A_{\mu}$.
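The transformation law for $A_\mu$ and the form-invariance of the covariant derivative can be checked numerically. In this sketch complex matrices stand in for the quaternionic case (an assumption for the sake of plain numpy), and the matrix sizes and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_skew_hermitian(n):
    # X - X^dag is skew hermitian for any X.
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return X - X.conj().T

def random_unitary(n):
    # QR of a random complex matrix yields a unitary Q.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                        + 1j * rng.standard_normal((n, n)))
    return Q

Mt = random_skew_hermitian(n)    # stands in for \tilde{M}_mu
A = random_skew_hermitian(n)     # gauge field A_mu
V = random_unitary(n)

# A -> A' = V [Mt, V^dag] + V A V^dag
Aprime = V @ (Mt @ V.conj().T - V.conj().T @ Mt) + V @ A @ V.conj().T

# Form invariance: V (Mt + A) V^dag should equal Mt + A'.
lhs = V @ (Mt + A) @ V.conj().T
rhs = Mt + Aprime
```

The second assertion below is the numerical counterpart of the skew-hermiticity computation that follows in the text.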
In fact $$\begin{aligned} {A'}_{\mu}^{\dag} &=& \left( V\left[ \tilde{M}_{\mu},V^{\dag}\right] + VA_{\mu}V^{\dag}\right)^{\dag} \\ &=& \left( V\tilde{M}_{\mu}V^{\dag} - \tilde{M}_{\mu} + VA_{\mu}V^{\dag}\right)^{\dag} \\ &=& V\tilde{M}_{\mu}^{\dag}V^{\dag} - \tilde{M}_{\mu}^{\dag} + VA_{\mu}^{\dag}V^{\dag} \\ &=& -V\tilde{M}_{\mu}V^{\dag} + \tilde{M}_{\mu} - VA_{\mu}V^{\dag} \\ &=& -\left( V\left[ \tilde{M}_{\mu},V^{\dag}\right] + VA_{\mu}V^{\dag}\right) = -{A'}_{\mu} .\end{aligned}$$ It's easy to see that (\[eq: trasforma\_A\]) reduces to the usual transformation for a gauge field ${A'}_{\mu} = V\partial_{\mu}V^{\dag}+VA_{\mu}V^{\dag}$ in the limit $\Delta \ra 0$. For every invertible normal matrix $\hat{M}$ and every covariant derivative $\nabla[A]_\mu$ which is invertible (in the matricial sense), there exist 1. A new quadruplet of covariant derivatives ${\nabla'}_\mu = \nabla[A']_\mu$ such that $D^\mu {\nabla'}_\mu = 1$ for some diagonal matrix $D^\mu$, where $A^\prime_\mu$ is the gauge transformed of $A_\mu$ for some unitary transformation $U$; 2. An arrangement $(\hat{D}^\mu,\hat{U})$ between $\hat{M}$ and $\nabla^\prime_\mu$. \[existence\] According to the spectral theorem, $\forall \hat{M} \in\mathbb{M}^{(N)}$ $\exists \hat{U}$ hyperunitary such that $\hat{U}\hat{M}\hat{U}^{\dag}= K$ with $K$ diagonal. $\hat{M}$ is invertible, so the same is true for $K$. Setting $\hat{D}= K^{-1}$: $$\begin{aligned} \hat{U}\hat{M}\hat{U}^{\dag}\hat{D} = K\hat{D} = KK^{-1} = 1\label{ordinamento}\\ \hat{D}\hat{U}\hat{M}\hat{U}^{\dag}= \hat{D}K = K^{-1}K = 1 .\nonumber\end{aligned}$$ At this point we choose a covariant derivative $\nabla_\mu$ (which is also a normal matrix) and we reason as we did above for $\hat{M}$, putting $$1 =D^{\mu}U\nabla_{\mu}U^{\dag}=U\nabla_{\mu}U^{\dag}D^{\mu} \label{eq: riduzioneB}$$ for some $D^\mu$ diagonal and $U$ unitary. No sum over repeated indices is implied. A well known theorem states that $U$ can be chosen in such a way that $D^\mu$ takes values in $\mathbf{C}$. Moreover we can always find a quaternion $s$ with $|s|=1$ such that, if $D^\mu$ takes values in $\mathbf{C} = \mathbf{R} \oplus i\mathbf{R}$, then $s^* D^\mu s$ will take values in $\mathbf{C} = \mathbf{R} \oplus (ri+tj+pk)\mathbf{R}$, with fixed $r,t,p \in \mathbf{R}$ and $r^2+t^2+p^2 =1$.
Every $s$ with $|s|=1$ describes in fact a rotation in the $3$-dimensional space with base elements $i,j,k$. Introducing such an $s$, the equation (\[eq: riduzioneB\]) becomes $$s^* 1\, s = s^* D^{\mu} s\, s^{*} U \nabla_{\mu} U^{\dag} s .$$ Now we note that $s^*U$ is another hyperunitary transformation. Redefining $s^* D^\mu s \ra D^\mu$, $s^* U \ra U$ we again obtain $$1 = D^{\mu} U \nabla_{\mu} U^{\dag} .$$ In this way we can always choose in which complex plane $D^\mu$ lies. In the following we call this property "$s$-invariance". Using (\[eq: der\_A\]) into (\[eq: riduzioneB\]): $$1=D^\mu \nabla_{\mu}^{\prime}=\nabla_{\mu}^{\prime}D^\mu \Longrightarrow\left[ \nabla_{\mu}^{\prime},D^\mu \right] = 0 .$$ Taking into account (\[ordinamento\]): $$\begin{aligned} \hat{D}\hat{U}\hat{M} \hat{U}^{\dag} & =D^\mu \nabla_{\mu}^{\prime}\\ \hat{U}\hat{M} \hat{U}^{\dag}\hat{D} & =\nabla_{\mu}^{\prime}D^\mu \nonumber .\end{aligned}$$ Summing on $\mu$ we obtain: $$\begin{aligned} 4\hat{D}\hat{U}\hat{M} \hat{U}^{\dag} & = \sum_\mu D^\mu \nabla_{\mu}^{\prime}\\ 4\hat{U}\hat{M} \hat{U}^{\dag}\hat{D} & = \sum_\mu \nabla_{\mu}^{\prime}D^\mu . \nonumber\end{aligned}$$ Solving for $\hat{M}$: $$\hat{M} = \fr 14 \sum_\mu \hat{U}^{\dag} \hat{D}^{-1} D^{\mu} \nabla_{\mu}^{\prime} \hat{U} = \fr 14 \sum_\mu \hat{U}^{\dag} \nabla_{\mu}^{\prime} D^{\mu} \hat{D}^{-1} \hat{U} .$$ Defining $\hat{D}^\mu$ as $\fr 14 \hat{D}^{-1} D^\mu $: $$\hat{M} = \sum_\mu \hat{U}^{\dag} \hat{D}^{\mu} \nabla_{\mu}^{\prime} \hat{U} .$$ \[finale-dim\] QED Note that in general $\hat{M} \neq \sum_\mu \hat{U}^{\dag} \nabla_{\mu}^{\prime} \hat{D}^{\mu}\hat{U}$ because $\hat{D}^{-1} D^\mu \neq D^\mu \hat{D}^{-1}$ due to the non-commutativity of quaternions. For every invertible matrix $M$ with entries in $\mathbf{H}$, a normal matrix $\hat{M} = U_M M$ exists, where $U_M$ is unitary and $\hat{M}$ is neither hermitian nor skew hermitian. \[normal\] Given an invertible matrix $M$, a unique choice of matrices $U$ and $P$ always exists, with $U$ unitary and $P$ hermitian positive, such that $UM = P$. Moreover, a well known theorem states that, for every hermitian matrix $P$ with entries in $\mathbf{H}$, there exist $I,J,K$ skew hermitian unitary matrices which commute with $P$.
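The spectral step of the proof can be illustrated numerically: build a normal invertible matrix, diagonalize it, and check that $\hat{D} = K^{-1}$ inverts $\hat{U}\hat{M}\hat{U}^{\dag}$ from both sides, as in (\[ordinamento\]). Complex entries stand in for quaternions in this sketch, and the matrix is constructed directly from its spectral data rather than diagonalized from scratch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Build an invertible normal matrix from spectral data: Mhat = Q^dag K Q
# with Q unitary and K diagonal, nonzero eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))
K = np.diag(rng.uniform(1.0, 2.0, n) * np.exp(1j * rng.uniform(0, np.pi, n)))
Mhat = Q.conj().T @ K @ Q        # normal and invertible by construction

# Spectral step: U Mhat U^dag = K for U = Q, so D = K^{-1} gives
# U Mhat U^dag D = D U Mhat U^dag = 1 (the identity).
U = Q
D = np.linalg.inv(U @ Mhat @ U.conj().T)
```

Both orders of multiplication work because $K$ is diagonal and commutes with its own inverse, which is exactly what the proof uses.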
Moreover $I,J,K$ realize the same algebra as the quaternionic imaginary unities $i,j,k$. Consider then the unitary matrix $p = exp((bI + cJ + dK)P)$, with $b,c,d \in \mathbf{R}$. It's easy to see that $[p,P]=0$. Moreover the matrix $\hat{M} = pP$ is normal and it is neither hermitian nor skew hermitian. In fact $$(pP)^\dag = p^{\dag} P = p^{-1} P \neq \pm pP$$ $$(pP)(pP)^\dag = (Pp)(Pp)^\dag = Ppp^\dag P^\dag = PP = Pp^\dag pP = P^\dag p^\dag pP = (pP)^\dag (pP)$$ Moreover $$\hat{M} = pUM = U_M M\qquad U_M = pU\,\,\, unitary.$$ For every invertible matrix $M$, we define an *associated normal matrix* as a normal matrix obtained through the construction above. We indicate it with $\hat{M}$ and use the notation $U_M$ for the unitary transformation which transforms $M$ in $\hat{M} = U_M M$. For every $n \times n$ invertible matrix $M$ with entries in $\mathbf{H}$ and every quadruplet of covariant derivatives $\nabla[A]_\mu$ which are invertible (in the matricial sense), there exist 1. An associated normal matrix $\hat{M} = U_M M$ with $U_M$ unitary; 2. A new quadruplet of covariant derivatives ${\nabla'}_\mu = \nabla[A']_\mu$ such that $D^\mu {\nabla'}_\mu = 1$ for some diagonal matrix $D^\mu$, where $A^\prime_\mu$ is the gauge transformed of $A_\mu$ for some unitary transformation $U$; 3. An arrangement $(\hat{D}_\mu,\hat{U})$ between $\hat{M}$ and $\nabla^\prime_\mu$ such that $$S = (M\phi)^{\dag}(M\phi) =\sum_{i=1}^n \sum_{\mu,\nu} \sqrt{|h|}\,h^{\mu\nu}(x_i)\, \left({\nabla'}_\mu\phi^\prime(x_i)\right)^* \left({\nabla'}_\nu\phi^\prime(x_i)\right) . \label{azione-sca}$$ Here $\phi$ is a one-component quaternionic field, while $$\begin{aligned} && x_i \equiv x(i)\\ && \phi^\prime(x_{i})= {\phi^\prime}^{i}(x)= \sum_{j} \hat{U}^{ij}\phi^{j}(x) = \sum_{j} \hat{U}^{ij}\phi(x_{j})\\ && \sqrt{|h|}\, h^{\mu\nu}(x_i) = \fr 12 d^{\mu *}d^{\nu}(x_i) +c.c.\qquad\quad \hat{D}_\mu^{ij} = d^{\mu}(x_i)\,\delta_{ij} . \label{transf-scalar}\end{aligned}$$ The existence of $\nabla^\prime_\mu = \nabla[A']_\mu$ follows from the proof of theorem \[existence\], while the existence of an associated normal matrix $\hat M = U_M M$ descends from theorem \[normal\].
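The construction of the associated normal matrix can be checked numerically. The sketch below is a complex-matrix stand-in (quaternionic entries replaced by complex ones, and $exp((bI+cJ+dK)P)$ replaced by $exp(ibP)$, with a hypothetical size and value of $b$), so it illustrates the mechanism rather than the full quaternionic statement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random invertible complex matrix standing in for the quaternionic M.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Hermitian positive P with U M = P: take P = (M^dag M)^{1/2}, U = P M^{-1}.
w, V = np.linalg.eigh(M.conj().T @ M)
P = V @ np.diag(np.sqrt(w)) @ V.conj().T
U = P @ np.linalg.inv(M)

# p = exp(i b P) commutes with P (complex stand-in for exp((bI+cJ+dK)P)).
b = 0.7
wp, Vp = np.linalg.eigh(P)
p = Vp @ np.diag(np.exp(1j * b * wp)) @ Vp.conj().T

M_hat = p @ P                      # associated normal matrix, M_hat = (pU) M

is_unitary = np.allclose(U @ U.conj().T, np.eye(4))
commutes = np.allclose(p @ P, P @ p)
is_normal = np.allclose(M_hat @ M_hat.conj().T, M_hat.conj().T @ M_hat)
not_hermitian = not np.allclose(M_hat, M_hat.conj().T)
not_skew = not np.allclose(M_hat, -M_hat.conj().T)
```

The generic choice $b \neq 0$ is what keeps $\hat M$ away from the hermitian and skew-hermitian cases.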
Hence we see that the first action in (\[azione-sca\]) is invariant for transformations $(U_1,U_2)$ in $U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ which send $M$ in $U_2 M U_1^\dag$ and $\phi$ in $U_1\phi$. In fact $$\begin{aligned} S[\phi] &=& \phi^\dag M^\dag M \phi\\ &\ra& \phi^\dag U_1^\dag(U_2 M U_1^\dag)^\dag(U_2 M U_1^\dag) U_1\phi\\ &=& \phi^\dag U_1^\dag U_1 M^\dag U_2^\dag U_2 M U_1^\dag U_1\phi\\ &=& \phi^\dag M^\dag M \phi = S[\phi]\end{aligned}$$ If we set $U_1 = 1$ and $U_2 = U_M$ we have $$S[\phi] \ra \phi^\dag M^\dag U_M^\dag U_M M\phi = \left\{ \begin{array}{ll} = \phi^\dag M^\dag M \phi = S[\phi]\\ = \phi^\dag \hat{M}^\dag \hat{M} \phi \end{array} \right. \label{sca-nuova}$$ We substitute (\[finale-dim\]) in (\[sca-nuova\]) with $\hat{M}$ in place of $M$. $$\begin{aligned} S[\phi] &=& \sum_{\mu,\nu} \left( \hat{U}^\dag\hat{D}^\mu\nabla_\mu^\prime\hat{U}\phi\right)^\dag\left( \hat{U}^\dag\hat{D}^\nu\nabla_\nu^\prime\hat{U}\phi\right)\\ &=& \sum_{\mu,\nu} \left( \phi^\dag\hat{U}^\dag{\nabla_\mu^\prime}^\dag\hat{D}^{\mu\dag}\hat{U}\, \hat{U}^\dag\hat{D}^\nu\nabla_\nu^\prime\hat{U}\phi\right)\\ &=& \sum_{\mu,\nu} \left( \phi^\dag\hat{U}^\dag{\nabla_\mu^\prime}^\dag\hat{D}^{\mu\dag} \hat{D}^\nu\nabla_\nu^\prime\hat{U}\phi \right)\\ &=& \sum_{\mu,\nu} \left( \phi^{\prime\dag}{\nabla_\mu^\prime}^\dag\hat{D}^{\mu\dag} \hat{D}^\nu\nabla_\nu^\prime\phi^\prime\right)\\ &=& \sum_{\mu,\nu} \left( \nabla_\mu^\prime\phi^\prime \right)^\dag \hat{D}^{\mu\dag}\hat{D}^\nu \left( \nabla_\nu^\prime\phi^\prime\right) .\end{aligned}$$ In the last step we have taken into account the definition (\[transf-scalar\]). Finally $$S = \fr 12 \sum_{i,\mu,\nu} \left(\nabla_\mu^{\prime}\phi^{\prime}\right)^{*i} \left( d^{\mu *} d^{\nu} +c.c. \right)(x_i)\, \left(\nabla_\nu^{\prime}\phi^{\prime}\right)^{i} .$$ It is remarkable that $\hat{D}_{\mu}$ is diagonal: $$\hat{D}^{\mu}_{ij}=d^{\mu}\left( x_{i}\right) \delta_{ij} .$$ We can set $$\sqrt{\left\vert h\right\vert }h^{\mu\nu}\left( x_{i}\right) = \fr 12 d^{\mu *}d^{\nu}\left( x_{i}\right) +c.c.$$ and then $$S= {\displaystyle\sum\limits_{i,\mu,\nu}} \sqrt{\left\vert h\right\vert }h^{\mu\nu}(x_{i})\left( \nabla_{\mu}^{\prime }\phi^{\prime}\right)^{*i} \left( \nabla_{\nu}^{\prime} \phi^{\prime}\right)^{i} . \label{ultima2}$$ QED. The action of a transformation $(U_1, U_2)$ on $\nabla'$ follows from its action on $M$. We can always use the invariance under $U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ to put $M$ in the form $M =\sum_\mu \hat{D}^\mu \nabla'_\mu$. Starting from this we have $$U_2 M U_1^\dag = \sum_\mu U_2 \hat{D}^\mu {\nabla'}_\mu U_1^\dag = \sum_\mu U_2 \hat{D}^\mu U_1^\dag\, U_1{\nabla'}_\mu U_1^\dag . \label{transf}$$ We define ${\nabla''}_\mu = U_1\nabla'_\mu U_1^\dag$ the transformed of $\nabla'$ under $(U_1, U_2)$ and $\hat{D}^{\prime\mu} = U_2 \hat{D}^\mu U_1^\dag$ the transformed of $\hat{D}^\mu$.
We assume that $A^\prime_\mu$ inside $\nabla^\prime_\mu$ transforms correctly as a gauge field, so that $$\nabla^\prime [A^\prime]_\mu \phi' = \nabla^\prime [A^\prime]_\mu U_1^\dag \phi'' = U_1^\dag \nabla'' [A^\prime]_\mu \phi'' = U_1^\dag \nabla^\prime [A^\prime_{U1}]_\mu \phi''$$ $$\phi'' = U_1 \phi' .$$ We want $\hat{D}^{\prime\mu}$ to remain diagonal and $h' = h[\hat{D}'] = h[\hat{D}]$. In this case there are two relevant possibilities: 1. $\hat{D}$ is a matrix made by blocks $m \times m$ with $m$ integer divisor of $n$ and every block proportional to identity. In this case the residual symmetry is $U(1,\mathbf{H})^n \times U(m,\mathbf{H})^{n/m}$ with elements $(sV, V)$, $s$ both diagonal and unitary, $V \in U(m,\mathbf{H})^{n/m}$; 2. $h$ is any diagonal matrix. The symmetry reduces to $U(1,\mathbf{H})^n \otimes U(1,\mathbf{H})^n$ which is local $U(1,\mathbf{H}) \otimes U(1,\mathbf{H}) \sim SU(2) \otimes SU(2) \sim SO(4)$. In this way, if we keep the metric $h$ fixed and keep $\hat{D}$ diagonal, the new action will be invariant at least under $U(1,\mathbf{H})^n \otimes U(1,\mathbf{H})^n$, which doesn't modify $h$. Note however that the action (\[ultima2\]) is highly non local, because the fields $A_\mu(x^a, x^b)$ with $a \neq b$ can relate couples of vertices very far from each other. In fact the transformations in $U(n,\mathbf{H})$ mix all the vertices in the universe independently of their position. In the next section we'll discover in which limit (besides $\Delta \ra 0$) the action (\[ultima2\]) becomes local. Let us now pause on the metric $h^{\mu\nu}$. We observe how the metric $h$ has appeared from nowhere. We get the impression that the metric does not exist "a priori", but is generated by the matrices $\hat{D}$. In other words: the metric is simply the result of our desire to see an ordered universe at any cost.
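The requirement $h' = h[\hat D'] = h[\hat D]$ under the residual symmetry can be checked numerically. The sketch below represents quaternions as $2\times 2$ complex matrices (one standard convention, chosen only for illustration) and verifies that the Gram-type metric built from $Re(d^{\mu*}d^{\nu})$ is unchanged under $d^\mu \ra s_2 d^\mu s_1^*$ with $|s_1|=|s_2|=1$:

```python
import numpy as np

def quat(a, b, c, d):
    """q = a + b i + c j + d k as a 2x2 complex matrix."""
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def re(q):          # real part of a quaternion in this representation
    return q.trace().real / 2

def conj(q):        # quaternionic conjugate
    return q.conj().T

def unit_quat(v):   # unit quaternion: |q|^2 = det(q) in this representation
    q = quat(*v)
    return q / np.sqrt(np.linalg.det(q).real)

rng = np.random.default_rng(1)
d = [quat(*rng.normal(size=4)) for _ in range(4)]   # the four fields d^mu
s1, s2 = unit_quat(rng.normal(size=4)), unit_quat(rng.normal(size=4))

# Metric-like Gram form h^{mu nu} ~ Re(d^{mu *} d^nu), before and after.
h  = np.array([[re(conj(d[m]) @ d[n]) for n in range(4)] for m in range(4)])
dp = [s2 @ q @ conj(s1) for q in d]                 # d^mu -> s2 d^mu s1*
hp = np.array([[re(conj(dp[m]) @ dp[n]) for n in range(4)] for m in range(4)])

invariant = np.allclose(h, hp)
```

The invariance follows from $Re(s\,q\,s^*) = Re(q)$ for $|s|=1$, which in this representation is just cyclicity of the trace.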
Note that we have chosen the matrix $\nabla$ between skew hermitian matrices, so that the gauge fields $AR_i$ have real eigenvalues, corresponding to effectively measurable quantities[^6]. Conversely, $\hat{M}$ must remain generically normal. In fact, if $\hat{M}$ was (skew) hermitian, the fields $d$ would become (imaginary) real, and there would not be enough degrees of freedom to construct the metric $h$. We focus on the relationship: $$\sqrt{\left\vert h\right\vert }h^{\mu\nu}\left( x_{i}\right) = \fr 12 d^{\mu *}d^{\nu}\left( x_{i}\right) +c.c. \label{eq: metrica_ord}$$ We set: $$d = \left( \begin{array}[c]{c} a_{0}+ib_{0}+jc_0+kd_0\\ a_{1}+ib_{1}+jc_1+kd_1\\ a_{2}+ib_{2}+jc_2+kd_2\\ a_{3}+ib_{3}+jc_3+kd_3 \end{array} \right)$$ It's easy to see how $s$-invariance permits us to choose the $D_\mu$ in such a way that the real vectors $$\left( \begin{array}[c]{c}a_{0}\\ a_{1}\\ a_{2}\\ a_{3} \end{array} \right),\quad \left( \begin{array}[c]{c}b_{0}\\ b_{1}\\ b_{2}\\ b_{3} \end{array} \right),\quad \left( \begin{array}[c]{c}c_0\\ c_1\\ c_2\\ c_3 \end{array} \right),\quad \left( \begin{array}[c]{c}d_0\\ d_1\\ d_2\\ d_3 \end{array} \right)$$ will be linearly independent. In components, (\[eq: metrica\_ord\]) gives $$h^{-1} = \left( a_{\mu}a_{\nu}+b_{\mu}b_{\nu}+c_{\mu}c_{\nu}+d_{\mu}d_{\nu} \right)_{\mu\nu = 0,\ldots,3} ,$$ i.e. $h^{-1}$ is the symmetric Gram matrix of the four real vectors above. Note that we have $10$ independent metric components as it should be. What would have happened if the entries of $M$ had been simply complex numbers?
In that case we could always take a one-form $X_\nu$ such that $X_\nu (Im\,d^\nu) = X_\nu (Re\,d^\nu) = 0$. The contraction of $X_\nu$ with the metric would be $$\sqrt h h^{\mu\nu} X_\nu = d^{*\mu} (d^\nu X_\nu) + d^\mu (d^{*\nu} X_\nu) = 0 .$$ Hence the metric would be degenerate. For $d^\mu \in \mathbf{H}$ this can't happen, because no one-form can be orthogonal to $4$ linearly independent vectors in a $4$-dimensional space. However, such a one-form does exist in spaces with dimension $>4$. For this reason our theory is not meaningful in the presence of extra dimensions.

A local action from the quaternionic field action {#local}
-------------------------------------------------

Here we show how to obtain a local action from the quaternionic field action in the limit of low energy. We can add quadratic $\sim M^2$ and quartic $\sim M^4$ terms to the action, provided they are gauge invariant. In general we obtain a non-trivial potential of the form $\alpha M^4 - \beta M^2$. We suppose that a minimum of such a potential breaks the symmetry $U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ and provides a mass to the gauge fields $A_\mu$. To see this, it is sufficient to rewrite $M$ as a function of $A_\mu$ and consider a quartic term: $$h^{\mu\alpha}A_{\mu}A_{\alpha}h^{\nu\beta}A_{\nu}A_{\beta} . \label{eq: quartico}$$ For a minimum of $M$ there is a minimum of $A$ which gives sense to the expansion: $$A_{\mu}=A_{\mu}^{\min}+\delta A_{\mu} .$$ Therefore (\[eq: quartico\]) generates a term: $$m\left( x\right) ^{2}h^{\nu\beta}A_{\nu}A_{\beta}$$ $$m\left( x\right) ^{2} = h^{\mu\alpha}A_{\mu}^{\min}A_{\alpha}^{\min}$$ Hence the gauge fields acquire a mass, varying from point to point in the universe and essentially dependent on the metric.
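The degeneracy argument above is easy to test numerically: $\sqrt h\, h^{\mu\nu} = Re(d^{\mu*}d^{\nu})$ is the Gram matrix of the four real component vectors $(a_\mu), (b_\mu), (c_\mu), (d_\mu)$, so its rank collapses when the $j,k$ components vanish. A minimal sketch (toy random components):

```python
import numpy as np

rng = np.random.default_rng(2)

# Row mu holds the components (a_mu, b_mu, c_mu, d_mu) of d^mu.
comp_H = rng.normal(size=(4, 4))          # generic quaternionic d^mu
comp_C = comp_H.copy()
comp_C[:, 2:] = 0.0                       # complex case: j, k parts vanish

# sqrt(h) h^{mu nu} = Re(d^{mu *} d^nu) = Gram matrix of the component rows.
G_H = comp_H @ comp_H.T
G_C = comp_C @ comp_C.T

rank_H = np.linalg.matrix_rank(G_H)       # 4: non-degenerate metric
rank_C = np.linalg.matrix_rank(G_C)       # 2: degenerate metric
```

For generic quaternionic components the metric has full rank; restricting to complex entries leaves at most two independent component vectors, reproducing the degeneracy discussed above.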
Given a potential for $M$, which is both hermitian and invariant under $U(n,\mathbf{H})\otimes U(n,\mathbf{H})$, its minimum configurations are always invariant at least under $U(1,\mathbf{H})^n \otimes U(1,\mathbf{H})^n$, that is a local $U(1,\mathbf{H}) \otimes U(1,\mathbf{H})$. Such a potential contains only terms of the type $tr((MM^\dag)^j)$, $j \in \mathbf{N}$. All we can measure are eigenvalues of hermitian operators, and a hermitian operator has only real eigenvalues $q$, which are invariant under $U(1,\mathbf{H})^n$, i.e. $sqs^* = qss^* =q$ for $|s| =1$. The simplest hermitian operators made from $M$ are $MM^\dag$ and $M^\dag M$, whose eigenvalues are invariant under $$M \ra s_1 M s_2^*\qquad (s_1, s_2) \in U(1,\mathbf{H}) \otimes U(1,\mathbf{H})$$ $$MM^\dag \ra s_2 M s_1^* s_1 M^\dag s_2^* = s_2 MM^\dag s_2^*$$ $$M^\dag M \ra s_1 M^\dag s_2^* s_2 M s_1^* = s_1 M^\dag M s_1^*$$ In this manner we always have $m =0$ for the diagonal fields $A_\mu (x^a, x^a) \overset{!}{=} A_\mu (x^a)$. A transformation $(s_1,s_2) \in U(1, \mathbf{H})\otimes U(1, \mathbf{H})$ acts inside the action in the expected way (see formula (\[transf\])) $$\phi' \ra s_1 \phi' \overset{!}{=} \phi''$$ $${\nabla'}[A']_\mu \ra s_1 {\nabla'}[A']_\mu s_1^* = {\nabla'}[A^\prime_{s1}]_\mu$$ $$d^\mu \ra s_2 d^\mu s_1^*$$ $$S[\phi', A'] = S'[\phi'', A^\prime_{s1}] = \sum_{\mu,\nu}\left(s_2 d^{\mu} s_1^* \nabla^\prime[A^\prime_{s1}]_\mu \phi''\right)^\dag \left(s_2 d^{\nu} s_1^* \nabla^\prime[A^\prime_{s1}]_\nu \phi''\right) \label{locale1}$$ We use the natural correspondence $$(1, i, j, k) \leftrightarrow i(\s^0, \s^1, \s^2, \s^3),\qquad \s^0 = -i, \label{correspondence}$$ and define the complex field $\hat{\phi}$ as a complex $2 \times 2$ matrix, $$\hat{\phi}^a = \left( \begin{array}[c]{cc} \phi_1^a +i\phi_2^a & \phi_3^a+i\phi_4^a \\ -\phi_3^a +i\phi_4^a & \phi_1^a -i\phi_2^a \\ \end{array} \right)$$ with $\phi'' = \phi_1 + i \phi_2 +j\phi_3 +k \phi_4$ and $\phi_1, \phi_2, \phi_3, \phi_4 \in \mathbf{R}$. Every term between parentheses becomes $$W_2\, (i\s^k)\, d_k^{\mu}\, W_1^\dag\, \nabla^\prime[A^\prime_{s1}]_\mu \hat{\phi}'' \label{SU2}$$ where $\s$ are Pauli matrices and $(W_1,W_2) \in SU(2)\otimes SU(2)$.
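The correspondence between the quaternionic unities and $2\times 2$ complex matrices can be verified directly. The sketch below uses one standard Pauli-matrix assignment (sign and index conventions differ between texts, so this illustrates the algebra rather than the exact assignment in (\[correspondence\])):

```python
import numpy as np

# One standard 2x2 complex representation of the quaternion units.
one = np.eye(2, dtype=complex)
qi  = np.array([[1j, 0], [0, -1j]])                 # i
qj  = np.array([[0, 1], [-1, 0]], dtype=complex)    # j
qk  = np.array([[0, 1j], [1j, 0]])                  # k

# i^2 = j^2 = k^2 = -1,  ij = k,  jk = i,  ki = j.
algebra_ok = (
    np.allclose(qi @ qi, -one) and np.allclose(qj @ qj, -one)
    and np.allclose(qk @ qk, -one)
    and np.allclose(qi @ qj, qk) and np.allclose(qj @ qk, qi)
    and np.allclose(qk @ qi, qj)
)

# In this representation |q|^2 = det(q).
a, b, c, d = 1.0, -2.0, 0.5, 3.0
q = a * one + b * qi + c * qj + d * qk
norm_ok = np.isclose(np.linalg.det(q).real,
                     a**2 + b**2 + c**2 + d**2)
```

The determinant identity is what lets the field $\hat\phi$ above carry the quaternionic norm as an $SU(2)$-invariant quantity.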
For every $SO(4)$ transformation $\Lambda$, a transformation $(W_1,W_2) \in SU(2) \otimes SU(2)$ exists, such that for every vector $d_j \in R^4$ we find $$\Lambda_i^{\pt j} d_j \sigma^i = d_i W_2 \sigma^i W_1^\dag .$$ We write $W_1 = U^{\prime\dag}_1 U_1$ and $W_2 = U^\prime_1 U_1$. In this manner we decompose $SU(2)\otimes SU(2)$ into $SU(2)_{rot} \otimes SU(2)_{boosts}$. $SU(2)_{rot}$ is generated by the couples $(U_1,U_1)$, while $SU(2)_{boosts}$ by the couples $(U^{\prime\dag}_1, U^{\prime}_1)$. After a Wick rotation, the first one describes rotations in $R^3$, while the second one describes boosts. A generic vector $d = \left( \begin{array}[c]{cccc} d_0 & d_1 & d_2 & d_3 \end{array} \right)$ gives $$d_i (i\sigma^i) = \left( \begin{array}[c]{cc} d_0 + id_3 & id_1+d_2 \\ id_1-d_2 & d_0-id_3 \end{array} \right)$$ with $|d|^2 = det\,d_i (i\sigma^i)$. A transformation in $SO(4)$ doesn't change the norm $|d|$. Moreover, for every $d$ there exists a transformation in $SO(4)$ which puts it in the normal form $$d = \left( \begin{array}[c]{cccc} |d| & 0 & 0 & 0 \end{array} \right) .$$ The same properties must hold for $SU(2) \otimes SU(2)$. The first one is trivially verified because $det\,W_1 = det\,W_2 = 1$ and then $det\, d_i(i\sigma^i) = det\,d_i(W_2 i\sigma^i W_1^\dag)$. Since $d_i(i\sigma^i)$ is normal, we can use a transformation in $SU(2)_{rot}$ to put it in the diagonal form $$U_1 d_i (i\sigma^i) U_1^\dag = \left( \begin{array}[c]{cc} d_0 + id_3 & 0 \\ 0 & d_0-id_3 \end{array} \right) .$$ Define now the matrix $U^\prime_1$ as $$U^\prime_1 = \fr 1 {\sqrt{|d|}}\left( \begin{array}[c]{cc} \sqrt{d_0 - id_3} & 0 \\ 0 & \sqrt{d_0+id_3} \end{array} \right) .$$ It's easy to verify that $U^\prime_1 U^{\prime\dag}_1 = 1$ and $det\,U^\prime_1 = 1$.
Applying to $U_1 d_i (i\sigma^i) U_1^\dag$ this transformation in $SU(2)_{boosts}$ we obtain $$U_1^\prime U_1 d_i (i\sigma^i) U_1^\dag U_1^\prime = \left( \begin{array}[c]{cc} |d| & 0 \\ 0 & |d| \end{array} \right) .$$ So, for every $d$, a transformation in $SU(2) \otimes SU(2)$ exists, which puts it in the normal form. In this way, $d$ transforms exactly as a vielbein field in the Palatini formulation of General Relativity, giving then the correspondence $$\begin{aligned} \Lambda_i^{\pt j} d_j (i\s^i) &=& U^\prime_1 U_1 (i\s^j) U_1^\dag U^\prime_1 d_j\\ \Lambda_i^{\pt j} d_j \s^i &=& W_2 \s^j W_1^\dag d_j\\ \fr 12 tr \left(\Lambda_i^{\pt j} d_j \s^i \s^k\right) &=& \fr 12 tr \left(W_2 \s^j W_1^\dag \s^k\right) d_j\\ \Lambda_k^{\pt j} d_j &=& \fr 12 tr \left(W_2 \s^j W_1^\dag \s^k\right) d_j\\ \Lambda_k^{\pt j} &=& \fr 12 tr \left(W_2 \s^j W_1^\dag \s^k\right) .\end{aligned}$$ So, at every $\Lambda \in SO(4)$ corresponds a couple $(W_1, W_2) \in SU(2) \otimes SU(2)$. Applying this to (\[SU2\]), it becomes $$W_2\, (i\s^k)\, d_k^{\mu}\, W_1^\dag\, \nabla^\prime[A^\prime_{s1}]_\mu \hat{\phi}'' = \Lambda_k^{\pt i} d_i^{\mu}\, (i\s^k)\, {\nabla'}[A^\prime_{s1}]_\mu \hat{\phi}'' \label{SU2-second} .$$ Note that if we write $\hat{\phi} = (\hat{\phi}_1\,\,\,\hat{\phi}_2)$, with $\hat{\phi}_1, \hat{\phi}_2$ complex column arrays $1 \times 2$, then $\hat{\phi}_2 = i\s_2 \hat{\phi}_1^*$. This implies that the column array $1 \times 4$ $\left( \begin{array}[c]{c} \hat{\phi}_1 \\ \hat{\phi}_2 \\ \end{array} \right)$ transforms under $SO(4)$ as a Majorana spinor. Applying the correspondence (\[correspondence\]) again to (\[SU2-second\]), we obtain $$s_2 d^\mu s_1^* \nabla^\prime [A^\prime_{s1}]_\mu \phi'' = \Lambda d^\mu {\nabla'} [A^\prime_{s1}]_\mu \phi''.$$ Inserting it in the action (\[locale1\]) $$\begin{aligned} S'[A^\prime_{s1},\phi''] &=& \sum_{\mu,\nu}\left({\nabla'} [A^\prime_{s1}]_\mu\phi''\right)^\dag d^{\mu *}\, \Lambda^\dag \Lambda\, d^{\nu}\left({\nabla'} [A^\prime_{s1}]_\nu\phi''\right)\\ &=& \sum_{\mu,\nu}\left({\nabla'} [A^\prime_{s1}]_\mu\phi''\right)^\dag d^{\mu *} d^{\nu}\left({\nabla'} [A^\prime_{s1}]_\nu\phi''\right)\\ &=& \sum_{\mu,\nu}\left(d^{\mu}{\nabla'} [A^\prime_{s1}]_\mu\phi''\right)^\dag \left(d^{\nu}{\nabla'} [A^\prime_{s1}]_\nu\phi''\right)\\ &=& S[A^\prime_{s1},\phi''] .\end{aligned}$$ The diagonal gauge field $A(x^a)$ compensates the action of $SU(2)\otimes SU(2)$ inside $\nabla'$.
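The two-step reduction of $d_i(i\sigma^i)$ to its normal form can be checked numerically. In this sketch the rotation step uses an eigendecomposition and the boost factor is built directly from the eigenvalues, with the conjugation chosen so that the product $u\,D\,u$ comes out to $|d|\cdot 1$:

```python
import numpy as np

rng = np.random.default_rng(3)
d0, d1, d2, d3 = rng.normal(size=4)
norm = np.sqrt(d0**2 + d1**2 + d2**2 + d3**2)

# The normal matrix d_i (i sigma^i) from the text; det = |d|^2.
A = np.array([[d0 + 1j * d3, 1j * d1 + d2],
              [1j * d1 - d2, d0 - 1j * d3]])

# SU(2)_rot step: diagonalize A with the unitary built from its eigenvectors.
_, V = np.linalg.eig(A)
U1 = V.conj().T
D = U1 @ A @ U1.conj().T           # diag(lambda, conj(lambda)), |lambda| = |d|

# SU(2)_boosts step: u D u = |d| * 1 with u = diag(sqrt(conj(lambda)/|d|)).
u = np.diag(np.sqrt(np.conj(np.diag(D)) / norm))

normal_form = u @ D @ u
ok = np.allclose(normal_form, norm * np.eye(2))
u_unitary = np.allclose(u @ u.conj().T, np.eye(2))
```

Each diagonal entry picks up $\overline{\lambda}\lambda/|d| = |d|$, so the result is $|d|$ times the identity, as claimed.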
Moreover we have just demonstrated that the field $d^\mu$ transforms under this group as a vielbein field in the Palatini formulation of General Relativity. This implies that $A(x^a)$ is a gravitational spin-connection. Consequently, every purely imaginary quaternion defines a spin operator $\vec{S}$ via the correspondence $(i,j,k) \leftrightarrow 2i(S_1, S_2,$ $S_3)$. In fact, each element in $U(1,\mathbf{H})$ is the exponential of a purely imaginary quaternion, in the same way as an element in $SU(2)$ is the exponential of $i\vec{\a} \cdot \vec{S}$ for some real vector $\vec{\a}$. Note that a Majorana spinor in a Euclidean space can't distinguish whether $s_2$ belongs to $SU(2)_{rot}$, to $SU(2)_{boosts}$, or is a mixed combination. Only after the Wick rotation does it feel a difference, because the generator of $SU(2)_{boosts}$ moves from $i\sigma^i$ to $\sigma^i$, while $SU(2)_{rot}$ remains unchanged. One may object that, if $\phi$ transforms as a Majorana spinor, our action does not have the standard form. We do not address this here: what we have presented is only a toy model. In another work (under review) we show explicitly how to get the correct Dirac action for these and all the other fields (both fermions and bosons). To finish, we suppose that the masses of the other fields ($A(x^a, x^b)$ with $a \neq b$) are so large that present-day experiments cannot detect them. For the same reason, in the low energy approximation, they can be omitted from the action. Neglecting the "ultra-massive" fields, the scalar field action becomes a local action $$S=\sum_{i=1}^n \sum_{\mu,\nu} \sqrt{\left\vert h\right\vert }h^{\mu\nu}\left( x_i\right) \left( \overset{G}{\nabla}_{\mu}\phi(x_i)\right)^* \left( \overset{G}{\nabla}_{\nu}\phi(x_i)\right)$$ where $\overset{G}{\nabla}$ are standard gravitational covariant derivatives.
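The statement that every element of $U(1,\mathbf{H})$ is the exponential of a purely imaginary quaternion, landing in $SU(2)$, can be checked in the $2\times 2$ representation. Since $r^2 = -|r|^2$ for an imaginary quaternion $r$, the exponential has the closed form $\cos\theta + (r/\theta)\sin\theta$ with $\theta = |r|$ (toy component values below):

```python
import numpy as np

# 2x2 representation of the imaginary units (one standard convention).
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

b, c, d = 0.3, -1.1, 0.7                  # a purely imaginary quaternion r
r = b * qi + c * qj + d * qk
theta = np.sqrt(b**2 + c**2 + d**2)

# exp(r) = cos(theta) + (r/theta) sin(theta), because (r/theta)^2 = -1.
s = np.cos(theta) * np.eye(2) + np.sin(theta) * r / theta

in_SU2 = (np.allclose(s @ s.conj().T, np.eye(2))
          and np.isclose(np.linalg.det(s), 1.0))
```

The same closed form, with $r$ replaced by $i\vec{\a}\cdot\vec{\sigma}/ \ldots$, is the familiar $SU(2)$ exponential, which is the content of the correspondence $(i,j,k) \leftrightarrow 2i(S_1,S_2,S_3)$ above.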
The origin of spin {#spin}
------------------

Consider the spin operator $S_{3}$ $$\hat{S}_{3} = \frac{\hslash}2\left( \begin{array} [c]{cc}1 & 0\\ 0 & -1 \end{array} \right)$$ and calculate the normalized eigenvectors and eigenvalues. $$\begin{aligned} \left\vert \uparrow\right\rangle & =e^{i\phi}\left( \begin{array} [c]{c}1\\ 0 \end{array} \right) \text{, \ with eigenvalue }\lambda_{1}=+\frac{1}{2}\text{ \ (in unit }\hslash=1\text{)}\label{eq: autketS3}\\ \left\vert \downarrow\right\rangle & =e^{i\phi}\left( \begin{array} [c]{c}0\\ 1 \end{array} \right) \text{, \ with eigenvalue }\lambda_{2}=-\frac{1}{2}\text{ \ }\end{aligned}$$ where $\phi$ is an arbitrary phase. The completeness of the eigenvectors guarantees that the field $\hat{\phi}_1$, which appears in the preceding section, can always be decomposed in a sum of such eigenstates. The projectors on a single eigenstate of $S_{3}$ are $$\hat{\pi}^{+} =\frac{1}{2}\left( \begin{array} [c]{cc} 1 & 0 \\ 0 & 0 \end{array} \right) ,$$ $$\hat{\pi}^{-} =\frac{1}{2}\left( \begin{array} [c]{cc} 0 & 0 \\ 0 & 1 \end{array} \right) .$$ We see that $\hat{\pi}^{\pm}$ are idempotent, while $\hat{\pi}^{+}\hat{\pi }^{-}=0$, as it should be. A rotation by an angle $\theta$ around the axis $1$ is represented by the unitary matrix: $$U_{1}\left( \theta\right) =\left( \begin{array} [c]{cc} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{array} \right)$$ where $$U_1(\theta)\hat{\phi}_1 = \widehat{(s(U_1)\phi)}_1\qquad \hat{\phi}_1=\widehat{(\phi)}_1$$ for some quaternion $s(U)$ with $|s|=1$.
In the special case of a rotation by $\pi$: $$U_{1}\left( \pi\right) =\left( \begin{array} [c]{cc} 0 & -i \\ -i & 0 \end{array} \right) \label{eq: rotazione}.$$ We suppose now that the system is in the eigenstate $\left\vert \uparrow\right\rangle $; following a rotation around the axis $1$ the state will be: $$\left\vert \uparrow\right\rangle _{R}= U_{1}(\theta)\left\vert \uparrow \right\rangle .$$ For $\theta=\pi$: $$\left\vert \uparrow\right\rangle _{R}= U_{1}\left( \pi\right) \left\vert \uparrow\right\rangle = -i\left\vert \downarrow\right\rangle =e^{-i\pi /2}\left\vert \downarrow\right\rangle \rightarrow\left\vert \downarrow \right\rangle \text{,} \label{eq: scambio_spin}$$ $$\left\vert \downarrow\right\rangle _{R}= U_{1}\left( \pi\right) \left\vert \downarrow\right\rangle = -i\left\vert \uparrow\right\rangle =e^{-i\pi /2}\left\vert \uparrow\right\rangle \rightarrow\left\vert \uparrow \right\rangle \text{,} \label{eq: scambio_spin2}$$ since the state is defined up to an inessential phase factor. We observe that a rotation by $\pi$ around the axis $1$ is equivalent to exchanging $\left\vert \uparrow\right\rangle $ with $\left\vert \downarrow\right\rangle $, as we have just verified by (\[eq: scambio\_spin\]) and (\[eq: scambio\_spin2\]). We can certainly expand the matrix $M$ as follows $$M\left( x^{a},x^{b}\right) = M'\left( x^{a},x^{b}\right) + |s(x^a)|e^{r(x^a)} \d^{ab}$$ with $M'\left( x^{a},x^{b}\right) = 0$ for $a=b$. The element $r(x^a) = arg[s(x^a)]$ is a purely imaginary quaternion: when it acts on $\phi$, it uniquely determines the result of a spin measurement, exchanging the states $\left\vert \uparrow\right\rangle $ - $\left \vert\downarrow\right\rangle $. This seems to suggest an identification between the arrangement field $M$ and the observer who performs the measurement.
Indeed the operator $M$ can simulate a measurement operation when it has the form $M^{ab} = u^a w^b$: $$\begin{aligned} M^{ab} &=& u^a w^b \overset{continuous}{\longrightarrow} M(x,y) = \psi(x) \psi^{\ast}(y) \nonumber \\ M^{ab}\varphi_b &=& u^a (w^b \varphi_b) \overset{continuous}{\longrightarrow} \int dy \, M(x,y) \varphi(y) \nonumber \\ &=& \psi(x)\int dy \, \psi^{\ast}(y)\varphi(y) = \psi(x)(\psi,\varphi) \nonumber\end{aligned}$$ $\psi\left( x\right)$ is any eigenstate, while $\left( \psi ,\varphi\right)$ denotes the scalar product between $\psi$ and $\varphi$. We see that $M$ projects $\varphi$ along the eigenstate $\psi$, and in quantum mechanics a measurement is just a projection. The latter argument also gives an indication about the nature of spin. Consider the entries of $M$ closest to the diagonal: they are the $M^{ij+1}$ and $M^{ij-1}$ which compose $\tilde{M}$. Moreover, they represent the probability amplitudes for the existence of connections between (numerically) consecutive vertices. In the limit $\Delta \ra 0$, $\tilde{M}$ becomes $\pa$, which is proportional to $i\pa$, the operator which acts on a wave function $\psi(x)$ and returns the momentum $p$ of the corresponding particle: $$i\pa \psi(x) = p\psi(x) .$$ In this way, the entries of $\tilde{M}$ represent both a momentum and a probability amplitude for connections between (numerically) consecutive vertices. In a certain sense, $\tilde{M}$ draws continuous paths and measures the momenta along these paths (figure \[percorso\_continuo\]). If we describe a particle with a wave function $\phi$, its spin is determined by the diagonal components of $M$: in fact, $exp(r)$ acts on $\phi$ as a rotation in the tangent space. Consequently, if $r$ is applied to $\phi$, it returns the spin of the associated particle. The diagonal components of $M$ also represent the probability amplitudes for a connection between a vertex and itself.
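The measurement-as-projection property of a rank-one $M^{ab} = u^a w^b$ (here with $w = u^* = \psi^*$, as in the continuous formula) can be checked with a small numerical sketch on a toy state space:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

# A normalized eigenstate psi and the rank-one operator M(x,y) = psi(x) psi*(y).
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
M = np.outer(psi, psi.conj())

phi = rng.normal(size=n) + 1j * rng.normal(size=n)   # an arbitrary state

# M phi = psi (psi, phi): projection of phi along psi.
projected = M @ phi
expected = psi * (psi.conj() @ phi)

is_projection = np.allclose(M @ M, M)      # idempotent, as a projector must be
matches = np.allclose(projected, expected)
```

Idempotency is exactly the "measurement is a projection" statement: applying the same measurement twice gives the same outcome state.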
Reasoning in analogy with the components of $\tilde{M}$, we associate with every such "pointwise" loop a circumference $S^1$: we interpret the spin as the rotational momentum due to the motion along these circumferences (figure \[Loop\_1\]). ![$\tilde{M}$ behaves as a derivative, that is proportional to a momentum operator. The non-empty entries of $\tilde{M}$ represent both a momentum and a probability amplitude for connections between (numerically) consecutive vertices. In a certain sense, $\tilde{M}$ draws continuous paths and measures the momenta along them.[]{data-label="percorso_continuo"}](fig5.jpg){width="60.00000%"} ![Each diagonal component of $M$ represents the probability amplitude for a connection between a vertex and itself. The spin is a momentum along such pointwise loops.[]{data-label="Loop_1"}](fig6.jpg){width="60.00000%"} It is remarkable that there exist two types of pointwise loops: the one in figure \[Loop\_1\], where a particle assumes the same aspect after a complete rotation, and the one in figure \[Loop\_2\], where a particle assumes the same aspect after two complete rotations. The first case suggests a relationship with gauge fields of spin $1$, the second with fermionic fields of spin $1/2$. ![Pointwise loop associable with a fermionic field.[]{data-label="Loop_2"}](fig7.jpg){width="60.00000%"}

Symmetry breaking {#symmetry}
-----------------

We imagine that the symmetry breaking of $U(n,\mathbf{H})\otimes U(n,\mathbf{H})$ is not complete, but a residual symmetry remains for transformations in $U(1,\mathbf{H})^n \times U(m,\mathbf{H})^{n/m}$. Here $m$ is an integer divisor of $n$. In this case, it is possible to regroup the $n$ points into $n/m$ ensembles $\mathcal{U}^a$, with $a = 1, 2, \ldots, n/m$.
$$\mathcal{U}^a = \mathcal{U}^a (x^a_1, x^a_2, \ldots, x^a_m)$$ $$\varphi = (\varphi (x^a_i)) = \left( \begin{array}[c]{ccccc} \varphi(x^1_1) & \varphi(x^1_2) & \varphi(x^1_3) & \ldots & \varphi(x^1_m) \\ \varphi(x^2_1) & \varphi(x^2_2) & \varphi(x^2_3) & \ldots & \varphi(x^2_m) \\ \varphi(x^3_1) & \varphi(x^3_2) & \varphi(x^3_3) & \ldots & \varphi(x^3_m) \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ \varphi(x^{n/m}_1) & \varphi(x^{n/m}_2) & \varphi(x^{n/m}_3) & \ldots & \varphi(x^{n/m}_m) \end{array} \right)$$ $$A = (A^{ab}_{ij}) = (A(x^a_i, x^b_j)) .$$ Now the indices $a,b$ of $A$ act on the columns of $\varphi$, while the indices $i,j$ act on the rows. The fields $A^{ab}_{ij}$ with $a=b$ retain null masses and so they continue to behave as gauge fields for $U(m,\mathbf{H})^{n/m}$. Every $U(m,\mathbf{H})$ term in $U(m,\mathbf{H})^{n/m}$ acts independently inside a single $\mathcal{U}^a$. So, if we consider the ensembles $\mathcal{U}^a$ as the real physical points, we can interpret $U(m,\mathbf{H})^{n/m}$ as a local $U(m,\mathbf{H})$. It's simple to verify: $$\begin{aligned} h^{\mu\nu}(x^a_i) &=& h^{\mu\nu}(x^a_j)\qquad \forall\, x_i, x_j \in \mathcal{U}^a\\ h^{\mu\nu}(x^a) &\equiv& h^{\mu\nu}(\mathcal{U}^a) = h^{\mu\nu}(x^a_i)\qquad \forall\, x^a_i \in \mathcal{U}^a\\ A(x^a_i, x^a_j) &=& Tr\,[T^{(ij)\dag} A(x^a)],\quad\text{where}\\ A(x^a) &=& \sum_{ij} A(x^a_i, x^a_j)\, T^{(ij)},\end{aligned}$$ with $T^{(ij)}$ a generator of $U(m,\mathbf{H})$. Using these relations, in the next work we'll show how the terms $tr\,(MM^\dag)$ and $tr\,(MM^\dag MM^\dag)$ generate respectively the Ricci scalar and the kinetic term for gauge fields. Extending $M$ to Grassmannian elements we have (up to a generalized $U(n, \mathbf{H})$ transformation) $$M = \theta(\pa + \psi) + d^\mu (\pa_\mu + A_\mu)$$ $$M^\dag = (\pa^\dag +\psi^\dag)\theta^\dag + (\pa_\mu^\dag + A^\dag_\mu)d^{*\mu} .$$ $\theta, \theta^\dag$ are at the same time Grassmannian coordinates and the Grassmannian equivalents of $d, d^*$. $\pa, \pa^\dag$ are Grassmannian derivatives and $\psi, \psi^\dag$ Grassmannian fields (i.e. fermions).
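The regrouping of the $n$ vertices into $n/m$ ensembles is just a block reindexing of $A$. A minimal numerical sketch (with hypothetical sizes $n=12$, $m=3$) showing how $A(x^a_i, x^b_j)$ splits into ensemble indices $a,b$ and internal indices $i,j$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 12, 3                  # n vertices regrouped into n/m ensembles of m

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Reindex A(x^a_i, x^b_j): axes (a, b, i, j).
A_blocks = A.reshape(n // m, m, n // m, m).transpose(0, 2, 1, 3)

# The a = b blocks are the pieces that survive as local U(m) gauge
# connections inside each ensemble U^a; off-diagonal blocks connect
# different ensembles (the "ultra-massive" sector).
diag_blocks = [A_blocks[a, a] for a in range(n // m)]

shapes_ok = all(B.shape == (m, m) for B in diag_blocks)
# Consistency check: block (a,b), entry (i,j) is the original A(x^a_i, x^b_j).
consistent = np.isclose(A_blocks[1, 2, 0, 1], A[1 * m + 0, 2 * m + 1])
```

Treating each $\mathcal{U}^a$ as a single physical point is then just reading the first two axes as a coarse position index and the last two as internal gauge indices.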
Our final action will be $$S = tr\,\left(\fr {MM^\dag}{16\pi G}-\fr 1 4 MM^\dag MM^\dag \right)$$ This action resembles that of a $\lambda \phi^4$ theory. Some preliminary results suggest that we can treat it by means of Feynman graphs, apparently without renormalization problems. We will see how the quartic term automatically includes the kinetic terms for the gauge fields of $SO(4) \otimes SU(3)\otimes SU(2) \otimes U(1)$ and the Dirac action for exactly three fermionic families.

Second quantization and black hole entropy {#entropy}
------------------------------------------

It is remarkable that in our model the gauge fields and the gravitational fields have different origins, although they are both born from $M$. The gravitational field in fact appears as a multiplicative factor in moving from $M$ to the covariant derivative $\nabla'$. The gauge fields are instead additive elements in $\nabla'$. This could be the reason why the gravitational field seems not to be quantizable in the standard way. On the other hand, quantizing the gauge fields is equivalent to quantizing a partial piece of $M$ in a flat space. But a similar equivalence does not exist for the gravitational field. In our framework this doesn't create problems, since we will quantize $M$ directly, rather than the gravitational and gauge fields. What does it mean to "quantize" $M$? It's true that the matrix $M$ is a quantum object from its birth, just as the wave functions which describe particles are quantum objects. However, we will impose commutation relations on $M$, in the same way we impose commutation relations on the wave functions. This is the so-called "second quantization". The wave functions, which first described the probability amplitude to find a particle, then become operators which create or annihilate particles. Similarly, $M$ first describes the probability amplitude for the existence of connections between vertices.
After the second quantization it will become an operator which creates or annihilates connections. In particular, the operator $M(x^a,x^b)$ creates a connection between the vertices $x^a$ and $x^b$. $M$ corresponds to $D^\mu\nabla^\prime_\mu$ (by invariance with respect to $U(n,\mathbf{H})$): so it contains the various fields $A_\mu$ and $h^{\mu\nu}$. If we second quantize $M$, then, indirectly, we quantize the other fields, including the gravitational field. To quantize $M$ we put $[M^{ij}, M^{kl\dag}]=\d^{ik}\d^{jl}$. Here the symbol $\dag$ indicates the adjoint operator with respect only to the scalar product between states in the Fock space. The condition $[M^{ij}, M^{kl\dag}]=\d^{ik}\d^{jl}$ means that every entry $M^{ij}$ expands in a sum of $4$ operators $$M^{ij} = a + i(b_1 +b_2+ b_3)\qquad\quad b_1^\dag = b_1,\,\,\,b_2^\dag = b_2,\,\,\,b_3^\dag = b_3$$ The $b$'s realize the $SU(2)$ algebra implicit in the imaginary part of quaternions. $$[b_1,b_2]=b_3 ;\qquad [b_2,b_3] = b_1;\qquad [b_3,b_1]=b_2$$ The operators $a^\dag$ and $b^\dag = b_1 + ib_2$ create an edge which connects the vertex $i$ with the vertex $j$. The number operator is $$N^{ij} = M^{ij\dag} M^{ij} = a^\dag a + |\vec{b}|^2\qquad \text{no sum on} \pt ij$$ $a^\dag a$ has eigenvalues $q \in \mathbf{N}$ with multiplicity $1$. Moreover the eigenvalues of $|\vec{b}|^2$ are of the form $j(j+1)$ for $j \in \mathbf{N}/2$, with multiplicity $(2j+1)$. How about $N > 1$? We can consider a surface immersed in the graph. Its area is $\Delta^2$ times the number of edges which pass through it. If we admit the possibility of creating many superimposed edges, we can interpret this superimposition as a "super-edge" which carries an area equal to $N \Delta^2$. Regarding the diagonal components, we suggest a slightly different interpretation: $a^\dag$ could create loops, while $b^\dag$ could create perturbations which travel through the loops (i.e. particles with spin $j$).
This suggests a duality between a loop on vertex $v_i$ and a closed string (in the sense of *String Theory*) situated approximately on the same vertex. Note that the two interpretations can be accommodated if we consider quanta of area as non-local perturbations. The only Black Hole information detectable from the exterior is the information coded in the Horizon. So, the only distinguishable states of a Black Hole are the distinguishable states of its horizon. For the Black Hole horizon we consider all the edges which pass through it, oriented only from the interior to the exterior. If the horizon is crossed only by edges with $N = q+j(j+1)$ and $a^\dag a =q$, the number of its distinguishable states is $$num_S = \left( 2j+1 \right)^{A /((q+j(j+1))\Delta^2)} .$$ We suppose now a generic partition with $A = \sum_{j,q} A_{j,q}$, where an area $A_{j,q}$ is crossed only by edges with $N=q+j(j+1)$ and $a^\dag a =q$. The number of distinguishable states becomes $$num_S = \sum_{\{A_{j,q}\}}\prod_{j,q} \left( 2j+1 \right)^{A_{j,q} /((q+j(j+1))\Delta^2)}$$ where the sum is over all the possible partitions of $A$. The "classical" contribution comes from $j=0$ and gives $num_S =1$ (we call it "classical" because it is the only one with $N =1$). This implies no entropy and is related to the fact that $tr \,M_H^\dag M_H \sim \int_H \sqrt {h_H} R(h_H) = 2\pi \chi_H$, where $M_H$ is the restriction of $M$ to the edges which cross the horizon, $h_H$ is the induced metric on the horizon and $\chi$ is the Euler characteristic. The dominant contribution comes from $q=0$ and $j = 1/2$, which gives $$num_S = 2^{4A_{1/2,0} /(3\Delta^2)}$$ So we can define the entropy as $$S = k_B\,log\,2^{4A/{3\Delta^2}} = \fr {4 \,log\,2 \, k_B A }{3\Delta^2}.$$ Our approach thus gives a proposal for the explanation of the area law. Indeed our entropy formula corresponds to the one given by Bekenstein and Hawking if $3\Delta^2 = 16 G\,log\,2$. What is our interpretation of black hole radiation?
The proximity between vertices is probabilistic: we can have a high probability of seeing two vertices as "neighbors", but never a certainty. If we look at a large number of vertices for a long time, some vertex which at first seemed adjacent to another can suddenly appear far away. For this reason, some internal vertices of a Black Hole may happen to be found outside, so that the Black Hole slowly evaporates. We can also consider the contribution from ($q = j = 0$). If it exists, it is clearly the dominant one. Indeed, a horizon means absence of connections between the exterior and the interior. For an external observer, the universe finishes at the horizon. In fact, with respect to the coordinate system of a static observer infinitely distant from the horizon, every object falling into the black hole sits on the horizon for an infinite time. In relation to the proper time of the static faraway observer, the object never surpasses the horizon. If nothing surpasses the horizon, this means that the Hawking radiation comes from the deposit of all the objects fallen into the black hole, i.e. from the horizon. This resolves the information paradox proposed by Hawking. One may object that the absence of connections is only illusory, because the horizon singularity is of the type called "apparent": it doesn't exist in several coordinate systems, such as the system comoving with a freely falling object. We reply that this is true, because the absence of connections also depends strictly on the state on which the number operator acts. Every state can be associated to a particular coordinate system and, if we change coordinate system, we have to change the state. In this way, the connections can exist for one observer and not exist for another. The same happens for particles: the same particle can exist in one coordinate system and not exist in another (see the Unruh effect). This is because the same number operator acts on different states.
Calculate now $num_S$ for $q = 0,\, j \ra 0$. It is $$\begin{aligned} num_S &= \lim_{j\ra 0} \left( 2j+1 \right)^{A /(j(j+1)\Delta^2)}\\ &= \lim_{j\ra 0} \left( 1+2j \right)^{A /(j\Delta^2)}\\ &= \lim_{j\ra 0} \left( 1+2j \right)^{2A /(2j\Delta^2)}\\ &= \lim_{x\ra \infty} \left( 1+\fr 1x \right)^{2Ax/\Delta^2}\\ &= e^{2A/\Delta^2} \end{aligned}$$ The entropy becomes $$S =k_B \,log\,e^{2A/\Delta^2} = \fr {2k_B A}{\Delta^2}$$ This corresponds to the Bekenstein-Hawking result for $\Delta^2 = 8G$. Conclusion ---------- In this paper we have abandoned the preconceived existence of an order in the space-time structure, taking a probabilistic approach also to its topology and its homology. This framework gives new suggestions about the origin of the space-time metric and of particle spin. At the same time it hints at a possible emergence of all fields from a unique entity, i.e. the arrangement matrix, after the imposition of an order. Unfortunately, there is no space here to present an explicit calculation of the terms $tr\,(M^\dag M)$ and $tr\,(M^\dag M M^\dag M)$. We have already said that they generate the Ricci scalar, the kinetic terms for gauge fields and the Dirac actions for exactly three fermionic families. In the near future we will show how several phenomena can find a possible explanation inside this paradigm, as we have seen earlier for black hole entropy. These concern the galaxy rotation curves, inflation, quantum entanglement, the values of the CKM and PMNS matrices and the value of the Newton constant $G$. Here we have given a simple example by using a one-component field. Nevertheless, a potential for $M$ causes a symmetry breaking which gives mass to gauge fields without need of the Higgs mechanism. In the end, the one-component field action turns out to be unnecessary. Acknowledgements {#acknowledgements .unnumbered} ---------------- I thank professor Valter Moretti, Dr. Fabrizio Coppola and Dr. Marcello Colozzo for the useful discussions and suggestions. The arrangement field theory (AFT).
Part 2 ========================================== Introduction ------------ The arrangement field paradigm describes the universe by means of a graph, i.e. an ensemble of vertices and edges. However there is a considerable difference between this framework and the usual modeling with spin-foams or spin-networks: the existence of an edge which connects two vertices is in fact probabilistic. In this framework the fundamental quantity is an invertible matrix $M$ with dimension $n \times n$, where $n$ is the number of vertices. In the entry $ij$ of such a matrix we have a quaternionic number which gives the probability amplitude for the existence of an edge connecting vertex $i$ to vertex $j$. In the introductory work [@Arrangement] we have developed a simple scalar field theory on this probabilistic graph (we call it ``non-ordered space''). We have seen that a space-time metric emerges spontaneously when we fix an ensemble of edges. Moreover, the quantization of the metric descends naturally from the quantization of $M$ in the non-ordered space. In section \[formalism\] we summarize these results. In section \[ricciscalar\] we express the Ricci scalar as a simple quadratic function of $M$. We discover how the gravitational field emerges from the diagonal components of $M$, in contrast to gauge fields which come out from the non-diagonal components. In section \[kinetic\] we define a quartic function of $M$ which develops a Gauss-Bonnet term for gravity and the usual kinetic term for gauge fields. In section \[string\] we discover a triality between *Arrangement Field Theory*, *String Theory* and *Loop Quantum Gravity*, which appear as different manifestations of the same theory. In section \[electroweak\] we show that a grassmannian extension of $M$ generates automatically all known fermionic fields, divided exactly into three families. We see how the gravitational field exchanges homologous particles in different families.
The resulting scheme finds an analogue in supersymmetric theories, with the known fermionic fields taking the role of gauginos for the known bosons. In the subsequent sections we explore some practical implications of arrangement field theory, in connection with inflation, dark matter and quantum entanglement. Moreover we explain how to deal with the theory perturbatively by means of Feynman diagrams. We warmly invite the reader to see the introductory work [@Arrangement] before proceeding. Formalism --------- In paper [@Arrangement] we have considered a euclidean $4$-dimensional space represented by a graph with $n$ vertices. In this section we retrace the fundamental results of that work, moving to Lorentzian spaces in the next section. From now on we assume the Einstein convention, summing over repeated indices. In the proof of **theorem 8** in [@Arrangement] we have demonstrated the equivalence between the following actions: $$S_1 = (M\varphi)^\dag (M\varphi)$$ \[iniziale\] $$S_2 = \sum_{i=1}^n h^{\mu\nu} (x^i) (\na_\mu\varphi^i)^*(\na_\nu\varphi^i).$$ \[action-1\] $M$ is any invertible $n \times n$ matrix while the field $\varphi$ is represented by a column array $n\times 1$, with an entry for every vertex in the graph: $$\varphi = \left( \begin{array}{c} \varphi(x^1)\\ \varphi(x^2)\\ \varphi(x^3)\\ \vdots\\ \varphi(x^n) \end{array} \right) .$$ The entries of both $M$ and $\varphi$ take values in the division ring of quaternions, usually indicated with $\mathbf{H}$. The first action considers the universe as an abstract ensemble of vertices, numbered from $1$ to $n$, where $n$ is the total number of space vertices. The entry $(ij)$ in the matrix $M$ represents the probability amplitude for the existence of an edge which connects the vertex number $i$ to the vertex number $j$. We admit non-commutative geometries, which in this framework implies a possible inequivalence $|M^{ij}| \neq |M^{ji}|$. Moreover, the first action is invariant under transformations $(U_1,U_2) \in U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ which send $M$ in $U_2 M U_1^\dag$.
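The invariance of $S_1$ under $(U_1,U_2)$ only uses the unitarity of the two factors, so it can be illustrated with a minimal numpy sketch. Complex entries stand in for quaternions here (numpy has no native quaternion type); this substitution is our illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix returns a unitary factor Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

n = 6
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # invertible (a.s.)
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
U1, U2 = random_unitary(n), random_unitary(n)

# S1 = (M phi)^dag (M phi); transform M -> U2 M U1^dag, phi -> U1 phi
S1 = np.vdot(M @ phi, M @ phi).real
M2, phi2 = U2 @ M @ U1.conj().T, U1 @ phi
S1_transformed = np.vdot(M2 @ phi2, M2 @ phi2).real

print(abs(S1 - S1_transformed))   # ~1e-13: the action is invariant
```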
In action (\[action-1\]) a covariant derivative for $U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ appears, represented by a skew-hermitian matrix $\nabla$ which expands according to $\nabla_\mu = \tilde{M}_\mu + A_\mu$. Here $\tilde{M}_\mu$ is a linear operator such that $lim_{\Delta \ra 0} \tilde{M}_\mu = \pa_\mu$, where $\Delta$ is the graph step. If we number the space vertices along direction $\mu$, $\tilde{M}_\mu$ becomes $$\tilde{M}^{ij}_\mu = \fr 1 {2\Delta}\left(\d^{(i+1)j} - \d^{(i-1)j}\right)$$ \[dderiv\] $$\sum_j \tilde{M}^{ij}_\mu \varphi^j = \fr 1 {2\Delta} \sum_j \delta^{(i+1)j} \varphi^j - \delta^{(i-1)j} \varphi^j = \fr {\varphi(i+1) - \varphi(i-1)} {2\Delta} .$$ The gauge fields $A$ act as skew-hermitian matrices too: $$A = (A^{ij}) = (A(x^i, x^j))$$ $$(A\phi)^i = A^{ij}\phi^j .$$ In the proof of **theorem 5** we have discovered that for every normal matrix $\hat{M}$, which is neither hermitian nor skew-hermitian, four couples $(U_1,D^\mu)$ exist, with $U_1$ unitary and $D^\mu$ diagonal, such that $$U_1^\dag D^\mu \na_\mu U_1 = \hat{M}$$ \[fondamentale\] $$h^{\mu\nu}(x^i) = \fr 12\, d^{*\mu}_i d_i^\nu + c.c. \qquad D^{ij}_\mu = d^\mu_i \d^{ij} .$$ Here $h$ is a non-degenerate metric while the first relation determines uniquely the values of the gauge fields. The matrices $\nabla_\mu, U_1, D^\mu$ act on field arrays via matrix product and the ensemble of four couples $(U_1, D^\mu)$ is called a ``space arrangement''. Further, in the proof of **theorem 6**, we have seen that for every invertible matrix $M$ we can always find a unitary transformation $U_M$ and a normal matrix $\hat{M}$, which is neither hermitian nor skew-hermitian, such that $M = U_M \hat{M}$. If we define $U_2 = U_1 U_M^\dag$, we have $$M^\dag M = \hat{M}^\dag \hat{M}$$ \[matemat\] $$U_2^\dag D^\mu \na_\mu U_1 = M .$$ \[fond\] It is sufficient to substitute (\[fond\]) in (\[iniziale\]) to verify its equivalence with (\[action-1\]). We have called $\hat{M}$ the ``associated normal matrix'' of $M$. The action of a transformation $(U_1, U_2)$ on $\nabla$ follows from its action on $M$.
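The operator $\tilde M_\mu$ of (\[dderiv\]) is just the central-difference matrix. The sketch below (periodic boundary conditions are our own addition, so that the matrix is exactly skew-symmetric) verifies the two properties used in the text: skew-hermiticity, as required of $\na$, and convergence to $\pa_\mu$:

```python
import numpy as np

n = 200
Delta = 2 * np.pi / n
x = np.arange(n) * Delta

# Mtilde[i][j] = (delta_{(i+1)j} - delta_{(i-1)j}) / (2 Delta), indices mod n
Mtilde = (np.eye(n, k=1) - np.eye(n, k=-1)
          + np.eye(n, k=-(n - 1)) - np.eye(n, k=n - 1)) / (2 * Delta)

print(np.allclose(Mtilde.T, -Mtilde))      # True: skew-symmetric, like nabla

phi = np.sin(x)
dphi = Mtilde @ phi                        # (phi(i+1) - phi(i-1)) / (2 Delta)

err = np.max(np.abs(dphi - np.cos(x)))
print(err)                                 # O(Delta^2): Mtilde -> d/dx as Delta -> 0
```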
We can always use the invariance under $U(n,\mathbf{H}) \otimes U(n,\mathbf{H})$ to put $M$ in the form $M = D^\mu \na_\mu$. Starting from this we have $$U_2 M U_1^\dag = U_2 D^\mu \na_\mu U_1^\dag = U_2 D^\mu U_1^\dag U_1\na_\mu U_1^\dag.$$ We define $\nabla'_\mu = U_1\na_\mu U_1^\dag$ as the transformed of $\nabla$ under $(U_1, U_2)$ and $D^{\prime\mu} = U_2 D^\mu U_1^\dag$ as the transformed of $D^\mu$. We assume that $A_\mu$ inside $\nabla_\mu$ transforms correctly as a gauge field, so that $$\na [A]_\mu \phi = \na [A]_\mu U_1^\dag \phi' = U_1^\dag \na [A_{U1}]_\mu \phi'$$ $$\phi' = U_1 \phi .$$ We want $D^{\prime\mu}$ to remain diagonal and $h' = h[D'] = h[D]$. In this case there are two relevant possibilities: 1. $D$ is a matrix made of blocks $m \times m$, with $m$ an integer divisor of $n$ and every block proportional to the identity. In this case the residual symmetry is $U(1,\mathbf{H})^n \times U(m,\mathbf{H})^{n/m}$ with elements $(sV, V)$, $s$ both diagonal and unitary, $V \in U(m,\mathbf{H})^{n/m}$; 2. $h$ is any diagonal matrix. The symmetry reduces to $U(1,\mathbf{H})^n \otimes U(1,\mathbf{H})^n$ which is local $U(1,\mathbf{H}) \otimes U(1,\mathbf{H}) \sim SU(2) \otimes SU(2) \sim SO(4)$. In this way, if we keep the metric $h$ fixed and keep $D$ diagonal, the action (\[action-1\]) will be invariant at least under $U(1,\mathbf{H})^n \otimes U(1,\mathbf{H})^n$, which doesn't modify $h$. We have supposed that a potential for $M$ breaks the $U(n,\mathbf{H})\otimes U(n,\mathbf{H})$ symmetry to $U(1,\mathbf{H})^n \otimes U(m,\mathbf{H})^{n/m}$, where $m$ is an integer divisor of $n$. We will see in fact that the most natural potential has the form $tr\,(\a M^\dag M - \b M^\dag MM^\dag M)$, known as the ``mexican hat'' potential. This is a very typical potential for a spontaneous symmetry breaking.
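One ingredient of case 1 can be checked directly: when every $m\times m$ block of $D$ is proportional to the identity, any block-diagonal unitary $V$ leaves $D$, and hence the metric $h[D]$, untouched, because each block $c_a \mathbf{1}_m$ commutes with the corresponding $U(m)$ factor. A minimal numpy sketch (complex entries stand in for quaternions, an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4                    # m divides n; n/m = 3 ensembles

def random_unitary(k):
    # unitary factor from the QR decomposition of a random complex matrix
    q, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
    return q

# D: every m x m diagonal block proportional to the identity (case 1)
c = rng.normal(size=n // m) + 1j * rng.normal(size=n // m)
D = np.kron(np.diag(c), np.eye(m))

# V: block-diagonal unitary, one independent U(m) factor per ensemble
V = np.zeros((n, n), dtype=complex)
for a in range(n // m):
    V[a*m:(a+1)*m, a*m:(a+1)*m] = random_unitary(m)

# conjugating D by V leaves it invariant
print(np.max(np.abs(V @ D @ V.conj().T - D)))   # ~1e-16
```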
In this way all the vertices are grouped in $n/m$ ensembles $\mathcal{U}^a$: $$\mathcal{U}^a = \{x^a_1, x^a_2, x^a_3, \ldots, x^a_m\}$$ $$\varphi = (\varphi (x^a_i)) = \left( \begin{array}[c]{ccccc} \varphi(x^1_1) & \varphi(x^1_2) & \varphi(x^1_3) & \ldots & \varphi(x^1_m) \\ \varphi(x^2_1) & \varphi(x^2_2) & \varphi(x^2_3) & \ldots & \varphi(x^2_m) \\ \varphi(x^3_1) & \varphi(x^3_2) & \varphi(x^3_3) & \ldots & \varphi(x^3_m) \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ \varphi(x^{n/m}_1) & \varphi(x^{n/m}_2) & \varphi(x^{n/m}_3) & \ldots & \varphi(x^{n/m}_m) \end{array} \right)$$ $$A = (A^{ab}_{ij}) = (A(x^a_i, x^b_j)) .$$ Now the indices $a,b$ of $A$ act on the columns of $\varphi$, while the indices $i,j$ act on the rows. The fields $A^{ab}_{ij}$ with $a=b$ maintain null masses and thus continue to behave as gauge fields for $U(m,\mathbf{H})^{n/m}$. Every $U(m,\mathbf{H})$ term in $U(m,\mathbf{H})^{n/m}$ acts independently inside a single $\mathcal{U}^a$. So, if we consider the ensembles $\mathcal{U}^a$ as the ``real'' physical points, we can interpret $U(m,\mathbf{H})^{n/m}$ as a local $U(m,\mathbf{H})$. It is simple to verify: $$h^{\mu\nu}(x^a_i) = h^{\mu\nu}(x^a_j) \qquad \forall x^a_i, x^a_j \in \mathcal{U}^a$$ $$h^{\mu\nu}(x^a) \overset{!}{=} h^{\mu\nu}(\mathcal{U}^a) = h^{\mu\nu}(x^a_i) \qquad \forall x^a_i \in \mathcal{U}^a$$ $$A_{ij} (x^a) \overset{!}{=} Tr\,\left[ A(x^a)T^{ij} \right],\quad where$$ $$A(x^a) = \sum_{ij} A(x^a_i, x^a_j)\, T^{ij} .$$ \[definitions\] Ricci scalar in the arrangement field paradigm {#ricciscalar} ---------------------------------------------- ### Hyperions In this subsection we define an extension of $\mathbf{H}$ by inserting a new imaginary unit $I$.
It satisfies: $$I^2 = -1 \qquad I^\dag = -I$$ $$[I,i] = [I,j] = [I,k] = 0$$ In this way a generic number assumes the form $$v = a + Ib+ ic + jd + ke + iIf+ jIg + kIh, \qquad a,b,c,d,e,f,g,h \in \mathbf{R}$$ $$v = p + Iq, \qquad p,q \in \mathbf{H}$$ We call these numbers ``Hyperions'' and indicate their ensemble with $\mathbf{Y}$. It is straightforward that such numbers are in one-to-one correspondence with even products of Gamma matrices. Explicitly: $$1 \Leftrightarrow \g_0 \g_0 = 1 \qquad I \Leftrightarrow \g_5 = \g_0 \g_1 \g_2 \g_3$$ $$i \Leftrightarrow \g_2 \g_1 \qquad iI \Leftrightarrow \g_0 \g_3$$ $$j \Leftrightarrow \g_1 \g_3 \qquad jI \Leftrightarrow \g_0 \g_2$$ $$k \Leftrightarrow \g_3 \g_2 \qquad kI \Leftrightarrow \g_0 \g_1$$ Note that the imaginary units $i,j,k,iI,jI,kI$ satisfy the Lorentz algebra, with $i,j,k$ describing rotations and $iI, jI, kI$ describing boosts. The bar-conjugation is an operation which exchanges $I$ with $-I$ (or $\g_0$ with $-\g_0$ in the $\g$-representation). Explicitly, if $v = a + Ib+ ic + jd + ke + iIf+ jIg + kIh$ with $a,b,c,d,e,f,g,h \in \mathbf{R}$, then $\bar{v} = a - Ib+ ic + jd + ke - iIf - jIg - kIh$. The pre-norm is a complex number with $I$ as imaginary unit (we say an ``I-complex'' number). Given a hyperion $v$, its pre-norm is $|v| = (\bar{v}^\dag v)^{1/2}$. If $v \in \mathbf{H}$, its pre-norm coincides with the usual norm $(v^\dag v)^{1/2}$. Note that every hyperion $v$ can be written in the polar form $$v = |v|e^{ia+jb+kc+iId+jIe+kIf} \qquad a,b,c,d,e,f \in \mathbf{R}$$ $$\bar{v}^\dag v = |v|e^{-(ia+jb+kc+iId+jIe+kIf)}\, |v|e^{ia+jb+kc+iId+jIe+kIf}= |v|^2.$$ If $M$ takes values in $\mathbf{Y}$, the probability for the existence of an edge $(ij)$ can be defined as $||M^{ij}||$, which is the norm of the pre-norm. $\pt$ The fundamental relation (\[fondamentale\]) descends uniquely from the spectral theorem in $\mathbf{H}$.
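The correspondence with even products of Gamma matrices can be verified numerically. The sketch below uses the Dirac representation with signature $(+,-,-,-)$ (a conventional choice, not fixed by the text) and checks $I^2 = -1$, the commutation of $I$ with $i,j,k$, the quaternion relation $ij=k$, and the standard trace identity $\fr 14 tr(\g_a\g_b\g_c\g_d) = \eta_{ab}\eta_{cd}-\eta_{ac}\eta_{bd}+\eta_{ad}\eta_{bc}$:

```python
import numpy as np

# Dirac gamma matrices, Dirac representation, signature (+,-,-,-)
I2, Z = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g = [np.block([[I2, Z], [Z, -I2]]),
     np.block([[Z, sx], [-sx, Z]]),
     np.block([[Z, sy], [-sy, Z]]),
     np.block([[Z, sz], [-sz, Z]])]

I_unit = g[0] @ g[1] @ g[2] @ g[3]                   # I <-> g0 g1 g2 g3
qi, qj, qk = g[2] @ g[1], g[1] @ g[3], g[3] @ g[2]   # i, j, k

comm = lambda a, b: a @ b - b @ a

print(np.allclose(I_unit @ I_unit, -np.eye(4)))                    # I^2 = -1
print(all(np.allclose(comm(I_unit, q), 0) for q in (qi, qj, qk)))  # [I, i] = 0 ...
print(np.allclose(qi @ qj, qk), np.allclose(qi @ qi, -np.eye(4)))  # ij = k, i^2 = -1

eta4 = np.diag([1.0, -1.0, -1.0, -1.0])
ok = all(np.isclose(np.trace(g[a] @ g[b] @ g[c] @ g[d]) / 4,
                    eta4[a, b]*eta4[c, d] - eta4[a, c]*eta4[b, d]
                    + eta4[a, d]*eta4[b, c])
         for a in range(4) for b in range(4)
         for c in range(4) for d in range(4))
print(ok)   # True: the gamma-trace identity holds in this representation
```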
From the work of Yongge Tian [@Tian] one can see that the spectral theorem is still valid in $\mathbf{Y}$ in the following form: ``Every normal matrix $M$ with entries in $\mathbf{Y}$ is diagonalizable by a transformation $U \in U(n,\mathbf{Y})$ which sends $M$ in $UM\bar{U}^\dag$''. Here $U(n,\mathbf{Y})$ is the exponentiation of $u(n, \mathbf{Y}) = u(n, \mathbf{H})\cup Iu(n, \mathbf{H})$ and $M$ satisfies a generalized normality condition. Explicitly, $\bar{U}^\dag = U^{-1}$ and $\bar{M}^\dag M = M \bar{M}^\dag$. This implies that (\[fondamentale\]) is valid too in the form $$\bar{U}^\dag D^\mu \na_\mu U = M$$ The matrix $\na$ is now in $u(n, \mathbf{Y})$ and thus satisfies $\bar{\na}^\dag = -\na$. Accordingly, its diagonal entries belong to the Lorentz algebra (they don't comprise real and $I$-imaginary components). To conclude, we don't know if an associated normal matrix exists for every invertible matrix with entries in $\mathbf{Y}$. Fortunately, in lorentzian spaces there is no reason for using such machinery and we can start from the beginning with a normal arrangement matrix. \[sinv\]$\pt$ It follows from the spectral theorem that the eigenvalues $\lambda$ of $M$ are equivalence classes $$\lambda \sim s\lambda \bar{s}^\dag \qquad s \in \mathbf{Y}, \bar{s}^\dag s = 1.$$ As a consequence, we can choose freely the diagonal matrix $D$ inside the equivalence class $SD\bar{S}^\dag$, where $S$ is both diagonal and unitary ($\bar{S}^\dag = S^{-1}$). This choice doesn't affect the metric $\sqrt h h^{\mu\nu} = Re\,(\bar{D}^{\dag\mu} D^\nu)$, guaranteeing the persistence of a symmetry $U(1,\mathbf{Y})^n = SO(1,3)^n$, i.e. local $SO(1,3)$. Clearly this is a reworking of the usual gauge symmetry which acts on the tetrads, sending $e^\mu_a$ in $\Lambda_a^{\pt b} e^\mu_b$ via the lorentz transformation $\Lambda$.
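The equivalence classes $\lambda \sim s\lambda\bar{s}^\dag$ can be illustrated for ordinary quaternions (the hyperionic case adds the unit $I$, which numpy cannot represent directly; restricting to $\mathbf{H}$ is our simplification) through the standard embedding of $\mathbf{H}$ into $2\times 2$ complex matrices. Conjugation by a unit quaternion becomes a similarity transformation, so it preserves the invariants of the class, namely the real part and the norm of the quaternion:

```python
import numpy as np

def quat(a, b, c, d):
    # standard embedding of q = a + bi + cj + dk into M_2(C)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

rng = np.random.default_rng(2)

lam = quat(*rng.normal(size=4))            # a quaternionic eigenvalue

v = rng.normal(size=4)
v /= np.linalg.norm(v)
s = quat(*v)                               # unit quaternion: s s^dag = 1

lam2 = s @ lam @ s.conj().T                # another representative of the class

# similar matrices share trace (= 2 Re q) and determinant (= |q|^2) ...
same_trace = np.isclose(np.trace(lam), np.trace(lam2))
same_det = np.isclose(np.linalg.det(lam), np.linalg.det(lam2))
# ... and the same pair of imaginary parts of the complex eigenvalues
e1 = np.sort(np.linalg.eigvals(lam).imag)
e2 = np.sort(np.linalg.eigvals(lam2).imag)
print(same_trace, same_det, np.allclose(e1, e2))   # True True True
```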
In what follows we exploit the $SO(1,3)$-symmetry to satisfy two conditions: $$\begin{aligned} & tr\,(\{\bar\na^\dag_\mu, \na_\nu\}\, \bar{D}^{\dag\mu} D^\nu) = 0\\ & tr\,(D^\nu \{\na_\nu, \bar\na^\dag_\a\}\, \bar{D}^{\dag\a} D^\b \na_\b \bar\na^\dag_\mu \bar{D}^{\dag\mu}) = 0 \end{aligned}$$ \[scond\] Note that these are global conditions because the operator *tr* is analogous to a space-time integration. ### Ricci scalar with hyperions In this subsection we simplify the form of the Ricci scalar by means of hyperions, in order to make it suitable for the arrangement field formalism. Given a gauge field $\w_\mu$ in $so(1,3)$ and a complex tetrad $e^\mu$, we define $$A_\mu = \w^{ab}_\mu \g_{a}\g_{b} \qquad h^{\mu\nu} = Re\,(e^{\dag\mu}_a e^\nu_b \eta^{ab})$$ \[gaugey\] $$d^\mu = \sqrt e e^{\mu a} \g_0 \g_a \qquad e = \left[ det(- e^{\dag \mu}_a e^\nu_b \eta^{ab}) \right]^{-1/2} \in \mathbf{R^+}$$ $$\bar{d}^\mu = d^\mu (\g_0 \rightarrow-\g_0)$$ $$\Rightarrow \bar{d}^{\dag \mu} d^{\nu} = ee^{\dag \mu a}e^{\nu b} \g_{a} \g_{b} \quad \Rightarrow \sqrt h h^{\mu\nu} = \fr 14 Re\,\left[ tr(\bar{d}^{\dag \mu} d^{\nu})\right]$$ Note that our definitions are the same as requiring $\bar{A}^\dag =-A$ in the hyperions framework. The Ricci scalar can be written as $$\sqrt h R(x) = -\fr 18 tr\left(\left(\pa_\mu A_\nu - \pa_\nu A_\mu + [A_\mu, A_\nu]\right) \bar{d}^{\dag\mu} d^\nu\right)$$ To verify its correctness we first expand the commutator $$\begin{aligned} \,[A_\mu, A_\nu] &= \w^{ab}_\mu\w^{cd}_\nu\left( \g_a\g_b\g_c\g_d - \g_c \g_d \g_a \g_b \right)\\ &= \fr 12\, \w^{ab}_\mu\w^{cd}_\nu\left( \g_a\{\g_b,\g_c\}\g_d - \g_c \{\g_d, \g_a\} \g_b \right) + \fr 1 {2}\,\w^{ab}_\mu\w^{cd}_\nu\left( \g_a [\g_b,\g_c]\g_d - \g_c [\g_d, \g_a] \g_b \right)\\ &= \left(\w^{ab}_\mu\w_{b\nu}^{\pt d} - \w^{ab}_\nu\w_{b\mu}^{\pt d} \right)\left( \g_a \g_d \right) + \fr 1 {4!}\,\w^{ab}_\mu\w^{cd}_\nu\left( \e_{abcd}\, \e^{efgh}\, \g_e \g_f \g_g \g_h \right)\\ &= [\w_\mu, \w_\nu]^{ab}\,\g_a\g_b + \w^{ab}_\mu \w^{(D)}_{ab\nu}\, \g_5 \end{aligned}$$ In the last line we have defined $\w^{(D)}_{ab \nu} = \e_{abcd} \w_\nu^{cd}$.
Hence $$\begin{aligned} \sqrt h R(x) &= -\fr 18\, tr(\g_a \g_b \g_c \g_d)\left( \pa_\mu \w^{ab}_\nu - \pa_\nu \w^{ab}_\mu + [\w_\mu,\w_\nu]^{ab}\right)e^{\dag\mu c} e^{\nu d}\\ &\quad -\fr 18\, tr(\g_5 \g_b \g_c)\, \w^{ab}_\mu \w^{(D)}_{ab\nu}\, e^{\dag\mu b} e^{\nu c} \end{aligned}$$ Consider now the relations $$\fr 14 tr(\g_a \g_b \g_c \g_d) = \eta_{ab}\eta_{cd} - \eta_{ac}\eta_{bd} + \eta_{ad}\eta_{bc}$$ $$tr(\g_5 \g_b \g_c) = 0$$ We obtain $$R(x) = \left( \pa_\mu \w^{ab}_\nu - \pa_\nu \w^{ab}_\mu + [\w_\mu,\w_\nu]^{ab}\right)e^{\dag \mu}_a e^{\nu}_b$$ which is the usual definition. A complex tetrad implies that the tangent space is the complexification of Minkowski space (usually indicated with $\mathcal{CM}$). This fact gives a strict connection with the theory of **twistors**[@twistor], where massless particles move on trajectories which have an imaginary component proportional to helicity. We can move freely from the matrices $\g$ to hyperions, substituting $tr$ with $4$. In this way $$\begin{aligned} \sqrt h R(x) &= -\fr 12 \left(\pa_\mu A_\nu - \pa_\nu A_\mu + [A_\mu, A_\nu]\right) \bar{d}^{\dag\mu} d^\nu\\ &= -\fr 12 [\na_\mu,\na_\nu]\,\bar{d}^{\dag\mu} d^\nu \end{aligned}$$ $$\na_\mu =\pa_\mu + A_\mu \qquad\qquad A_\mu, d^\mu \in \mathbf{Y}$$ $$e^\mu_a = Re\,e^\mu_a + I\,Im\,e^\mu_a$$ $$\begin{aligned} d^\mu &= \sqrt e\,\big(Re\,e^{\mu 0} + i I\, Re\,e^{\mu 3} + j I\, Re\,e^{\mu 2} + k I\, Re\,e^{\mu 1}\\ &\quad + I\,Im\,e^{\mu 0} - i\, Im\,e^{\mu 3} - j\, Im\,e^{\mu 2} - k\, Im\,e^{\mu 1}\big) \end{aligned}$$ ### Ricci scalar in the new paradigm We try to define the Hilbert-Einstein action as $$S_{HE} = tr\,(\bar M^\dag M).$$ We insert in $S_{HE}$ the usual expansion $M = UD^\mu \na_\mu \bar{U}^\dag$, obtaining $$\begin{aligned} S_{HE} &= tr\,[(\bar U \bar{D}^\mu \bar\na_\mu U^\dag)^\dag(U D^\nu \na_\nu\bar{U}^\dag)]\\ &= tr\,[ U \bar\na^\dag_\mu\bar{D}^{\dag\mu} \bar{U}^\dag U D^\nu \na_\nu\bar{U}^\dag]\\ &= tr\,[\na_\nu\bar\na^\dag_\mu\bar{D}^{\dag\mu} D^\nu]. \end{aligned}$$ \[HEfirst\] Now we can impose the first condition in (\[scond\]), which gives $$S_{HE} = \fr 12\, tr\,\{[\na_\nu, \bar\na^\dag_\mu]\, \bar{D}^{\dag\mu} D^\nu\} .$$ \[espansione\] Expanding the covariant derivatives we obtain $$\begin{aligned} S_{HE} &= \fr 12 \sum_{a,b,c} \left\{ \pa^\dag_\mu A_\nu(x^a, x^b)-\pa_\nu \bar{A}^\dag_\mu(x^a, x^b) + [\bar{A}^\dag_\mu, A_\nu](x^a, x^b)\right\}\bar{d}^{\dag\mu}(x^b)\,\d^{bc}\, d^\nu(x^c)\,\d^{ca}\\ &= \fr 12 \sum_a \left\{ \pa^\dag_\mu A_\nu(x^a)-\pa_\nu\bar{A}^\dag_\mu(x^a) + [\bar{A}^\dag_\mu, A_\nu](x^a, x^a)\right\}\bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a)\\ &= \fr 12\sum_{a,b\neq a} \left\{ \pa^\dag_\mu A_\nu(x^a)-\pa_\nu\bar{A}^\dag_\mu(x^a) + [\bar{A}^\dag_\mu(x^a), A_\nu(x^a)] + [\bar{A}^\dag_\mu(x^a,x^b),A_\nu(x^b,x^a)]\right\} \bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a) \end{aligned}$$ Consider now a symmetry breaking with residual group $U(m,\mathbf{Y})^{n/m}$ which regroups the vertices in ensembles $\mathcal{U}^a = \{x^a_1, x^a_2,\ldots,x^a_m\}$. We assume that the fields $A(x^a_i,x^b_j)$ with $a \neq b$ acquire big masses and thus we can neglect them. The symbol $\sum_a$ becomes $\sum_{a,i}$, while $\sum_{a,b\neq a}$ becomes $\sum_{a,i,b,j|(a,i)\neq (b,j)}$. After neglecting the heavy fields, the last one is simply $\sum_{a,i,j\neq i}$. $$\begin{aligned} S_{HE} &= \fr 12\sum_{a} \Big\{ \pa^\dag_\mu tr\,A_\nu(x^a)-\pa_\nu tr\,\bar{A}^\dag_\mu(x^a) + [tr\,\bar{A}^\dag_\mu(x^a), tr\,A_\nu(x^a)]\\ &\quad +\sum_{i,j\neq i}[\bar{A}^{\dag ij}_\mu(x^a) A^{ji}_\nu(x^a) - A^{ij}_\nu(x^a) \bar{A}^{\dag ji}_\mu(x^a)]\Big\}\, \bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a) \end{aligned}$$ For what follows we write $S_{HE} = \fr 12 \sum_a R^{ik}_{\mu\nu} \d^{ik}\bar{d}^{\dag\mu}d^\nu $ with $$\begin{aligned} R^{ik}_{\mu\nu} &= \pa^\dag_\mu tr\,A_\nu(x^a)-\pa_\nu tr\,\bar{A}^\dag_\mu(x^a) + [tr\,\bar{A}^\dag_\mu(x^a), tr\,A_\nu(x^a)]\\ &\quad +\sum_{j\neq i,\, j\neq k}[\bar{A}^{\dag ij}_\mu(x^a) A^{jk}_\nu(x^a) - A^{ij}_\nu(x^a) \bar{A}^{\dag jk}_\mu(x^a)] . \end{aligned}$$ \[curvature\] $R^{ik}_{\mu\nu}$ is a generalization of the curvature tensor. We have indicated with $tr\,A$ the trace on $ij$, i.e. $\d^{ij}A^{ij}(x^a) = \d^{ij}A(x^a_i,x^a_j)$. Note that $[\bar{A}^{\dag ii},A^{jj}]$ is equal to zero when $i \neq j$ and then $$\sum_{i} [\bar{A}^{\dag ii}_\mu, A^{ii}_\nu] = \sum_{ij}[\bar{A}^{\dag ii}_\mu, A^{jj}_\nu]= [tr\,\bar{A}^\dag_\mu, tr\,A_\nu].$$ Consider now any skew-hermitian matrix $W_\mu$ with elements $W_\mu^{ij} = A_\mu^{ij}$ for $i \neq j$ and $W_\mu^{ij} = 0$ for $i = j$. It belongs to the subalgebra of $u(m,\mathbf{Y})$ made of all traceless generators. This means that commutators between traceless generators are traceless generators too.
In this way $$\sum_{i,i\neq j} [\bar{A}^\dag_\mu (x^i,x^j),A_\nu (x^j,x^i)] = tr\, [\bar{W}^\dag_\mu, W_\nu] = 0 .$$ Hence we can delete the mixed term in $S_{HE}$. $$S_{HE} = \fr 12 \sum_{a} \left\{ \pa^\dag_\mu tr\,A_\nu(x^a)-\pa_\nu tr\,\bar{A}^\dag_\mu(x^a) + [tr\,\bar{A}^\dag_\mu(x^a), tr\,A_\nu(x^a)]\right\} \bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a)$$ In the arrangement field paradigm, the operator $\dag$ also transposes rows with columns in the matrices which represent $\pa$ and $A$. As we have seen, the fields $A$ which intervene in $R$ are only the diagonal ones, so the transposition of rows with columns is trivial. Note that $\na$ satisfies a generalized condition of skew-hermiticity ($\bar{\na}^\dag = -\na$) and thus its diagonal components belong to the lorentz algebra. This implies $tr\,\bar{A}^\dag = -tr\,A$, matching exactly our request in (\[gaugey\]). Finally, if we consider the matrix which represents $\pa$ (we have called it $\tilde M$), we note that $\bar{\pa}^\dag = \pa^T = -\pa$. Explicitly $$\na_\nu^\dag = (\pa_\nu + tr\,\bar{A}_\nu)^\dag = \pa_\nu^\dag + tr\,\bar{A}_\nu^\dag = -\pa_\nu - tr\,A_\nu = -\na_\nu .$$ Applying this to $S_{HE}$, $$\begin{aligned} S_{HE} &= -\fr 12 \sum_{a} \left\{ \pa_\mu tr\,A_\nu(x^a)-\pa_\nu tr\,A_\mu(x^a) + [tr\,A_\mu(x^a), tr\,A_\nu(x^a)]\right\} \bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a)\\ &= -\fr 12 \sum_a [\overset{G}{\na}_\mu,\overset{G}{\na}_\nu]\,\bar{d}^{\dag\mu}(x^a)\,d^\nu(x^a)\\ &= \sum_a \sqrt h R(x^a) \sim \int d^4 x \sqrt h R(x). \end{aligned}$$ Here $\overset{G}{\na}$ is the gravitational covariant derivative $\overset{G}{\na} = \pa + tr\,A$. It is very remarkable that the gauge fields in $R$ are only the diagonal ones. First, this is the unique possibility to obtain ${\overset{G}{\na^\dag}_\nu} = -\overset{G}{\na}_\nu$. Moreover, while the gauge fields in $R$ are traces of the matrices $(A_{ij})(x^a)$, we will see that the other gauge fields of the Standard Model correspond to the non-diagonal components. The kinetic term {#kinetic} ---------------- Until now we have obtained no terms which describe gauge interactions. In this section we find such a term, with the condition that it must not change the Einstein equations.
One option is as follows: $$\begin{aligned} S_{GB} &= -tr\,(\bar{M}^\dag M \bar{M}^\dag M)\\ &= -tr\,[U \bar\na^\dag_\mu \bar{D}^{\dag\mu} D^\nu \na_\nu\, \bar\na^\dag_\a \bar{D}^{\dag\a} D^\b \na_\b\, \bar{U}^\dag]\\ &= -tr\,[\bar\na^\dag_\mu \bar{D}^{\dag\mu} D^\nu \na_\nu\, \bar\na^\dag_\a \bar{D}^{\dag\a} D^\b \na_\b] \end{aligned}$$ \[eq: opz\] We assume a residual symmetry under $U(m,\mathbf{Y})^{n/m}$. This means that the $D^\mu$ are matrices made of blocks $m \times m$ where every block is a hyperionic multiple of the identity. We use again the correspondence between $(1,I,i,j,k,iI,jI,kI)$ and gamma matrices, which turns $S_{GB}$ into a sum of terms of the form $-\fr 14\, tr\,(\g_a\g_b\g_c\g_d\g_e\g_f\g_g\g_h)$ multiplied by components of the curvature and of the tetrads. We use the letters $a,b,c,d$ for indices which run on the Gamma matrices, $\a,\b,\mu,\nu$ for spatial coordinates indices and $i,j,k$ for gauge indices (i.e. indices which run inside a single $\mathcal{U}^a$). Pay attention not to confuse the index $a$ in the first group with the index $a$ which runs over the vertices, as in $x^a_i$. We will see that the physical fields arise in three families, determined by the choice of a subspace inside $\mathbf{Y}$. This is true both for fermionic and bosonic fields. Thus the indices with letters $a,b,c,d$ run over the three families. We proceed by imposing the second condition in (\[scond\]), in such a way as to ignore terms proportional to $\{\na_\b, \bar{\na}^\dag_\mu\}$ inside $S_{GB}$. We take $$S_{GB} = \sum_a L_{GB} (x^a)$$ Then $$\begin{aligned} L_{GB} &= R^{ij}_{ab\,\b\mu} R^{ab\,ji}_{\nu\a}\, \bar{d}^{\dag\b}_c d^{\nu c}\, \bar{d}^{\dag\mu}_d d^{\a d} - 4R^{ij}_{ac\,\b\mu}\, \bar{d}^{\dag\b a} R^{cb\,ji}_{\nu\a}\, d^\nu_b\, d^{\a d}\, \bar{d}^{\dag\mu}_d + R^{ij}_{ac\,\b\mu}\, \bar{d}^{\dag\b a} d^{\mu c}\, R^{cb\,ji}_{\nu\a}\, \bar{d}^{\dag\nu}_c d^\a_b\\ &= h\, R^{ij}_{ab\,\b\mu} R^{ab\,ji\,\b\mu} - 4 h\, R^{ij}_{c\,\mu} R^{c\,ji\,\mu} + h\, R^{ij}R^{ji} \end{aligned}$$ $R^{ij}_{\b\mu}$ was defined in (\[curvature\]), while $\sqrt[4] h\, R^{ij}_\mu = R^{ij}_{\b\mu} d^{\b}$ and $\sqrt h\, R^{ij} = R^{ij}_{\b\mu} d^{\b} d^{*\mu}$. One sees immediately that for $i \neq j$ we have $R^{ij}_{ac\b\mu} R^{ji\, ac}_{\nu\a} h^{\mu\a}h^{\nu\b} = tr\,\sum_{(ac)} F^{(ac)}_{\mu\nu} F^{(ac) \mu\nu}$. The index $(ac)$ runs over the three field families and $F_{(ac)\mu\nu}$ is a field strength tensor.
In this way the terms $R^{ij \nu}_\b R^{ji \b}_\nu$ and $R^{ij}R_{ji}$ are terms which mix the families. The trouble with $S_{GB}$ is that it generates a factor $h$ instead of $\sqrt{h}$. However, we can solve the problem by imposing the gauge condition $h=1$. Note that for $i=j$ we have $$L_{GB} = R_{ac\b\mu} R^{ac\b\mu} + R^2 - 4 R^{\a}_\mu R^{\mu}_\a$$ which is a topological term and does not change the Einstein equations. The combination of $S_{HE}$ and $S_{GB}$ gives to the gravitational gauge field $\overset{G}{A}$ a potential of the form $$\overset{G}{A^2} - \overset{G}{A^4}.$$ This potential has non-trivial minima which imply a non-trivial expectation value for $\overset{G}{A}$. Moreover, inside $S_{GB}$ we find the following kind of terms for the other fields $A$: $$\langle \overset{G}{A^2} \rangle A^2 - A^4.$$ In this way we have a mass for the gauge fields $A$ and another potential with non-trivial minima. Therefore, the gauge fields $A$ also have non-trivial expectation values. Finally, such expectation values give mass to the fermionic fields via terms $$\psi^\dag \langle A \rangle \psi.$$ There is no need for a scalar Higgs boson. Connections with Strings and Loop Gravity {#string} ----------------------------------------- We have seen in [@Arrangement], at **Remark 13**, that some similarities exist between the diagonal components of $M$ (loops) and closed strings in string theory. Now we have discovered that such diagonal components describe a gravitational field. Is it then a coincidence that the lowest-energy state of the closed string is the graviton? We think not. Moreover, we have seen that gauge fields correspond to the non-diagonal components of $M$, i.e. open edges in the graph. This also finds a connection with open strings, whose lowest-energy states are gauge fields. We have shown that a symmetry $U(m,\mathbf{Y})$ arises when vertices are grouped in ensembles $\mathcal{U}^a$ containing $m$ vertices. This seems to represent a superposition of $m$ universes or branes.
Gauge fields for such a symmetry correspond to open edges which connect vertices in the same $\mathcal{U}^a$. Is it then a coincidence that the same symmetry arises for open strings with endpoints on $m$ superimposed branes? Again, we think not. Until now we have supposed that open edges between vertices in the same $\mathcal{U}^a$ have length zero, so that we do not have to introduce extra dimensions. However, by $T$-duality such edges correspond to open strings with $U(m,\mathbf{Y})$ Chan-Paton factors which move in an infinitely extended extra dimension. This happens because an absent extra dimension is a compactified dimension with $R = 0$, and $T$-duality sends $R$ in $1/R$. Regarding edges between vertices in different $\mathcal{U}^a$, we see that they have a mass proportional to the separation between their endpoints. This is true both in our model and in string theory. $\pt$ The following two theorems emphasize a triality between *Arrangement Field Theory*, *String Theory* and *Loop Quantum Gravity*. We can see that they are different manifestations of the same theory. \[Loop\] Every element $M^{ij}$ in the arrangement matrix can be written as a state in the Hilbert space of *Loop Quantum Gravity*, i.e. a holonomy for a $SO(1,3)$ gauge field[^7]. In this way, every field (gauge or gravitational) becomes a manifestation of the sole gravitational field. An element $M^{ij}$ can always be written in the following form: $$M^{ij} = |M^{ij}|\, exp \left(\int_{x_i}^{x_j} A_\mu dx^\mu\right)$$ \[defM\] with $\mu = 1,2,3$ and $$|M^{ij}|= exp \left(\int_{x_i}^{x_j} A_0 dx^0\right).$$ Here $A_\mu$ is a $SO(1,3)$ connection and $A_0$ is an $I$-complex field. Obviously, we take $A_\mu$ hyperionic by using the usual correspondence with Gamma matrices. In this way $A_\mu$ is purely imaginary. The integration is intended over the edge which goes from vertex $i$ to vertex $j$, parametrized by any $\t \in [0,1]$.
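For an abelian (hence path-ordering-free) connection, the edge variable of (\[defM\]) can be emulated numerically: the product of per-segment factors $exp(A\cdot dx)$ along a discretized edge reproduces the exponential of the line integral. The specific field below is an arbitrary illustrative choice, not taken from the text:

```python
import numpy as np

def A(x):
    # toy abelian connection on R^2 (an arbitrary choice for illustration)
    return 0.3 * np.array([x[0] + x[1], x[0] * x[1]])

xi, xj = np.array([0.0, 0.0]), np.array([1.0, 2.0])

# discretize the edge from x_i to x_j, parametrized by t in [0, 1]
N = 10000
t = np.linspace(0.0, 1.0, N)
pts = xi + np.outer(t, xj - xi)
dx = (xj - xi) / (N - 1)

line_int = sum(A(p) @ dx for p in pts[:-1])            # int A_mu dx^mu
M_ij = np.prod([np.exp(A(p) @ dx) for p in pts[:-1]])  # product of edge factors

print(abs(M_ij - np.exp(line_int)))   # ~0: the edge variable is exp of the integral
```

In the non-abelian $SO(1,3)$ case of the text the per-segment factors no longer commute, so the product must be path-ordered; the abelian toy only illustrates the continuum limit of the discretization.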
Looking at (\[defM\]), on the left you see a discrete space (the graph) with discrete derivatives and fields which are defined only on the vertices. On the right you find instead a Hausdorff space with continuous paths, continuous derivatives and fields which are defined everywhere. Applying eventually a transformation in $U(n,\mathbf{Y})$, we have $$M^{ij} = D^{ik\mu}\na^{kj}_\mu = D^{ii\mu} \na^{ij}_\mu = d^\mu(x_i) \na^{ij}_\mu.$$ In the following we introduce a real constant $\lambda$, with dimensions of length, in order to make $M$ dimensionless: $$M^{ij} = \lambda D^{ik\mu}\na^{kj}_\mu = \lambda D^{ii\mu} \na^{ij}_\mu = \lambda d^\mu(x_i) \na^{ij}_\mu.$$ \[scompose\] In *Loop Quantum Gravity* we consider any space-time foliation defined by some temporal parameter and then we quantize the theory on a tridimensional slice. The simplest choice is a foliation along $x_0$: in this case the metric on the slice is simply the spatial $3\times 3$ block inside the four-dimensional metric when it is taken in temporal gauge. In such a framework we have $d^0 = \mathbf{1}$ and $[d^\mu(x), A_\nu(x')] = G\d^\mu_\nu \d^3(x-x')$ with $\mu,\nu = 1,2,3$. We deduce the relation $d^\mu(x) = G\,\d / {\d A_\mu(x)}$ and apply it to (\[scompose\]) when the vertices $i$ and $j$ sit on the same slice. We obtain $$\lambda\, d^\mu(x_i) \na^{ij}_\mu = \lambda G\, \fr \d {\d A_\mu(x_i)} \na^{ij}_\mu = |M^{ij}|\, exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right)$$ \[appl\] with $\mu = 1,2,3$. Note that $x_0 (x_i) = x_0 (x_j)$ when $i$ and $j$ sit on the same slice. Hence $$|M^{ij}|= exp \left(\int_{x_i}^{x_j} A_0 dx^0\right) = exp \left(\oint A_0 dx^0\right).$$ Consider now the following relation: $$exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right) = \fr \d {\d A_\nu} \int_\Omega d^2s\, n_\nu\, exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right)$$ \[relaz\] with $$n_\nu = \fr 12 \e_{\nu\mu\a} \fr {\pa x^\mu}{\pa s^a} \fr {\pa x^\a}{\pa s^b}\e^{ab}.$$ $\Omega$ is a two-dimensional surface parametrized by coordinates $s^a$ with $a=1,2$ and $\int_\Omega d^2 s = G$.
We assume that $\Omega$ contains the vertex $x_i$ and no other point which is a vertex or sits along an edge. Substituting (\[relaz\]) in (\[appl\]) we obtain $$\fr \d {\d A_\nu^i} \na^{ij}_{\nu} = \fr 1 {\lambda G}\fr \d {\d A_\nu} \int_\Omega d^2s \, n_\nu |M^{ij}| exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right)$$ and then $$\begin{aligned} \na^{ij}_\nu &= \fr 1 {\lambda G}\int_\Omega d^2s\, n_\nu\, |M^{ij}|\, exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right) +K_\nu(x_i, x_j)\\ &= \fr 1 {\lambda G} \int_\Omega d^2s\, n_\nu\, exp \left({\int_{x_i}^{x_j} A_\mu dx^\mu}\right) + K_\nu(x_i, x_j). \end{aligned}$$ $K_\nu$ is any function of $x_i$ and $x_j$ independent of $A_\mu$. In the second line we have taken $\mu =0,1,2,3$. For the diagonal components this becomes $$A_\nu^{ii} = \fr 1 {\lambda G}\int_\Omega d^2s\, n_\nu\, exp \left({\oint A_\mu dx^\mu}\right)+K_\nu(x_i).$$ \[una\] We have used $\pa^{ii} = 0$ because the matrix which represents the discrete derivative is null along the diagonal. We choose the loops and the surfaces $\Omega$ in such a way as to have $$n_\nu \oint A_\mu dx^\mu = \lambda A_\nu (x_i) + O(\lambda^2).$$ Applying this to (\[una\]), it becomes $$\begin{aligned} A^{ii}_\nu &= \fr 1 {\lambda G} \int_\Omega d^2s\, n_\nu \left( 1 + \oint A_\mu dx^\mu + O(\lambda^2) \right) +K_\nu(x_i)\\ &= \fr 1 {\lambda G} \left(G\, n_\nu + G\,\lambda A_\nu(x_i)+G\,O(\lambda^2)\right) + K_\nu(x_i)\\ &= \fr 1\lambda \left(n_\nu + \lambda A_\nu(x_i)+ O(\lambda^2)\right)+ K_\nu(x_i). \end{aligned}$$ If we set $K_\nu (x_i) = -n_\nu (x_i) / \lambda$, we obtain $$A^{ii}_\nu = A_\nu (x_i) + O(\lambda).$$ This verifies the consistency of our definition and proves the theorem. Note that $\l$ could be taken equal to $\Delta$ because $M$ contains a factor $\Delta^{-1}$ from the definition (\[dderiv\]) of $\tilde{M}$. In such a case we obtain $$A^{ii}_\nu = A_\nu (x_i)$$ in the continuous limit. Note that the canonical quantization of gauge fields implies $$\left[ \pa_0 A^{ij}_\a (x_a), A^{ij}_\nu (x_b) \right] = \left[\left(\int d^4 x \pa_0 A_\mu (x) \fr {\d \na^{ij}_\a} {\d A_\mu (x)}\right)(x_a), \na^{ij}_\nu(x_b)\right] = \d_{\a\nu}\d^3(x_a -x_b).$$ Integration in the first factor is over the continuous coordinates of the Hausdorff space.
Conversely, the argument $x_a$ indicates simply to which ensemble $\mathcal{U}^a$ the edge $(ij)$ belongs. Here we have used $\pa^{ij} = 0$, which holds not only for $i = j$ but also for $x_i$ and $x_j$ in the same ensemble $\mathcal{U}^a$. This implies $\na^{ij} = A^{ij}$. Moreover $\na^{ij}$ is a state in the Hilbert space of Loop Quantum Gravity and hence we have a sort of third quantization which acts on gravitational states and creates gauge fields: $$\left[\left( \int d^4 x \dot{A}_\mu(x) \fr {\d \Psi[\Lambda,A]}{\d A_\mu(x)}\right), \Psi^\dag[\Lambda',A]\right] = \d (\Lambda - \Lambda').$$ $$\left[\left( \int d^4 x \dot{A}_\mu(x) \fr {\d \Psi[\Lambda, A]}{\d A_\mu(x)}\right), \Psi^\dag[\Lambda, A']\right] = \d (A - A').$$ This implies $$\Psi[A] = \int D[d^\mu]\, a(d)\, exp\left(\fr 1 G{\int d^4 x \,d^\mu A_\mu}\right) + b^\dag(d) \, exp\left(\fr 1 G{\int d^4 x \,d^{\dag\mu} A^\dag_\mu}\right)$$ $$\left[a(d), a^\dag (d')\right] = \fr 1 {\int d^4 x \dot A_\nu d^\nu} \d(d-d^{\dag\prime})$$ $$\left[b(d), b^\dag (d')\right] = \fr 1 {\int d^4 x \dot A_\nu d^\nu} \d(d-d^{\dag\prime})$$ ![A spin network with symmetry $U(6,\mathbf{Y})$. The six vertices are assumed superimposed.[]{data-label="Spin-network"}](Spin-network.jpg){width="60.00000%"} In figure \[Spin-network\] we see a spin network which defines a $U(6,\mathbf{Y})$ gauge field $A^{ij}$ with $i,j =1,2,3,4,5,6$. The vertices are assumed superimposed. The symmetry group is bigger than $U(1,\mathbf{Y})^6 \sim SO(1,3)^6$ which acts separately on the single vertices. The group grows in fact to $U(6,\mathbf{Y})$ because we can exchange the vertices without changing the graph. We have the same situation with open strings: six strings with endpoints on six separated branes define a state with symmetry $U(1)^6$ but, if the branes are superimposed, the symmetry becomes $U(6)$. Generators in $u(6,\mathbf{Y})$ are generators in $u(6,\mathbf{H})$ multiplied by $1$ or $I$.
In turn, generators in $u(6,\mathbf{H})$ can be divided in three families of generators in $u(6)$, one for every choice of imaginary unit ($i,j$ or $k$). Note that commutation relations for $U(6)$ are satisfied if and only if $$U^{ij}U^{jk} = U^{ik},$$ where $U^{ij}$ is the holonomy from $x_i$ to $x_j$. Hence $$A_\mu = \pa_\mu \Gamma \qquad \,\,\,\text{with}\,\,\,\Gamma\,\,\,\text{scalar.}$$ This means that gauge fields in $U(6)$ could exist without gravity, i.e. when $A$ is a pure gauge. Otherwise, a holonomy with $A \neq \pa \Gamma$ exchanges gauge fields between different families. The actions $tr\,(M^\dag M)$ and $tr\,(M^\dag M M^\dag M)$ are sums of exponentiated string actions. We obtain from theorem \[Loop\]: $$\begin{aligned}
M^{ij} M^{*jk} M^{kl} M^{*li} &= exp\left(\oint_\square A_\mu dx^\mu\right)\\
&= exp\left(\int_\square F_{\mu\nu}\, dx^\mu dx^\nu\right)\\
&= exp\left(\int_\square \e^{ab} F_{\mu\nu}\, X^\mu_{,a} X^\nu_{,b}\, d^2 s\right)\end{aligned}$$ This is the exponential of an action for open strings whose worldsheet is a square made by edges $(ij)$, $(jk)$, $(kl)$, $(li)$. The strings move in a curved background with antisymmetric metric $F_{\mu\nu} = (d \wedge A)_{\mu\nu}$. In a similar manner $$M^{ij} M^{*jk} M^{ki} = exp\left(\int_\triangle \e^{ab} F_{\mu\nu}\, X^\mu_{,a} X^\nu_{,b}\, d^2 s\right)$$ This is the exponential of an action for open strings whose worldsheet is a triangle. $$M^{ij} M^{*ji} = exp\left(\int_{O} \e^{ab} F_{\mu\nu}\, X^\mu_{,a} X^\nu_{,b}\, d^2 s\right)$$ This is the exponential of an action for open strings whose worldsheet is a circle. $$M^{ii} = exp\left(\int_{O} \e^{ab} F_{\mu\nu}\, X^\mu_{,a} X^\nu_{,b}\, d^2 s\right)$$ The same as above. $$M^{ii}M^{jj} = exp\left(\int_{Cyl} \e^{ab} F_{\mu\nu}\, X^\mu_{,a} X^\nu_{,b}\, d^2 s\right)$$ This is the exponential of an action for closed strings whose worldsheet is a cylinder. This concludes the proof. Standard model interactions \[electroweak\] ------------------------------------------- We suppose that a residual symmetry for $U(6,\mathbf{Y})^{n/6}$ survives.
If we consider the ensembles $\mathcal{U}^a = (x^a_1, x^a_2, x^a_3, x^a_4, x^a_5, x^a_6)$ as the real physical points, $U(6,\mathbf{Y})^{n/6}$ can be considered as a local $U(6,\mathbf{Y})$. We have defined $u(6,\mathbf{Y})$ as the complexified Lie algebra of $U(6,\mathbf{H})$, generated by all matrices in $u(6,\mathbf{H})$ and $Iu(6,\mathbf{H})$. By exponentiating $u(6,\mathbf{Y})$ we obtain a simple Lie group with complex dimension $78$. This group is the symplectic group $Sp(12,\mathbf{C})$ and $U(6,\mathbf{H})$ is its real compact form, sometimes called $Sp(6)$. We consider the fields $A(x^a_i, x^b_j)$ with $a = b$ (we call them $A(x^a)$). They are $6 \times 6$ skew adjoint hyperionic matrices $\bar{A}^\dag = -A$. These matrices form the $Sp(12,\mathbf{C})$ algebra which has $156$ generators $\w$ with $\bar{\w}^\dag = -\w$. $$\w = \left( \begin{array}[c]{cccccc} \vec{y} & b+\vec{b} & c+\vec{c} & d+\vec{d} & e+\vec{e} & m+\vec{m} \\ -b+\vec{b}& \vec{a}_1 & f+\vec{f} & g+\vec{g} & h+\vec{h} & p+\vec{p} \\ -c+\vec{c}& -f+\vec{f} & \vec{a}_2 & s+\vec{s} & q+\vec{q} & r+\vec{r} \\ -d+\vec{d}& -g+\vec{g} & -s+\vec{s} & \vec{a}_3 & k+\vec{k} & t+\vec{t} \\ -e+\vec{e}& -h+\vec{h} & -q+\vec{q} & -k+\vec{k} & \vec{a}_4 & v+\vec{v} \\ -m+\vec{m}& -p+\vec{p} & -r+\vec{r} & -t+\vec{t} & -v+\vec{v} & \vec{a}_5 \\ \end{array} \right)$$ Consider now the subalgebra of the following form with complex (not hyperionic) components except for $y$ which remains hyperionic: $$\w = \left( \begin{array}[c]{cccccc} \vec{y} & 0 & 0 & 0 & 0 & 0 \\ 0 & \vec{a}_1 & f+\vec{f} & g+\vec{g} & h+\vec{h} & p+\vec{p} \\ 0 & -f+\vec{f} & \vec{a}_2 & s+\vec{s} & q+\vec{q} & r+\vec{r} \\ 0 & -g+\vec{g} & -s+\vec{s} & \vec{a}_3 & k+\vec{k} & t+\vec{t} \\ 0 & -h+\vec{h} & -q+\vec{q} & -k+\vec{k} & \vec{a}_4 & v+\vec{v} \\ 0 & -p+\vec{p} & -r+\vec{r} & -t+\vec{t} & -v+\vec{v} & \vec{a}_5 \\ \end{array} \right)$$ Moreover we put the additional condition $\vec{a} = \sum_l \vec{a}_l = 0$. 
The field $y = tr\,\w$ is the only one which contributes to the Ricci scalar. Conversely, all other fields belong to an $SU(5)$ subgroup, which defines the Georgi-Glashow grand unification theory. The symmetry breaking in the Georgi-Glashow model is induced by Higgs bosons in representations which contain triplets of color. These color triplet Higgs can mediate a proton decay that is suppressed by only two powers of the GUT scale. However, our mechanism of symmetry breaking doesn't use such Higgs bosons, but descends from the expectation values of quadratic terms $AA$, which derive from non-trivial minima of a potential $AA - AAAA$. So we circumvent the problem. We now restrict attention to the $SO(1,3) \otimes SU(2) \otimes U(1) \otimes SU(3)$ generators, which are the generators of the standard model plus gravity. $$\w = \left( \begin{array}[c]{cccccc} \vec{y} & 0 & 0 & 0 & 0 & 0 \\ 0 & \vec{a}_1 & f+\vec{f} & 0 & 0 & 0 \\ 0 & -f+\vec{f} & \vec{a}_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & \vec{a}_3 & k+\vec{k} & t+\vec{t} \\ 0 & 0 & 0 & -k+\vec{k} & \vec{a}_4 & v+\vec{v} \\ 0 & 0 & 0 & -t+\vec{t} & -v+\vec{v} & \vec{a}_5 \\ \end{array} \right)$$ We'll show in a moment that all standard model fields transform under this subgroup in the adjoint representation. In this way they are themselves elements of the $Sp(12,\mathbf{C})$ algebra; explicitly: $$\psi = \psi^1 + I\psi^2 = \left( \begin{array}[c]{cccccc} 0 & e & -\nu & d^c_{R} & d^c_{G} & d^c_{B} \\ -e^* & 0 & e^c & -u_{R} & -u_{G} & -u_{B} \\ \nu^* & -e^{c*} & 0 & -d_{R} & -d_{G} & -d_{B} \\ -d^{c*}_R & u^*_R & d^*_R & 0 & u^c_{B} & -u^c_{G} \\ -d^{c*}_G & u^*_G & d^*_G & -u^{c*}_{B} & 0 & u^c_{R} \\ -d^{c*}_B & u^*_B & d^*_B & u^{c*}_{G} & -u^{c*}_{R} & 0 \\ \end{array} \right)$$ We have used the convention of the Georgi-Glashow model, where the basic fields of $\psi^1$ are all left-handed and the basic fields of $I\psi^2$ are all right-handed. We have indicated charge conjugation with $^c$.
The subscripts $R,G,B$ indicate the color charge of the strongly interacting particles (R=red, G=green, B=blue). In the Georgi-Glashow model the fermionic fields are divided into two families. The first one transforms in the representation $\bar{5}$ of $SU(5)$ (the fundamental representation). It is exactly the array $(\w^{1j})$ in the matrix above, with $j = 2,3,4,5,6$. This array transforms in fact in the fundamental representation for transformations in every $SU(5) \subset Sp(12,\mathbf{C})$ which acts on the index values $2$ to $6$. The second family transforms in the representation $10$ of $SU(5)$ (the skew symmetric representation). Unfortunately it isn't the submatrix $(\w^{ij})$ with $i,j = 2,3,4,5,6$. This is in fact the skew adjoint representation of $U(5,\mathbf{Y})$, which is skew hermitian and not skew symmetric. Do not lose heart. We'll see in a moment that such adjoint representation is a quaternionic combination of three skew symmetric representations, one for every fermionic family. This concept could appear cumbersome, but it will become clear in the following calculations. The skew adjoint representation of $U(m,\mathbf{H})$ is a quaternionic combination of three skew symmetric representations of $U(m) = U(m,\mathbf{C})$ plus a real skew symmetric representation (which is also skew hermitian). Consider a fermionic matrix $\psi$ which transforms in the adjoint representation of $U(m,\mathbf{H})$: $$\psi \ra U\psi U^\dag$$ Take then a matrix $\psi'$ with $\psi' k =\psi$. Its transformation law under $U(m) = U(m,\mathbf{C})$ is easily derived when this group is constructed by using the imaginary unit $i$ or $j$: $$\psi' k \ra U \psi' k U^\dag = U \psi' U^T k .$$ Here we have used the relation $k \lambda = \lambda^* k$ for $\lambda \in \mathbf{H}$ without $k$ component.
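The quaternionic relation $k\lambda = \lambda^* k$ used above can be checked directly with a minimal multiplication table. The following is an illustrative sketch (the `Quat` class and the sample value of $\lambda$ are ours, not part of the paper):

```python
# Minimal quaternion arithmetic, components (w, x, y, z) = w + xi + yj + zk,
# used to check the relation k * lambda = lambda^* * k for lambda = a + ib,
# i.e. a quaternion with no j or k component.
class Quat:
    def __init__(self, w=0.0, x=0.0, y=0.0, z=0.0):
        self.c = (w, x, y, z)

    def __mul__(self, o):
        # Hamilton product (ij = k, jk = i, ki = j)
        a1, b1, c1, d1 = self.c
        a2, b2, c2, d2 = o.c
        return Quat(
            a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
        )

    def conj_i(self):
        # conjugation with respect to the imaginary unit i only
        w, x, y, z = self.c
        return Quat(w, -x, y, z)

k = Quat(0.0, 0.0, 0.0, 1.0)
lam = Quat(2.0, 3.0)            # lambda = 2 + 3i, no j or k component
assert (k * lam).c == (lam.conj_i() * k).c   # k lambda = lambda^* k
```

Both sides evaluate to $3j + 2k$, confirming the rule that moves $k$ through an $i$-complex factor at the cost of a conjugation.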
We see that $\psi'$ transforms in the skew symmetric representation: $$\psi' \ra U \psi' U^T$$ We obtain a complex matrix $\psi'$ (with $i$ as imaginary unit) when $\psi$ has the form $Ak+Bj$ with $A,B$ real matrices. Indeed: $$\psi' = - \psi k = -Akk-Bjk = A - Bi$$ Sending $\psi$ to $\psi^*$ brings $\psi'$ to $-\psi'$, and so the skew symmetry is satisfied. Finally we can always write $$\psi = \psi_0 + \psi_1 k + \psi_2 i + \psi_3 j$$ In this decomposition $\psi_1, \psi_2, \psi_3$ are complex matrices with complex unit respectively $i, j, k$. Explicitly: $$\begin{aligned}
\psi_1 &= \phi_1 - i\xi_1 = \phi_1^1 - i\xi_1^1 + I(\phi_1^2 - i\xi_1^2)\\
\psi_2 &= \phi_2 - j\xi_2 = \phi_2^1 - j\xi_2^1 + I(\phi_2^2 - j\xi_2^2)\\
\psi_3 &= \phi_3 - k\xi_3 = \phi_3^1 - k\xi_3^1 + I(\phi_3^2 - k\xi_3^2).\end{aligned}$$ Here all $\phi^1$, $\phi^2$ and $\xi^1$, $\xi^2$ are real fields. In this way $\psi_{1,2,3}$ transform in the skew symmetric representation of $U(m)$ when we construct this group by using the corresponding imaginary unit ($i$ for $\psi_1$, $j$ for $\psi_2$ and $k$ for $\psi_3$). Hence they define the famous three fermionic families, related to each other by $U(1,\mathbf{H})$ transformations. Moreover $\psi_0$ is a real skew symmetric field. Consider the following lagrangian $$\begin{aligned}
tr(\psi^{\dag} \na \psi) &= tr(k^* \psi^{\dag}_1 \na \psi_1 k) + tr(i^* \psi^{\dag}_2 \na \psi_2 i) + tr(j^* \psi^{\dag}_3 \na \psi_3 j)\\
&\quad - tr(i^* \psi_2^{\dag} \na \psi_3 i) - tr(j^* \psi_3^{\dag} \na \psi_1 j) - tr(k^* \psi_1^{\dag} \na \psi_2 k)\\
&\quad - tr(\psi_0^{\dag} \na \psi_0)\\
&= tr(\psi^{\dag}_1 \na \psi_1 kk^*) + tr(\psi^{\dag}_2 \na \psi_2 ii^*) + tr(\psi^{\dag}_3 \na \psi_3 jj^*)\\
&\quad - tr(\psi_2^{\dag} \na \psi_3 ii^*) - tr(\psi_3^{\dag} \na \psi_1 jj^*) - tr(\psi_1^{\dag} \na \psi_2 kk^*)\\
&\quad - tr(\psi_0^{\dag} \na \psi_0)\\
&= tr(\psi^{\dag}_1 \na \psi_1) + tr(\psi^{\dag}_2 \na \psi_2) + tr(\psi^{\dag}_3 \na \psi_3)\\
&\quad - tr(\psi_2^{\dag} \na \psi_3) - tr(\psi_3^{\dag} \na \psi_1) - tr(\psi_1^{\dag} \na \psi_2)\\
&\quad - tr(\psi_0^{\dag} \na \psi_0)\end{aligned}$$ In the third last line we have the fermionic terms in the Georgi-Glashow model for three families of fields in representation $10$.
In this way we can use the lagrangian $tr(\psi^{\dag} \na \psi)$, with $\psi$ in the adjoint representation, in place of the Georgi-Glashow terms with $\psi_{1,2,3}$ in the skew symmetric representation. Mixed terms in the second last line account for the CKM and PMNS matrices which appear in the standard model. Consider now the equivalence $$tr(\psi^{\dag} \psi \na ) = tr( \psi \na \psi^{\dag}) = tr( (-\psi^{\dag}) \na (-\psi) )= tr(\psi^{\dag} \na \psi).$$ Hence $$tr(\psi^{\dag} \na \psi) = \fr 12\, tr(\psi^{\dag} \{\na, \psi\}).$$\[anticom\] In this formalism, given $\w \in su(3)\otimes su(2) \otimes u(1)$, the transformation $\d\psi = [\w,\psi]$ corresponds to the usual transformation $\d\psi = \w\psi$ in the standard model formalism. We see that the only fields which transform correctly under $SO(1,3)$ are $e$, $\nu$ and $d^c$. For the moment we do not worry about this. We note rather that, when we restrict the elements of $\w$ from the hyperions to the complex numbers, we have three ways to do it. A complex number can be written not only in the form $a+ib$, with $a,b \in R$, but also $a+jb$ or $a+kb$. The same is true for a fixed linear combination $a+(ci+dj+fk)b$, where $c,d,f \in R$ and $c^2 + d^2 + f^2 =1$. The choice of $j$ in place of $i$ determines another set of $(\w,\psi)$ isomorphic to the first one. In the same way we obtain a third set by choosing $k$. Note that for an $i$-complex left field we have an $Ii$-complex right field and so on for $j$ and $k$. The three sets are related by the group $SU(2)$ which rotates a unit vector in $R^3$ with coordinates $(c,d,f)$. Its generators are $$\w = \fr {\vec{y}}{6} \left( \begin{array}[c]{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) .$$ Their diagonal form suggests an identification between this group and the gravitational group $SU(2)^{\subset SO(1,3)}$.
If the two groups coincided, all fields would transform correctly under $SU(2)^{\subset SO(1,3)}$. By extending this group to the entire $SO(1,3)$, we see that boosts exchange left fields with right fields. Note that three families have to exist also for bosonic particles (photon, $W^\pm$, $Z$, gluons) although they are probably indistinguishable. Another interesting point is that we have no guarantee of the persistence of $Sp(12,\mathbf{C})$ in the entire universe. However we have surely at least the symmetry $U(1,\mathbf{Y}) = SO(1,3)$, which guarantees the existence of gravity. ### Fermions from an extended arrangement matrix We introduce the following entities: $I$-complex Grassmannian coordinates $\theta = \theta^1 + I\theta^2$ and $\bar{\theta} = \theta^1 - I\theta^2$; Grassmannian derivatives $\pa_g$ and $\bar{\pa}_g$, with $\pa_g \theta = \bar{\pa}_g \bar{\theta} =1$ and $\pa_g \bar{\theta} = \bar{\pa}_g \theta =0$; Grassmannian covariant derivatives $\na_g = \pa_g +\psi$ and $\bar{\na}_g^\dag = \bar{\pa}_g+ \bar{\psi}^\dag$. The fundamental products return $$\theta\theta = \theta^1 \theta^1 + \theta^1 I \theta^2 + I \theta^2 \theta^1 - \theta^2\theta^2 = 0 + I \theta^1 \theta^2 - I \theta^1 \theta^2 - 0 = 0 \nonumber$$ $$\begin{aligned}
\bar{\theta}\bar{\theta} &= \theta^1 \theta^1 - \theta^1 I \theta^2 - I \theta^2 \theta^1 - \theta^2\theta^2 = 0 - I \theta^1 \theta^2 + I \theta^1 \theta^2 - 0 = 0\\
\theta\bar{\theta} &= \theta^1 \theta^1 - \theta^1 I \theta^2 + I \theta^2 \theta^1 + \theta^2\theta^2 = - I \theta^1 \theta^2 - I \theta^1 \theta^2 = -2 I \theta^1 \theta^2\end{aligned}$$ In the arrangement field formalism, covariant derivatives descend from a Grassmannian matrix $M_g$ or $\bar{M}_g^\dag$.
We can consider a unique generalized matrix $M_T = M_g + M$ that, up to a generalized $U(n,\mathbf{Y})$, becomes $$\begin{aligned}
M_T &= \theta\na_g + d^\mu \na_\mu = \theta\pa_g + \psi\theta\bar{\theta} + d^\mu \na_\mu\\
\bar{M}^\dag_T &= \bar{\na}^\dag_g \bar{\theta} + \bar{\na}^\dag_\mu \bar{d}^{\dag\mu} = \bar{\pa}_g \bar{\theta} + \bar{\psi}^\dag\bar{\theta}\theta + \bar{\na}^\dag_\mu \bar{d}^{\dag\mu}.\end{aligned}$$\[deriv\] Expanding $tr\,(\bar{M}^\dag_T M_T)$ we obtain $$tr\,(\bar{M}^\dag_T M_T) = tr\,(d^\mu \bar{d}^{\dag\nu}\, \bar{\na}^\dag_\mu \na_\nu) = \sum_a \sqrt h\, R(x^a)$$\[ricci2\]. To calculate $tr\,(\bar{M}^\dag_T M_T\bar{M}^\dag_T M_T)$ we write first $\bar{M}_T^{\dag 2}$ and $M_T^{2}$. $$\begin{aligned}
M^2_T &= \theta\pa_g + \psi\theta\bar{\theta} + d^\mu \{ \na_\mu, \psi \}\theta\bar{\theta} + d^\mu \na_\mu d^\nu \na_\nu\\
\bar{M}^{\dag 2}_T &= \bar{\pa}_g \bar{\theta} + \bar{\psi}^\dag\bar{\theta}\theta + \{\bar{\psi}^\dag, \bar{\na}^\dag_\mu\} \bar{d}^{\dag\mu}\bar{\theta}\theta + \bar{\na}^\dag_\mu \bar{d}^{\dag\mu}\bar{\na}^\dag_\nu \bar{d}^{\dag\nu}\end{aligned}$$ If $M$ has the form (\[deriv\]), then $[M_T,\bar{M}^\dag_T] = 0$. This implies $$tr\,(\bar{M}^\dag_T M_T\bar{M}^\dag_T M_T) = tr\,(M_T^2 \bar{M}_T^{\dag 2}).$$ We calculate its value starting from the following product $$tr\,\left(d^\mu \{ \na_\mu, \psi \}\theta\bar{\theta}\, \{\bar{\psi}^\dag, \bar{\na}^\dag_\nu\} \bar{d}^{\dag\nu}\bar{\theta}\theta \right) = tr\,\left(\theta\bar{\theta}\, d^\mu \{ \na_\mu, \psi \} \{\bar{\psi}^\dag, \bar{\na}^\dag_\nu\} \bar{d}^{\dag\nu}\right).$$ Remember that operator $tr$ acts as a sum over vertices. Now every vertex is labeled by a couple $(\theta, x_i)$ and then $$tr\, (\theta \bar{\theta} (***)) = \left( \int d\bar{\theta} d\theta\,\theta \bar{\theta} \right)tr\,(***) = tr\,(***)$$ Hence $$\begin{aligned}
tr\,\left(d^\mu \{ \na_\mu, \psi \} \{\bar{\psi}^\dag, \bar{\na}^\dag_\nu\} \bar{d}^{\dag\nu}\bar{\theta}\theta\right) &= tr\,\left(d^\mu \{ \na_\mu, \psi \} \{\bar{\psi}^\dag, \bar{\na}^\dag_\nu\} \bar{d}^{\dag\nu}\right)\\
&= tr\,\left(\bar{d}^{\dag\nu} d^\mu\, \bar{\na}^\dag_\nu \na_\mu\, \psi\bar{\psi}^\dag\right)\\
&= \sum_a \sqrt h\, R(x^a)\, \psi\bar{\psi}^\dag\end{aligned}$$ In this way $$\begin{aligned}
tr\,(\bar{M}^\dag_T M_T\bar{M}^\dag_T M_T) &= tr\, \left(\bar{\psi}^\dag d^\mu \{ \na_\mu, \psi \} + \psi\{\bar{\psi}^\dag, \bar{\na}^\dag_\mu\} \bar{d}^{\dag\mu}\right) +\\
&\quad + \sum_a \sqrt h\, R(x^a)\, \psi\bar{\psi}^\dag + S_{GB}\end{aligned}$$\[ora\] We have seen that every family distinguishes itself by the choice of complex unit.
Inserting into $\psi$ the definitions of $\psi_{1,2,3}$ we can write $$\begin{aligned}
\psi &= \psi_0^1 + i(\phi_2^1 + \xi_3^1) + j(\phi_3^1 + \xi_1^1) + k(\phi_1^1 + \xi_2^1) +\\
&\quad + I\psi_0^2 + iI(\phi_2^2 + \xi_3^2) + jI(\phi_3^2 + \xi_1^2) + kI(\phi_1^2 + \xi_2^2)\end{aligned}$$ Using the correspondences $(1,I,i,j,k,iI,jI,kI) \leftrightarrow \g\g$ and $4 \leftrightarrow tr$, the first term in (\[ora\]) becomes $$2 \times \fr 14 \times tr\, \left( {\psi}^{lm} \overline{(\g_l\g_m)^\dag} \left( \g_0 \g_s e^{\mu s} \overset{G}{\na}_\mu \psi^{np} (\g_n \g_p) + A_\mu \psi \right)\right)$$ $$= \fr 12 \, tr\, \left( \psi^{lm} (\g_m\g_l) \left( \g_0 \g_s e^{\mu s} \overset{G}{\na}_\mu \psi^{np} (\g_n \g_p) + A_\mu \psi_0 + \sum_{q,q'=1}^3 A^q_\mu \psi_{q'} i_{q'} \right)\right)$$ Here we have deleted the anticommutator by means of (\[anticom\]). In the covariant derivative we have included only the gravitational (trace) contribution, while $A_\mu$ is intended to have null trace. Moreover $i_1 =k$, $i_2 =i$ and $i_3 =j$. In the second line we have divided the $75$ generators $A_\mu$ into three families of $35$ generators. Obviously, only two families are linearly independent. When they act on spinorial fields which belong to their own family, they behave exactly as the $35$ generators of $SU(6)$ (which comprise the $24$ generators of $SU(5)$). Conversely, when a generator $A^q$ acts on a $q'$-field (with $q \neq q'$), it mimics the application of some generator $A^{q'}$ followed by a rotation in $SU(2)_{GRAVITY}$ which sends the family $q'$ into the remaining family $q''$. We now write out the entries of $\psi = \psi^1 + I\psi^2$ explicitly by exploiting the correspondence with $\g\g$.
We have $$\psi = \left( \begin{array}[c]{cc} \psi_0^1 + i(\phi^1_2+\xi_3^1)&(\phi_3^1+\xi_1^1) +i(\phi^1_1 +\xi^1_2) \\ -(\phi_3^1+\xi_1^1)+i(\phi^1_1 +\xi^1_2)&\psi_0^1-i(\phi^1_2+\xi_3^1) \\ i\psi_0^2-(\phi^2_2+\xi_3^2)&i(\phi_3^2+\xi_1^2)+(\phi^2_1 +\xi^2_2) \\ -i(\phi_3^2+\xi_1^2)+(\phi^2_1 +\xi^2_2)& i\psi_0^2+(\phi^2_2+\xi_3^2) \\ \end{array}\right.$$ $$\left. \begin{array}[c]{cc} i\psi_0^2-(\phi^2_2+\xi_3^2) & i(\phi_3^2+\xi_1^2)+(\phi^2_1 +\xi^2_2) \\ -i(\phi_3^2+\xi_1^2)+(\phi^2_1 +\xi^2_2) &i\psi_0^2+(\phi^2_2+\xi_3^2) \\ \psi_0^1+i(\phi^1_2+\xi_3^1) &(\phi_3^1+\xi_1^1)+i(\phi^1_1 +\xi^1_2) \\ -(\phi_3^1+\xi_1^1)+i(\phi^1_1 +\xi^1_2) &\psi_0^1-i(\phi^1_2+\xi_3^1) \\ \end{array}\right)$$ If we define the four-component spinor $$\hat{\psi} = \left(\begin{array}[c]{c} \psi_0^1 + i(\phi^1_2+\xi_3^1) \\ -(\phi_3^1+\xi_1^1)+i(\phi^1_1 +\xi^1_2) \\ i\psi_0^2-(\phi^2_2+\xi_3^2) \\ -i(\phi_3^2+\xi_1^2)+(\phi^2_1 +\xi^2_2) \end{array} \right)$$ the derivative term can be rewritten as $$2\, \hat{\psi}^\dag\, \g_0 \g_s e^{\mu s} \overset{G}{\na}_\mu \hat{\psi}$$ This is the Dirac action, although with a new interpretation of spinorial components. Moreover $$\psi^{AB} = W^{ABC} \hat{\psi}^C \qquad ; \qquad W^{ABC} W^{DBC} = \mathbf{1}_{AD}$$ $$\left(W^{ABC}\right) =$$ $$\left( \left( \begin{array}[c]{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array}\right), \left( \begin{array}[c]{cccc} 0 & -\ast & 0 & 0 \\ \ast & 0 & 0 & 0 \\ 0 & 0 & 0 & \ast \\ 0 & 0 & -\ast & 0 \\ \end{array}\right), \left( \begin{array}[c]{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array}\right), \left( \begin{array}[c]{cccc} 0 & 0 & 0 & \ast \\ 0 & 0 & -\ast & 0 \\ 0 & -\ast & 0 & 0 \\ \ast & 0 & 0 & 0 \\ \end{array}\right) \right)$$ with $\ast \hat{\psi} = \hat{\psi}^\ast$.
Adding the other terms $$tr\,(\hat{M}^\dag \hat{M} \hat{M}^\dag \hat{M}) = S_{GB} +$$ $$+ 2 \int \left( \hat{\psi}^\dag\, \g_0 \g_s e^{\mu s} \overset{G}{\na}_\mu \hat{\psi} + \hat{\psi} \sum_{q,q'} A^q_\mu \hat{\psi}_{q'} i_{q'} + \sqrt h R(x)\sum_q \hat{\psi}_q^\dag \hat{\psi}_q \right) dx$$ In this way we include all the contents of the standard model as elements in the generalized $Sp(12,\mathbf{C})$ algebra. Terms which mix families can be used to calculate values in the CKM and PMNS matrices. Masses for fermionic fields arise, as usual, from non null expectation values of $A_\mu(x^a_i, x^b_j)$ with $a\neq b$ in $\na_\mu$. We obtain a contribution to the Hilbert-Einstein action also from the term $\int d^4x \sqrt h R \bar{\psi} \psi$, due to a non null expectation value of $\sum_q \bar{\psi}_q \psi_q $. It contains in fact the chiral condensate, whose non null vacuum value breaks the chiral flavour symmetry of the QCD Lagrangian. Note that the known fermionic fields fill up a matrix $\psi$ with null trace. However, only if $tr\,\psi \neq 0$ does our action have an extra invariance under $$\begin{aligned}
A_\mu &\ra \psi\, d_\mu\, \psi^{-1}\\
\psi &\ra \overleftarrow{\pa}_g\, d^\mu A_\mu.\end{aligned}$$\[super\]Here $\overleftarrow{\pa}_g$ is a $\pa_g$ which acts backwards. This means we have the same number of fermions and bosons, so that the vacuum energies cancel each other. Invariance (\[super\]) predicts the existence of a new colored fermionic sextuplet which sits on the diagonal of $\psi$. Inside it we can include a conjugate neutrino ($\nu^c$), a sterile neutrino ($N$) and a conjugate sterile neutrino ($N^c$). Explicitly $$\psi = \left( \begin{array}[c]{cccccc} N & 0 & 0 & 0 & 0 & 0 \\ 0 & \nu^c & 0 & 0 & 0 & 0 \\ 0 & 0 & \nu^c & 0 & 0 & 0 \\ 0 & 0 & 0 & N^c & 0 & 0 \\ 0 & 0 & 0 & 0 & N^c & 0 \\ 0 & 0 & 0 & 0 & 0 & N^c \end{array} \right).$$ This field commutes with any gauge field in $U(1) \otimes SU(2) \otimes SU(3)$ and so it has no electromagnetic, weak or strong interactions.
Moreover it gives a Dirac mass to neutrinos via the term $$tr\,(\bar{\psi}^\dag d^\mu A_\mu \psi) = \bar{\psi}^{\dag ij} d^\mu A_\mu^{kl} \psi^{mn} f^{(ij)(kl)(mn)}.$$ Here $f^{(ij)(kl)(mn)}$ are structure constants for $SU(6)$ and masses for neutrinos are eigenvalues of $<d^\mu A_\mu >$. ### The vector superfield The invariance (\[super\]) suggests a connection with supersymmetric theories. We redefine the supersymmetry algebra as follows: $$\begin{aligned}
Q &= \pa_g - d^\mu \bar{\theta}\, \widetilde{\na}_\mu \qquad ; \qquad [Q, \widetilde{\na}_\nu] = - d^\mu \bar{\theta}\, \widetilde{R}_{\mu\nu}\\
\bar{Q} &= \bar{\pa}_g - \bar{d}^{\dag\mu} \theta\, \widetilde{\na}_\mu \qquad ; \qquad \{Q, \bar{Q} \} = -2 d_H^\mu \widetilde{\na}_\mu - {\Sigma}^{\mu\nu} \widetilde{R}_{\mu\nu}\\
2 d_H^\mu &= d^\mu + \bar{d}^{\dag\mu}\end{aligned}$$ Here $\widetilde{\na}$ is a compatible covariant derivative which acts as a skew-adjoint operator. It is a functional of $d^\mu$ with $[\widetilde{\na}_\nu, d^\mu] = [\widetilde{\na}_\nu, \bar{d}^{\dag\mu}] = 0$. Note that off-shell we have $\widetilde{\na}_\nu \neq \overset{G}{\na}_\nu$. Moreover $${\Sigma}^{\mu\nu} = d^\mu \bar{d}^{\dag\nu} \theta \bar{\theta}$$ $\widetilde{R}_{\mu\nu}$ is the curvature tensor made with $\widetilde{\na}$, i.e. $\widetilde{R}_{\mu\nu} = [\widetilde{\na}_\mu, \widetilde{\na}_\nu]$. Consider that locally we can find a coordinate system where $\widetilde{R}_{\mu\nu} = 0$, recovering the usual SUSY algebra with $-i\widetilde{\na}_\mu$ in place of $P_\mu$. The vector superfield assumes the simple form[^8] $$\begin{aligned}
V &= \psi\theta\bar{\theta} + d^\mu A_\mu\\
V_{AB} &= \theta_{AD}\, W^{D}_{BC}\, \hat{\psi}^C + (d^\mu A_\mu)_{AB}\end{aligned}$$ with $\theta_{AD} = \mathbf{1}_{AD}\,\theta^1 + \g_{5\,AD} \,\theta^2$. Note that $M_T = V+ \theta \pa_g + d^\mu \pa_\mu$ and then the usual kinetic term for $V$ includes the same terms we have found in $tr\,(\bar{M}_T^\dag M_T) - tr\,(\bar{M}^\dag_T M_T\bar{M}^\dag_T M_T)$. It's remarkable that all the known fermionic fields take the role of gauginos for all the known bosonic fields. In this way the right up quarks are gauginos for gluons, while right electrons are gauginos for $W$ bosons.
This is permitted because both fermions and bosons in AFT transform in the adjoint representation of $Sp(12,\mathbf{C})$. In this way our theory includes $N = 1$ SUSY with no need for new unknown particles. Inflation --------- Our final action is \[inflation\] $$S = tr\,\left(\fr {\bar{M}^\dag M }{16\pi G}- \bar{M}^\dag M \bar{M}^\dag M\right)$$ This is also an action for a $U(n,\mathbf{Y})$ gauge theory with coupling constant $1/G$ in a mono-vertex space-time. In these theories the scaling of the coupling constant can be calculated exactly in the limit of large $n$. In several cases the coupling constant changes its sign at large values of the scale: this has considerable consequences for the first instants after the Big Bang, when a measurement of $G$ makes sense only at very high energies (very small distances). This suggests that such a measurement can return a negative value of $G$, which implies a repulsive gravitational force. In turn, repulsive gravity implies an accelerated expansion of the universe. Because the entries of $M$ are probability amplitudes, we would like $M$ to be dimensionless. However, when we pass from $M$ to $\na$, we need a scale $\Delta$ to define the matrix $\pa$. This justifies the inclusion of $\Delta^{-1}$ inside $M$. If we extract this factor, the Hilbert-Einstein action becomes $$\fr {\Delta^4}{16\pi G \Delta^2}tr\,(\bar{M}^\dag M) = \fr {\Delta^2}{16\pi G}tr\,(\bar{M}^\dag M)$$ where we have also added the correct volume form $\Delta^4$. This seems a more natural formulation when $M$ represents probability amplitudes. In this way we can take $\Delta$ very small but not zero. The most natural choice is $\Delta^2 \sim G$. In this case, what does it mean that $G$ is negative? Negative $G$ implies negative $\Delta^2 = ds^2$. In Lorentzian spaces $\Delta^2 = dt^2 -ds^2 <0$. For purely temporal intervals we'll have $dt^2 < 0$, so the time becomes imaginary. An imaginary time is indistinguishable from space.
This hypothesis of a "spatial" time had already been espoused by Hawking as a solution for eliminating the singularity in the Big Bang [@Hawking]. Classical solutions {#classical} ------------------- We rewrite our action in the form $$S = \fr 12 tr\, (\bar{M}^\dag M) - \fr 1{4g} tr\,(\bar{M}^\dag M \bar{M}^\dag M)$$ where we have defined $g = \fr {\Delta^2} {32\pi G}$. We diagonalize $M$ with a transformation in $U(n,\mathbf{Y})$ and define $M^{ii} \equiv \f(x_i)$, $\f(x) = a(x)+\vec{b}(x)$. The lagrangian becomes: $$L = \fr 12 \left[ a(x_i)^2 + |\vec{b}(x_i)|^2\right] -\fr 1{4g} \left[ a(x_i)^4 + |\vec{b}(x_i)|^4 + 2a(x_i)^2 |\vec{b}(x_i)|^2 \right]$$ The equations of motion are $$g a(x) - a(x)^3 - a(x)|\vec{b}(x)|^2 = 0$$ $$g \vec{b}(x) - \vec{b}(x)|\vec{b}(x)|^2 - a(x)^2\vec{b}(x) = 0$$ There are two solutions: $$(1) \qquad a(x) = \vec{b}(x) = 0$$ $$(2) \qquad a(x)^2 + |\vec{b}(x)|^2 = \bar{M}^\dag M = g$$ The first one corresponds to the vacuum (all non-gravitational fields equal to zero) plus a solution of the Einstein equations in vacuum: $$\psi = A_\mu = 0 \qquad R(x) = 0$$ The solution $\bar{M}^\dag M = g$ corresponds to a vacuum expectation value for $\bar{M}^\dag M$ equal to $g$. $M$ contains a factor $A$, so that an expectation value for $\bar{M}^\dag M$ corresponds to an expectation value for $AA$. This means that $$AAAA = <AA>AA + \text{quantum perturbations}$$ $<AA>$ gives a mass for $A$. More precisely, for $A \in U(n,\mathbf{Y})/U(m,\mathbf{Y})^{n/m}$, $$m_A^2 \sim \fr {<\bar{M}^\dag M>}{\Delta^2} = \fr g {\Delta^2} = \fr 1 {32\pi G}$$ So the fields $A \in U(n,\mathbf{Y})/U(m,\mathbf{Y})^{n/m}$ have a mass in the order of the Planck mass $m_P$. Moreover, in the primordial universe, when $k_B T \approx m_P$, all the fields behave like massless fields. At that time the symmetry was the full $U(n,\mathbf{Y})$ and no arrangement existed. Our conclusion is that Quantum Gravity cannot be treated as a quantum field theory in an ordinary space.
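The two classical solutions can be checked numerically: the gradient of the lagrangian must vanish both at the trivial vacuum and on the surface $a^2 + |\vec{b}|^2 = g$. A minimal sketch (the value $g = 2.0$ is an arbitrary illustration, and $b$ stands for $|\vec{b}|$):

```python
import math

# Numerical check of the two classical solutions of
#   L = (1/2)(a^2 + b^2) - (1/4g)(a^2 + b^2)^2:
# the gradient vanishes at a = b = 0 and on the circle a^2 + b^2 = g.
g = 2.0  # arbitrary illustrative coupling, not a value from the paper

def L(a, b):
    r = a * a + b * b
    return 0.5 * r - r * r / (4.0 * g)

def grad(a, b, eps=1e-6):
    # central-difference gradient of L with respect to the fields a and b
    da = (L(a + eps, b) - L(a - eps, b)) / (2 * eps)
    db = (L(a, b + eps) - L(a, b - eps)) / (2 * eps)
    return da, db

# solution (1): a = b = 0
da, db = grad(0.0, 0.0)
assert abs(da) < 1e-9 and abs(db) < 1e-9
# solution (2): any point with a^2 + b^2 = g is stationary
a0 = math.sqrt(g) * math.cos(0.7)
b0 = math.sqrt(g) * math.sin(0.7)
da, db = grad(a0, b0)
assert abs(da) < 1e-6 and abs(db) < 1e-6
```

The whole circle of radius $\sqrt{g}$ is stationary, which is the degeneracy responsible for the expectation value $<\bar{M}^\dag M> = g$.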
In what follows we explain how to overcome this problem. Quantum theory {#quantize} -------------- The quantum theory is defined via the following path integral: $$\int D[M(x,y)]\, D[\bar{M}^* (x,y)]\;\; Oe^{\int M(x,y)\bar{M}^*(x,y)\, dx\, dy - \int M(x,y)\bar{M}^*(x,y')M(x',y')\bar{M}^*(x',y)\, dx\, dy\, dx'\, dy'}$$ with $$\begin{aligned}
Oe^{\int F(x,y)dx\, dy} &= 1+ \int F(x,y)\,dx\, dy + \fr 12 \int F(x,y)F(x^1,y^1)\,dx\, dy\, dx^1 dy^1 + \ldots\\
&\quad \ldots + \fr 1 {n!} \int F(x,y)F(x^1,y^1)\cdots F(x^{n-1},y^{n-1})\, dx\, dy\, dx^1 dy^1 \cdots dx^{n-1} dy^{n-1} + \ldots\end{aligned}$$ $$\begin{aligned}
Oe^{\int F(x,x',y,y')dx\, dx'\, dy\, dy'} &= 1+ \int F(x,x',y,y')\,dx\, dy\, dx'\, dy' +\\
&\quad + \fr 12 \int F(x,x',y,y')F(x^1,{x'}^1,y^1,{y'}^1)\,dx\, dy\, dx'\, dy'\, dx^1 dy^1 d{x'}^1 d{y'}^1 + \ldots\\
&\quad \ldots + \fr 1 {n!} \int F(x,x',y,y')\cdots F(x^{n-1},{x'}^{n-1},y^{n-1},{y'}^{n-1})\, dx\, dy\, dx'\, dy' \cdots dx^{n-1} dy^{n-1} d{x'}^{n-1} d{y'}^{n-1} + \ldots\\
&\equiv \sum_n \fr 1 {n!} \int F^n\end{aligned}$$ The integration of $F^n$ is very simple and gives $$\fr 1 {n!} \int D^2[M] e^{\int M^2 dx dy} \,\,F^n = \fr {(4n)!}{n! 2^{2n}(2n)!} = \fr 1 {n!} P(4n)$$ Here $P(4n)$ gives the number of different ways to connect $4n$ points in pairs. It's clear that any universe configuration corresponds to an $F^k$ inside which some connections have been fixed and the corresponding integrations have been removed. For example: ![image](formula.jpg){width="100.00000%"} If the fixed connections are $m$, then $$<\hat F^k > = \fr{\sum_n \fr 1 {n!} P(4(n+k)- 2m)}{\sum_n \fr 1 {n!} P(4n)}$$ At relatively low energies we can treat $\overset{G}{A}$ as an ordinary gauge field. The arrangement field theory is then approximated by a common quantum theory on a curved background, determined by $e^{\mu a}$.
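The combinatorial factor $P(4n) = (4n)!/(2^{2n}(2n)!)$ can be verified against a brute-force enumeration of pairings for small $n$; the helper names `pairings` and `P` below are ours, not notation from the paper:

```python
from math import factorial

# Brute-force check of P(4n) = (4n)! / (2^(2n) * (2n)!), the number of
# ways to connect 4n points in pairs (a Wick-type pairing count).
def pairings(points):
    # count the perfect matchings of a tuple of points recursively:
    # pair the first point with each remaining one, then recurse
    if not points:
        return 1
    rest = points[1:]
    return sum(pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

def P(npts):
    # closed form: npts! / (2^(npts/2) * (npts/2)!)
    half = npts // 2
    return factorial(npts) // (2 ** half * factorial(half))

for n in (1, 2, 3):
    assert pairings(tuple(range(4 * n))) == P(4 * n)
```

For $n = 1, 2, 3$ the enumeration gives $3$, $105$ and $10395$ matchings, in agreement with the closed form.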
These connections construct discontinuous paths as in figure \[cammini-entanglement\] and can be considered as quantum perturbations of the ordered space-time. Such components permit us to describe the quantum entanglement effect; a detailed treatment goes beyond the purpose of the present paper. It is remarkable that in this framework the discontinuity of paths is only apparent, and it is a consequence of imposing an arrangement on the space-time points. These discontinuous paths can be considered as continuous paths which cross wormholes. The stretch of path inside a wormhole is described by a component of $M$ far away from the diagonal. The information seems to travel faster than light, but in reality it only takes a byway. ![Discontinuous paths. The connection between $x_3$ and $x_4$ is done by a component of $M$ far away from diagonal.[]{data-label="cammini-entanglement"}](cammini-entanglement.jpg){width="60.00000%"} Imagine now a gravitational source with mass $M_S$ which emits some gravitons with energy $\sim E_{PLANCK}$, directed to an orbiting body with mass $M_B$ at distance $r$. In this case (with respect to such gravitons) the fields $M(x^a, x^b)$ with $a \neq b$ would behave as if they had null mass. This implies the probable existence of connections (practicable by such gravitons) between every pair of vertices in the path from the source to the orbiting body. This means that if $r = \Delta j$, $j \in \mathbf{N}$, the graviton could reach the orbiting body by traveling a shorter path $\Delta j'$, with $j' < j$, $j' \in \mathbf{N}$. The question is: what is the average gravitational force perceived by the orbiting body? The probability for a graviton to reach a distance $r$ passing through $m$ vertices is $$P_m = (1-a)^{m-1}a \qquad with\,\,\,\sum_{m=1}^\infty P_m = 1$$ where $a = 1/j$. These are the probabilities of extracting one given object from a box of $j$ objects at the $m$-th attempt.
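The hop distribution $P_m$ just introduced is the geometric distribution with success probability $a = 1/j$; a short stdlib sketch (the value of $j$ is arbitrary, not from the paper) confirms that it is normalized and that the mean number of traversed vertices is $j$:

```python
# Sanity check on the hop-length distribution P_m = (1-a)^(m-1) * a with
# a = 1/j: it sums to one, and its mean is j, so on average the graviton
# still traverses the full distance r = Delta * j.
j = 50              # arbitrary illustrative value
a = 1.0 / j

M_MAX = 20000       # truncation; the tail beyond this is negligibly small
total = sum((1 - a) ** (m - 1) * a for m in range(1, M_MAX))
mean = sum(m * (1 - a) ** (m - 1) * a for m in range(1, M_MAX))

assert abs(total - 1.0) < 1e-10   # the distribution is normalized
assert abs(mean - j) < 1e-6       # average hop count equals j
```

The force averaged over this distribution is what (\[dark\]) evaluates: the mean hop count is still $j$, but the $1/m^2$ weighting of the force favors the short hops.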
In this way the effective length traveled by the graviton will be $\Delta m$. We use these probabilities to compute the average gravitational force in a semiclassical approximation. $$\begin{aligned}
\langle F \rangle &= \fr {G M_{B}M_{S}}{\Delta^2}\, \fr a {1-a} \int_1^\infty \fr {(1-a)^m}{m^2}\, dm\\
&= \fr {G M_{B}M_{S}}{\Delta^2}\, \fr a {1-a}\, [log(1-a)] \int_{log(1-a)}^{-\infty} \fr{e^x}{x^2}\, dx\end{aligned}$$\[dark\]The last integral gives $$\int_{log(1-a)}^{-\infty} \fr{e^x}{x^2}dx = -Ei(log(1-a)) + \fr {1-a}{log(1-a)}$$ We expand $\langle F \rangle$ near $a=0$ (which implies $j >> 1$), obtaining $$\fr a {(1-a)} [log(1-a)]\int_{log(1-a)}^{-\infty} \fr{e^x}{x^2}dx \approx a + a^2(log(a) + \g) + O(a^3).$$ Here $\g$ is the Euler-Mascheroni constant. The dominant contribution is then $$\begin{aligned}
\langle F \rangle &\approx \fr {G M_{B}M_{S}}{\Delta^2}\, a \left(1+a\, log(a) + \g\, a\right)\\
&= \fr {G M_{B}M_{S}}{\Delta r} \left( 1 - \fr \Delta r \left( log\left( \fr r \Delta \right) - \g \right) \right)\end{aligned}$$ If the massive object orbits at a fixed distance $r$, its centrifugal force has to be equal to the gravitational force. This gives $$<F> \approx \fr{G}\Delta\fr{M_{B}M_{S}}{r}\left( 1 - \fr \Delta r \left( log\left( \fr r \Delta \right) - \g \right) \right) = \fr {M_B v^2} r$$ $$v^2 = \fr{G M_{S}}{\Delta}\left( 1 - \fr \Delta r \left( log\left( \fr r \Delta \right) - \g \right) \right)$$ We see that, varying the radius, the velocity remains more or less constant (it increases slightly with $r$). Can this explain the rotation curves of galaxies without introducing dark matter? Surely not all gravitons have energy $> E_{PLANCK}$; at the same time we have to consider that $G$ scales for small distances (hence for small $m$ in (\[dark\])). It's possible that these factors reduce the extremely high value of $r/\Delta$. Conclusion ---------- In this work we have applied the framework developed in [@Arrangement] to describe the contents of our universe, i.e. forces and matter. Doing this, we have discovered an unexpected road toward unification, which shows similarities with Loop Gravity, String Theory and the Georgi-Glashow model. For the first time a natural symmetry justifies the existence of three particle families, no more, no fewer.
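The quoted closed form and the small-$a$ expansion of (\[dark\]) can be checked numerically. The sketch below implements $Ei$ through its power series (valid for small arguments) and compares the exact expression with $a + a^2(\log a + \gamma)$; the sample value of $a$ is ours:

```python
import math

# Check of the expansion
#   (a/(1-a)) * log(1-a) * Int_{log(1-a)}^{-inf} e^x/x^2 dx
#       ~ a + a^2 * (log(a) + gamma)   for small a,
# using the closed form -Ei(log(1-a)) + (1-a)/log(1-a) for the integral.
EULER_GAMMA = 0.5772156649015329

def Ei(x):
    # exponential integral via its power series,
    # Ei(x) = gamma + log|x| + sum_{n>=1} x^n / (n * n!), good for small |x|
    s = EULER_GAMMA + math.log(abs(x))
    term = 1.0
    for n in range(1, 60):
        term *= x / n          # term is now x^n / n!
        s += term / n
    return s

a = 1e-3                       # illustrative small value of a = Delta/r
b = math.log(1.0 - a)
integral = -Ei(b) + (1.0 - a) / b          # closed form quoted in the text
lhs = a / (1.0 - a) * b * integral
rhs = a + a * a * (math.log(a) + EULER_GAMMA)
assert abs(lhs - rhs) < 1e-7   # agreement up to the O(a^3) remainder
```

The residual discrepancy is of order $a^3 \log a$, consistent with the stated $O(a^3)$ remainder.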
Moreover a new version of supersymmetry seems to couple gauge fields with all known fermions, without the necessity of imagining new particles never seen by experiments. Clearly this fact closes the door to dark matter. To compensate for this big absence, AFT proposes an explanation of galaxy rotation curves which doesn't make use of dark matter. Another considerable implication of AFT regards the tangent space, which has symmetry $SO(1,3)$ only when gravity decouples from the other forces. At that point also the real space-time can obtain the same symmetry. This fact is coherent with the *no-go theorem* of Coleman-Mandula[@nogo], under which the $S$-matrix is Lorentz invariant if and only if the action symmetry is $SO(1,3) \otimes internal\,\,symmetries$. We don't say that this theory is exact. However there are several good signals which must be taken into account. We hope that a future teamwork can verify this theory in detail, deepening all its implications. Antigravity in AFT ================== Introduction ------------ Arrangement field theory is a quantum theory defined by means of probabilistic spin-networks. These are spin-networks where the existence of an edge is regulated by a quantum amplitude. AFT is a proposal for a unifying theory which joins gravity with gauge fields. See [@Arrangement] and [@Arrangement2] for details. The unifying group is $Sp(12,\mathbf{C})$ for the lorentzian theory and its compact real form $Sp(6)$ for the euclidean theory. The unifying group contains three indistinguishable copies of gauge fields, mixed by the gravitational field. Moreover, commutators between gravitational and gauge fields are non-null and give new terms for the Einstein equations. In what follows we focus on the term which mixes gravity with electromagnetism, showing that its contribution to the Einstein equations could generate antigravity. In the end we verify that the new interactions don't affect the formation of nuclei and nucleons.
Antigravity {#formalism2} ----------- The term which mixes gravity with electromagnetism is given by space-time integration of the following expression: $$-\fr 14 f^{(G)(EM1)(EM2)} A^{(G)}_\mu A^{(EM1)}_\nu \bigg( F^{(EM2)\mu\nu} + \a f^{(EM3)(EM1)(EM2)} A^{(EM3)\mu} A^{(EM1)\nu} \bigg) \label{prima}$$ Remember that AFT includes three indistinguishable electro-magnetic fields, with non-trivial commutators. In this way $A^{(G)}$ is the gravitational gauge field, $A^{(EMn)}$ is the n-th electromagnetic field and $\a$ is the fine structure constant. In the realistic case of null torsion, the gravitational gauge field can be rewritten as a function of the tetrad field: $$A_\mu^{(G)bc} = \fr 12 e^{\nu [b} \pa_{[\mu} e^{c]}_{\nu]} + \fr 14 e_{\mu d} e^{\nu b} e^{\sigma c} \pa_{[\sigma} e^d_{\nu]}$$ From now on we take a low energy limit so defined: $e_{ii} = 1$ with $i=1,2,3$, $e_{00} = \theta(x)$ and $\pa_0 \theta(x) =0$. Varying with respect to $e$ we obtain: $$\fr {\d A_\mu^{(G)bc}}{\d e^s_\tau} = \fr 12 e^{\nu [b} \d^{c]}_s \d^\tau_{[\nu} \pa_{\mu]} + \fr 14 e_{\mu s} e^{\nu b} e^{\sigma c} \d^\tau_{[\nu} \pa_{\sigma]}$$ $$\fr {\d A_\mu^{(G)bc}}{\d g_{\w\tau}} = 2e^{\w s} \fr {\d A_\mu^{(G)bc}}{\d e^s_\tau} = e^{\w [c} e^{b] \nu} \d^\tau_{[\nu} \pa_{\mu]} + \fr 12 \d^\w_{\mu} e^{\nu b} e^{\sigma c} \d^\tau_{[\nu} \pa_{\sigma]}$$ The component with $c = \w =\tau = 0$ and $b \neq 0$ gives: $$\fr {\d A_\mu^{(G)b0}}{\d g_{00}} = -\theta^{-1} \d^0_\mu \pa_b - \fr 12 \theta^{-1} \d^0_\mu \pa_b = -\fr 3{2\theta} \d^0_\mu \pa_b$$ $$A^{(EM)\rho}A^{(EM)}_\rho A^{(EM)\mu}\fr {\d A_\mu^{(G)b0}}{\d g_{00}} = \fr 3{2\theta} \pa_b A^{(EM)0} A^{(EM)\rho}A^{(EM)}_\rho$$ The minus sign has disappeared because we have reversed the derivative.
The variation of the quartic term in (\[prima\]) with respect to $\d g_{00}$ is then given by $$-\fr \a 4 \cdot \fr 3{2\theta} \pa_b f^b A^{(EM)0} A^{(EM)\rho}A^{(EM)}_\rho = -\pa_b f^b \fr {3\a} {8\theta} V(\theta^2 V^2 - A^2)$$ $$f^b = \sum_{cade} f^{(bo)ca} f^{dea} \approx 4\fr {x^b}{r}.$$ Here we have indicated with $V$ the electric potential and with $A$ the magnetic vector potential. The sum inside $f$ is over the three electromagnetic fields. It is thus clear that varying the complete action with respect to $g_{\mu\nu}$ we obtain a new term for the Einstein equations. In the Newtonian limit we can substitute $g_{00} = -(1-2\phi)$ and $R_{00} - (1/2)Rg_{00} = \na^2 \phi$, where $\phi$ is the newtonian potential. Hence: $$2 \na^2 \phi \;\approx\; 8\pi T^{00} \;=\; \pa_b\, \fr{x^b}{r}\, 24\pi\a\, V(\theta V^2 - \theta^{-1} A^2)$$ For a radial potential we have $$\pa_b \phi = \fr {x^b}{r} \pa_r \phi .$$ In such case $$C_G = \pa_r \phi \approx 12\pi\a V(\theta V^2 -\theta^{-1} A^2)$$ Now we insert the appropriate universal constants and approximate $\theta$ with $1$: $$C_G \approx 12\pi\a\, \fr{(G\e_0)^{3/2}}{c^4 L_p}\, V(V^2 - c^2 A^2) = k\, V(V^2 - c^2 A^2) \label{ultima}$$ Here $L_p$ is the Planck length, equal to $\sqrt {\hbar G/c^3}$. The multiplicative constant is $$k = \fr{12\pi}{137}\cdot \fr{(6.67\cdot 10^{-11}\cdot 8.85\cdot 10^{-12})^{3/2}}{(3\cdot 10^8)^4 \cdot(1.62\cdot 10^{-35})} = 30.27\cdot 10^{-33} \,\left(\fr{C^3 s^4}{Kg^3 m^5}\right).$$ This means that to have a weight variation (on Earth) of about $10\%$ ($\Delta C_G =1$) we need an electric potential of about $10^{11}$ Volts. That is $100$ billion Volts. For $V = Q/r$ and $A=0$ we have: $$C_G = \fr {k}{(4\pi\e_0)^3}\cdot \fr {Q^3}{r^3} = 2.198 \cdot 10^{-2} \left(\fr {m^4}{s^2 C^3}\right)\fr {Q^3}{r^3}$$ Note that the sign of $C_G$ is the sign of $Q$, and then we obtain antigravity for negative $Q$. We associate to this interaction an equivalent mass $m$, substituting $C_G = Gm/{r^2}$.
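As an aside, the quoted value of $k$ can be reproduced directly from the constants given in the text; the following sketch (a numerical check we add here, not part of the original derivation) also estimates the potential corresponding to $\Delta C_G = 1$ with $A=0$:

```python
import math

# Constants exactly as quoted in the text (SI units)
G    = 6.67e-11        # gravitational constant
eps0 = 8.85e-12        # vacuum permittivity
c    = 3.0e8           # speed of light
L_p  = 1.62e-35        # Planck length

# k = (12*pi/137) * (G*eps0)^(3/2) / (c^4 * L_p), cf. the displayed formula
k = (12.0 * math.pi / 137.0) * (G * eps0) ** 1.5 / (c ** 4 * L_p)
assert 2.9e-32 < k < 3.1e-32   # text quotes 30.27e-33; small rounding differences

# Potential giving Delta C_G = 1 m/s^2 (about 10% of g on Earth), with A = 0:
V = (1.0 / k) ** (1.0 / 3.0)
assert 1e10 < V < 1e11         # same order of magnitude as the quoted 10^11 V
```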
We have $$m = \fr k G V^3 r^2 = \fr {k}{G(4\pi \e_0)^3}\fr {Q^3} r = 3.293\cdot 10^8 \left(\fr {Kg\, m}{C^3}\right) \fr {Q^3}{r}$$ which is a negative mass for negative $Q$. Negative mass implies negative energy via the relation $E =mc^2$. Intuitively, if we search for a similar relation for the gravi-magnetic field (which is $\na \times (g^{0i})$, $i=1,2,3$), we should find the same formula (\[ultima\]) with an exchange between $V$ and $cA$. We now calculate at what distance the gravitational attraction between two protons equals their electromagnetic repulsion. $$G\fr {m^2}{r^2} = \fr {k^2}{G^2 (4\pi\e_0)^6} \fr {Q_p^6}{r^4} = \fr 1 {4\pi \e_0} \fr {Q_p^2}{r^2}$$ $$\fr {k^2 Q_p^4}{G^2 (4\pi\e_0)^5} = r^2$$ $$\Longrightarrow r^2 = 79.49 \cdot 10^{-70} m^2 \Longrightarrow r = 8.916 \cdot 10^{-35} m = 5.516\, L_p$$ Note that we are $20$ orders of magnitude below the range of the strong force and $23$ orders of magnitude below the range of the weak force. In this way the gravitational force doesn't affect the formation of nuclei and nucleons. Conclusion ---------- We have seen that a potential of $10^{11}$ Volts can induce relevant gravitational effects. This is too high for such variations to have been noticed in experiments with particle accelerators. However, it sits at the border of our technological capabilities. The possibility of ruling gravitation is very attractive and constitutes a good reason to try experiments with high electric potentials. Such experiments can be connected to the work of Nikola Tesla and can also be a good test for the arrangement field theory. Marin, D.: The arrangement field theory (AFT). Paper number 1 (2012). Marin, D.: The arrangement field theory (AFT) - Part 2. Paper number 2 (2012). Penrose, R.: Twistor Algebra. Journal of Mathematical Physics, Volume 8 (1967), pp.345-366. Tian, Y.: Matrix Theory over the Complex Quaternion Algebra. ArXiv: 0004005 (2000). Barton, C., H., Sudbery A.: Magic squares and matrix models of Lie algebras. ArXiv: 0203010 (2002).
De Leo, S., Ducati, G.: Quaternionic differential operators. Journal of Mathematical Physics, Volume 42, pp.2236-2265. ArXiv: math-ph/0005023 (2001). Zhang, F.: Quaternions and matrices of quaternions. Linear algebra and its applications, Volume 251 (1997), pp.21-57. Part of this paper was presented at the AMS-MAA joint meeting, San Antonio, January 1993, under the title “Everything about the matrices of quaternions”. Farenick, D., R., Pidkowich, B., A., F.: The spectral theorem in quaternions. Linear algebra and its applications, Volume 371 (2003), pp.75-102. Hartle, J. B., Hawking, S. W.: Wave function of the universe. Physical Review D (Particles and Fields), Volume 28, Issue 12, 15 December 1983, pp.2960-2975. Bochicchio, M.: Quasi BPS Wilson loops, localization of loop equation by homology and exact beta function in the large-N limit of SU(N) Yang-Mills theory. ArXiv: 0809.4662 (2008). Minkowski, H., Die Grundgleichungen für die elektromagnetischen Vorgänge in bewegten Körpern. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 53-111 (1908). Kant, I., Kritik der reinen Vernunft (Critique of pure reason), Hartknoch, Riga (1781). Galilei, G., Dialogo sopra i due massimi sistemi del mondo, Giornata seconda. Landini, Fiorenza/Firenze/Florence (1632), Guerneri (1995), 238-241. Crew, H., de Salvio, A. (English translation), Dialogue concerning the two chief world systems, Second day. Dover Publications, New York (1954). Einstein, A., Zur Elektrodynamik bewegter Körper. Annalen der Physik, 17, 891-921 (1905). Michelson, A. A., Morley, E. W., On the Relative Motion of the Earth and the Luminiferous Aether. American Journal of Science, Third Series, 203, 333-345 (1887).
Einstein, A., Die Grundlage der allgemeinen Relativitätstheorie. Annalen der Physik, 49, 769-822 (1916). Planck, M., Entropy and Temperature of Radiant Heat. Annalen der Physik, 4, 719-737 (1900). Einstein, A., Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt. Annalen der Physik, 17, 132-148 (1905). Rutherford, E., The Scattering of Alpha and Beta Particles by Matter and the Structure of the Atom. Philosophical Magazine, 21 (1911). Bohr, N., On the Constitution of Atoms and Molecules. Philosophical Magazine, 26, 1-24, 476-502 (1913). Heisenberg, W., Physics and Philosophy, Section 3. Harper, New York (1958). Schrödinger, E., An Undulatory Theory of the Mechanics of Atoms and Molecules. Physical Review, 28, 1049-1070 (1926). Born, M., Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37, 863-867 (1926). Heisenberg, W., Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43, 172-198 (1927). Jordan, P., Anschauliche Quantentheorie. Springer, Berlin (1936). von Neumann, J., Mathematical Foundations of Quantum Mechanics. Princeton University Press, Princeton (1932). Wheeler, A., The anthropic universe, radio interview. Science show, ABC Radio National, Australia, February 18, 2006. Stapp, H.P., Mind, Matter and Quantum Mechanics. Foundations of Physics, 12, 363-399 (1982). Stapp, H.P., Quantum Theory and the Role of Mind in Nature. Foundations of Physics, 31, 1465-1499 (2001). Einstein, A., Podolsky, B., Rosen, N., Can quantum-mechanical description of physical reality be considered complete? Phys. Rev., 47, 777-780 (1935). Einstein, A., Rosen, N., The Particle Problem in the General Theory of Relativity. Phys. Rev., 48, 73-77 (1935). Bohm, D., Quantum Theory. Prentice Hall, New York (1951). Bell, J.S., On the Einstein-Podolsky-Rosen paradox. Physics, 1, 195-200 (1964).
Aspect, A., Grangier, P., Roger, G., Experimental realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: a new violation of Bell’s inequalities. Phys. Rev. Lett. 49, 2, 91-94 (1982). Penrose, R., Angular Momentum: an approach to combinatorial space-time. Originally appeared in Quantum Theory and Beyond, edited by Ted Bastin, Cambridge University Press, 151-180 (1971). LaFave, N.J., A Step Toward Pregeometry I.: Ponzano-Regge Spin Networks and the Origin of Spacetime Structure in Four Dimensions. ArXiv:gr-qc/9310036 (1993). Reisenberger, M., Rovelli, C., ’Sum over surfaces’ form of loop quantum gravity. Phys. Rev. D, 56, 3490-3508 (1997). Engle, G., Pereira, R., Rovelli, C., Livine, E., LQG vertex with finite Immirzi parameter. Nucl. Phys. B, 799, 136-149 (2008). Banks, T., Fischler, W., Shenker, S.H., Susskind, L., M Theory As A Matrix Model: A Conjecture. Phys. Rev. D, 55, 5112-5128 (1997). Available at URL http://arxiv.org/abs/hep-th/9610043 as last accessed on May 19, 2012. Garrett Lisi, A., An Exceptionally Simple Theory of Everything. ArXiv:0711.0770 (2007). Available at URL http://arxiv.org/abs/0711.0770 as last accessed on March 29, 2012. Nastase, H., Introduction to Supergravity ArXiv:1112.3502 (2011). Available at URL http://arxiv.org/abs/1112.3502 as last accessed on March 29, 2012. Coleman, S., Mandula, J., All Possible Symmetries of the S Matrix. Physical Review, Volume 159 (1967), pp.1251-1256. Penrose, R., On the Nature of Quantum Geometry. Originally appeared in Wheeler. J.H., Magic Without Magic, edited by J. Klauder, Freeman, San Francisco, 333-354 (1972). [^1]: dmarin.math@gmail.com [^2]: www.gruppopangea.com/?page\_ id=682& lang=en [^3]: fabrcop@aliceposta.it [^4]: extrabyte2000@yahoo.it [^5]: www.istitutoscientia.it, Via Ortola 65, 54100, Massa (MS), Italy [^6]: The operator $R_i$ acts on any array $\psi$ as $R_i \psi = \psi i$. [^7]: In Loop Gravity the gauge field appears usually in the form $iA$ with $A$ hermitian. 
We incorporate the $i$ inside $A$ so that $A^{ab}\g_a\g_b$ corresponds to a hyperionic number. [^8]: As usually in this work, we absorb an $i$ in the fields to make them skew-hermitian.
--- abstract: 'The ground state of the antiferromagnetic $XY$ model with a [*kagomé*]{} lattice is known to be characterized by a well developed accidental degeneracy. As a consequence the phase transition in this system consists in unbinding of pairs of fractional vortices. Addition of the next-to-nearest neighbor (NNN) interaction leads to stabilization of the long-range order in chirality (or staggered chirality). We show that the phase transition, related with destruction of this long-range order, can happen as a separate phase transition below the temperature of the fractional vortex pairs unbinding only if the NNN coupling is extremely weak, and find how the temperature of this transition depends on coupling constants. We also demonstrate that the antiferromagnetic ordering of chiralities and, accordingly, the presence of the second phase transition are induced by the free energy of spin wave fluctuations even in absence of the NNN coupling.' address: 'L.D.Landau Institute for Theoretical Physics, Kosygina 2, Moscow 117940, RUSSIA' author: - 'S. E. Korshunov' date: 'June 22, 2001' title: Phase transitions in the antiferromagnetic XY model with a kagomé lattice --- Introduction ============ The antiferromagnetic $XY$ model can be defined by the Hamiltonian $$H=J_1\sum_{NN}\cos(\varphi_{i}-\varphi_{j}), \label{NN}$$ where $J_1>0$ is the coupling constant, $\varphi_{i}$ describes the orientation of the classical planar spin belonging to the site $i$ of some regular lattice, and the summation is performed over the pairs of nearest neighbors (NN) on this lattice. The ground state of such model on a [*kagomé*]{} lattice (Fig. 1) is known to have a huge accidental (that is, not related to the symmetry of the Hamiltonian) degeneracy [@HR]. For $J_1>0$ the minimum of the energy of each triangular plaquette is achieved when the three spins belonging to it form the angles $\pm 2\pi/3$ with each other.
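This single-plaquette property is easy to verify numerically. The following brute-force sketch (an illustration we add here, with an arbitrary angular grid) confirms that the minimum of $J_1\sum\cos(\varphi_i-\varphi_j)$ over one triangle equals $-\frac{3}{2}J_1$, attained at mutual angles $\pm 2\pi/3$:

```python
import math

# Energy of one antiferromagnetic triangle, J1 = 1, phi_1 fixed to 0
# (global rotations cost nothing). Grid step 2*pi/120 contains ±2*pi/3 exactly.
J1 = 1.0
N = 120
step = 2.0 * math.pi / N

def triangle_energy(p1, p2, p3):
    return J1 * (math.cos(p1 - p2) + math.cos(p2 - p3) + math.cos(p1 - p3))

best = min(triangle_energy(0.0, i2 * step, i3 * step)
           for i2 in range(N) for i3 in range(N))

# Minimum energy is -3/2 J1, realized by (0, 2pi/3, 4pi/3) and (0, 4pi/3, 2pi/3),
# i.e. by both chiralities.
assert abs(best - (-1.5)) < 1e-12
```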
In addition to the possibility of a simultaneous rotation of all three spins such arrangement is also characterized by the two-fold discrete degeneracy. When on going clockwise around the plaquette the spins rotate clockwise (anticlockwise), the plaquette can be ascribed the positive (negative) chirality $\sigma =\pm 1$. In the ground state of the antiferromagnetic $XY$ model with triangular lattice the plaquettes with positive and negative chiralities regularly alternate with each other [@MS]. In any ground state on a [*kagomé*]{} lattice the variables $\varphi_{i}$ analogously acquire only three values which differ from each other by $2\pi/3$. However, the requirements for the arrangement of chiralities are less rigid than on triangular lattice and accordingly the ground state in addition to the continuous $U(1)$ degeneracy (related with an arbitrary simultaneous rotation of all spins) is characterized by a well developed discrete degeneracy (of the same type as in the 3-state antiferromagnetic Potts model) leading to a finite residual entropy per site [@HR; @Els]. The accidental degeneracy persists if the interaction function in Eq. (\[NN\]) differs from the pure cosine (remaining even), but is removed by the presence of interactions with more distant neighbors. In particular, for the ferromagnetic sign ($J_2<0$) of the next-to-nearest neighbors (NNN) interaction the minimum of $$H_{2}=J_1\sum_{NN}\cos(\varphi_{i}-\varphi_{j}) +J_2\sum_{NNN}\cos(\varphi_{i}-\varphi_{j}) \label{NNN}$$ is achieved in one of the so-called $\sqrt{3}\times\sqrt{3}$ states [@HKB] with a regular alternation of positive and negative chiralities. An example of such state is shown in Fig. 2(a). Here and further on we use the letters A, B and C to denote three values of $\varphi_{i}$ which differ from each other by $\pm 2\pi/3$ (for definiteness let us assume $\varphi_{\rm B}=\varphi_{\rm A}+2\pi/3$, $\varphi_{\rm C}=\varphi_{\rm A}+4\pi/3$). 
This state has the same structure as the ground state of a planar antiferromagnet with triangular lattice (or, to put it more accurately, can be obtained by the natural truncation of it). On the other hand the NNN interaction of the opposite sign ($J_2>0$) favors the ferromagnetic arrangement of chiralities, which is achieved in the so-called ${\bf q}=0$ states [@HKB], see Fig. 2(b). For both signs of the NNN interaction the degeneracy of the ground state is reduced to $U(1)\times Z_2$ [@GB]. This suggests the possibility of two phase transitions, one of which is associated with unbinding of vortex pairs \[the Berezinskii-Kosterlitz-Thouless (BKT) transition [@Ber1; @Ber2; @KT; @Kost]\] and the other with proliferation of the Ising type domain walls. The set of systems with the same ground state degeneracy includes, in particular, the antiferromagnetic $XY$ model with triangular lattice [@MS; @KU] and the fully frustrated $XY$ model with square lattice [@Vill; @Hals]. For the case of weak NNN interaction ($|J_2|\ll J_1$) the energy of a domain wall (per unit length) is proportional to $|J_2|$ and the logarithmical interaction of vortices to $J_1$. If domain walls and vortices were completely independent one would then expect the phase transition related with breaking of the discrete symmetry group to take place at lower temperature than the BKT transition. In the present work we analyse the mutual influence between different classes of topological excitations in the antiferromagnetic $XY$ model with a [*kagomé*]{} lattice and weak NNN interaction and demonstrate that the phase transition related with the domain walls proliferation (i) can happen as a separate phase transition only for extremely weak NNN interaction, (ii) is not of the Ising type and (iii) the temperature of this transition is [*not*]{} proportional to the strength of the NNN interaction as one could naively expect.
We also show that at very low temperatures the free energy of spin waves leads to stabilization of the antiferromagnetic ordering of chiralities even in absence of the NNN interaction. The results can be of interest in relation with the possible presence of weak easy-plane anisotropy in Heisenberg [*kagomé*]{} antiferromagnets [@RCC], which is indirectly confirmed by recent investigations of susceptibility [@WDV] in (H$_3$O)Fe$_3$(SO$_4$)$_2$(OH)$_6$. The other class of physical systems, which allows for experimental realization of the considered model, consists of Josephson junction arrays [@ML] and two-dimensional superconducting wire networks [@HXB] in perpendicular magnetic field. In such systems the role of $\varphi_{i}$ is played by the phase of the superconducting order parameter, and equivalence with the antiferromagnetic $XY$ model is achieved when the magnitude of the magnetic field corresponds to a half-integer number of superconducting flux quanta per each triangular plaquette. A superconducting array, which can be described by the same model in absence of the external magnetic field, can be constructed with the help of so-called $\pi$-junctions [@PJ]. Zero temperature: the equivalent solid-on-solid model ===================================================== It has already been mentioned in the Introduction that the set of the ground states of the Hamiltonian (\[NN\]) is equivalent (up to a simultaneous rotation of all spins) to that of the 3-state antiferromagnetic Potts model [@HR; @Els]. Any triangular plaquette of a [*kagomé*]{} lattice has to contain some permutation of the three values of $\varphi_{i}$ ($\varphi_{\rm A}$, $\varphi_{\rm B}$ and $\varphi_{\rm C}$), which can be identified with the three states ($\alpha=1,2,3$) of the antiferromagnetic Potts model. The degeneracy of such a set of ground states can be discussed in terms of the zero-energy domain walls. If one considers a $\sqrt{3}\times\sqrt{3}$ ground state \[Fig.
2(a)\], it turns out possible to construct a state with the same energy by a permutation (for example) of the form $\varphi_{\rm B}\Longleftrightarrow\varphi_{\rm C}$ inside any closed loop formed by the sites with $\varphi_{i}=\varphi_{\rm A}$, [*etc*]{}. Such a closed loop \[a simplest example of which is shown in Fig. 3(a)\] can be considered as the zero-energy domain wall separating two different $\sqrt{3}\times\sqrt{3}$ states with the opposite signs of staggered chirality. Any domain wall with zero energy is formed by elementary links which have to join each other at the angles of $\pm 2\pi/3$ \[Fig. 3(b)\]. Each link separates two triangular plaquettes with the same chirality, that is with the opposite signs of staggered chirality. The states with infinite (unclosed) domain walls are also possible. There exists [@HeR] the exact mapping of the set of the ground states of the 3-state antiferromagnetic Potts model onto the states of the solid-on-solid (SOS) model in which the “height” variables ${\bf u}({\bf r})$ are defined on the sites ${\bf r}$ situated at the centres of hexagonal plaquettes of a [*kagomé*]{} lattice. These sites are shown in Fig. 1 by empty circles. They form the triangular lattice we shall denote $\cal T$. Each site of the [*kagomé*]{} lattice can be associated with some bond ${\bf rr}'$ of $\cal T$ and each variable $\varphi_{i}=\varphi_{\rm A},\varphi_{\rm B},\varphi_{\rm C}$ with the Potts variable $\alpha({\bf rr'})\equiv\alpha({\bf r'r})$ defined on this bond. Since each triangular plaquette of a [*kagomé*]{} lattice should always contain three different variables $\varphi_{\rm A}$, $\varphi_{\rm B}$ and $\varphi_{\rm C}$, one can associate them with three basic vectors ${\bf a}_\alpha$ (${\bf a}_{1}+{\bf a}_2+{\bf a}_3=0$, see Fig. 4) of some auxiliary triangular lattice, which we shall denote ${\cal T}_a$ ($a=|{\bf a}_\alpha|$ being its lattice constant).
The height variables ${\bf u}({\bf r})$, which acquire the values ${\bf u}\in{\cal T}_a$, can be then introduced following the rule $${\bf u}({\bf r}')=\left\{\begin{array}{ll} {\bf u}({\bf r})+{\bf a}_{\alpha({\bf rr}')} & \mbox{for } {\bf r}'={\bf r}+{\bf e}_\alpha \\ {\bf u}({\bf r})-{\bf a}_{\alpha({\bf rr}')} & \mbox{for } {\bf r}'={\bf r}-{\bf e}_\alpha, \end{array}\right. \label{3}$$ where ${\bf e}_\alpha$ are the three basic vectors of ${\cal T}$ (${\bf e}_1+{\bf e}_2+{\bf e}_3=0$), as soon as the value of ${\bf u}({\bf r}_0)$ is chosen for an arbitrary site ${\bf r}_0$ [@HeR]. This defines the correspondence between the states of the antiferromagnetic Potts model and of the “vector” SOS model, in which the height variables ${\bf u}({\bf r})\in {\cal T}_a$ have to satisfy the constraint $$|{\bf u}({\bf r})-{\bf u}({\bf r}')|=a \label{UU}$$ on all pairs of neighboring sites of ${\cal T}$. By using the known properties of the exact solution [@B] of the 3-state antiferromagnetic Potts model with external field coupled to staggered chirality Huse and Rutenberg [@HR] have demonstrated (and also have confirmed this conclusion by numerical calculation) that such vector SOS model, in the partition function of which all allowed configurations of heights are counted with the same weight, is situated exactly at the point of the roughening transition, where (for $|{\bf r}_1-{\bf r}_2|\gg 1$) $$\langle[{\bf u}({\bf r}_1)-{\bf u}({\bf r}_2)]^2\rangle\approx \frac{3a^2}{\pi^2}\ln|{\bf r}_1-{\bf r}_2|. \label{ln}$$ Therefore any additional perturbation suppressing the fluctuations will lead to a transition of the system into the flat phase, in which the fluctuations of ${\bf u}$ are convergent. According to the constraint (\[UU\]) the variables ${\bf u}({\bf r})$ on neighboring sites have to be different from each other, so that even the most flat state is formed by the regular alternation of three different values of ${\bf u}$.
The transition into the flat phase can be more transparently discussed in terms of the variables $${\bf n}({\bf R})\equiv\frac{{\bf u}({{\bf r}})+{\bf u}({{\bf r}'}) +{\bf u}({{\bf r}''})}{3}$$ describing the average height at each of the plaquettes of $\cal T$. The variables ${\bf n}({{\bf R}})$ are defined at the sites ${\bf R}$ of the honeycomb lattice $\cal H$, which is dual to $\cal T$, and acquire the values ${\bf n}({\bf R})\in{\cal H}_a$, where ${\cal H}_a$ is the honeycomb lattice which is dual to ${\cal T}_a$ (Fig. 4). In terms of the original spin variables the flat states \[in which all variables ${\bf n}({\bf R})$ are equal to each other\] correspond to $\sqrt{3}\times\sqrt{3}$ states, and zero-energy steps, the presence of which leads to their roughening, to the zero-energy domain walls separating different $\sqrt{3}\times\sqrt{3}$ states. The large scale properties of the vector SOS model introduced above (and of its further generalizations) can be analysed with the help of the multi-component sine-Gordon model with the same symmetry. The (dimensionless) Hamiltonian of such sine-Gordon model can be chosen in the form $$H_{SG}=\int_{}^{}d^2{\bf R}\left\{\frac{KQ^2}{2} [\nabla{\bf n}({\bf R})]^2+ y\sum_{\alpha=1}^{3}\cos[{\bf Q}_\alpha{\bf n}({\bf R})]\right\}. \label{SG}$$ The first term in Eq. (\[SG\]) describes the effective stiffness (of entropic origin) which can be associated with the fluctuations of ${\bf n}({\bf R})$, whereas the second term favors the values of ${\bf n}({\bf R})$ which belong to ${\cal H}_a$. Here ${\bf Q}_\alpha$ are the three basic vectors of the triangular lattice reciprocal to ${\cal T}_a$, so $${\bf Q}_\alpha^2=\frac{16\pi^2}{3a^2};~~~ {\bf Q}_1+{\bf Q}_2+{\bf Q}_3=0.$$ Analogous Hamiltonian (with the opposite sign of the second term) and equivalent vector Coulomb gas have been investigated by Nelson in relation with dislocation mediated melting in two-dimensional crystals [@N].
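The geometric identities used above are easy to check; the following sketch (added here as an illustration, with concrete coordinates chosen by us) verifies that the three ${\bf a}_\alpha$ sum to zero with $|{\bf a}_\alpha|=a$, so that every step of the rule (\[3\]) respects the constraint (\[UU\]), and that vectors of length $4\pi/(\sqrt{3}a)$ satisfy ${\bf Q}_\alpha^2=16\pi^2/3a^2$ with ${\bf Q}_1+{\bf Q}_2+{\bf Q}_3=0$:

```python
import math

a = 1.0
# Three basic vectors of the auxiliary triangular lattice T_a: length a, 120° apart
avec = [(a * math.cos(2.0 * math.pi * k / 3.0),
         a * math.sin(2.0 * math.pi * k / 3.0)) for k in range(3)]
assert abs(sum(v[0] for v in avec)) < 1e-12
assert abs(sum(v[1] for v in avec)) < 1e-12
for v in avec:
    # any height step u(r') - u(r) = ±a_alpha has length a, as required by (UU)
    assert abs(math.hypot(v[0], v[1]) - a) < 1e-12

# Basic vectors of the reciprocal triangular lattice: length 4*pi/(sqrt(3)*a)
q = 4.0 * math.pi / (math.sqrt(3.0) * a)
Qvec = [(q * math.cos(2.0 * math.pi * k / 3.0 + math.pi / 6.0),
         q * math.sin(2.0 * math.pi * k / 3.0 + math.pi / 6.0)) for k in range(3)]
for Q in Qvec:
    assert abs(Q[0] ** 2 + Q[1] ** 2 - 16.0 * math.pi ** 2 / (3.0 * a ** 2)) < 1e-9
assert abs(sum(Q[0] for Q in Qvec)) < 1e-9
assert abs(sum(Q[1] for Q in Qvec)) < 1e-9
```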
Alternatively the Hamiltonian of the form (\[SG\]) can be interpreted as a simplified model for pinning of a two-dimensional crystal by a periodic substrate (cf. with Ref. ). Note, however, that in contrast to real two-dimensional crystals, the accurate description of which requires to distinguish between compression and shear moduli, in our system the displacement ${\bf n}$ takes place in some auxiliary space (and not in the real space) and, therefore, only one elastic modulus can be introduced. The renormalization group equations of Ref. [@N], describing the evolution of $K$ and $y$ with the change of the length scale $L$, in our notation can be rewritten as $$\begin{aligned} \frac{dK}{dl} & = & \frac{3\pi}{8}y^2 \eqnum{8a} \\ \frac{dy}{dl} & = & \left(2-\frac{1}{4\pi K}\right)y-\pi y^2, \eqnum{8b} \addtocounter{equation}{1}\end{aligned}$$ where $l=\ln L$. The corresponding flow diagram is schematically shown in Fig. 5, where $K_c\equiv 1/(8\pi)$. It suggests that the roughening transition takes place when the renormalized value of the effective stiffness $K$ is equal to $K_c$. The vector SOS model described above (which is known to be at the point of its roughening transition [@HR]) can therefore be associated with some point belonging to the left separatrix. Phase transition(s) associated with vortex pairs unbinding ========================================================== At finite temperature $T$ other types of fluctuations (requiring finite energy) become possible, in particular the formation of vortex pairs. Vortices are point-like topological excitations (the local minima of the Hamiltonian), the existence of which is related with the multivaluedness of the field $\varphi$. On going around each vortex $\varphi$ experiences a continuous twist which adds up to $\pm 2\pi$. At low temperatures all vortices are bound in neutral pairs by their logarithmical interaction [@Ber2].
With increase in temperature this interaction becomes renormalized due to the mutual influence of different vortex pairs and becomes screened at the temperature $T_{\rm BKT}$ of the BKT transition [@Ber2; @KT; @Kost], which leads to dissociation of vortex pairs and exponential decay of correlations of $\exp(i\varphi)$ (in contrast to the algebraic decay [@Ber1] at $T<T_{\rm BKT}$). The value of the helicity modulus $\Gamma$, describing the effective stiffness of the spin system with respect to an infinitely small twist, at the temperature of vortex pairs dissociation is known to satisfy the universal relation [@NK]: $$T_{\rm BKT}=\frac{\pi}{2}\Gamma(T_{\rm BKT}). \label{BKT}$$ Huse and Rutenberg [@HR] have argued that since at $T=0$ the antiferromagnetic $XY$ model with [*kagomé*]{} lattice is characterized by the long range order in $\exp(i3\varphi)$ rather than in $\exp(i\varphi)$, the phase transition in this system should consist in unbinding of pairs of fractional vortices with topological charges $\pm 1/3$ and not of the ordinary (integer) vortices. The strength of the logarithmical interaction of fractional vortices is decreased by the factor of 9 in comparison with that of integer vortices [@RCC], therefore relation (\[BKT\]) should be replaced by $$T_{\rm FV}=\frac{\pi}{18}\Gamma(T_{\rm FV}), \label{GFV}$$ where $T_{\rm FV}$ is the temperature of the phase transition, associated with unbinding of pairs of fractional vortices. The value of $\Gamma$ in any ground state of the antiferromagnetic $XY$ model with [*kagomé*]{} lattice is equal to $\Gamma_0=(\sqrt{3}/4)J_1$. Substitution of $\Gamma_0$ into Eq.
(\[GFV\]) \[instead of $\Gamma(T_{\rm FV})$\] allows to obtain for $T_{\rm FV}$ the estimate (from above) of the form $$T_{\rm FV}\approx\frac{\pi\sqrt{3}}{72}J_1\approx0.075 J_1, \label{TFV}$$ which turns out to be in reasonable agreement with the results of numerical simulations by Rzchowski [@R], who by using two different criteria has found $T_{\rm FV}\approx 0.070$ and $T_{\rm FV}\approx 0.076$. In Ref. [@CKP] the same estimate for $T_{\rm FV}$ has been obtained in a less straightforward way with the help of the duality transformation [@JKKN; @Knops]. The fractional vortices cannot exist by themselves (in absence of domain walls). A fractional vortex appears at every point where the elementary links forming a domain wall meet each other at a wrong angle ($\pi/3$ or $\pi$ instead of $2\pi/3$). The same happens in the antiferromagnetic $XY$ model with triangular lattice [@KU], the ground state of which also has the $\sqrt{3}\times\sqrt{3}$ structure. Fig. 6(a) shows an example of a domain wall containing one such special point. It separates the domain wall into two segments, one of which is formed by the sites with $\varphi_{i}=\varphi_{\rm C}$ and the other by the sites with $\varphi_{i}=\varphi_{\rm B}$. When crossing the first segment the state to the right of the wall should be obtained from the state to the left by the permutation of A and B, whereas for the second segment the state to the right should be obtained by the permutation of A and C. This introduces the discrepancy of $2\pi/3$ which can be localized on a semi-infinite line terminating at the special point (for example on the line X-Y-Z). In order to locally minimize the energy, X, Y and Z should be replaced by A, B and C respectively when going from above, and by B, C and A when going from below. The misfit of $2\pi/3$ has to be compensated by a continuous twist of $\varphi$, which is equivalent to the fractional vortex with the topological charge $-1/3$.
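The estimate (\[TFV\]) quoted above is a one-line computation, which we verify here together with the factor-of-9 reduction of the vortex interaction (an added sanity check, not part of the original text):

```python
import math

Gamma0 = math.sqrt(3.0) / 4.0        # helicity modulus Gamma_0 in units of J1
T_FV = (math.pi / 18.0) * Gamma0     # Eq. (GFV) with Gamma replaced by Gamma_0

# pi*sqrt(3)/72 ~ 0.0756 J1, consistent with the quoted simulations (0.070-0.076)
assert abs(T_FV - math.pi * math.sqrt(3.0) / 72.0) < 1e-15
assert 0.075 < T_FV < 0.076

# charge-1/3 vortices interact 9 times more weakly, hence pi/18 = (pi/2)/9
assert abs(math.pi / 18.0 - (math.pi / 2.0) / 9.0) < 1e-15
```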
In terms of the vector SOS model each fractional vortex corresponds to a point on going around which the height variable ${\bf n}$ changes by $\Delta {\bf n}$ with $|\Delta {\bf n}|=a$. That means that at each fractional vortex a step with the height $\Delta {\bf n}$ (or, to put it more precisely, a set of steps with the total height $\Delta {\bf n}$) terminates or begins. Accordingly, the fluctuations of the SOS model provide an additional contribution to the interaction of the fractional vortices, related to the difference in entropy between the configurations with different positions of the step termination points. At the point of the roughening transition of the SOS model, as well as in the rough phase, this additional interaction (which can be expressed in terms of the correlation function of the dual $XY$ model [@Sw]) is also logarithmic. Its presence shifts $T_{\rm FV}$ upwards and diminishes the mutual influence between fractional and integer vortices. It is known [@LGT] that such a mechanism in principle can lead to the appearance of a separate phase transition, associated with unbinding of pairs of integer vortices, at temperatures above $T_{\rm FV}$. On the other hand, at finite temperatures the equivalence with the SOS model is no longer exact. One has to remember that the whole multitude of what we describe as flat states of the vector SOS model in terms of the original spin variables corresponds (for given $\varphi_{\rm A}$) to only six different $\sqrt{3}\times\sqrt{3}$ states, which can be obtained from the state shown in Fig. 2(a) by all possible permutations of A, B and C. In Fig. 4 the sites of ${\cal H}_a$ which correspond to equivalent states in terms of $\varphi_{i}$ are designated by the same numbers.
In the zero-temperature partition function of the vector SOS model the properties of the zero-energy domain walls separating such states (in particular, two closed loops formed by such walls cannot cross each other but can be situated inside each other or touch each other) allow one to count them as different states of the SOS model [@HeR; @vB]. At finite temperature it becomes possible for a set of steps separating two physically equivalent states to terminate at the point where all these steps merge together [@Lyu]. The energy $E_D$ of such a defect is finite and proportional to $J_1$: $E_D=c_D J_1$, where $c_D$ is of the order of one. In terms of the multi-component sine-Gordon model (\[SG\]) such defects correspond to dislocations of the field ${\bf n}$, the (elementary) Burgers vectors of which ${\bf b}_\alpha$ ($\alpha=1,2,3$) are given, as can be seen from Fig. 4, by $${\bf b}_1={\bf a}_3-{\bf a}_2,~~ {\bf b}_2={\bf a}_1-{\bf a}_3,~~ {\bf b}_3={\bf a}_2-{\bf a}_1. \label{BV}$$ An example of a dislocation is schematically shown in Fig. 6(b). It is formed by a neutral pair of fractional vortices sitting on two domain walls which cannot be transformed into a single domain wall. The letter X denotes the site on which $\varphi_{i}\approx(\varphi_{\rm A}+\varphi_{\rm B})/2$. In the vicinity of this site the values of $\varphi_{i}$ slightly deviate from those implied by the letters A, B and C. Successive application of the rule (\[3\]) along the perimeter of any closed loop surrounding point X sums up to $\Delta{\bf n}={\bf a}_2-{\bf a}_1$.
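As a consistency check of Eq. (\[BV\]): the three elementary Burgers vectors sum to zero, so a triplet of elementary dislocations is topologically neutral. The sketch below assumes, purely for illustration, that the ${\bf a}_\alpha$ are unit vectors at mutual angles of $2\pi/3$ (their normalization is not fixed in this section):

```python
import math

# three unit vectors at 120 degrees (assumed normalization of a_alpha)
a1, a2, a3 = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
              for k in range(3)]

sub = lambda u, v: (u[0] - v[0], u[1] - v[1])
b1, b2, b3 = sub(a3, a2), sub(a1, a3), sub(a2, a1)  # Eq. (BV)

# the triplet of elementary Burgers vectors is neutral ...
total = (b1[0] + b2[0] + b3[0], b1[1] + b2[1] + b3[1])
assert abs(total[0]) < 1e-12 and abs(total[1]) < 1e-12

# ... and all three have the same length sqrt(3)*|a|
for b in (b1, b2, b3):
    assert abs(math.hypot(*b) - math.sqrt(3)) < 1e-12
```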
The renormalization of the dislocation fugacity $z=\exp(-c_D J_1/T)$ with the change of the length scale can be described [@NH] by $$\frac{dz}{dl} = \lambda_z z+2\pi z^2 \label{RG3}$$ where $$\lambda_z=2-\frac{KQ^2b^2}{4\pi}\equiv 2-4\pi K . \label{LaZ}$$ In the vicinity of the roughening transition ($K\approx K_c$) the exponent $\lambda_z$, describing the renormalization of $z$, is close to $\lambda_z^0=3/2$, which corresponds to fast growth of $z$. Comparison of $\lambda_z$ with $\lambda_y=2-1/(4\pi K)$ shows that $y$ and $z$ are never simultaneously irrelevant. In that respect the situation is quite analogous to what is encountered when considering the conventional (ferromagnetic) $XY$ model with weak but relevant (low-order) anisotropy [@JKKN; @PU]. The presence of dislocations (or, to put it more accurately, of dislocation pairs) also leads to the appearance in the right-hand side of Eq. (8a) of an additional (negative) term proportional to $z^2$. The presence of this term shifts the flow (see Fig. 5) from the separatrix to the area which corresponds to the rough phase of the SOS model. On the other hand the unrestricted growth of $z$ under the renormalization means that the system will contain a finite concentration of free dislocations, which transforms the rough phase of the SOS model into the disordered phase of the six-state model. The decay of correlations in this phase can be characterized by a finite correlation radius $\xi_z$, which can be found as the length scale at which $z_{\rm R}$ (the renormalized value of $z$) becomes of the order of one. $\xi_z$ defines the scale at which the additional (entropic) interaction of the fractional vortices induced by the fluctuations of the domain walls is screened. The finiteness of $\xi_z$ closes even the hypothetical possibility for the dissociation of pairs of integer vortices to take place as a separate phase transition at $T>T_{\rm FV}$.
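The claim that $y$ and $z$ are never simultaneously irrelevant follows from the fact that $\lambda_y<0$ requires $K<1/(8\pi)$ while $\lambda_z<0$ requires $K>1/(2\pi)$; in fact $\max(\lambda_y,\lambda_z)\geq 1$ for all $K>0$. A minimal numerical sketch (the sample values of $K$ are arbitrary):

```python
import math

lam_y = lambda K: 2 - 1 / (4 * math.pi * K)  # relevance of step fugacity y
lam_z = lambda K: 2 - 4 * math.pi * K        # relevance of dislocation fugacity z

# at the roughening point K_c = 1/(8*pi), lambda_y vanishes while
# lambda_z takes the value 3/2 quoted in the text
K_c = 1 / (8 * math.pi)
assert abs(lam_y(K_c)) < 1e-12
assert abs(lam_z(K_c) - 1.5) < 1e-12

# for any K > 0 at least one eigenvalue is relevant; the maximum
# is minimized (and equal to 1) at K = 1/(4*pi)
for K in [K_c / 10, K_c, 1 / (4 * math.pi), 1 / (2 * math.pi), 1.0]:
    assert max(lam_y(K), lam_z(K)) >= 1 - 1e-12
```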
The case of the ferromagnetic NNN interaction ============================================= Taking into consideration the interaction with more distant neighbors leads to the removal of the accidental degeneracy and stabilizes the states with either ferromagnetic or antiferromagnetic ordering of the chiralities of triangular plaquettes. In the case of the ferromagnetic NNN interaction the energy is minimal in one of the $\sqrt{3}\times\sqrt{3}$ states with uniform staggered chirality \[Fig. 2(a)\]. Fig. 3 shows two examples of a domain wall separating two different $\sqrt{3}\times\sqrt{3}$ states with opposite signs of staggered chirality. In the case of only NN interaction such a domain wall (consisting of elementary links making angles of $2\pi/3$ with each other) simply costs no energy. The presence of a weak ferromagnetic NNN interaction makes the energy of such a domain wall (per elementary link) $E_{\rm DW}$ equal to $-3J_2>0$. Here and further on we are interested only in the case $|J_2|\ll J_1$, when the values of the variables $\varphi_{i}$ remain close to those shown in Fig. 3 not only away from the wall, but also in the vicinity of the wall. Note that such a wall can fluctuate (make turns, form kinks, etc.) without having to pay an energy proportional to $J_1$ and, therefore, naively one could expect that the temperature of the phase transition, related to the proliferation of such walls and leading to the destruction of the long-range order in staggered chirality, should be determined entirely by $J_2$. Such a conclusion, implicitly based on the comparison of the energy of an infinite domain wall with the negative entropic contribution to its free energy (the Peierls argument [@Pei]), does not take into account that the presence of an infinite domain wall also leads to suppression of the entropy (because it decreases the possibilities for the formation of closed domain wall loops) and in some cases does not work. In Sec.
II we have discussed the properties of the vector SOS model to which the antiferromagnetic $XY$ model with [*kagomé*]{} lattice is equivalent at $T=0$. In the partition function of this model all allowed configurations of heights are counted with the same weight. In the case of the analogous model with a finite positive energy of a step (which corresponds to $J_2<0$) the same partition function is reproduced in the limit of $T\rightarrow \infty$. For any small but finite ratio $-J_2/T>0$ the SOS model is shifted from the point of the roughening transition into the ordered (flat) phase. On the other hand we also know that at finite temperatures the possibility of dislocation creation tends to shift the same system into the disordered phase. One has to consider the competition of these two effects. In the vicinity of $K=K_c$ the renormalization equations (8) can be rewritten as $$\begin{aligned} \frac{dX}{dl} & = & {Y}^2 \eqnum{15a} \\ \frac{dY}{dl} & = & X{Y}+\alpha{Y}^2 , \eqnum{15b}\end{aligned}$$ where $$X=2\left(1-\frac{K_c}{K}\right),~~{Y}=\frac{1}{\pi \alpha},~~ \alpha=-\frac{1}{\sqrt{6}} .$$ The solution of Eqs. (15) for arbitrary $\alpha$ allows one to find that the critical behavior of the correlation radius $\xi$ in the vicinity of the transition is given [@Y] by $$\ln\xi\propto\left(\frac{K_c}{\Delta K}\right)^{\overline{\nu}} , \label{Xi}$$ where $\Delta K$ is the deviation from the phase transition. In our problem for $E_{\rm DW}\ll T$ the ratio $\Delta K/K_c$ is proportional to $E_{\rm DW}/T$. The case of $\alpha=0$ corresponds to the Kosterlitz renormalization group equations [@Kost] for the standard BKT transition, which give $\overline{\nu}=1/2$. The case of $\alpha=+1/\sqrt{6}$ has been considered by Nelson [@N], who has found $\overline{\nu}=2/5$. The solution of the same equations for $\alpha=-1/\sqrt{6}$ gives $\overline{\nu}=3/5$.
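The two regimes of Eqs. (15) can be illustrated by a crude Euler integration (a sketch only; the initial conditions and step size are arbitrary choices, not taken from the text): on the ordered side ($X<0$, small $Y$) the fugacity renormalizes to zero, while on the disordered side it grows under renormalization.

```python
def flow(X, Y, alpha=-1 / 6**0.5, dl=1e-3, steps=20000):
    """Euler integration of dX/dl = Y**2, dY/dl = X*Y + alpha*Y**2."""
    for _ in range(steps):
        X, Y = X + Y**2 * dl, Y + (X * Y + alpha * Y**2) * dl
    return X, Y

# ordered (flat) side of the separatrix: Y flows to zero
_, Y_flat = flow(X=-0.5, Y=0.1)
assert Y_flat < 1e-3

# disordered (rough) side: Y grows with the length scale
_, Y_rough = flow(X=0.2, Y=0.1, steps=5000)
assert Y_rough > 0.1
```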
If the fugacity of dislocations $z$ is so small that even when growing under renormalization it remains much smaller than one up to $L\sim\xi$ \[where the renormalization following Eqs. (15) stops anyway and fluctuations of ${\bf n}$ are frozen\], the system remains in the ordered (flat) phase of the SOS model, that is, in the phase with long-range order in staggered chirality. On the other hand, if $z_R$ (the renormalized value of $z$) manages to become of the order of one when the renormalized value of $y$ is still small, the system finds itself in the disordered phase. An estimate for the temperature $T_{\rm DW}$ of the phase transition separating these two regimes (and associated with the proliferation of domain walls) can be obtained from the relation $z_R(\xi)\sim 1$, which is equivalent to $$\ln(1/z)\sim\lambda^0_z\ln\xi.$$ In our problem $$\ln(1/z)= c_D J_1/T$$ and $$\ln\xi\sim\left(\frac{T}{E_{\rm DW}}\right)^{\overline{\nu}}, \label{LNX2}$$ which leads to $$T_{\rm DW}\sim\left(\frac{E_{\rm DW}}{J_1}\right)^ {\frac{\overline{\nu}}{1+\overline{\nu}}}J_1 \propto J_2^{3/8}J_1^{5/8} . \label{TDW1}$$ Away from the critical region the behavior of $\xi$ can be described with the help of the self-consistent harmonic approximation [@Sait], which gives $\overline{\nu}=\overline{\nu}_0=1$. That means that with the increase of $J_2/J_1$ the dependence (\[TDW1\]) is replaced by $T_{\rm DW}\propto(J_2 J_1)^{1/2}$. Note that the analysis which has led to Eq. (\[TDW1\]) has been based on the assumption that all fractional vortices are bound in pairs, and, accordingly, is valid only for $T_{\rm DW}<T_{\rm FV}$. On the other hand the pairs of fractional vortices cannot dissociate at temperatures lower than $T_{\rm DW}$, because for $T<T_{\rm DW}$ the fractional vortices, in addition to their logarithmic interaction, are also bound by domain walls (with a finite free energy per unit length) which connect them with each other.
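Eliminating $T$ between $\ln(1/z)=c_D J_1/T$ and Eq. (\[LNX2\]) gives $T^{1+\overline{\nu}}\sim J_1 E_{\rm DW}^{\overline{\nu}}$, that is, the exponent $\overline{\nu}/(1+\overline{\nu})=3/8$ for $\overline{\nu}=3/5$. The exponent arithmetic, together with the order-of-magnitude consequence of equating Eq. (\[TDW1\]) with Eq. (\[TFV\]) at face value (all unknown coefficients set to one, an assumption), can be checked as follows:

```python
from fractions import Fraction

def tdw_exponent(nu):
    """Exponent of E_DW/J1 in T_DW ~ (E_DW/J1)**(nu/(1+nu)) * J1."""
    return nu / (1 + nu)

assert tdw_exponent(Fraction(3, 5)) == Fraction(3, 8)  # critical region
assert tdw_exponent(Fraction(1, 1)) == Fraction(1, 2)  # harmonic regime

# setting T_DW = T_FV ~ 0.075*J1 with unit coefficients and inverting
# (|J2|/J1)**(3/8) = 0.075 reproduces the order of magnitude ~1e-3*J1
# quoted for J_max in the next section
J_max = 0.075 ** (8 / 3)
assert 5e-4 < J_max < 2e-3
```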
Therefore the two available possibilities are $T_{\rm DW}<T_{\rm FV}$ and a single phase transition, whereas the scenario with $T_{\rm DW}>T_{\rm FV}$ is impossible. Analogous conclusions have been reached earlier in relation to the hypothetical unbinding of fractional vortices in a planar antiferromagnet with triangular lattice [@KU]. It is hardly surprising that the same conclusions are valid for the system, the ground state of which is practically identical to that of the antiferromagnetic $XY$ model with triangular lattice, the only difference being that one quarter of the sites is absent. The proliferation of the low-energy domain walls \[of the type shown in Fig. 3(b)\] leads to intermixing of six different states \[which can be obtained from the state shown in Fig. 2(a) by an arbitrary permutation of A, B and C\], and therefore the corresponding transition should not be expected to be of the Ising type. Note that the domain walls are possible only between the states with different signs of staggered chirality. The six-state model with analogous statistics of domain walls can be defined [@KU] by the partition function $$Z_{\rm 6st}=\left(\prod_{{\bf R}}\sum_{t_{\bf R}=1}^6\right) \prod_{NN}W(t_{\bf R}-t_{\bf R'}) \label{6ST}$$ where $$W(t)=\left\{\begin{array}{lll} 1 & \mbox{ for } t=0 & \pmod{6} \\ w & \mbox{ for } t=1,3,5 & \pmod{6} \\ 0 & \mbox{ for } t=2,4 & \pmod{6} . \end{array}\right. \label{W}$$ The last line of Eq. (\[W\]) implies that a domain wall between the states with the same parity of $t_{\bf R}$ is impossible. Application of the duality transformation [@Dots] to the partition function (\[6ST\]) transforms it into an analogous partition function with $W(t)$ replaced by $$\widetilde{W}(t)=\left\{\begin{array}{lll} 1+3w & \mbox{ for } t=0 & \pmod{6} \\ {1} & \mbox{ for } t=1,2,4,5 & \pmod{6} \\ {1-3w} & \mbox{ for } t=3 & \pmod{6} .
\end{array}\right.$$ The symmetry of $\widetilde{W}(t)$ corresponds to the so-called cubic model, the phase transition in the six-state version of which for $\widetilde{W}(1)>\widetilde{W}(3)$ is known to be of the first order [@NRS]. The phase transition at $T_{\rm DW}$ in our system (at least when it happens at $T_{\rm DW}<T_{\rm FV}$) can therefore also be expected to be of the first order. Comparison of the estimate (\[TDW1\]) with Eq. (\[TFV\]) shows that the fulfillment of the relation $T_{\rm DW}<T_{\rm FV}$ requires $0<-J_2<J_{{\rm max}}$, where $J_{{\rm max}}$ can be estimated to be of the order of $10^{-3}J_1$. For $-J_2>J_{\rm max}$ there should be only one phase transition in the system, at which the proliferation of domain walls is accompanied by the unbinding of all types of vortices. A detailed description of how it happens still remains to be constructed, but when the dissociation of pairs of fractional vortices is forced by the disappearance of their linear interaction (mediated by the domain walls which connect them) at temperatures for which their logarithmic interaction is already too weak, one can expect the value of the helicity modulus at $T_c$ to be nonuniversal: $$\frac{2}{\pi}<\frac{\Gamma(T_c)}{T_c}<\frac{18}{\pi} \label{GTC}$$ Note that the estimate for $J_{{\rm max}}$ has been found by taking the estimate (\[TDW1\]) at face value, that is without the unknown numerical coefficient, and therefore should be considered with great caution. The case of the antiferromagnetic NNN interaction ================================================= For the antiferromagnetic sign ($J_2>0$) of the NNN interaction the minimum of the Hamiltonian (\[NNN\]) is achieved in one of the states with the ferromagnetic ordering of chiralities \[Fig. 2(b)\].
Such a state also allows for the construction of a domain wall, the energy of which (per elementary link) ${E}{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}$ is proportional to the strength of the second-neighbor coupling: $E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}\approx 3J_2$, see Fig. 7(a). However, comparison of Fig. 7(a) with Fig. 7(b) shows that (in contrast to the case considered in Sec. IV) for $J_2>0$ the form of the state on the other side of the wall is uniquely defined by the orientation of the wall and is different for different orientations of the wall. The discrepancy in $\varphi$ that appears when crossing domain walls of different orientations should be taken care of by the fractional vortices which have to appear at [*all*]{} corners of domain walls (the same happens in the fully-frustrated $XY$ model with square lattice [@Hals]). This makes it impossible to construct a closed domain wall whose energy is determined entirely by $J_2$ and does not depend on $J_1$. For $J_2\ll J_1$ a typical thermally excited defect (leading to the change of the sign of chirality) has the form of a long strip formed by two low-energy domain walls \[Fig. 7(c)\]. As in Fig. 6, the letter X designates the sites with $\varphi\approx(\varphi_{\rm A}+\varphi_{\rm B})/2$. In the vicinity of these sites the values of the other variables $\varphi_{i}$ slightly deviate from those shown in the figure. Analogous strip defects are dominant at low temperatures in the frustrated $XY$ model with triangular lattice and $f=1/4$ or $f=1/3$ [@KVB]. The energy of such a defect is given by $2E_0+2E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}L$, where $E_0=c_0 J_1$ ($c_0\approx 0.55$) is the energy of its termination point and $L$ its length. For $J_2\ll T\ll J_1$ the average length $\langle L\rangle$ of such defects is given by the ratio $T/(2E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW})\gg 1$, whereas their concentration $c$ is proportional to $\langle L\rangle\exp(-2E_0/T)$.
The relation $c\langle L\rangle^2\sim 1$ defines the temperature $$T_*\approx \frac{2}{3}\frac{E_0}{\ln(T_{*}/E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW})}$$ above which such defects can no longer be considered as independent. The same temperature can serve as a (lower-bound) estimate for the temperature $T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}$ of the phase transition associated with the proliferation of the domain walls and leading to the destruction of the long-range order in chirality. For $J_2/J_1\rightarrow 0$ $$T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}\sim T_*\propto J_1/\ln(J_1/J_2) \label{TDW2}$$ An analogous estimate can be obtained by comparison of the domain wall energy $E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}$ with its entropy due to the possibility of formation of kinks \[Fig. 7(d)\]. The energy of a kink $E_K$ is very close to $E_0$. The requirement $F{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}\equiv E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}-TS{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}=0$ [@Pei] gives $$T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}\sim E_K/\ln(2T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}/E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}) \label{TDW2'}$$ which is again an estimate from below. As in the previous case (of the antiferromagnetic ordering of chiralities) the proliferation of domain walls can take place as an independent phase transition only at temperatures lower than $T_{\rm FV}$. Comparison of Eq. (\[TDW2'\]) with Eq. (\[TFV\]) shows that the fulfillment of the relation $T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}<T_{\rm FV}$ requires $J_2<{J}{^{\raisebox{0.5mm}{\tiny +}}}_{\rm max}$, where $J{^{\raisebox{0.5mm}{\tiny +}}}_{\rm max}$ can be estimated as $(10^{-4}\div 10^{-5})J_1$. Also, as in the previous case, the proliferation of domain walls is related to the intermixing of six different states and therefore can hardly be expected to demonstrate Ising-type behavior.
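The quoted range for $J{^{\raisebox{0.5mm}{\tiny +}}}_{\rm max}$ can be reproduced by solving Eq. (\[TDW2'\]) at face value (a sketch; the unknown numerical coefficients are set to one): putting $T{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}=T_{\rm FV}$, $E_K\approx E_0\approx 0.55 J_1$ and $E{^{\raisebox{0.5mm}{\tiny +}}}_{\rm DW}=3J_2$ gives $J_2\sim 3\times 10^{-5}J_1$:

```python
import math

J1 = 1.0
T_FV = math.pi * math.sqrt(3) / 72 * J1   # ~0.0756*J1, Eq. (TFV)
E_K = 0.55 * J1                           # kink energy, E_K ~ E_0

# T+_DW ~ E_K / ln(2*T+_DW / E+_DW) with E+_DW = 3*J2; setting
# T+_DW = T_FV and solving ln(2*T_FV/(3*J2)) = E_K/T_FV for J2:
J2 = 2 * T_FV / 3 * math.exp(-E_K / T_FV)
assert 1e-5 < J2 / J1 < 1e-4   # within the quoted (1e-4 .. 1e-5)*J1 range
```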
Spin wave fluctuations ====================== Another mechanism for the removal of accidental degeneracy (which is traditionally referred to as “ordering due to disorder” [@VBCC]) is related to the free energy of continuous fluctuations (spin waves) [@Kaw]. Expansion of the Hamiltonian (\[NN\]) up to the second order in the deviations $\psi_{i}\equiv\varphi_{i}-\varphi_{i}^{(0)}$ of the variables $\varphi_{i}$ from their values $\varphi_{i}^{(0)}$ in some ground state gives the same answer $$H^{(2)}=\frac{J_1}{4}\sum_{NN}[-1+(\psi_{i}-\psi_{j})^2] \label{HE}$$ for all possible ground states, which means that the difference in the free energy between them can appear only in the second order in temperature [@HR]. That is believed to be insufficient for the stabilization of a true long-range order related to chiralities. This conclusion does not take into account the peculiarities of the statistical mechanics of the considered system and has to be corrected. With the help of a numerical calculation (see Appendix) we have found that the lowest order contribution to the effective interaction of the chiralities of neighboring triangular plaquettes is of the antiferromagnetic sign (that is, it favors the $\sqrt{3}\times\sqrt{3}$ state) and corresponds to $$E^{(0)}_{\rm DW}=\gamma\frac{T^2}{J_1} \label{SW}$$ where $\gamma\approx 2\cdot 10^{-3}$. Quantum and thermal anharmonic fluctuations in the Heisenberg [*kagomé*]{} antiferromagnet are also known to favor (at least locally) a planar state with the $\sqrt{3}\times\sqrt{3}$ structure [@Chub]. The same can be said about the fluctuations of the order parameter amplitude in superconducting wire networks [@PH]. Although $E^{(0)}_{\rm DW}$ defined by Eq. (\[SW\]) is always much smaller than the temperature and in the case of (for example) the Ising model would be insufficient for the appearance of long-range order, in the considered system the situation is qualitatively different. Substitution of Eq. (\[SW\]) into Eq.
(\[TDW1\]) gives a finite value of $T_{\rm DW}$ induced by spin wave fluctuations: $$T^{(0)}_{\rm DW}\sim \gamma^{3/2}J_1 \label{TDW0}$$ which means that for $\gamma\sim 1$ the long-range order in staggered chirality would survive even up to $T\sim J_1$. However, substitution of the numerically calculated value of $\gamma$ cited above produces an extremely low estimate: $T^{(0)}_{\rm DW}\sim 10^{-4}J_1$. Note that the ordering in staggered chirality is noticeable only at length scales larger than the correlation radius $\xi$. Substitution of Eq. (\[SW\]) into Eq. (\[LNX2\]) shows that for $T\ll T^{(0)}_{\rm DW}$ the behavior of $\xi(T)$ is given by $\ln\xi\propto(J_1/\gamma T)^{\overline{\nu}}$. That means that at $T\rightarrow 0$ a continuous reentrant phase transition takes place into the phase without true long-range order in staggered chirality. Conclusion ========== This work has been devoted to the investigation of the phase transitions in the antiferromagnetic $XY$ model on a [*kagomé*]{} lattice with special emphasis on an accurate consideration of the mutual influence between different classes of topological excitations (fractional vortices and domain walls). In particular, we have shown that in the model with only NN interaction the additional interaction of fractional vortices, related to the entropic contribution from zero-energy domain walls, becomes short-ranged at finite temperatures. Therefore it cannot interfere with the BKT dissociation of fractional vortex pairs proposed in Refs. and . For the case of a finite NNN coupling (leading to removal of the accidental degeneracy) we have demonstrated that the phase transition related to the proliferation of the domain walls can happen as a separate phase transition below $T_{\rm FV}$ only for a very weak NNN interaction, and have found how the temperature of this transition depends on $J_1$ and $J_2$. These dependences are essentially different for different signs of the NNN coupling.
The same results are also applicable for other mechanisms of removal of the accidental degeneracy which lead to a finite $E_{\rm DW}$. Note that our analysis has been restricted to the case $|J_2|\ll J_1$, so we have not considered the possibility of the domain wall proliferation happening above the temperature of the ordinary BKT transition, associated with the appearance of free integer vortices (as happens in the case of the triangular lattice [@KU; @LL; @new]). Our conclusions are compatible with the results of the numerical simulations of Geht and Bondarenko [@GB], who have found (for not too weak NNN interaction, $|J_2|\gtrsim 0.1J_1$) that the disordering of all degrees of freedom in the antiferromagnetic $XY$ model with [*kagomé*]{} lattice takes place at the same temperature, the singularities of the thermodynamic quantities being of the Ising type. Recently it has been shown [@new] (for the case of the triangular lattice) that when the domain wall proliferation happens as a continuous phase transition \[at $T/\Gamma(T)>2T_{\rm FV}/\Gamma(T_{\rm FV})$\], the dissociation of pairs of integer vortices has to take place at $T<T_{\rm DW}$. Since the same arguments are also applicable for a [*kagomé*]{} antiferromagnet with $J_2<0$, it may be of interest to check the results of Ref. [@GB] with better accuracy. The long-range order in staggered chirality is also favored by the spin-wave fluctuations. Our analysis suggests that the antiferromagnetic $XY$ model with [*kagomé*]{} lattice and only NN interaction presents a unique example of a model without a free parameter in which one of the phase transitions can be expected to happen at a dimensionless temperature of the order of $10^{-4}$.
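The "order of $10^{-4}$" figure follows directly from Eq. (\[TDW0\]) with the numerically computed $\gamma\approx 2\cdot 10^{-3}$ (a one-line back-of-the-envelope check):

```python
J1 = 1.0
gamma = 2.0e-3                # spin-wave coupling computed in the Appendix
T0_DW = gamma ** 1.5 * J1     # Eq. (TDW0): T_DW ~ gamma**(3/2) * J1
assert 5e-5 < T0_DW < 2e-4    # indeed of the order of 1e-4 * J1
```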
Therefore one can conclude that the numerical simulations of Rzchowski [@R] have demonstrated no evidence for the selection of a single ground state down to $T/J_1\approx 10^{-3}$ not because $E_{\rm DW}\propto T^2/J_1$ is not sufficient for that, but simply because the temperature was not low enough. Comparison with Eq. (\[TDW2\]) shows that if the effective interaction of chiralities, induced by the free energy of spin waves, were of the opposite sign, the long-range order in chirality would persist up to much higher temperatures. Experimentally the phase transitions discussed in this work can be observed in superconducting wire networks or Josephson junction arrays in an external magnetic field providing one half of the superconducting flux quantum $\phi_0=hc/2e$ per each triangular plaquette of a [*kagomé*]{} lattice. In such systems the removal of the accidental degeneracy is related to the magnetic interaction of the currents and a finite width of the wires [@PH]. The former of these mechanisms favors the ferromagnetic ordering of chiralities, whereas for the latter the effect depends on the width of the wires. A recent experimental investigation of an aluminum wire network with [*kagomé*]{} structure [@HXB] has demonstrated for $\phi=\phi_0/2$ the presence on the current-voltage curve of regions corresponding to different mechanisms of dissipation, one of which (with an algebraic behavior) can be associated with the unbinding of vortex pairs and the other with the spreading of domains with inverted chiralities [@MT]. The authors of Ref. [@HXB] have interpreted this as evidence for the presence of two phase transitions. Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported in part by the Program “Statistical Physics” of the Russian Ministry of Science, by the Program “Scientific Schools of the Russian Federation” (grant No.
00-15-96747), by the Swiss National Science Foundation, and by the Netherlands Organisation for Scientific Research (NWO) in the framework of the Russian-Dutch Cooperation Program. The author is grateful to A. V. Kashuba for useful discussions and to I. V. Andronov for assistance in the preparation of the figures. Appendix {#section .unnumbered} The lowest order contribution to the interaction of the chiralities of neighboring triangular plaquettes induced by the spin wave free energy appears when the partition function of the Hamiltonian (\[NN\]) is expanded up to the second order in $$H^{(3)}=\frac{J_1}{6}\sum_{NN}[\sin(\varphi_{i}-\varphi_{j})] (\psi_{i}-\psi_{j})^3, \label{H3}$$ and then is averaged with the help of $H^{(2)}$. The fourth-order term is the same for all the ground states and is therefore of no importance. The parameter $K_\sigma$ describing the effective interaction of the chiralities of neighboring triangular plaquettes ($a$ and $b$): $$E(\sigma_a,\sigma_b)=K_\sigma\sigma_a\sigma_b \label{A2}$$ can then be found by calculating the average of $$V=-\frac{H^{(3)}_a H^{(3)}_b}{T} \label{A3}$$ where $$H^{(3)}_a=\frac{\sin\frac{2\pi}{3}}{6}J_1 [(\psi_{1}-\psi_{0})^3+(\psi_{2}-\psi_{1})^3 +(\psi_{0}-\psi_{2})^3] \label{A4}$$ and the expression for $H^{(3)}_b$ can be obtained by replacing in Eq. (\[A4\]) $\psi_{1}$ by $\psi_{3}$ and $\psi_{2}$ by $\psi_{4}$. The indices from $0$ to $4$ are used here to denote the five sites belonging to a pair of neighboring triangular plaquettes as shown in Fig. 8. With the help of Wick’s theorem and symmetry arguments the average of $V$ can be reduced to the form $$\langle V\rangle=\frac{3J_1^2}{4T}(G_4^3-2G_4^2G_3+2G_4G_3^2-G_3^3) \label{A5}$$ where $$G_3 = g_{01}-\frac{1}{2}g_{13},~~ G_4 = g_{01}-\frac{1}{2}g_{14} \label{A7}$$ and $$g_{ij}\equiv\langle(\varphi_{i}-\varphi_{j})^2\rangle \label{A8}$$ describes the amplitude of fluctuations of $\varphi_{i}-\varphi_{j}$ calculated with the help of the harmonic Hamiltonian (\[HE\]).
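The reduction of $\langle V\rangle$ to the closed form for $\gamma$ can be cross-checked numerically, using the values of the correlators quoted just below ($g_{01}=T/J_1$, $g_{13,14}=(3/2\pm\delta)T/J_1$ with $\delta\approx 0.0213$); the cubic polynomial in Eq. (\[A5\]) indeed collapses to $K_\sigma=\gamma T^2/(2J_1)$ with $\gamma=(3/32)\delta(1+12\delta^2)$:

```python
delta = 0.0213  # value of the Brillouin-zone integrals quoted below

# correlators and the combinations G3, G4 of Eq. (A7), in units of T/J1
g01, g13, g14 = 1.0, 1.5 + delta, 1.5 - delta
G3, G4 = g01 - g13 / 2, g01 - g14 / 2

# <V> of Eq. (A5) in units of T**2/J1
V = 0.75 * (G4**3 - 2 * G4**2 * G3 + 2 * G4 * G3**2 - G3**3)

# closed form: gamma = (3/32)*delta*(1 + 12*delta**2)
gamma = (3 / 32) * delta * (1 + 12 * delta**2)
assert abs(V - gamma / 2) < 1e-12   # <V> equals K_sigma = gamma/2 here
assert abs(gamma - 2.01e-3) < 1e-5  # matches the value quoted in Eq. (A13)
```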
The value of $g_{ij}$ for the nearest neighbors ($g_{01}$) can be calculated exactly: $$g_{01}=\frac{T}{J_1} \label{A9}$$ whereas numerical calculation of the integrals over the Brillouin zone defining $g_{13}$ and $g_{14}$ gives $$g_{13}=\left(\frac{3}{2}+\delta\right)\frac{T}{J_1},~~ g_{14}=\left(\frac{3}{2}-\delta\right)\frac{T}{J_1}, \label{A11}$$ where $\delta\approx 0.0213$. Substitution of Eqs. (\[A9\])-(\[A11\]) into Eqs. (\[A5\])-(\[A7\]) then gives $$K_\sigma=\frac{\gamma T^2}{2J_1} \label{A12}$$ where $$\gamma=\frac{3}{32}\delta(1+12\delta^2)\approx 2.01\cdot 10^{-3} \label{A13}$$ which leads to Eq. (\[SW\]).
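The numerical value of $\gamma$ quoted above is easy to verify; the following minimal Python check (not part of the original derivation) reproduces it from the quoted $\delta$:

```python
# Check of gamma = (3/32) * delta * (1 + 12 * delta^2) from Eq. (A13),
# using the quoted value delta ~ 0.0213.
delta = 0.0213
gamma = (3 / 32) * delta * (1 + 12 * delta**2)
print(f"gamma = {gamma:.2e}")  # agrees with the quoted 2.01e-3
```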
---
author:
- |
  Antony Valentini\
  Augustus College
title: 'De Broglie-Bohm Pilot-Wave Theory: Many Worlds in Denial?'
---

Antony Valentini

*Centre de Physique Théorique, Campus de Luminy, Case 907, 13288 Marseille cedex 9, France*

and

*Theoretical Physics Group, Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2AZ, United Kingdom.[^1]*

email: a.valentini@imperial.ac.uk

We reply to claims (by Deutsch, Zeh, Brown and Wallace) that the pilot-wave theory of de Broglie and Bohm is really a many-worlds theory with a superfluous configuration appended to one of the worlds. Assuming that pilot-wave theory does contain an ontological pilot wave (a complex-valued field in configuration space), we show that such claims arise from not interpreting pilot-wave theory on its own terms. Specifically, the theory has its own (‘subquantum’) theory of measurement, and in general describes a ‘nonequilibrium’ state that violates the Born rule. Furthermore, in realistic models of the classical limit, one does not obtain localised pieces of an ontological pilot wave following alternative macroscopic trajectories: from a de Broglie-Bohm viewpoint, alternative trajectories are merely mathematical and not ontological. Thus, from the perspective of pilot-wave theory itself, many worlds are an illusion. It is further argued that, even leaving pilot-wave theory aside, the theory of many worlds is rooted in the intrinsically unlikely assumption that quantum measurements should be modelled on classical measurements, and is therefore unlikely to be true.

1 Introduction\
2 Ontology versus Mathematics\
3 Pilot-Wave Theory on its Own Terms\
4 Some Versions of the Claim\
5 ‘Microscopic’ Many Worlds?\
6 ‘Macroscopic’ Many Worlds?\
7 Further Remarks\
8 Counter-Claim: A General Argument Against Many Worlds\
9 Conclusion

To appear in: *Everett and his Critics*, eds. S. W. Saunders *et al*. (Oxford University Press, 2009).
Introduction
============

It used to be widely believed that the pilot-wave theory of de Broglie (1928) and Bohm (1952a,b) had been ruled out by experiments demonstrating violations of Bell’s inequality. Such misunderstandings have largely been overcome, and in recent times the theory has come to be widely accepted by physicists as an alternative (and explicitly nonlocal) formulation of quantum theory. Even so, some workers claim that pilot-wave theory is not really a physically distinct formulation of quantum theory, that instead it is actually a theory of Everettian many worlds. The principal aim of this paper is to refute that claim. We shall also end with a counter-claim, to the effect that Everett’s theory of many worlds is unlikely to be true, as it is rooted in an intrinsically unlikely assumption about measurement.

Pilot-wave theory is a first-order, nonclassical theory of dynamics, grounded in configuration space. It was first proposed by de Broglie, at the 1927 Solvay conference (Bacciagaluppi and Valentini 2009). From de Broglie’s dynamics, together with an assumption about initial conditions, it is possible to derive the full phenomenology of quantum theory, as was first shown by Bohm in 1952. In pilot-wave dynamics, a closed system with configuration $q$ has a wave function $\Psi(q,t)$ — a complex-valued field on configuration space obeying the Schrödinger equation $i\partial\Psi/\partial t=\hat{H}\Psi$. The system has an actual configuration $q(t)$ evolving in time, with a velocity $\dot{q}\equiv dq/dt$ determined by the gradient $\nabla S$ of the phase $S$ of $\Psi$ (for systems with standard Hamiltonians $\hat{H}$).[^2] In principle, the configuration $q$ includes all those things that we normally call ‘systems’ (particles, atoms, fields) as well as pieces of equipment, recording devices, experimenters, the environment, and so on.
Let us explicitly write down the dynamical equations for the case of a nonrelativistic many-body system, as they were given by de Broglie (1928). For $N$ spinless particles with positions $\mathbf{x}_{i}(t)$ and masses $m_{i}$ ($i=1,2,....,N$), in an external potential $V$, the total configuration $q=(\mathbf{x}_{1},\mathbf{x}_{2},....,\mathbf{x}_{N})$ evolves in accordance with the de Broglie guidance equation$$m_{i}\frac{d\mathbf{x}_{i}}{dt}=\mathbf{\nabla}_{i}S \label{geqn}$$ (where $\hslash=1$ and $\Psi=\left\vert \Psi\right\vert e^{iS}$), while the ‘pilot wave’ $\Psi$ (as it was originally called by de Broglie) satisfies the Schrödinger equation$$i\frac{\partial\Psi}{\partial t}=\sum_{i=1}^{N}-\frac{1}{2m_{i}}\nabla_{i}^{2}\Psi+V\Psi\ . \label{Seqn}$$ Mathematically, these two equations define de Broglie’s dynamics — just as, for example, Maxwell’s equations and the Lorentz force law may be said to define classical electrodynamics. The theory was revived by Bohm in 1952, though in a pseudo-Newtonian form. Bohm regarded the equation$$m_{i}\frac{d^{2}\mathbf{x}_{i}}{dt^{2}}=-\mathbf{\nabla}_{i}(V+Q) \label{neqn}$$ as the true law of motion, with a ‘quantum potential’$$Q\equiv-\sum_{i=1}^{N}\frac{1}{2m_{i}}\frac{\nabla_{i}^{2}\left\vert \Psi\right\vert }{\left\vert \Psi\right\vert }$$ acting on the particles. (Taking the time derivative of (\[geqn\]) and using (\[Seqn\]) yields (\[neqn\]).) On Bohm’s view, (\[geqn\]) is not a law of motion but a condition $\mathbf{p}_{i}=\mathbf{\nabla}_{i}S$ on the initial momenta — a condition that happens to be preserved in time by (\[neqn\]), and which could in principle be relaxed (leading to corrections to quantum theory) (Bohm 1952a, pp. 170–71). One should therefore distinguish between de Broglie’s first-order dynamics of 1927, defined by (\[geqn\]) and (\[Seqn\]), and Bohm’s second-order dynamics of 1952, defined by (\[neqn\]) and (\[Seqn\]). 
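The parenthetical statement above, that differentiating (\[geqn\]) and using (\[Seqn\]) yields (\[neqn\]), is a standard calculation, sketched here for completeness. Writing $\Psi=\left\vert \Psi\right\vert e^{iS}$, the real part of (\[Seqn\]) gives the quantum Hamilton-Jacobi equation$$\frac{\partial S}{\partial t}+\sum_{i=1}^{N}\frac{(\mathbf{\nabla}_{i}S)^{2}}{2m_{i}}+V+Q=0\ .$$ Taking $\mathbf{\nabla}_{i}$ of this equation, and evaluating the total time derivative of $\mathbf{\nabla}_{i}S$ along a trajectory obeying (\[geqn\]), one finds$$m_{i}\frac{d^{2}\mathbf{x}_{i}}{dt^{2}}=\frac{d}{dt}\mathbf{\nabla}_{i}S=\frac{\partial(\mathbf{\nabla}_{i}S)}{\partial t}+\sum_{j=1}^{N}\frac{1}{m_{j}}(\mathbf{\nabla}_{j}S\cdot\mathbf{\nabla}_{j})\mathbf{\nabla}_{i}S=-\mathbf{\nabla}_{i}(V+Q)\ ,$$ which is (\[neqn\]).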
In particular, Bohm’s rewriting of de Broglie’s theory had the unfortunate effect of making it seem much more like classical physics than it really was. De Broglie’s original intention had been to depart from classical dynamics at a fundamental level, and indeed the resulting theory is highly non-Newtonian. As we shall see, it is crucial to avoid making classical assumptions when interpreting the theory. Over an ensemble of quantum experiments, beginning at time $t=0$ with the same initial wave function $\Psi(q,0)$ and with a Born-rule or ‘quantum equilibrium’ distribution$$P(q,0)=\left\vert \Psi(q,0)\right\vert ^{2} \label{Br}$$ of initial configurations $q(0)$, it follows from de Broglie’s dynamics that the distribution of final outcomes is given by the usual Born rule (Bohm 1952a,b). On the other hand, for an ensemble with a ‘quantum nonequilibrium’ distribution$$P(q,0)\neq\left\vert \Psi(q,0)\right\vert ^{2}\ , \label{notBr}$$ in general one obtains a distribution of final outcomes that *disagrees* with quantum theory (for as long as $P$ has not yet relaxed to $|\Psi|^{2}$, see below) (Valentini 1991a,b, 1992, 1996, 2001, 2002, 2004a; Pearle and Valentini 2006). The initial distribution (\[Br\]) was assumed by both de Broglie and Bohm, and subsequently most workers have regarded it as one of the axioms of the theory. As we shall see, this is a serious mistake that has led to numerous misunderstandings, and is partially responsible for the erroneous claim that pilot-wave theory is really a theory of many worlds. We shall not attempt to provide an overall assessment of the relative merits of de Broglie-Bohm pilot-wave theory and Everettian many-worlds theory. 
Instead, here we focus on evaluating the following claim — hereafter referred to as ‘the Claim’ — which has appeared in one form or another in several places in the literature (Deutsch 1996, Zeh 1999, Brown and Wallace 2005) (author’s paraphrase):

- Claim: *If one takes pilot-wave theory seriously as a possible theory of the world, and if one thinks about it properly and carefully, one ought to see that it really contains many worlds — with a superfluous configuration* $q$ *appended to one of those worlds*.

Were the Claim correct, one could reasonably add a corollary to the effect that one should then drop the superfluous configuration $q$, and arrive at (some form of) many-worlds theory. Deutsch’s way of expressing the Claim has inspired the title of this paper (Deutsch 1996, p. 225):

> In short, pilot-wave theories are parallel-universes theories in a state of chronic denial.

We should emphasise that here we shall interpret pilot-wave theory (for a given closed system) as containing an ontological — that is, physically real — complex-valued field $\Psi(q,t)$ on configuration space, where this field drives the motion of an actual configuration $q(t)$. The Claim asserts that, if the theory is regarded in these terms, then proper consideration shows that $\Psi$ contains many worlds, with $q$ amounting to a superfluous appendage to one of the worlds. One might try to side-step the Claim by asserting that $\Psi$ has no ontological status in pilot-wave theory, that it merely provides a mathematical account of the motion $q(t)$. In this case, one could not even begin to make the Claim, for the complete ontology would be defined by the configuration $q$. For all we currently know, this view might turn out to be true in some future derivation of pilot-wave theory from a deeper theory. But in pilot-wave theory as we know it today — the subject of this paper — such a view seems implausible and physically unsatisfactory (see below).
In any case, even if only for the sake of argument, let us here assume that the pilot wave $\Psi$ is ontological, and let us show how the Claim may still be refuted. It will be helpful first to review the distinction between ontological and mathematical structure in current physical theory, and then to give a brief overview of pilot-wave theory interpreted on its own terms. Generally speaking, theories should be evaluated on their own terms, *without assumptions that make sense only in rival theories*. We shall see that, in essence, the Claim in fact arises from not interpreting and understanding pilot-wave theory on its own terms.

Ontology versus Mathematics
===========================

Physics provides many examples of the distinction between ontological and mathematical structure. Let us consider three.

\(1) *Classical mechanics*. This may be formulated in terms of a Hamiltonian trajectory $(q(t),p(t))$ in phase space. For a given individual system, there is only one real trajectory. The other trajectories, corresponding to alternative initial conditions $(q(0),p(0))$, have a purely mathematical existence. Similarly, in the Hamilton-Jacobi formulation, the Hamilton-Jacobi function $S(q,t)$ is associated with a whole family of trajectories (with $\dot{q}$ determined by $\nabla S$), only one of which is realised.

\(2) *A test particle in an external field*. This provides a particularly good parallel with pilot-wave theory. A charged test particle, placed in an external electromagnetic field $\mathbf{E}(\mathbf{x},t)$, $\mathbf{B}(\mathbf{x},t)$, will follow a trajectory $\mathbf{x}(t)$. One would normally say that the field is real, and that the realised particle trajectory is real; while the alternative particle trajectories (associated with alternative initial positions $\mathbf{x}(0)$) are not real, even if they might be said to be contained in the mathematical structure of the electromagnetic field.
Similarly, if a test particle moves along a geodesic in a background spacetime geometry, one can think of the geometry as ontological, and the mathematical structure of the geometry contains alternative geodesic motions — but again, only one particle trajectory is realised, and the other geodesics have a purely mathematical existence. \(3) *A classical vibrating string*. Consider a string held fixed at the endpoints, $x=0$, $L$. (This example will also prove relevant to the quantum case.) A small vertical displacement $\psi(x,t)$ obeys the partial differential equation$$\frac{\partial^{2}\psi}{\partial t^{2}}=\frac{\partial^{2}\psi}{\partial x^{2}}$$ (setting the wave speed $c=1$). This is conveniently solved using the standard methods of linear functional analysis. One may define a Hilbert space of functions $\psi$, with a Hermitian operator $\hat{\Omega}=-\partial ^{2}/\partial x^{2}$ acting thereon. Solutions of the wave equation may then be expanded in terms of a complete set of eigenfunctions $\phi_{m}(x)=\sqrt{2/L}\sin\left( m\pi x/L\right) $, where $\hat{\Omega}\phi _{m}=\omega_{m}^{2}\phi_{m}$ with $\omega_{m}^{2}=\left( m\pi/L\right) ^{2}$ ($m=1,2,3,....$). Assuming for simplicity that $\dot{\psi}(x,0)=0$, we have the general solution$$\psi(x,t)=\sum_{m=1}^{\infty}c_{m}\phi_{m}(x)\cos\omega_{m}t\ \ \ \ \ \left( c_{m}\equiv\int_{0}^{L}dx\ \phi_{m}(x)\psi(x,0)\right)$$ or (in bra-ket vector notation)$$\left\vert \psi(t)\right\rangle =\sum_{m=1}^{\infty}\left\vert m\right\rangle \langle m\left\vert \psi(0)\right\rangle \cos\omega_{m}t$$ (where $\hat{\Omega}\left\vert m\right\rangle =\omega_{m}^{2}\left\vert m\right\rangle $). Any solution may be written as a superposition of oscillating ‘modes’. Even so, the true ontology consists essentially of the total displacement $\psi(x,t)$ of the string (perhaps also including its velocity and energy). Individual modes in the sum would not normally be regarded as physically real. 
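To make the mode expansion concrete, here is a minimal numerical sketch (the plucked triangular initial profile and all numerical values are illustrative assumptions, not taken from the text); truncating the sum at $M$ modes reproduces the initial displacement to good accuracy:

```python
import numpy as np

# Mode expansion of the vibrating string (wave speed c = 1):
#   psi(x,t) = sum_m c_m * phi_m(x) * cos(omega_m * t),
#   phi_m(x) = sqrt(2/L) * sin(m*pi*x/L),  omega_m = m*pi/L.
L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
psi0 = np.where(x < 0.3, x / 0.3, (L - x) / (L - 0.3))  # illustrative pluck at x = 0.3

def phi(m):
    return np.sqrt(2.0 / L) * np.sin(m * np.pi * x / L)

M = 200  # number of modes kept in the truncated sum
c = np.array([np.sum(phi(m) * psi0) * dx for m in range(1, M + 1)])

def psi(t):
    return sum(c[m - 1] * phi(m) * np.cos(m * np.pi / L * t) for m in range(1, M + 1))

# At t = 0 the truncated sum should reproduce the initial displacement.
err = np.max(np.abs(psi(0.0) - psi0))
print("max reconstruction error at t = 0:", err)
```

Each coefficient $c_m$ is simply the overlap of the initial shape with one eigenfunction; nothing in the construction suggests that the individual modes exist as separate strings.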
One would certainly not assert that $\psi$ is composed of an ontological multiplicity of strings, with each string vibrating in a single mode. Instead one would say that, in general, the eigenfunctions and eigenvalues have a mathematical significance only. All this is not to say that the question of ontology in physical theories is trivial or always obvious. On the contrary, it is not always self-evident whether mathematical objects in our physical theories should be assigned ontological status or not. For example, classical electrodynamics may be viewed in terms of a field theory (with an ontological electromagnetic field), or in terms of direct action-at-a-distance between charges (where the electromagnetic field is merely an auxiliary field, if it appears at all). Most physicists today prefer the first view, probably because the field seems to contain a lot of independent and contingent structure (see below). The question to be addressed here is: in the pilot-wave theory of de Broglie and Bohm, if one regards the pilot wave $\Psi$ as ontological (which seems the most natural view at present), does this amount to an ontology of many worlds?

Pilot-Wave Theory on its Own Terms
==================================

In the author’s view, pilot-wave theory continues to be widely misinterpreted and misrepresented, even by some of its keenest supporters. Here, for illustration, we confine ourselves to de Broglie’s original dynamics for a system of nonrelativistic (and spinless) particles, defined by (\[geqn\]) and (\[Seqn\]).

*Basic History*

Let us begin by setting the historical record straight,[^3] as historical arguments sometimes play a role in evaluations of pilot-wave theory. Pilot-wave dynamics was constructed by de Broglie in the period 1923–27. His motivations were grounded in experiment. He wished to explain the quantisation of atomic energy levels and the interference or diffraction of single photons.
To this end, he proposed a unification of the physics of particles with the physics of waves. De Broglie argued that Newton’s first law of motion had to be abandoned, because a particle diffracted by a screen does not touch the screen and yet does not move in a straight line. During 1923–24, de Broglie then proposed a new, non-Newtonian form of dynamics in which the *velocity* of a particle is determined by the phase of a guiding wave. As a theoretical guide, de Broglie sought to unify the classical variational principles of Maupertuis ($\delta\int m\mathbf{v}\cdot d\mathbf{x}=0$, for a particle with velocity $\mathbf{v}$) and of Fermat ($\delta\int dS=0$, for a wave with phase $S$). The result was the guidance equation (\[geqn\]) (at first applied to a single particle and later generalised), which de Broglie regarded as the basis of a new form of dynamics. At the end of a rather complicated development in the period 1925–27 (including a crucial contribution by Schrödinger, who found the correct wave equation for de Broglie’s waves), de Broglie proposed the many-body dynamics defined by (\[geqn\]) and (\[Seqn\]). De Broglie regarded his theory as provisional, much as Newton regarded his own theory of gravity as provisional. And de Broglie regarded the observation of electron diffraction, by Davisson and Germer in 1927, as a vindication of his prediction (first made in 1923), and as clear evidence for his new (first-order) dynamics of particle motion. Clearly, de Broglie’s construction of pilot-wave dynamics was motivated by experimental puzzles and had its own internal logic. Note in particular that de Broglie did not construct his theory to ‘solve the measurement problem’, nor did he construct it to provide a (deterministic or realistic) ‘completion of quantum theory’: for in 1923, there was no measurement problem and there was no quantum theory. 
Getting the history right is important, for its own sake and also because some criticisms of pilot-wave theory are based on a mistaken appraisal of history. For example, Deutsch (1986, pp. 102–103) has said the following about the theory:

> .... to append to the quantum formalism an additional structure .... solely for the purpose of interpretation, is I think a very dangerous thing to do in physics. These structures are being introduced solely to solve the interpretational problems, without any physical motivation. .... the chances of a theory which was formulated for such a reason being right are extremely remote.

But there is no sense in which de Broglie ‘appended’ something to quantum theory, for quantum theory did not exist yet. And de Broglie had ample physical motivation, grounded in experimental puzzles and in a compelling analogy between the principles of Maupertuis and Fermat. A proper historical account also undermines discussions in which pilot-wave theory is presented as being motivated by the desire to ‘solve the measurement problem’. For example, Brown and Wallace (2005) — who discuss Bohm’s motivations but ignore de Broglie’s — argue that many-worlds theory provides a more natural solution to the measurement problem than does pilot-wave theory. The discussion is framed as if the measurement problem were the prime motivation for considering pilot-wave theory in the first place. As a matter of historical fact, this is false. The widespread misleading historical perspective has been exacerbated by some workers who present de Broglie’s 1927 dynamics as a way to ‘complete’ quantum theory by adding trajectories to the wave function (Dürr *et al*. 1992, 1996), an approach that furthers the mistaken impression that the theory is a belated reformulation of an already-existing theory.
Matters are further confused by some workers who refer to de Broglie’s first-order dynamics by the misnomer ‘Bohmian mechanics’, a term that should properly be applied to Bohm’s second-order dynamics. De Broglie’s dynamics pre-dates quantum theory; and it was given in final form in 1927, not as an after-thought (or reformulation of quantum theory) in 1952. We may then leave aside certain spurious objections that are grounded in a mistaken version of historical events. In the author’s view, the proper way to pose the question addressed in this paper is: *given* de Broglie’s dynamics (as it was in 1927), if we examine it carefully on its own terms, does it turn out to contain many worlds?

*Basic Ontology*

As stated in the introduction, we regard the theory as having a dual ontology: the configuration $q(t)$ together with the pilot wave $\Psi\lbrack q,t]$. We need to give the relation between this ontology and what we normally think of as physical reality. De Broglie constructed the theory as a new dynamics of particles: specifically, the basic constituents of matter and radiation (as understood at the time). It is then natural to assume that physical systems, apparatus, people, and so on, are ‘built from’ the configuration $q$. (In extensions of the theory, $q$ may of course include configurations of fields, the geometry of 3-space, strings, or whatever may be thought of as the modern fundamental constituents. Further, macroscopic systems — such as experimenters — will usually supervene on $q$ under some coarse-graining.) This view has been explicitly stated in the literature by several workers — for example Bell (1987, p. 128), Valentini (1992, p. 26), Holland (1993, pp. 337, 350), and others — though perhaps it is not clearly stated in some of the de Broglie-Bohm literature (as Brown and Wallace (2005) suggest). In any case, we shall take this to be the correct and natural viewpoint.
That $\Psi$ is also to be regarded as ontological is often not explicitly stated. A notable exception was Bell (1987, p. 128, original italics):

> .... the wave is supposed to be just as ‘real’ and ‘objective’ as say the fields of classical Maxwell theory .... . *No one can understand this theory until he is willing to think of* $\psi$ *as a real objective field .... . Even though it propagates not in 3-space but in 3N-space*.

Could $\Psi$ instead be regarded as ‘fictitious’, that is, as a merely mathematical field appearing in the law of motion for $q$? As already mentioned, this does not seem reasonable, at least not for the theory in its present form, where — like the electromagnetic field — $\Psi$ contains a lot of independent and contingent structure, and is therefore best regarded as part of the state of the world (Valentini 1992, p. 17; Brown and Wallace 2005, p. 532). Valentini (1992, p. 13) considered the possibility that $\Psi$ might merely provide a convenient mathematical summary of the motion $q(t)$; to this end, he drew an analogy between $\Psi$ and physical laws such as Maxwell’s equations, which also provide a convenient mathematical summary of the behaviour of physical systems. On this view, ‘the world consists purely of the evolving variables $X(t)$, whose time evolution may be summarised mathematically by $\Psi$’ (*ibid*., p. 13). But Valentini argued further (p. 17) that such a view did not do justice to the physical information stored in $\Psi$, and he concluded instead that $\Psi$ was a new kind of causal agent acting in configuration space (a view that the author still takes today). The former view, that $\Psi$ is law-like, was adopted by Dürr *et al*. (1997).[^4] They proposed further that the time dependence and contingency of $\Psi$ — properties that argue for it to be ontological (see Brown and Wallace 2005, p. 532) — may be illusions, as the wave function for the whole universe is (so they claim) expected to be static and unique.
However, the present situation in quantum gravity indicates that solutions for $\Psi$ (satisfying the Wheeler-DeWitt equation and other constraints) are far from unique, and display the same kind of contingency (for example in cosmological models) that we are used to for quantum states elsewhere in physics (Rovelli 2004). Should the universal wave function be static — and the notorious ‘problem of time’ in quantum gravity urges caution here — this alone is not enough to establish that it should be law-like: contingency, or under-determination by physical law, is the more important feature.[^5] Therefore, current theoretical evidence speaks against the idea. And in any case, our task here is to consider the theory we have now, not ideas for theories that we may have in the future: in the present form of pilot-wave theory, the time-dependence and (especially) the contingency of $\Psi$ makes it best regarded as ontological. Note that in 1927 de Broglie regarded $\Psi$ as providing — as a temporary measure — a mathematically convenient and phenomenological summary of motions generated from a deeper theory, in which particles were singular regions of 3-space waves (Bacciagaluppi and Valentini 2009, section 2.3.2). De Broglie hoped the theory would later be derived from something deeper (as Newton believed of gravitational attraction at a distance). Should this eventually happen, ontological questions will have to be addressed anew. Alternatively, perhaps de Broglie’s ‘deeper theory’ (the theory of the double solution) should be regarded merely as a conceptual scaffolding which he used to arrive at pilot-wave theory, and the scaffolding should now be forgotten.[^6] But in any case, the theory has come to be regarded as a theory in its own right, and the question at hand is whether *this* theory contains many worlds or not. 
*Equilibrium and Nonequilibrium*

Many workers take the quantum equilibrium distribution (\[Br\]) as an axiom, alongside the laws of motion (\[geqn\]) and (\[Seqn\]). It has been argued at length that this is incorrect and deeply misleading (Valentini 1991a,b, 1992, 1996, 2001, 2002; Valentini and Westman 2005; Pearle and Valentini 2006). A postulate concerning the distribution of initial conditions has no fundamental status in a theory of dynamics. Instead, quantum equilibrium is to pilot-wave dynamics as thermal equilibrium is to classical dynamics. In both cases, equilibrium may be understood as arising from a process of relaxation. And in both cases, the equilibrium distributions are mere contingencies, not laws: the underlying theories allow for more general distributions, which violate quantum physics in the first case and thermal-equilibrium physics in the second. Taken on its own terms, then, pilot-wave theory is *not* a mere alternative formulation of quantum theory. Instead, the theory itself tells us that quantum physics is a special case of a much wider ‘nonequilibrium’ physics (with $P\neq|\Psi|^{2}$), which may exist for example in the early inflationary universe, or for relic particles that decoupled soon after the big bang, or for particles emitted by black holes (Valentini 2004b, 2007, 2008a,b).

*True (Subquantum) Measurements*

The wider physics of nonequilibrium has its own theory of measurement — ‘subquantum measurement’ (Valentini 1992, 2002; Pearle and Valentini 2006). This is to be expected, since measurement is theory-laden: given a (perhaps tentative) theory, one should look to the theory itself to tell us how to perform correct measurements (cf. section 8). In pilot-wave theory, an ‘ideal subquantum measurement’ (analogous to the ideal, non-disturbing measurement familiar from classical physics) enables an experimenter to measure a de Broglie-Bohm system trajectory without disturbing the wave function.
This is possible if the experimenter possesses an apparatus whose ‘pointer’ has an arbitrarily narrow nonequilibrium distribution (Valentini 2002, Pearle and Valentini 2006). Essentially, the system and apparatus are allowed to interact so weakly that the joint wave function hardly changes; yet, the displacement of the pointer contains information about the system configuration, information that is visible if the pointer distribution is sufficiently narrow. A sequence of such operations allows the experimenter to determine the system trajectory without disturbing the wave function, to arbitrary accuracy.

*Generally False Quantum ‘Measurements’ (Formal Analogues of Classical Measurements)*

We are currently unable to perform such true measurements, because we are trapped in a state of quantum equilibrium. Instead, today we generally carry out procedures that are known as ‘quantum measurements’. This terminology is misleading, because such procedures are — at least according to pilot-wave theory — generally *not* correct measurements: they are merely experiments of a certain kind, designed to respect a formal analogy with *classical* measurements (cf. Valentini 1996, pp. 50–51). Thus, in classical physics, to measure a system variable $\omega$ using an apparatus pointer $y$, Hamilton’s equations tell us that we should switch on a Hamiltonian $H=a\omega p_{y}$ (where $a$ is a coupling constant and $p_{y}$ is the momentum conjugate to $y$). One obtains trajectories $\omega(t)=\omega_{0}$ and $y(t)=y_{0}+a\omega_{0}t$. From the displacement $a\omega_{0}t$ of the pointer, one may infer the value of $\omega_{0}$. An experimental operation represented by $H=a\omega p_{y}$ then indeed realises a correct measurement of $\omega$ (according to classical physics). But there is no reason to expect the same experimental operation to constitute a correct measurement of $\omega$ for a nonclassical system.
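The classical pointer dynamics just described can be simulated in a few lines (a minimal sketch; the values of $a$, $\omega_{0}$ and the time step are illustrative assumptions):

```python
# Classical "measurement" with H = a * omega * p_y. Hamilton's equations give
#   d(omega)/dt = 0 (omega is conserved) and dy/dt = a * omega,
# so y(t) = y0 + a * omega0 * t, and omega0 can be read off the pointer shift.
a = 0.5          # coupling constant (illustrative)
omega0 = 1.7     # system variable to be measured (illustrative)
y0 = 0.0
dt, steps = 1e-4, 10_000   # integrate up to t = 1.0

omega, y = omega0, y0
for _ in range(steps):     # forward Euler; exact here, since the flow is linear
    y += a * omega * dt    # dy/dt = a * omega, and omega itself never changes

t = dt * steps
omega_inferred = (y - y0) / (a * t)
print(f"true omega0 = {omega0}, inferred from pointer = {omega_inferred:.6f}")
```

The inference step recovers $\omega_{0}$ from the pointer displacement, which is exactly what licenses calling this procedure a measurement in the classical case.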
Even so, remarkably, so-called quantum ‘measurements’ are in general designed using classical measurements as a guide. Specifically, in quantum theory, to measure an observable $\omega$ using an apparatus pointer $y$, one switches on a Hamiltonian operator $\hat{H}=a\hat{\omega}\hat{p}_{y}$. The quantum procedure is obtained, in effect, by ‘quantising’ the classical procedure. But what does this analogous quantum procedure actually accomplish? According to pilot-wave theory, it merely generates a branching of the total wave function, with branches labelled by eigenvalues $\omega_{n}$ of the linear operator $\hat{\omega}$, and with the total configuration $q(t)$ ending in the support of one of the (non-overlapping) branches. Thus, for example, if the system is a particle with position $x$, the initial wave function$$\Psi_{0}(x,y)=\left( \sum_{n}c_{n}\phi_{n}(x)\right) g_{0}(y)$$ (where $\hat{\omega}\phi_{n}=\omega_{n}\phi_{n}$ and $g_{0}$ is the initial (narrow) pointer wave function) evolves into$$\Psi(x,y,t)=\sum_{n}c_{n}\phi_{n}(x)g_{0}(y-a\omega_{n}t)\ .$$ The effect of the experiment is simply to create this branching.[^7] From a pilot-wave perspective, the eigenvalues $\omega_{n}$ have no particular ontological status: we simply have a complex-valued field $\Psi$ on configuration space, obeying a linear wave equation, whose time evolution may in some situations be conveniently analysed using the methods of linear functional analysis (as we saw for the classical vibrating string). It cannot be sufficiently stressed that, generally speaking, by means of this procedure one has *not measured anything* (so pilot-wave theory tells us). In quantum theory, if the pointer is found to occupy the $n$th branch, it is common to assert that therefore ‘the observable $\omega$ has the value $\omega_{n}$’. 
But in pilot-wave theory, all that has happened is that, at the end of the experiment, the system trajectory $x(t)$ is guided by the (effectively) reduced wave function $\phi_{n}(x)$.[^8] This does not usually imply that the system has or had some property with value $\omega_{n}$ (at the end of the experiment or at the beginning), because in pilot-wave theory there is no general relation between eigenvalues and ontology.[^9] Thus, a so-called ‘ideal quantum measurement of $\omega$’ is not a true measurement (a notable exception being the case $\omega=x$). And in general, it is usually incorrect to identify eigenvalues with values of real physical quantities: one must beware of ‘eigenvalue realism’. Some Examples of the Claim ========================== Before evaluating the Claim, let us quote some examples of it from the literature. First, Deutsch (1996, p. 225) argues that parallel universes are > .... a logical consequence of Bohm’s ‘pilot-wave’ theory (Bohm \[1952\]) and its variants (Bell \[1986\]). .... The idea is that the ‘pilot-wave’ .... guides Bohm’s single universe along its trajectory. This trajectory occupies one of the ‘grooves’ in that immensely complicated multidimensional wave function. The question that pilot-wave theorists must therefore address, and over which they invariably equivocate, is what are the *unoccupied* grooves? It is no good saying that they are merely a theoretical construct and do not exist physically, for they continually jostle both each other and the ‘occupied’ groove, affecting its trajectory (Tipler \[1987\], p. 189). .... So the ‘unoccupied grooves’ must be physically real. Moreover they obey the same laws of physics as the ‘occupied groove’ that is supposed to be ‘the’ universe. But that is just another way of saying that they are universes too. .... In short, pilot-wave theories are parallel-universes theories in a state of chronic denial. Zeh (1999, p. 
200) puts the matter thus: > It is usually overlooked that Bohm’s theory contains the *same* ‘many worlds’ of dynamically separate branches as the Everett interpretation (now regarded as ‘empty’ wave components), since it is based on precisely the same (‘*absolutely* real’) global wave function .... . Only the ‘occupied’ wave packet itself is thus meaningful, while the assumed classical trajectory would merely point at it: ‘This is where *we* are in the quantum world.’ Similarly, Brown and Wallace (2005, p. 527) write the following: > .... the corpuscle’s role is minimal indeed: it is in danger of being relegated to the role of a mere epiphenomenal ‘pointer’, irrelevantly picking out one of the many branches defined by decoherence, while the real story — dynamically and ontologically — is being told by the unfolding evolution of those branches. The ‘empty wavepackets’ in the configuration space which the corpuscles do not point at are none the worse for its absence: they still contain cells, dust motes, cats, people, wars and the like. In the case of Zeh, and of Brown and Wallace, the key assertion is that pilot-wave theory and many-worlds theory contain the same multitude of wave-function branches, and that in pilot-wave theory the ‘empty’ branches nevertheless constitute parallel worlds (which ‘still contain cells, dust motes, cats, people, wars and the like’). Deutsch’s argument leads to the same assertion — if one interprets his word ‘grooves’ to mean what are normally called ‘branches’. However, Deutsch may in fact have used ‘grooves’ to mean the set of de Broglie-Bohm trajectories, in which case his version of the Claim states that pilot-wave theory is really a theory of ‘many de Broglie-Bohm worlds’.[^10] (This version of the Claim is addressed in section 7.) In any case, in essence Deutsch argues that the unoccupied grooves are real, and that they ‘obey the same laws of physics’ as the occupied groove, thereby constituting a ‘multiverse’. 
Today, it is often said that in Everettian quantum theory the notion of parallel ‘worlds’ or ‘universes’ applies only to the macroscopic worlds defined (approximately) by decoherence. Formerly, it was common to assert the existence of many worlds at the microscopic level as well. Without entering into any controversy that might still remain about this, here for completeness we shall address the Claim for both ‘microscopic’ and ‘macroscopic’ cases. ‘Microscopic’ Many Worlds? ========================== In pilot-wave theory, is there a multiplicity of parallel worlds at the microscopic level? To see that there is not, let us consider some examples. \(1) *Superposition of eigenvalues*. Let a single particle moving in one dimension have the wave function $\psi(x,t)\propto e^{-iEt}\left( e^{ipx}+e^{-ipx}\right) $, which is a mathematical superposition of two distinct eigenfunctions of the momentum operator $\hat{p}=-i\partial/\partial x$. Are there in any sense two particles, with two different momenta $+p$ and $-p$? Clearly not. While the field $\psi\propto\cos px$ has two Fourier components $e^{ipx}$ and $e^{-ipx}$, there is only one single-valued field $\psi$ (as in our example of the classical vibrating string). And a true (subquantum) measurement of the particle trajectory $x(t)$ would reveal that the particle is at rest (since $S=-Et$ and $\partial S/\partial x=0$). In a so-called ‘quantum measurement of momentum’, at the end of the experiment $x(t)$ is guided by $e^{ipx}$ or $e^{-ipx}$: during the experiment the particle is accelerated and *acquires* a momentum $+p$ or $-p$, as could be confirmed by a true subquantum measurement. Any impression that there may be two particles present arises from a mistaken belief in eigenvalue realism. \(2) *Double-slit experiment*. Let a single particle be fired at a screen with two slits, where the incident wave function $\psi$ passes through both slits, leading to an interference pattern on the far side of the screen. 
Are there in any sense two particles, one passing through each slit? Again, clearly not. There is a single-valued field $\psi$ passing through both slits, and there is one particle trajectory $\mathbf{x}(t)$ in 3-space, passing through one slit only (as again could be tracked by a true subquantum measurement). \(3) *Superposition of ‘Ehrenfest’ packets for a hydrogen atom*. Finally, consider a single hydrogen atom, with a centre-of-mass trajectory $\mathbf{x}(t)$ and with a wave function that is a superposition$$\psi=\frac{1}{\sqrt{2}}\left( \psi_{1}+\psi_{2}\right)$$ of two localised and spatially-separated ‘Ehrenfest’ packets $\psi_{1}$ and $\psi_{2}$. Each packet, with centroid $\left\langle \mathbf{x}\right\rangle _{1}$ or $\left\langle \mathbf{x}\right\rangle _{2}$, follows an approximately classical trajectory, and let us suppose that the actual trajectory $\mathbf{x}(t)$ lies in $\psi_{2}$ only. Is there any sense in which we have *two* hydrogen atoms? The answer is no, because, once again, a true subquantum measurement could track the unique atomic trajectory $\mathbf{x}(t)$ (without affecting $\psi$). This last example has a parallel in the macroscopic domain, to be discussed in the next section. Before proceeding, it will prove useful to consider the present example further. In particular, one might argue that each packet $\psi_{1}$ and $\psi_{2}$ *behaves like* a hydrogen atom, under operations defined by changes in the external potential $V$. Specifically, the motion of the empty packet $\psi_{1}$ will respond to changes in $V$, in exactly the same way as will the motion of the occupied packet $\psi_{2}$. One might then claim that, if one regards each packet as physically real, one may as well conclude that there really are two hydrogen atoms present. But this argument fails, because the similarity of behaviour of the two packets holds only under the said restricted class of operations (that is, modifying the classical potential $V$). 
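The 'one field, one trajectory' picture in these examples can be illustrated numerically. The following is a minimal sketch, not from the original text, with all parameter values chosen for illustration (and $\hbar=m=1$): it prepares a superposition of two separated Gaussian packets, evolves it exactly with the free-particle propagator in Fourier space, and integrates de Broglie's guidance equation $\dot{x}=\operatorname{Im}(\partial_{x}\psi/\psi)$ for a single trajectory.

```python
import numpy as np

# Units hbar = m = 1; all parameters below are illustrative.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kgrid = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma, d = 0.8, 3.0  # packet width and half-separation
psi0 = (np.exp(-(x - d) ** 2 / (4 * sigma ** 2))
        + np.exp(-(x + d) ** 2 / (4 * sigma ** 2))).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * (L / N))

def evolve(t):
    """Exact free evolution: multiply each Fourier mode by exp(-i k^2 t / 2)."""
    return np.fft.ifft(np.exp(-0.5j * kgrid ** 2 * t) * np.fft.fft(psi0))

def velocity(psi):
    """de Broglie velocity field v(x) = Im(psi'/psi) on the grid."""
    return np.imag(np.gradient(psi, x) / psi)

# One configuration, started inside the upper packet, integrated by Euler steps.
q, dt, steps = d, 0.002, 2000
for n in range(steps):
    q += dt * np.interp(q, x, velocity(evolve(n * dt)))

# By symmetry the velocity vanishes at x = 0, so the trajectory
# cannot cross into the lower packet's half-line: q remains positive.
```

Running the same integration from different initial positions yields non-crossing trajectories, each guided by the one single-valued field $\psi$; at no point is there a second particle in the other packet.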
In pilot-wave theory, in principle, other experimental operations are possible, under which the behaviours of $\psi_{1}$ and $\psi_{2}$ will be quite different. For example, suppose one first carries out an ideal subquantum measurement, which shows that the particle is in the packet $\psi_{2}$. One may then carry out an additional experiment — say an ordinary quantum experiment, using a piece of macroscopic apparatus — designed to find out whether or not a given packet is occupied. One may *predict* that, in the second experiment, if the operation is performed on packet $\psi_{1}$ the apparatus pointer will point to ‘unoccupied’, while if the operation is performed on $\psi_{2}$ the pointer will point to ‘occupied’.[^11] It will then become operationally apparent that $\psi _{1}$ consists solely of a bundle of the complex-valued $\psi$-field, whose centroid happens to be *simulating* the approximately classical motion of a hydrogen atom in an external field (under the said restricted class of operations). It is of course hardly mysterious that in some circumstances one may have an ontological but empty $\psi$-packet whose motion approximately traces out the trajectory of a classical body — just as, in some circumstances, a localised classical electromagnetic pulse travelling through an appropriate medium (with variable refractive index) might trace out a trajectory similar to that of a moving body. In both cases, it would be clear from other experiments that the moving pulse is not really a moving body. ‘Macroscopic’ Many Worlds? ========================== Let us now ask if there is any sense in which pilot-wave theory contains many worlds at the ‘macroscopic’ level. We shall begin with an utterly unrealistic example, involving a superposition of two ‘Ehrenfest’ packets each (supposedly) representing a classical macroscopic world. This example has the virtue of illustrating the Claim in what we believe to be its strongest possible form. 
We shall see that, even for this example, the Claim may be straightforwardly refuted, along the lines given in the last section for the case of the hydrogen atom. We then turn to a further unrealistic example, involving a superposition of two delocalised ‘WKB’ packets which, again, are each supposed to represent a classical macroscopic world. This example has the virtue of showing that, if one cannot point to some piece of localised ‘$\Psi$-stuff’ following an alternative classical trajectory, then the Claim simply cannot be formulated. The lesson learned from this example is then readily applied to realistic cases with decoherence, for which the wave functions involved are also generally delocalised, and for which, therefore, the Claim again cannot be formulated. *The Claim in a ‘Strong Form’* Let us again consider an ‘Ehrenfest’ superposition$$\Psi(q,t)=\frac{1}{\sqrt{2}}\left( \Psi_{1}(q,t)+\Psi_{2}(q,t)\right) \ ,$$ where now the configuration $q$ represents not just a single hydrogen atom but all the contents of a macroscopic region — for example, a region including the Earth, with human experimenters, apparatus, and so on. We shall imagine that the centroids $\left\langle q\right\rangle _{1}$, $\left\langle q\right\rangle _{2}$ of the respective packets $\Psi_{1}$, $\Psi_{2}$ follow approximately classical trajectories, corresponding to alternative histories of events on Earth. This is of course not at all a realistic formulation of the classical limit for a complex macroscopic system: wave packets spread, and they do so particularly rapidly for chaotic systems. But we shall ignore this for a moment, because the example is nevertheless instructive. Let us assume that $\Psi$ consists initially of a single narrow packet, and that the subsequent splitting of $\Psi$ into the (non-overlapping) branches $\Psi_{1}$, $\Psi_{2}$ occurs as a result of a ‘quantum measurement’ with two possible outcomes $+1$ and $-1$. (See Fig. 1.) 
One might imagine that, at first, the branches $\Psi_{1}$, $\Psi_{2}$ develop a non-overlap with respect to the apparatus pointer coordinate $y$, which then generates a non-overlap with respect to other (macroscopic) degrees of freedom — beginning, perhaps, with variables in the eye and brain of the experimenter who looks at the pointer. We may imagine that it had been decided in advance that if the outcome were $+1$, the experimenter would stay at home; while if the outcome were $-1$, the experimenter would go on holiday. These alternative histories for the experimenter are supposed to be described by the trajectories of the narrow packets $\Psi_{1}$ and $\Psi_{2}$ (whose arguments include all the relevant variables, constituting the centre-of-mass of the experimenter, his immediate environment, the plane he may or may not catch, and so on). Let us assume that the actual de Broglie-Bohm trajectory $q(t)$ ends in the support of $\Psi_{2}$, as shown in Fig. 1. ![Fig. 1](Fig1.jpg) One could of course extend the example to superpositions of the form $\Psi=\Psi_{1}+\Psi_{2}+\Psi_{3}+....$, where $\Psi_{1}$, $\Psi_{2}$, $\Psi_{3}$.... are non-overlapping narrow packets that trace out — in configuration space — approximately classical motions corresponding to alternative macroscopic histories of the world, with each history containing, in the words of Brown and Wallace, ‘cells, dust motes, cats, people, wars and the like’. Now, with these completely unrealistic assumptions, the Claim seems to be at its strongest. For if $\Psi$ is ontological, then in the example of Fig. 1 the narrow packets $\Psi_{1}$ and $\Psi_{2}$ are both real objects moving along approximately classical paths in configuration space. There is certainly *something real* moving along each path. One of the paths has an extra component too — the actual configuration $q(t)$ — but even so the fact remains that something real is moving along the other path as well.
This situation seems to be the strongest possible realisation of the Claim. One might say, for example with Brown and Wallace (section 4 above), that ‘\[t\]he ‘empty wavepackets’ in the configuration space which the corpuscles do not point at are none the worse for its absence’.[^12] One might assert that here there really are two macroscopic worlds, one built from $\Psi_{1}$ alone, and one built from $\Psi_{2}$ together with $q$. And again, as in the case of the hydrogen atom discussed in section 5, one might argue that there is no difference in the behaviour of these two worlds, and that the motion of $\Psi_{1}$ represents a world every bit as *bona fide* as the world represented by $\Psi_{2}$ (together with $q$, which one might assert is superfluous). But again, as in the case of the hydrogen atom, pilot-wave theory tells us that a remote experimenter with access to nonequilibrium particles could in principle track the true history $q(t)$, without affecting $\Psi$. Further, once it is known which packet is empty and which not, the experimenter could perform additional experiments showing that $\Psi_{1}$ and $\Psi_{2}$ (predictably) behave *differently* under certain operations. Again, the empty packet is merely simulating a classical world, and the simulation holds only under a class of operations more restrictive than those allowed in pilot-wave theory. The situation is conceptually the same as in the case of the single hydrogen atom.[^13] We conclude that the Claim fails, even in a ‘strong form’. *The Claim in a ‘Weak Form’* Before considering more realistic approaches (with decoherence), it is instructive to reconsider the above scenario in terms of a different — and equally unrealistic — approach to the classical limit, namely the WKB approach, in which the amplitude of $\Psi$ is taken to vary slowly over relevant lengthscales. 
It is often said that the resulting wave function may be ‘associated with’ a family of classical trajectories, defined by the equation $p=\nabla S$ giving the classical momentum $p$ in terms of the phase gradient. (This approach is frequently used, for example, in quantum cosmology.) Where such trajectories come from is not clear in standard quantum theory, but in pilot-wave theory it is clear enough: in the WKB regime, the de Broglie-Bohm trajectory $q(t)$ (within the extended wave) will indeed follow a classical trajectory defined by $p=\nabla S$. Now let the superposition$$\Psi(q,t)=\frac{1}{\sqrt{2}}\left( \Psi_{1}(q,t)+\Psi_{2}(q,t)\right)$$ be composed of two non-overlapping ‘WKB packets’ $\Psi_{1}$, $\Psi_{2}$, formed by the division of a single WKB packet $\Psi$, where again $q$ represents the contents of a macroscopic region including the Earth. As in the earlier example, we imagine that the division occurred because a quantum experiment was performed, with two possible outcomes indicated by a pointer coordinate $y$; and again, $\Psi_{1}$ corresponds to the outcome $+1$, while $\Psi_{2}$ corresponds to the outcome $-1$, and the actual $q(t)$ ends in the support of $\Psi_{2}$. Unlike the earlier example, though, in this case the packets $\Psi_{1}$, $\Psi_{2}$ are narrow with respect to $y$ but broad with respect to the other (relevant) degrees of freedom — so broad, in fact, that with respect to these other degrees of freedom the packets are effectively plane waves. The only really significant difference between $\Psi_{1}$ and $\Psi_{2}$ is in their support with respect to $y$. (See Fig. 2.) ![Fig. 2](Fig2.jpg) To be sure, this is not a realistic model of the macroscopic world, any more than the Ehrenfest model was. But it is instructive to see the effect this alternative approach has on the Claim.
Under the above assumptions, the actual trajectory $q(t)$ will be approximately classical (except in the small branching region), and might be taken to correctly model the macroscopic history with outcome $-1$ and the experimenter going on holiday. But is there now any other discernible realisation of an alternative classical macroscopic motion, such as the experimenter staying at home? Clearly not. While the empty branch $\Psi_{1}$ is ontological, it is spread out over all degrees of freedom except $y$, so that its time evolution does *not* trace out a trajectory corresponding to an approximately classical alternative motion. The experimenter ‘staying at home’ is nowhere to be seen. Unlike in the Ehrenfest case, one cannot point to some piece of localised ‘$\Psi$-stuff’ following an alternative classical trajectory. Of course, different initial configurations $q(0)$ (with the same initial $\Psi$) would yield different trajectories $q(t)$. And the ‘information’ about these alternative paths certainly exists in a mathematical sense, in the structure of the complex field $\Psi$. But there is no reason to ascribe anything other than mathematical status to these alternative trajectories — just as we saw in section 2, for the analogous classical case of a test particle moving in an external electromagnetic field or in a background spacetime geometry. The alternative trajectories are mathematical, not ontological. *Realistic Models (with Environmental Decoherence)* A more realistic account of the macroscopic, approximately classical realm may be obtained from models with environmental decoherence. (For a review, see Zurek (2003).) Consider a system with configuration $q$, coupled to environmental degrees of freedom $y=(x_{1},x_{2},...,x_{N})$. 
For a pure state the wave function is $\Psi(q,y,t)$, and one often considers mixtures with a density operator$$\hat{\rho}(t)=\sum_{\alpha}p_{\alpha}|\Psi_{\alpha}(t)\rangle\langle \Psi_{\alpha}(t)|\ .$$ (For example, in ‘quantum Brownian motion’, the system is a single particle in a potential and the environment consists of a large number of harmonic oscillators in a thermal state.) By tracing over $y$ one obtains a reduced density operator for the system, with matrix elements$$\rho_{\mathrm{red}}(q,q^{\prime},t)\equiv\sum_{\alpha}p_{\alpha}\int dy\ \Psi_{\alpha}(q,y,t)\Psi_{\alpha}^{\ast}(q^{\prime},y,t)\ ,$$ from which one may define a quasi-probability distribution in phase space for the system:$$W_{\mathrm{red}}(q,p,t)\equiv\frac{1}{2\pi}\int dz\ e^{ipz}\rho_{\mathrm{red}}(q-z/2,q+z/2,t)$$ (the reduced Wigner function). In certain conditions, one obtains an approximately non-negative function $W_{\mathrm{red}}(q,p,t)$ whose time evolution approximates that of a classical phase-space distribution. For some elementary systems, such as a harmonic oscillator, the motion of a narrowly-localised packet $W_{\mathrm{red}}(q,p,t)$ can trace out a thin ‘tube’ approximating a classical trajectory in phase space (Zurek *et al*. 1993). However, such simple quantum-classical correspondence breaks down for chaotic systems, because of the rapid spreading of the packet: even an initial minimum-uncertainty packet spreads over macroscopic regions of phase space within experimentally-accessible timescales (Zurek 1998). On the other hand, at least for some examples it can be shown that, even in the chaotic case, the evolution of $W_{\mathrm{red}}(q,p,t)$ approximates the evolution of a classical phase-space distribution $W_{\mathrm{class}}(q,p,t)$ (a Liouville flow with a diffusive contribution from the environment), where both distributions *rapidly delocalise* (Habib *et al*. 1998; Zurek 2003, pp. 
745–47).[^14] In pilot-wave theory, a mixed quantum state is described by a preferred decomposition of $\hat{\rho}$ into a statistical mixture (with weights $p_{\alpha}$) of ontological pilot waves $\Psi_{\alpha}$ (Bohm and Hiley 1996). For a given element of the ensemble, the de Broglian velocity of the configuration is determined by the actual pilot wave $\Psi_{\alpha}$. (A different decomposition generally yields different velocities, and so is physically distinct at the fundamental level.) Now, the pilot-wave theory of quantum Brownian motion has been studied by Appleby (1999). Under certain conditions it was found that, as a result of decoherence, the de Broglie-Bohm trajectories of the system become approximately classical (as one might have expected). While Appleby made some simplifying assumptions in his analysis, pending further studies of this kind it is reasonable to assume that Appleby’s conclusions hold more generally. We may now evaluate the Claim in the context of realistic models. First of all, as in the unrealistic examples considered above, the Claim fails because an ideal subquantum measurement will always show that there is just one trajectory $q(t)$; and, further experiments will show that empty wave packets (predictably) behave differently from packets containing the actual configuration. This alone suffices to refute the Claim. Even so, it is interesting to ask if it is possible to have localised ontological packets (‘built out of $\Psi$’) whose motions execute alternative classical histories: that is, it is interesting to ask if the ‘strong form’ of the Claim discussed above — which in any case fails, but is still rather intriguing — could ever occur in practice in realistic models. The answer, again, is no. 
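The object $W_{\mathrm{red}}$ introduced above can be made concrete with a minimal numerical sketch (again illustrative only, with $\hbar=1$): for a pure Gaussian state the defining integral can be evaluated by direct quadrature on a grid, yielding the familiar non-negative Gaussian on phase space with the correct position marginal.

```python
import numpy as np

# Ground-state Gaussian wave function (hbar = 1, illustrative width).
psi = lambda q: np.pi ** -0.25 * np.exp(-q ** 2 / 2)

n = 257
q = np.linspace(-8.0, 8.0, n)   # position grid
p = np.linspace(-8.0, 8.0, n)   # momentum grid
z = np.linspace(-8.0, 8.0, n)   # integration variable
dz, dp = z[1] - z[0], p[1] - p[0]

# rho(q - z/2, q + z/2) for a pure state, sampled on the (q, z) grid.
rho = psi(q[:, None] - z[None, :] / 2) * np.conj(psi(q[:, None] + z[None, :] / 2))

# W(q, p) = (1 / 2 pi) * integral dz e^{i p z} rho(q - z/2, q + z/2),
# evaluated as a direct quadrature over z.
W = np.real(rho @ np.exp(1j * np.outer(z, p))) * dz / (2 * np.pi)

# For this state W = (1/pi) exp(-q^2 - p^2): non-negative, peaked at 1/pi,
# and its p-marginal reproduces |psi(q)|^2.
```

For a superposition of two well-separated packets, the same quadrature instead produces an oscillatory region of negative values between the humps, which is why $W_{\mathrm{red}}$ is only *approximately* non-negative even after decoherence.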
For an elementary non-chaotic system, one can obtain a narrow ‘Wigner packet’ $W_{\mathrm{red}}(q,p,t)$ approximating a classical trajectory, and one could also have a superposition of two or more such packets (with macroscopic separations). One might then argue that, since $W_{\mathrm{red}}$ is built out of $\Psi$, we have (in a realistic setting, with decoherence) something like the ‘strong form’ of the Claim discussed above. However, the models usually involve a mixture of $\Psi$’s, of which $W_{\mathrm{red}}$ is not a local functional. So the ontological status of a narrow packet $W_{\mathrm{red}}$ is far from clear. But even glossing over this, having a narrow packet $W_{\mathrm{red}}$ following an approximately classical path is in any case unrealistic in a world containing chaos, where, as we have already stated, one can show only that $W_{\mathrm{red}}$ — an approximately non-negative function, with a large spread over phase space — has a time evolution that approximately agrees with the time evolution of a classical (delocalised) phase-space distribution; that is, $W_{\mathrm{red}}$ follows an approximately Hamiltonian or Liouville flow (with a diffusive contribution). Again, one cannot obtain anything like ‘localised ontological $\Psi$-stuff’ (or something locally derived therefrom) executing an approximately classical trajectory — not even for one particle in a chaotic potential, and certainly not for a realistic world containing turbulent fluid flow, double pendulums, people, wars, and so on. One *can* obtain localised trajectories from a quantum description of a chaotic system, if the system is continuously measured — which in practice involves an experimenter continuously monitoring an apparatus or environment that is interacting with the system (Bhattacharya *et al*. 2000). 
Such trajectories for the Earth and its contents might in principle be obtained by monitoring the environment (the interstellar medium, the cosmic microwave background, etc.), but in the absence of an experimenter performing the required measurements it is difficult to see how this could be relevant to our discussion. And in any case, in a pilot-wave treatment, there is no reason why such a procedure would yield ‘localised ontological $\Psi$-stuff’ executing the said trajectories. In a realistic quantum-theoretical model, then, the outcome is a highly delocalised distribution $W_{\mathrm{red}}(q,p,t)$ on phase space, obeying an approximately Hamiltonian or Liouville evolution (with a diffusive contribution). As in the unrealistic WKB example above, in pilot-wave theory there will be one trajectory for each system. And, while different initial conditions will yield different trajectories, there is no reason to ascribe anything other than mathematical status to these alternatives — just as in the analogous classical case of a test particle moving in an external field or background geometry. Once again, the alternative trajectories are mathematical, not ontological. Of course, given such a distribution $W_{\mathrm{red}}(q,p,t)$, *if one wishes* one may identify the flow with a set of trajectories representing parallel (approximately classical) worlds, as in the decoherence-based approach to many worlds of Saunders and Wallace. This is fair enough from a many-worlds point of view. But if we start from pilot-wave theory understood on its own terms, there is no motivation for doing so: such a step would amount to a reification of mathematical structure (assigning reality to all the trajectories associated with the velocity field at all points in phase space). If one does so reify, one has constructed a different physical theory, with a different ontology; one may do so if one wishes, but from a pilot-wave perspective there is no special reason to take this step. 
*Other Approaches to Decoherence* Finally, decoherence and the emergence of the classical limit have also been studied using the decoherent histories formulation of quantum theory.[^15] In these treatments, there will still be no discernible ‘localised ontological $\Psi$-stuff’ following alternative classical trajectories, for realistic models containing chaos. Therefore, again, the ‘strong form’ of the Claim (which in any case fails by virtue of subquantum measurement) could never occur in practice. Further Remarks =============== *Many de Broglie-Bohm Worlds?* In the Saunders-Wallace approach to many worlds, one ascribes reality to the full set of trajectories associated with the reduced Wigner function $W_{\mathrm{red}}(q,p,t)$ in the classical limit (for some appropriately-defined macrosystem with configuration $q$). This raises a question. Why not *also* ascribe reality to the full set of de Broglie-Bohm trajectories outside the classical limit, for arbitrary (pure) quantum states, resulting in a theory of ‘many de Broglie-Bohm worlds’?[^16] After all, just as $W_{\mathrm{red}}$ has a natural velocity field associated with it (on phase space), so an arbitrary wave function $\Psi$ has a natural velocity field associated with it (on configuration space) — namely, de Broglie’s velocity field derived from the phase gradient $\nabla S$ (or more generally, from the quantum current). In both cases, the velocity fields generate a set of trajectories, and one may ascribe reality to them all if one wishes. Why do so in the first case, but not in the second?
Furthermore, if the results due to Appleby (1999) (mentioned in section 6) for quantum Brownian motion hold more generally, the parallel de Broglie-Bohm trajectories will reduce to the parallel classical trajectories in an appropriate limit; in which case, the theory of ‘many de Broglie-Bohm worlds’ will reproduce the Saunders-Wallace multiverse in the classical limit, and will provide a simple and natural extension of it outside that limit — that is, one will have a notion of parallel worlds that is defined generally, even at the microscopic level, and not just in the classical-decoherent limit.[^17] However, since the de Broglie velocity field is single-valued, trajectories $q(t)$ cannot cross. There can be no splitting or fusion of worlds. The above ‘de Broglie-Bohm multiverse’ then has the same kind of ‘trivial’ structure that would be obtained if one reified all the possible trajectories for a classical test particle in an external field: the parallel worlds evolve independently, side by side. Given such a theory, on the grounds of Ockham’s razor alone, there would be a conclusive case for taking only one of the worlds as real. On this point we remark that, in Deutsch’s version of the Claim, if his word ‘grooves’ is interpreted as referring to the set of de Broglie-Bohm trajectories, then the Claim amounts to asserting that pilot-wave theory implies the de Broglie-Bohm multiverse. But again, because the parallel worlds never branch or fuse, it would be natural to reduce the theory to a single-world theory with only one trajectory. A theory of many de Broglie-Bohm worlds, then, can only be a mere curiosity — a foil, perhaps, against which to test conventional Everettian ideas, but not a serious candidate for a physical theory. On the other hand, it appears to provide the basis for an argument against the Saunders-Wallace multiverse. 
For as we have seen, it is natural to extend the Saunders-Wallace multiverse to a deeper and more general de Broglie-Bohm multiverse.[^18] And this, in turn, reduces naturally to a single-universe theory — that is, to standard de Broglie-Bohm theory. Thus, we have an argument that begins by extending the Saunders-Wallace worlds to the microscopic level, and ends by declaring only one of the resulting worlds to be real. *Quantum Nonequilibrium and Many Worlds* Since pilot-wave theory generally violates the Born rule, while conventional many-worlds theory (apparently) does not, on this ground alone any attempt to argue that the two theories are really the same must fail. Further, if such violations were discovered,[^19] then Everett’s theory would be disproved and that of de Broglie and Bohm vindicated. On the other hand, it might be suggested that violations of the Born rule could be incorporated into an Everett-type framework, by adopting the theory of ‘many de Broglie-Bohm worlds’ sketched above. Restricting ourselves for simplicity to the pure case, if one assumes a nonequilibrium probability measure $P_{0}\neq|\Psi_{0}|^{2}$ on the set of (parallel) initial configurations $q(0)$, then for as long as relaxation to quantum equilibrium has yet to occur completely, one will obtain a nonequilibrium set of parallel trajectories $q(t)$, and one expects (in general) to find violations of the Born rule within individual parallel worlds.[^20] If one accepts this, then observation of quantum nonequilibrium would not suffice to disprove many worlds (though of course conventional Everettian quantum theory *would* be disproved). On the other hand, however, as stated above it is natural to reduce the theory of many de Broglie-Bohm worlds to a single-world theory, and this is equally true in the nonequilibrium case. Therefore, the de Broglie-Bohm multiverse would not provide a plausible refuge for the Everettian faced with nonequilibrium phenomena. 
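The talk of 'relaxation to quantum equilibrium' here rests on the equivariance of the Born distribution, which is worth stating explicitly. For an ensemble of configurations with distribution $\rho(q,t)$, all guided by the same $\Psi$, both $\rho$ and $\left\vert \Psi\right\vert ^{2}$ obey the continuity equation with the de Broglie velocity field $\dot{q}$:$$\frac{\partial\rho}{\partial t}+\nabla\cdot\left( \rho\,\dot{q}\right) =0\ ,\qquad\frac{\partial\left\vert \Psi\right\vert ^{2}}{\partial t}+\nabla\cdot\left( \left\vert \Psi\right\vert ^{2}\,\dot{q}\right) =0\ .$$Thus $\rho=\left\vert \Psi\right\vert ^{2}$ at one time implies $\rho=\left\vert \Psi\right\vert ^{2}$ at all times, while an initial $P_{0}\neq\left\vert \Psi_{0}\right\vert ^{2}$ evolves nontrivially and can, on coarse-graining, relax towards the Born rule.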
Even so, it might be worth exploring the theory of many de Broglie-Bohm worlds with a nonequilibrium measure, in particular to highlight the assumptions made in the Deutsch-Wallace derivation of the Born rule (Deutsch 1999, Wallace 2003a).

*On Arguments Concerning ‘Structure’*

One might argue that the mathematical structure in the quantum state that is reified by many-worlds theorists plays such an explanatory and predictive role that it should indeed be regarded as real. To quote Wallace (2003b, p. 93):

> A tiger is any pattern which behaves as a tiger. .... the existence of a pattern as a real thing depends on the usefulness — in particular, the explanatory power and predictive reliability — of theories which admit that pattern in their ontology.

However, the behaviour of a system depends on the allowed set of experimental operations. If one considers subquantum measurements, the patterns reified by many-worlds theorists will cease to be explanatory and predictive. From a pilot-wave perspective, then, such mathematical patterns are explanatory and predictive only in the confines of quantum equilibrium: outside that limited domain, subquantum measurement theory would provide a more explanatory and predictive framework. At best, it can only be argued that, if approximately classical experimenters are confined to the quantum equilibrium state, so that they are unable to perform subquantum measurements, then they will encounter a phenomenological *appearance* of many worlds — just as they will encounter a phenomenological appearance of locality, uncertainty, and of quantum physics generally.

*On Arguments Concerning Computation*

It might be argued that quantum computation provides evidence for the existence of many worlds (Deutsch 1985, 1997).
Deutsch asks ‘how’ and ‘where’ the supposedly huge number of parallel computations are performed, and has challenged those who doubt the existence of parallel universes to provide an explanation for quantum-computational processes such as Shor’s factorisation algorithm (Deutsch 1997, p. 217). However, while it was often asserted that the advantages of quantum computation originate from quantum superposition, the matter has become controversial. Some workers, such as Jozsa (1998) and Steane (2003), claim that entanglement is the truly crucial feature. Further, the ability to find periods seems to be the mechanism underlying Shor’s algorithm, and this is arguably more related to the ‘wave-like’ aspect of quantum physics than it is to superposition (Mermin 2007). Leaving such controversies aside, we know in any case that, in quantum equilibrium, pilot-wave theory yields the same predictions as ordinary quantum theory, including for quantum algorithms. In an assessment of precisely how pilot-wave theory provides an explanation for a specific quantum algorithm, it should be borne in mind that: (a) the theory contains an ontological pilot wave propagating in many-dimensional configuration space; (b) the theory is nonlocal; and (c) with respect to quantum ‘measurements’, the theory is contextual. There is then ample scope for exploring the pilot-wave-theoretical account of quantum-computational processes, if one wishes to do so, just as there is for any other type of quantum process.

Counter-Claim: A General Argument Against Many Worlds
=====================================================

We have refuted the Claim, that pilot-wave theory is ‘many worlds in denial’.
Here, we put forward a Counter-Claim:

- Counter-Claim: *The theory of many worlds is unlikely to be true, because it is ultimately motivated by the puzzle of quantum superposition, which arises from a belief in eigenvalue realism, which is in turn based (ultimately) on the intrinsically unlikely assumption that quantum measurements should be modelled on classical measurements.*

We saw in section 3 that quantum theorists call an experiment ‘a measurement of $\omega$’ only because it formally resembles what *would have been* a correct measurement of $\omega$ had the system been classical. Thus, the system-apparatus interaction Hamiltonian is chosen by means of (for example) the mapping$$H=a\omega p_{y}\longrightarrow\hat{H}=a\hat{\omega}\hat{p}_{y}\ , \label{Map}$$ so that quantum ‘measurements’ are in effect modelled on classical measurements. That this is a mistake is clear from a pilot-wave perspective.[^21] But the key point is more general, and does not depend on pilot-wave theory. In fact, it was made by Einstein in 1926 (see below).

*The Argument*

Everett’s initial motivation for introducing many worlds was the puzzle of quantum superposition, in particular the apparent transfer of superposition from microscopic to macroscopic scales during a quantum measurement (Everett 1973, pp. 4–6). While our understanding of the theory today differs in many respects from Everett’s, it is highly doubtful that the theory would ever have been proposed, were it not for the puzzle of quantum superposition. Now, the puzzle of superposition stems from what we have called ‘eigenvalue realism’: the assignment of an ontological status to the eigenvalues of linear operators acting on the wave function.
For if an initial wave function$$\psi_{0}(x)=\sum_{n}c_{n}\phi_{n}(x)$$ is a superposition of different eigenfunctions $\phi_{n}(x)$ of $\hat{\omega}$ with different eigenvalues $\omega_{n}$, then if one takes eigenvalue realism literally it appears as if all the values $\omega_{n}$ should somehow be regarded as simultaneous ontological attributes of a single system. Why do so many physicists believe in eigenvalue realism? The answer lies, ultimately, in their belief in the quantum theory of measurement. For example, it is widely thought that an experimental operation described by the Hamiltonian operator $\hat{H}=a\hat{\omega}\hat{p}_{y}$ constitutes a correct measurement of an observable $\omega$, as indicated by the value of the pointer coordinate $y$. To see that this leads to a belief in eigenvalue realism, consider a system with wave function $\phi_{n}(x)$. Under such an operation, the pointer $y$ will indicate the value $\omega_{n}$. Because the experimenter *believes* that this pointer reading provides a correct measurement, the experimenter will then believe that the system must have a property $\omega$ with value $\omega_{n}$ — that is, the experimenter will believe in eigenvalue realism. Now, why do so many physicists believe that an operation described by (for example) $\hat{H}=a\hat{\omega}\hat{p}_{y}$ constitutes a correct measurement of $\omega$, for any observable $\omega$? The answer, as we have seen, is that the said operation formally resembles a classical measurement of $\omega$, via the mapping (\[Map\]). We claim that this is the heart of the matter: it is widely assumed, in effect, that classical physics provides a reliable guide to measurement for nonclassical systems. We claim further that this assumption is intrinsically unlikely, so that the conclusions stemming from it — eigenvalue realism, superposition of properties, multiplicity of worlds — are in turn intrinsically unlikely (Valentini 1992, pp. 14–16, 19–29; 1996, pp. 50–51). 
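The pointer-shift calculation underlying this belief can be written out explicitly; the following sketch is consistent with footnotes 7 and 9. Under the interaction Hamiltonian $\hat{H}=a\hat{\omega}\hat{p}_{y}$ acting for a time $t$:

```latex
% An eigenfunction $\phi_{n}(x)$ with a pointer packet $g_{0}(y)$ evolves as
\begin{equation}
\phi_{n}(x)\,g_{0}(y)\ \longrightarrow\ \phi_{n}(x)\,g_{0}(y-a\omega_{n}t)\ ,
\end{equation}
% so the pointer is displaced by $a\omega_{n}t$ and `indicates' the
% eigenvalue $\omega_{n}$. By linearity, a superposition evolves into a
% sum of branches,
\begin{equation}
\Big(\sum_{n}c_{n}\phi_{n}(x)\Big)g_{0}(y)\ \longrightarrow\
\sum_{n}c_{n}\,\phi_{n}(x)\,g_{0}(y-a\omega_{n}t)\ ,
\end{equation}
% which separate in $y$ once the displacements $a\omega_{n}t$ exceed the
% packet width. Reading the displaced pointer as a `correct measurement'
% of $\omega$ is precisely the step that commits one to eigenvalue realism.
```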
The assumption is unlikely because, generally speaking, one cannot use a theory as an accurate guide to measurement outside the domain of validity of the theory. For experiment is theory-laden, and correct measurement procedures must be laden with the correct theory. As an example, consider what might happen if one used Newton’s theory of gravity to interpret observations close to a black hole: one would encounter numerous puzzles and paradoxes that would be resolved only when the observations were interpreted using general relativity. It is intrinsically improbable that measurement operations taken from an older, superseded physics will remain valid in a fundamentally new domain for all possible observables. It is much more likely that a new domain will be better understood in terms of a new theory based on new concepts, with its own new theory of measurement — as shown by the example of general relativity, and indeed by the example of de Broglie’s nonclassical dynamics.[^22]

*‘Einstein’s Hot Water’*

This very point was made by Einstein in 1926, in a well-known conversation with Heisenberg (Heisenberg 1971, pp. 62–69). This conversation is often cited as evidence of Einstein’s view that observation is theory-laden. But a crucial element is usually missed: Einstein also warned Heisenberg that his treatment of observation was unduly laden with the superseded theory of classical physics, and that this would eventually cause trouble (Valentini 1992, p. 15; 1996, p. 51). During the conversation, Heisenberg made the (at the time fashionable) claim that ‘a good theory must be based on directly observable magnitudes’ (p. 63). Einstein replied that, on the contrary (p. 63):

> .... it is quite wrong to try founding a theory on observable magnitudes alone. In reality the very opposite happens. *It is the theory which decides what we can observe*.
\[Italics added.\] Einstein added that there is a long, complicated path underlying any observation, which runs from the phenomenon, to the production of events in our apparatus, and from there to the production of sense impressions. And theory is required to make sense of this process: > Along this whole path .... we must be able to tell how nature functions .... before we can claim to have observed anything at all. Only theory, that is, knowledge of natural laws, enables us to deduce the underlying phenomena from our sense impressions. Einstein’s key point so far is that, as we have said, there is no *a priori* notion of how to perform a correct measurement: one requires some knowledge of physics to do so. If we wish to design a piece of apparatus that will correctly measure some property $\omega$ of a system, then we need to know the correct laws governing the interaction between the system and the apparatus, to ensure that the apparatus pointer will finish up pointing to the correct reading. (One cannot, for example, design an ammeter to measure electric current without some knowledge of electromagnetic forces.) Now, Einstein went on to note that, when new experimental phenomena are discovered — phenomena that require the formulation of a new theory — in practice the old theory is at first assumed to provide a reliable guide to interpreting the observations (pp. 63–64): > When we claim that we can observe something new, we ought really to be saying that, although we are about to formulate new natural laws that do not agree with the old ones, we nevertheless assume that the existing laws — covering the whole path from the phenomenon to our consciousness — function in such a way that we can rely upon them and hence speak of ‘observations’. Note that this is a practical necessity, for the new theory has yet to be formulated. 
However — and here is the crucial point — once the new theory *has* been formulated, one ought to be careful to use the new theory to design and interpret measurements, and not continue to rely on the old theory to do so. For one may well find that consistency is obtained only when the new laws are found *and applied to the process of observation*. If one fails to do this, one is likely to cause difficulties. That Einstein saw this very point is clear from a subsequent passage (p. 66):

> I have a strong suspicion that, precisely because of the problems we have just been discussing, your theory will one day get you into hot water. .... When it comes to observation, you behave as if everything can be left as it was, that is, as if you could use the old descriptive language.

Here, then, is Einstein’s warning to Heisenberg: not to interpret observations of quantum systems using the ‘old descriptive language’ of classical physics. The point, again, is that while observation is in general theory-laden, in quantum theory observations are incorrectly laden with a *superseded* theory (classical physics), and this will surely lead to trouble. We claim that the theory of many worlds is precisely an example of what one might call ‘Einstein’s hot water’. Specifically, the apparent multiplicity of the quantum domain is an illusion, caused by an over-reliance on a superseded (classical) physics as a guide to observation and measurement — a mistake that is the ultimate basis of the belief in eigenvalue realism, which in turn led to the puzzle of superposition and to Everett’s valiant attempt to resolve that puzzle.

Conclusion
==========

Pilot-wave theory is intrinsically nonclassical, with its own (‘subquantum’) theory of measurement, and it is in general a ‘nonequilibrium’ theory that violates the quantum Born rule.
From the point of view of pilot-wave theory itself, an apparent multiplicity of worlds at the microscopic level (envisaged by some theorists) stems from the generally mistaken assumption that eigenvalues have an ontological status (‘eigenvalue realism’), which in turn ultimately derives from the generally mistaken assumption that ‘quantum measurements’ are true and proper measurements. At the macroscopic level, it might be thought that the universal (and ontological) pilot wave can develop non-overlapping and localised branches that evolve just like parallel classical worlds. But in fact, such localised branches are unrealistic (especially over long periods of time, and even for short periods of time in a world containing chaos). And in any case, subquantum measurements could track the actual de Broglie-Bohm trajectory, so that in principle one could distinguish the branch containing the configuration from the empty ones, where the latter would be regarded merely as concentrations of a complex-valued configuration-space field. In realistic models of decoherence, the pilot wave is delocalised, and the identification of a set of parallel (approximately) classical worlds does not arise in terms of localised pieces of actual ‘$\Psi$-stuff’ executing approximately classical motions. Instead, such identification amounts to a reification of purely mathematical trajectories — a move that is fair enough from a many-worlds perspective, but which is unnecessary and unjustified from a pilot-wave perspective because according to pilot-wave theory there is nothing actually moving along any of the trajectories except one (just as in the classical theory of a test particle in an external field or background spacetime geometry). 
In addition to being unmotivated, such reification begs the question of why the mathematical trajectories should not also be reified outside the classical limit for general wave functions, resulting in a theory of ‘many de Broglie-Bohm worlds’ (which in turn naturally reduces to a single-world theory). Properly understood, pilot-wave theory is not ‘many worlds in denial’: it is a different physical theory. Furthermore, from the perspective of pilot-wave theory itself, many worlds are an illusion. And indeed, even leaving pilot-wave theory aside, we have seen that the theory of many worlds is rooted in the intrinsically unlikely assumption that quantum measurements should be modelled on classical measurements, and is therefore in any case unlikely to be true.

**Acknowledgements.** This work was partly supported by grant RFP1-06-13A from the Foundational Questions Institute (fqxi.org). For their hospitality, I am grateful to Carlo Rovelli and Marc Knecht at the Centre de Physique Théorique (Luminy), to Susan and Steffen Kyhl in Cassis, and to Jonathan Halliwell at Imperial College London.

BIBLIOGRAPHY

Appleby, D. M. (1999). Bohmian trajectories post-decoherence. *Foundations of Physics*, **29**, 1885–1916.
Bacciagaluppi, G. and Valentini, A. (2009). *Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference*. Cambridge: Cambridge University Press \[quant-ph/0609184\].
Bell, J. S. (1987). *Speakable and Unspeakable in Quantum Mechanics*. Cambridge: Cambridge University Press.
Bhattacharya, T., Habib, S., and Jacobs, K. (2000). Continuous quantum measurement and the emergence of classical chaos. *Physical Review Letters*, **85**, 4852–4855.
Bohm, D. (1952a). A suggested interpretation of the quantum theory in terms of ‘hidden’ variables, I. *Physical Review*, **85**, 166–179.
Bohm, D. (1952b). A suggested interpretation of the quantum theory in terms of ‘hidden’ variables, II. *Physical Review*, **85**, 180–193.
Bohm, D. and Hiley, B. J. (1996). Statistical mechanics and the ontological interpretation. *Foundations of Physics*, **26**, 823–846.
Bohr, N. (1931). Maxwell and modern theoretical physics. *Nature*, **128**, 691–692. Reprinted in *Niels Bohr: Collected Works*, vol. 6, ed. J. Kalckar. Amsterdam: North-Holland, 1985, p. 357.
Brown, H. R. and Wallace, D. (2005). Solving the measurement problem: de Broglie–Bohm loses out to Everett. *Foundations of Physics*, **35**, 517–540.
Colin, S. (2003). A deterministic Bell model. *Physics Letters A*, **317**, 349–358 \[quant-ph/0310055\].
Colin, S. and Struyve, W. (2007). A Dirac sea pilot-wave model for quantum field theory. *Journal of Physics A: Mathematical and Theoretical*, **40**, 7309–7341 \[quant-ph/0701085\].
de Broglie, L. (1928). La nouvelle dynamique des quanta. In *Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique*. Paris: Gauthier-Villars, pp. 105–132. \[English translation: Bacciagaluppi, G. and Valentini, A. (2009).\]
Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. *Proceedings of the Royal Society of London A*, **400**, 97–117.
Deutsch, D. (1986). Interview. In *The Ghost in the Atom*, eds. P. C. W. Davies and J. R. Brown. Cambridge: Cambridge University Press, pp. 83–105.
Deutsch, D. (1996). Comment on Lockwood. *British Journal for the Philosophy of Science*, **47**, 222–228.
Deutsch, D. (1997). *The Fabric of Reality*. London: Penguin.
Deutsch, D. (1999). Quantum theory of probability and decisions. *Proceedings of the Royal Society of London A*, **455**, 3129–3137.
Dürr, D., Goldstein, S., and Zanghì, N. (1992). Quantum equilibrium and the origin of absolute uncertainty. *Journal of Statistical Physics*, **67**, 843–907.
Dürr, D., Goldstein, S., and Zanghì, N. (1996). Bohmian mechanics as the foundation of quantum mechanics. In *Bohmian Mechanics and Quantum Theory: an Appraisal*, eds. J. T. Cushing *et al*. Dordrecht: Kluwer, pp. 21–44.
Dürr, D., Goldstein, S., and Zanghì, N. (1997). Bohmian mechanics and the meaning of the wave function. In *Experimental Metaphysics: Quantum Mechanical Studies for Abner Shimony*, eds. R. S. Cohen *et al*. Dordrecht: Kluwer, pp. 25–38.
Everett, H. (1973). The theory of the universal wave function. In *The Many-Worlds Interpretation of Quantum Mechanics*, eds. B. S. DeWitt and N. Graham. Princeton: Princeton University Press, pp. 3–140.
Gell-Mann, M. and Hartle, J. B. (1993). Classical equations for quantum systems. *Physical Review D*, **47**, 3345–3382.
Habib, S., Shizume, K., and Zurek, W. H. (1998). Decoherence, chaos, and the correspondence principle. *Physical Review Letters*, **80**, 4361–4365.
Halliwell, J. J. (1998). Decoherent histories and hydrodynamic equations. *Physical Review D*, **58**, 105015.
Heisenberg, W. (1971). *Physics and Beyond*. New York: Harper & Row.
Holland, P. R. (1993). *The Quantum Theory of Motion: An Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics*. Cambridge: Cambridge University Press.
Jozsa, R. (1998). Entanglement and quantum computation. In *The Geometric Universe: Science, Geometry, and the Work of Roger Penrose*. Oxford: Oxford University Press, pp. 369–379.
Mermin, N. D. (2007). What has quantum mechanics to do with factoring? *Physics Today*, **60** (April 2007), 8–9.
Pearle, P. and Valentini, A. (2006). Quantum mechanics: generalizations. In *Encyclopaedia of Mathematical Physics*, eds. J.-P. Françoise *et al*. Amsterdam: Elsevier, pp. 265–276 \[quant-ph/0506115\].
Rovelli, C. (2004). *Quantum Gravity*. Cambridge: Cambridge University Press.
Steane, A. M. (2003). A quantum computer only needs one universe. arXiv:quant-ph/0003084v3 (24 March 2003).
Struyve, W. (2008). De Broglie-Bohm field beables for quantum field theory. *Physics Reports* (to appear) \[arXiv:0707.3685\].
Struyve, W. and Valentini, A. (2008). De Broglie-Bohm guidance equations for arbitrary Hamiltonians. *Journal of Physics A: Mathematical and Theoretical* (to appear) \[arXiv:0808.0290\].
Struyve, W. and Westman, H. (2007). A minimalist pilot-wave model for quantum electrodynamics. *Proceedings of the Royal Society of London A*, **463**, 3115–3129 \[arXiv:0707.3487\].
Tipler, F. J. (1987). Non-Schrödinger forces and pilot waves in quantum cosmology. *Classical and Quantum Gravity*, **4**, L189–L195.
Tipler, F. J. (2006). What about quantum theory? Bayes and the Born interpretation. arXiv:quant-ph/0611245.
Valentini, A. (1991a). Signal-locality, uncertainty, and the subquantum *H*-theorem, I. *Physics Letters A*, **156**, 5–11.
Valentini, A. (1991b). Signal-locality, uncertainty, and the subquantum *H*-theorem, II. *Physics Letters A*, **158**, 1–8.
Valentini, A. (1992). On the pilot-wave theory of classical, quantum and subquantum physics. Ph.D. thesis, International School for Advanced Studies, Trieste, Italy \[http://www.sissa.it/ap/PhD/Theses/valentini.pdf\].
Valentini, A. (1996). Pilot-wave theory of fields, gravitation and cosmology. In *Bohmian Mechanics and Quantum Theory: an Appraisal*, eds. J. T. Cushing *et al*. Dordrecht: Kluwer, pp. 45–66.
Valentini, A. (2001). Hidden variables, statistical mechanics and the early universe. In *Chance in Physics: Foundations and Perspectives*, eds. J. Bricmont *et al*. Berlin: Springer-Verlag, pp. 165–181 \[quant-ph/0104067\].
Valentini, A. (2002b). Subquantum information and computation. *Pramana — Journal of Physics*, **59**, 269–277 \[quant-ph/0203049\].
Valentini, A. (2004a). Universal signature of non-quantum systems. *Physics Letters A*, **332**, 187–193 \[quant-ph/0309107\].
Valentini, A. (2004b). Black holes, information loss, and hidden variables. arXiv:hep-th/0407032.
Valentini, A. (2007). Astrophysical and cosmological tests of quantum theory. *Journal of Physics A: Mathematical and Theoretical*, **40**, 3285–3303 \[hep-th/0610032\].
Valentini, A. (2008a). Inflationary cosmology as a probe of primordial quantum mechanics. arXiv:0805.0163.
Valentini, A. (2008b). De Broglie-Bohm prediction of quantum violations for cosmological super-Hubble modes. arXiv:0804.4656.
Valentini, A. (2008c). *International Journal of Modern Physics A* (to appear).
Valentini, A. and Westman, H. (2005). Dynamical origin of quantum probabilities. *Proceedings of the Royal Society of London A*, **461**, 253–272.
Wallace, D. (2003a). Everettian rationality: defending Deutsch’s approach to probability in the Everett interpretation. *Studies in History and Philosophy of Modern Physics*, **34**, 415–439.
Wallace, D. (2003b). Everett and structure. *Studies in History and Philosophy of Modern Physics*, **34**, 87–105.
Zeh, H. D. (1999). Why Bohm’s quantum theory? *Foundations of Physics Letters*, **12**, 197–200.
Zurek, W. H. (1998). Decoherence, chaos, quantum-classical correspondence, and the algorithmic arrow of time. *Physica Scripta*, **T76**, 186–198.
Zurek, W. H. (2003). Decoherence, einselection, and the quantum origins of the classical. *Reviews of Modern Physics*, **75**, 715–775.
Zurek, W. H., Habib, S., and Paz, J. P. (1993). Coherent states via decoherence. *Physical Review Letters*, **70**, 1187–1190.

[^1]: Present address.
[^2]: More generally, $\dot{q}=j/|\Psi|^{2}$ where $j$ is the current associated with the Schrödinger equation (Struyve and Valentini 2008).
[^3]: For a detailed account, see chapter 2 of Bacciagaluppi and Valentini (2009).
[^4]: ‘.... the wave function is a component of physical law rather than of the reality described by the law’ (Dürr *et al*. 1997, p. 33).
[^5]: One should also guard against the idea — sometimes expressed in this context — that the existence of ‘only one universe’ somehow suggests that the universal wave function cannot be contingent. Equally, in non-Everettian cosmology, there is only one intergalactic magnetic field, and yet it would be generally agreed that the precise form of this field is a contingency (not determined by physical law).
[^6]: Cf. the role played by the ether in electromagnetism, or in Newton’s thinking about gravitation. For a discussion of this parallel, see section 2.3.2 of Bacciagaluppi and Valentini (2009).
[^7]: Over an ensemble, if $x$ and $y$ have an initial distribution $P_{0}(x,y)=\left\vert \Psi_{0}(x,y)\right\vert ^{2}$, one of course finds that a fraction $\left\vert c_{n}\right\vert ^{2}$ of trajectories $q(t)=(x(t),y(t))$ end in (the support of) the $n$th branch $\phi_{n}(x)g_{0}(y-a\omega_{n}t)$.
[^8]: Because the branches have separated in configuration space, it follows from de Broglie’s equation of motion that the ‘empty’ branches no longer affect the trajectory.
[^9]: For example, the eigenfunction $\phi_{E}(x)\propto (e^{ipx}+e^{-ipx})$ of the kinetic-energy operator $\hat{p}^{2}/2m$ has eigenvalue $E=p^{2}/2m\neq0$; and yet, the actual de Broglie-Bohm kinetic energy vanishes, ${\frac{1}{2}}m\dot{x}^{2}=0$ (since $\partial S/\partial x=0$). If the system had this initial wave function, and we performed a so-called ‘quantum measurement of kinetic energy’ using a pointer $y$, then the initial joint wave function $\phi_{E}(x)g_{0}(y)$ would evolve into $\phi_{E}(x)g_{0}(y-aEt)$ and the pointer would indicate the value $E$ — even though the particle kinetic energy was and would remain equal to zero. The experiment has not really measured anything.
[^10]: Deutsch cites the rather confused paper by Tipler (1987), which argues among other things that de Broglie-Bohm trajectories must affect each other in unphysical ways. Tipler’s critique is mostly aimed at a certain stochastic version of pilot-wave theory. While it is not really relevant to Deutsch’s argument, for completeness we note that, as regards conventional (deterministic) pilot-wave theory, Tipler’s critique stems from an elementary misunderstanding of the role of probability in the theory.
[^11]: In quantum theory too, of course, the second experiment will always give different results for the two packets. But the outcome will be random, making the operational difference between the packets less clear.
[^12]: This is not to suggest that Brown and Wallace, or other proponents of the Claim, actually make the Claim in the ‘strong’ form given here. We consider this form first, because it seems to us to be the strongest possible version of the argument.
[^13]: Except, one might argue, if one is talking about the ‘whole universe’. One could restrict the argument to approximately-independent regions; this does not seem an essential point.
[^14]: The examples are based on the weak-coupling, high-temperature limit of quantum Brownian motion. The system consists of a single particle moving in one dimension in a classically-chaotic potential.
[^15]: See, for example, Gell-Mann and Hartle (1993) and Halliwell (1998), as well as the reviews in this volume.
[^16]: Such a theory has, in effect, been considered by Tipler (2006).
[^17]: One need not think of this as ‘adding’ trajectories to the wave function; one could think of it as an alternative reading of physical structure already existing in the ‘bare’ wave function.
[^18]: It might be claimed that, outside the nonrelativistic domain, such an extension is neither simple nor natural. However, the (deterministic) pilot-wave theory of high-energy physics has achieved a rather complete (if not necessarily final) state of development — for recent progress see Colin (2003), Colin and Struyve (2007), Struyve (2008), Struyve and Westman (2007), and Valentini (2008c).
[^19]: See Valentini (2007, 2008a,b) for recent discussions of possible experimental evidence.
[^20]: On the other hand, quantum equilibrium for a multi-component closed system implies the Born rule for measurements performed on subsystems (Valentini 1991a, Dürr *et al*. 1992).
[^21]: In the classical limit of pilot-wave theory, emergent effective degrees of freedom have a purely mathematical correspondence with linear operators acting on the wave function. Physicists trapped in quantum equilibrium have made the mistake of taking this correspondence literally (Valentini 1992, pp. 14–16, 19–29; 1996, pp. 50–51).
[^22]: In contrast with Bohr’s unwarranted claim: ‘The unambiguous interpretation of any measurement must be essentially framed in terms of the classical physical theories, and we may say that in this sense the language of Newton and Maxwell will remain the language of physicists for all time’ (Bohr 1931).
---
abstract: 'Understanding the effect of glassy dynamics on the stability of bio-macromolecules, and investigating the underlying relaxation processes governing the degradation of these macromolecules, are of immense importance in the context of bio-preservation. In this work we have studied the stability of a model polymer chain in a supercooled glass-forming liquid at different amounts of supercooling, in order to understand how the dynamics of supercooled liquids influence the collapse behavior of the polymer. Our systematic computer simulation studies find that, apart from the long-time relaxation process ($\alpha$ relaxation), the short-time dynamics of the supercooled liquid, known as $\beta$ relaxation, plays an important role in controlling the stability of the model polymer. This agrees with some recent experimental findings, and stands in stark contrast with the common belief that long-time relaxation processes are the sole player. We find convincing evidence suggesting that one might need to revisit the vitrification hypothesis, which postulates that $\alpha$ relaxations control the dynamics of biomolecules and that the $\alpha$-relaxation time should therefore be the criterion for choosing appropriate bio-preservatives. We hope that our results will lead to a better understanding of the primary factors in protein stabilization in the context of bio-preservation.'
author:
- Mrinmoy Mukherjee
- Jagannath Mondal
- Smarajit Karmakar
title: 'Role of $\alpha$ and $\beta$ relaxations in Collapsing Dynamics of a Polymer Chain in Supercooled Glass-forming Liquid'
---

Introduction
============

Many organisms can survive in a dehydrated state for long periods of time by accumulating large amounts of sugars (sometimes $20-50$ $\%$ of the dry weight) [@CarpenterCroweCrowe98; @review1]. These carbohydrates (mainly trehalose and sucrose) stabilize proteins and membranes in the dry state [@review1].
There are many hypotheses for this protein stabilization; they mainly focus on the vitrification of the stabilizing sugar matrix along with the biomolecules, and on the replacement of water in the neighbourhood of the biomolecules by the sugar [@science1; @review1; @KDCRIJPharm99]. In the water replacement hypothesis, it is believed that water molecules are replaced by sugar, which provides appropriate hydrogen bonds to the polar residues of macromolecules, thereby stabilizing them thermodynamically. A slightly refined version is the water entrapment hypothesis, in which it is argued that interfacial waters stabilize local conformations of biomolecules: some regions on the surface of the biomolecule are more hydrophilic than others, which leads to preferential binding of water molecules at the biomolecule-sugar interface. The vitrification hypothesis, on the other hand, is purely kinetic. It is assumed that the carbohydrates form glasses at high concentrations or in the dry state and thereby slow down the degradation processes of biomolecules; this hypothesis mainly focuses on how glassy materials relax at long time scales. A well-known example of such a phenomenon is the preservation of insects in amber for millions of years, suggesting that vitrification is one of nature's preferred routes to bio-preservation. A recent variant of the vitrification hypothesis suggests that the shorter-time-scale $\beta$ relaxation, rather than the slower, longer-time $\alpha$ relaxation of the glass-forming liquid, actually governs the degradation of biomolecules in sugar glasses [@CiceroneDouglasSoftMatter2012; @CiceroneDouglasBioPhyJ2004]. All of these hypotheses suggest rather different approaches to designing an appropriate sugar glass that optimally increases the stability of biomolecules for preservation.
A clear understanding in this direction warrants consideration of all the relevant relaxation processes in glass-forming liquids, which we summarize below. Relaxation of density fluctuations in supercooled liquids is hierarchical and happens in multiple steps as the putative glass transition temperature is approached. After a fast initial decay, the correlation function approaches a plateau and only at much longer times decays to zero. The relaxation that happens in the plateau-like regime is called $\beta$ relaxation, and the longer-time decay from the plateau to zero is called $\alpha$ relaxation [@11BB; @arcmp; @KDSROPP16; @SK2016]. It is well known that the $\alpha$ relaxation is very heterogeneous and cooperative in nature, with an associated growing dynamic heterogeneity length scale [@KDS; @arcmp; @KDSROPP16]. The $\beta$ relaxation, on the other hand, is believed to be a more local process without any significant growth of a correlation length [@JG], but recent studies have suggested that short-time relaxation processes are probably also cooperative in nature, with a length scale that grows very similarly to the long-time dynamic heterogeneity length scale [@betaPRL; @footnote]. This indicates that if cooperative motions are required for a certain relaxation process to happen in a molecule embedded in a supercooled liquid, then both short- and long-time relaxation processes will probably play equally important roles. For example, if the $\alpha$ relaxation plays a key role in the degradation of protein molecules in a glassy matrix, its cooperative nature inducing mobility in these biomolecules, which are much larger than the solvent molecules, then the shorter-time $\beta$-relaxation process will also be able to induce such mobility, especially at lower temperatures where the $\alpha$-relaxation time becomes super-exponentially larger than the $\beta$-relaxation time. 
Indeed, a recent experiment suggests that $\beta$ relaxation plays a very important role in the preservation of proteins in sugar glasses [@CiceroneDouglasSoftMatter2012; @CiceroneDouglasBioPhyJ2004]. Although the physical and chemical processes that degrade a macromolecule are known, the microscopic mechanisms by which a glassy matrix helps to slow down these physical and chemical degradation processes of biomolecules are not clearly understood. A clear understanding of these microscopic mechanisms would reduce the trial-and-error aspect of lengthy and tedious long-term stability studies in many fields, such as food and pharmaceuticals. The goal of this work is to understand how the dynamics of a biomacromolecule might couple to the dynamics of a supercooled liquid and how the rates of different processes are modified by the embedding liquid as it is supercooled with decreasing temperature. To this end, we have quantified the collapse dynamics of a model polymer chain [@BZZJACS2009] in a well-known glass-forming liquid [@KA] using extensive molecular dynamics simulations. The use of a homopolymer as the system of choice avoids the inherent molecular heterogeneity of the diverse amino acids in a protein, for which isolating individual contributions to a glassy-matrix-induced change in stability is a difficult task; it also provides an incentive for exploring the action of glassy matrices on the hydrophobic interaction, one of the central driving forces of protein folding. The model helps us to clearly understand how crowding due to the dense packing of the embedding glassy liquid molecules influences the dynamics of a model biomolecule and whether the dynamics of the biomolecule can be slaved to the dynamics of the supercooled liquid. We find that the dynamics of a polymer chain can indeed be slaved to the dynamics of the supercooled liquid even when the polymer interacts very weakly with the liquid molecules. The rest of the paper is organized as follows. 
First we discuss the models studied and the details of the simulations, and then introduce the correlation functions that we have calculated to characterize the relevant relaxation processes in glass-forming liquids. Finally we show our results and discuss their implications in the context of bio-preservation. ![Extended configuration of the polymer inside the binary mixture. The actual size of the glassy solvent molecules has been scaled down in the figure for clarity.[]{data-label="in_confFig"}](inconf_1.pdf "fig:") Models and Methods ================== We have studied the well-known Kob-Andersen 80:20 binary glass-forming Lennard-Jones mixture [@KA] as the solvent. The interaction potential in this model is given by $$V_{AB}(r) = 4\epsilon_{AB}\left[\left(\frac{\sigma_{AB}}{r}\right)^{12} - \left(\frac{\sigma_{AB}}{r}\right)^{6}\right]$$ where $\epsilon_{AA} = 0.997$, $\epsilon_{AB} = 1.4955$, $\epsilon_{BB} = 0.4985$, $\sigma_{AA} = 0.34$, $\sigma_{AB} = 0.272$, and $\sigma_{BB} = 0.2992$. The unit of $\epsilon$ is kJ/mol and that of $\sigma$ is nm (all quantities are transformed to real units in terms of Argon). This binary mixture serves as the solvent for a $32$-bead model polymer. The polymer model closely resembles that of Berne and coworkers [@BZZJACS2009]. Each polymer bead is connected to its covalently bonded neighbor by a harmonic potential with an equilibrium bond length of $0.153$ nm (the same as the CH2-CH2 bond length). The angle between adjacent covalent bonds is represented by a harmonic potential with an equilibrium angle of $111^o$ (the same as the CH2-CH2-CH2 bond angle). The polymer is uncharged, and the beads interact among themselves and with their environment via Lennard-Jones potentials. The bead diameter is fixed at $\sigma_b=0.4$ nm and the bead-bead interaction at $\epsilon_b=11$ kJ/mol. 
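The pair potential above can be evaluated directly from the quoted parameters. The following minimal Python sketch (the function and dictionary names are ours, not from the authors' code) tabulates $V_{AB}(r)$ for the three species pairs:

```python
# Kob-Andersen 80:20 binary Lennard-Jones parameters from the text
# (epsilon in kJ/mol, sigma in nm; species pairs AA, AB, BB).
EPS = {"AA": 0.997, "AB": 1.4955, "BB": 0.4985}
SIG = {"AA": 0.34, "AB": 0.272, "BB": 0.2992}

def v_lj(r, pair):
    """Lennard-Jones potential V_pair(r) = 4*eps*[(sig/r)^12 - (sig/r)^6]."""
    sr6 = (SIG[pair] / r) ** 6
    return 4.0 * EPS[pair] * (sr6 ** 2 - sr6)

# The minimum of V_AA lies at r = 2^(1/6) * sigma_AA, with depth -epsilon_AA.
r_min = 2 ** (1 / 6) * SIG["AA"]
```

At $r=\sigma$ the potential vanishes, and the well depth at $r_{\min}$ equals $-\epsilon$, which is a quick sanity check on the parameter tables.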
Non-bonded interactions between a bead and its first and second nearest neighbors were excluded, and no dihedral interaction terms were included. The hydrophobic character of the chain can be tuned by varying the interaction between the polymer beads and the particles constituting the glassy matrix using geometric combination rules. Specifically, the polymer-solvent interaction strengths are given by $\sqrt{\epsilon_p \epsilon_{AA}}$ and $\sqrt{\epsilon_p \epsilon_{BB}}$, where we have independently used the values $\epsilon_p=0.1, 1.0$ and $3.0$ kJ/mol in separate simulations to tune the polymer-liquid interactions. In essence, $\epsilon_p$ denotes the polymer contribution to the polymer-solvent interaction. Similarly, the polymer-solvent interaction ranges are given by $\sqrt{\sigma_b \sigma_{AA}}$ and $\sqrt{\sigma_b \sigma_{BB}}$. A cutoff of $1.2$ nm was used for the non-bonded interactions, and periodic boundary conditions were applied in all dimensions. The temperature range studied for this model is $50-120$ K. The binary mixture contains $9600$ particles in a cubic box of side $6.8$ nm. The same average density was maintained throughout the simulations. All molecular dynamics simulations were performed using the GROMACS 5.1.4 software. We solvated the energy-minimized extended configuration of the polymer chain in the energy-minimized binary mixture for each case. The systems were first energy-minimized by the steepest descent algorithm, then equilibrated for $100$ ps at $260$ K in the NVT ensemble and subsequently for $200$ ps in the NPT ensemble. The systems were then annealed to the desired temperatures at a cooling rate of $0.5$ K/ps and subjected to an NPT equilibration of $20-1500$ ns depending on the temperature. Note that the equilibration runs at each temperature are at least $100\tau_{\alpha}$ or longer. 
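The geometric combination rules above can be spelled out concretely. A small illustrative sketch (the function name is our own) reproduces the mixed parameters for the three $\epsilon_p$ cases studied:

```python
import math

# Solvent parameters from the text (kJ/mol, nm) and the polymer bead diameter.
eps_AA, eps_BB = 0.997, 0.4985
sig_AA, sig_BB = 0.34, 0.2992
sig_b = 0.4  # nm

def polymer_solvent_params(eps_p):
    """Geometric combination rules: eps_mix = sqrt(eps_p*eps_X),
    sigma_mix = sqrt(sigma_b*sigma_X) for solvent species X in {A, B}."""
    return {
        "A": (math.sqrt(eps_p * eps_AA), math.sqrt(sig_b * sig_AA)),
        "B": (math.sqrt(eps_p * eps_BB), math.sqrt(sig_b * sig_BB)),
    }

# The three polymer-solvent interaction strengths studied in the text:
mixed = {eps_p: polymer_solvent_params(eps_p) for eps_p in (0.1, 1.0, 3.0)}
```

Since $\epsilon_p$ enters only under a square root, tripling it from 1.0 to 3.0 kJ/mol increases the polymer-solvent attraction by a factor of $\sqrt{3}\approx 1.73$, not 3.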
$\tau_{\alpha}$ is the $\alpha$-relaxation time (defined later) of the solvent glass-forming liquid. Finally the systems were subjected to production runs in the NPT ensemble. The reference pressure for the NPT simulations in the last part of the equilibration and in the production run was the average pressure obtained from the equilibrated binary mixture without the polymer. The integration time step used is $dt = 0.002$ ps. The Berendsen and V-rescale thermostats were used during equilibration and production runs, respectively, to maintain the average temperature; likewise, the Berendsen and Parrinello-Rahman barostats were used to keep the pressure fixed during equilibration and production runs, respectively. Results and Discussion ====================== All our simulations start with an extended configuration of the polymer (radius of gyration $R_G = 1.2$ nm), as shown in Fig.\[in\_confFig\], in a well-equilibrated supercooled liquid state of the solvent mixture at the studied temperatures in the range $T\in [50K,120K]$, and we explore the transition of the polymer from the extended to the collapsed conformation during the course of the simulation. The time profile of the radius of gyration of the polymer is calculated (as shown in the top right panel of Fig.\[fig2label\] for $T = 50K$) to quantify the collapse dynamics of the polymer, and the collapse time $\tau_c$ (defined later) is estimated by identifying the time of the sharp transition from the extended to the collapsed conformation at each supercooling temperature. This collapse timescale is then compared with the intrinsic relaxation timescale ($\tau_{\alpha}$) of the glassy liquid. In the top left panel of Fig.\[fig2label\], we show one such instance of the collapsed configuration of the polymer. The solvent molecules of the binary supercooled liquid are also shown, with their actual size reduced for clarity. ![Top left panel: Typical snapshot of the collapsed state of the polymer in the supercooled liquid. 
Top right panel: The time profile of the radius of gyration of the polymer chain at $T = 50K$. The step-like change in the radius of gyration suggests that the collapse transition of this polymer is very sharp. Bottom left panel: Timescale for degradation of the polymer inside the supercooled liquid and in the gas phase. Notice the stark difference between the collapse timescales of the polymer in the gas phase and in the supercooled liquid at different temperatures. Bottom right panel: Temperature dependence of the collapse time for three different cutoff radii of gyration defining the collapsed state of the polymer, respectively $0.45$, $0.50$ and $0.60$ nm. []{data-label="fig2label"}](fig2.pdf "fig:") The choice of a large intra-bead interaction parameter ($\epsilon_b = 11$ kJ/mol) gives the polymer a strong propensity to collapse and hence allows us to observe its collapse behavior over the entire temperature range of interest, $T\in [50K,120K]$, within the simulation timescale. In the gas phase the polymer thus collapses very quickly, with a very weak temperature dependence, as shown in the bottom left panel of Fig.\[fig2label\] (green square symbols). The reason for choosing such polymer parameters is to explore whether the dynamics of the supercooled liquid can slave the dynamics of the polymer even when the polymer interacts weakly with the liquid compared to its own interaction strength. In the same panel we also show, for comparison, the collapse timescale (red circles) when the polymer is immersed in the supercooled liquid with a particular interaction (discussed in detail later). The change in the collapse timescale compared to the gas phase is dramatic. This clearly demonstrates why glassy matrices are chosen for bio-preservation. We have used three different $\epsilon_p$ values to control the interactions between the liquid molecules and the polymer beads: $0.1, 1.0$ and $3.0$ kJ/mol. 
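The collapse time is read off the $R_G(t)$ profile as the first crossing of a cutoff radius (the text uses $0.50$ nm). A minimal sketch, assuming the trajectory is available as NumPy arrays; the synthetic step profile below merely mimics the sharp transition seen in the figure:

```python
import numpy as np

def collapse_time(times, rg, rg_cut=0.50):
    """First time (same units as `times`) at which the radius of gyration
    drops below rg_cut (nm). Returns None if the chain never collapses."""
    below = np.nonzero(rg < rg_cut)[0]
    return times[below[0]] if below.size else None

# Synthetic, sharply collapsing profile: extended (1.2 nm) -> collapsed (0.45 nm).
t = np.linspace(0.0, 100.0, 1001)
rg = np.where(t < 60.0, 1.2, 0.45)
tau_c = collapse_time(t, rg)
```

Because the transition is step-like, the estimate is insensitive to the exact cutoff, consistent with the three-cutoff comparison in the bottom right panel of the figure.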
Before discussing our main observations, we briefly describe how the supercooled liquid is characterized. The relaxation time is measured from the decay of a modified version of the two-point density-density correlation function $Q(t)$, also known as the overlap correlation function [@KDS]. It is defined as $$Q(t) = \frac{1}{N}\sum_{i=1}^{N} w\left(|\vec{r}_i(0)-\vec{r}_i(t)|\right)$$ where $\vec{r}_i(t)$ is the position of particle $i$ at time $t$ and $N$ is the total number of particles. The window function is $w(x) = 1$ if $x \le a$ and $0$ otherwise, where $a$ is a cutoff distance of the order of the plateau value reached by the root mean square displacement of the particles as a function of time before it increases linearly at long times. The precise choice of $a$ is qualitatively unimportant. This window function is chosen to remove any decorrelation that might arise from vibrational motions of the solvent particles inside the cages formed by their neighbours. In this study we have taken $a^2 = 0.006\ nm^2$. The relaxation time $\tau_{\alpha}$ is defined by $\langle Q(t = \tau_{\alpha})\rangle = 1/e$, where $\langle\ldots\rangle$ denotes an ensemble average. The collapse time ($\tau_c$) of the polymer chain is obtained from the time dependence of its radius of gyration ($R_G$). In the top panels of Fig.\[fig2label\], we show $R_G$ as a function of time close to the collapse transition at $T = 50K$ as an illustrative collapse profile. In all our analysis, we consider the chain to be in the collapsed state once $R_G$ reaches $0.50$ nm. In the bottom right panel of Fig.\[fig2label\], we show the temperature dependence of the collapse time for three different cutoff radii of gyration defining the collapsed state, namely $0.45$, $0.50$ and $0.60$ nm. As evident, a different choice of the cutoff radius of gyration used to define the collapsed state does not change the results qualitatively. 
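The overlap function is straightforward to estimate numerically. A minimal sketch (our own representation, not the authors' code), assuming configurations are stored as $(N,3)$ position arrays:

```python
import numpy as np

def overlap(pos0, pos_t, a2=0.006):
    """Normalized overlap Q(t) between the configuration at time 0 and time t.
    pos0, pos_t: (N, 3) arrays of particle positions (nm); a2 is the cutoff
    a^2 (nm^2). Each particle contributes w = 1 if |r_i(0)-r_i(t)| <= a."""
    disp2 = np.sum((pos_t - pos0) ** 2, axis=1)
    return np.mean(disp2 <= a2)  # the window function, averaged over particles

# tau_alpha is then the time at which the ensemble-averaged Q(t) decays to 1/e.
```

With the normalization by $N$, $Q(0)=1$ and $Q(t)$ decays toward zero, so the $\langle Q(\tau_\alpha)\rangle = 1/e$ criterion applies directly.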
![Comparison of timescales for a polymer with $\epsilon_b$=11 kJ/mol and $\epsilon_p$=3 kJ/mol in the studied temperature range. The lines are fits to the data with the VFT formula (see text for details). The VFT divergence temperatures for both timescales are found to be equal to $38K$, suggesting a strong coupling between the $\alpha$-relaxation time and the collapse time of the polymer.[]{data-label="tauVsCollapsed3_11"}](timeScale_3_11.pdf) Next we compare the $\alpha$-relaxation time $\tau_{\alpha}$ of the supercooled liquid and the collapse time $\tau_c$ for a situation where the solvent molecules interact somewhat strongly with the polymer chain. Specifically, the polymer-solvent interaction is tuned by using $\epsilon_p$ = 3.0 kJ/mol, keeping the polymer bead-bead interaction fixed at $\epsilon_b$ = 11 kJ/mol. In Fig.\[tauVsCollapsed3\_11\] we plot the collapse time of the polymer along with the $\alpha$-relaxation time of the liquid at different temperatures for this particular choice of parameters. It is clear that, at least in the studied temperature regime, $\tau_\alpha$ controls the degradation rate, supporting the “Vitrification Hypothesis”. One may infer that better stability requires a larger $\tau_\alpha$ of the preservative. We have fitted both $\tau_{\alpha}$ and $\tau_c$ by the Vogel-Fulcher-Tammann (VFT) formula [@vft], $\tau = \tau_0 \exp{\left(A/(T-T_0)\right)}$, where $\tau_0$, $A$ and $T_0$ are free parameters. $T_0$ is known as the VFT divergence temperature and is very close to the Kauzmann temperature [@kauz]. The divergence temperatures for both $\tau_{\alpha}$ and $\tau_c$ are found to be close to $38K$, suggesting a strong coupling between the dynamics of the supercooled liquid and the collapse dynamics of the polymer chain. Note that the polymer chain collapses very rapidly in the gas phase, whereas its dynamics is now slaved to the dynamics of the solvent glassy liquid. 
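The VFT fit can be reproduced with a standard nonlinear least-squares routine. A sketch using SciPy on synthetic data with a known divergence temperature; fitting in log space is our choice to tame the enormous dynamic range of relaxation times, not something the paper specifies:

```python
import numpy as np
from scipy.optimize import curve_fit

def vft(T, tau0, A, T0):
    """Vogel-Fulcher-Tammann law: tau = tau0 * exp(A / (T - T0))."""
    return tau0 * np.exp(A / (T - T0))

def log_vft(T, log_tau0, A, T0):
    # Same law in log space; relaxation times span many decades.
    return log_tau0 + A / (T - T0)

# Synthetic relaxation times with known parameters (tau0=1, A=300 K, T0=38 K),
# standing in for measured tau_alpha(T); the fit should recover T0 ~ 38 K.
T = np.array([50.0, 60.0, 70.0, 85.0, 100.0, 120.0])
tau = vft(T, 1.0, 300.0, 38.0)
popt, _ = curve_fit(log_vft, T, np.log(tau), p0=(0.0, 250.0, 30.0), maxfev=10000)
log_tau0_fit, A_fit, T0_fit = popt
```

Comparing the extrapolated $T_0$ of $\tau_\alpha(T)$ and $\tau_c(T)$, as done in the figure, is then a one-line check once both data sets have been fitted.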
Next we look at the other extreme, in which we choose an interaction parameter such that the polymer chain interacts very weakly with the solvent liquid molecules. We choose the value of $\epsilon_p$ contributing to the solvent-polymer interaction to be $0.1$ kJ/mol, so in this limit the polymer dynamics will be affected (if at all) mainly by the crowding effect of the glassy solvent molecules. In Fig.\[tauVsCollapsed01\_11\], we show the temperature dependence of the two timescales and, surprisingly, they cross each other at some intermediate temperature, $T \sim 55K$ in this case. The corresponding VFT fits also suggest that the extrapolated divergence temperatures are very different from each other. This contrasts with our previous observation, where the two timescales were more or less proportional to each other. This new result suggests that the $\alpha$-relaxation time alone is not the main controlling parameter, especially at lower temperatures, where a much faster relaxation process, probably the $\beta$-relaxation process, seems to play a role in the dynamics of the polymer chain. This is in complete agreement with recent experimental observations [@CiceroneDouglasSoftMatter2012; @CiceroneDouglasBioPhyJ2004], where it is suggested that protein preservation in sugar glasses is directly linked to the high-frequency $\beta$-relaxation process, as protein stability seems to increase almost linearly with $\tau_{\beta}$ when $\tau_{\beta}$ is increased by adding anti-plasticizing additives. These additives are found to increase the $\beta$-relaxation time even though they decrease the $\alpha$-relaxation time [@review1]. In a bid to further understand whether it is the glassy dynamics that slows down the collapse dynamics of the polymer chain, we performed quenching studies in which we decrease the temperature of the supercooled liquid rapidly from its initial equilibrium temperature. 
It is well known that if a glass-forming liquid is quenched to low temperature it shows aging: initially it relaxes on almost the same timescale as at the initial temperature from which it was quenched, and the relaxation time then gradually increases with increasing waiting time. Now, if the dynamics of the polymer chain is slaved to the dynamics of the supercooled liquid, then upon quenching the whole system the polymer should still be able to collapse on a timescale that is almost the same as at the initial temperature from which it is cooled. ![Comparison of timescales for a polymer with $\epsilon_b$=11 kJ/mol and $\epsilon_p$=0.1 kJ/mol in the temperature range of interest. The lines are fits to the VFT formula (see text). $\tau_c^{q}$ is the collapse time obtained in the quench studies (see the text for further details).[]{data-label="tauVsCollapsed01_11"}](timeScaleBioPolymer.pdf) In Fig.\[tauVsCollapsed01\_11\], we show that the collapse time (referred to here as $\tau_c^q$) seems to depend on the initial temperature ($T = 120K$) from which the system is quenched, irrespective of the final temperatures (green diamonds, $T = 70, 60, 55, 50K$ respectively). In all these quench studies, the equilibrium collapse time is many orders of magnitude larger than the time obtained if the system is quenched to these temperatures from high temperature. This observation corroborates an old experimental finding [@mazurSchmidt], in which it was noted that the survival probability of frozen and thawed yeast is orders of magnitude higher if it is cooled very slowly. We then increase the polymer-solvent interaction a bit by raising $\epsilon_p$ to an intermediate value of $1$ kJ/mol, to see whether the two timescales still cross each other in an accessible temperature range. 
In Fig.\[tauVsCollapsed1\_11\], we indeed see the crossing of these two timescales, but now at a temperature lower than that observed in the previous case with polymer-solvent interaction $0.1$ kJ/mol. With this particular parameter, the crossover temperature moves to $50K$. Thus we can expect that above some intermediate value of $\epsilon_p$ in the range $1-3$ kJ/mol governing the polymer-solvent interaction, the collapse dynamics of the polymer chain will be completely controlled by the $\alpha$ relaxation, while below it the short-time $\beta$ relaxation will also be important. At this point we cannot rule out the other possibility, that these two timescales cross over for the entire parameter range, as our conclusions are based on extrapolations using the VFT formula. ![Temperature dependence of the $\alpha$-relaxation time and the collapse time for $\epsilon_p$ equal to $1.0$ kJ/mol. The crossover temperature is now shifted to a lower temperature compared to the case where $\epsilon_p$ was $0.1$ kJ/mol.[]{data-label="tauVsCollapsed1_11"}](timeScale_1_11.pdf) In conclusion, we have shown that the dynamics of a supercooled glass-forming liquid plays a major role in controlling the collapse dynamics of a polymer chain at various temperatures. At sufficiently strong polymer-solvent interaction, the polymer can be completely slaved to the long-time $\alpha$ relaxation of the glassy liquid; on the other hand, at low polymer-solvent interaction strength, where the polymer is passive to the liquid and only the packing of the solvent molecules around the polymer is relevant, both the short-time $\beta$ and the long-time $\alpha$ relaxations play intricate roles in different temperature regimes. We have also shown that the coupling between the solvent dynamics and the polymer becomes weak if one quenches from high temperature, due to aging in the glassy liquid. This suggests that flash freezing might not be a good method if one wants to preserve a biomolecule in a glassy matrix. 
Thus the “Vitrification Hypothesis”, although it might be valid for some biomacromolecules, needs serious revision to include the effect of shorter time-scale processes like the $\beta$ relaxation in order to better understand bio-preservation in glassy sugar matrices. In a recent work [@KwonPRL2017], it is shown that the reaction kinetics of polymer collapse depends on the viscosity of supercooled liquids with a fractional power. This strongly supports the findings reported in this work. Finally, our model studies do not incorporate complicated interactions like hydrogen bonding or the complex structural aspects of biomolecules; it will thus be important to do further studies to understand how these different factors influence the results reported here. [100]{} J.F. Carpenter, J.H. Crowe and L.M. Crowe, The role of vitrification in anhydrobiosis. Annu. Rev. Physiol., [**60**]{}, 73–103 (1998). M.A. Mensink, H.W. Frijlink, K.V. Maarschalk, W.L.J. Hinrichs, How sugars protect proteins in the solid state and during drying (review): Mechanisms of stabilization in relation to stress conditions, European Journal of Pharmaceutics and Biopharmaceutics [**114**]{}, 288–295, (2017). A. Ansari, C.M. Jones, E.R. Henry, J. Hofrichter, W.A. Eaton, The role of solvent viscosity in the dynamics of protein conformational changes, Science [**256**]{}, 1796-1798 (1992). V.L. Kett, M.L.H. Duncan, Q.M. Craig, P.G. Royall, The relevance of the amorphous state to pharmaceutical dosage forms: glassy drugs and freeze dried systems. International Journal of Pharmaceutics, [**179**]{}, 179–207, (1999). M.T. Cicerone and J.F. Douglas, $\beta$-relaxation governs protein stability in sugar-glass matrices. Soft Matter, [**8**]{}, 2983, (2012). M.T. Cicerone and C.L. Soles, Fast dynamics and stabilization of proteins: Binary glasses of trehalose and glycerol, Biophysical Journal, [**86**]{}, 3836–3845, (2004). L. Berthier and G. 
Biroli, Theoretical perspective on the glass transition and amorphous materials, Rev. Mod. Phys. [**83**]{},587–645, (2011) S. Karmakar, C. Dasgupta, and S. Sastry, Growing length scales and their relation to timescales in glass-forming liquids, Annu. Rev. Condens. Matter Phys. [**5**]{}, 255 (2014). S. Karmakar, C. Dasgupta and S. Sastry, Length scales in glass-forming liquids and related systems: a review, Rep. Prog. Phys., [**79**]{}, 2016. S. Karmakar, An Overview on Short and Long Time Relaxations in Glass-forming Supercooled Liquids, Journal of Physics: Conf. Series [**759**]{}, 012008 (2016). S. Karmakar, C. Dasgupta, and S. Sastry, Growing length and time scales in glass-forming liquids, Proc. Nat. Acad. Sci (USA) [**106**]{}, 3675 (2009). G. P. Johari and M. Goldstein, J. Chem. Phys. [**53**]{} 2372 (1970). S. Karmakar, C. Dasgupta and S. Sastry, Short-time beta relaxation in glass-forming liquids is cooperative in nature, Phys. Rev. Lett. [**116**]{}, 085701 (2016). The $\beta$ relaxation discussed here is defined according to the mode coupling theory \[W. Götze, [*Complex Dynamics of Glass-Forming Liquids: A Mode-Coupling Theory*]{} (Oxford University Press, 2009)\]. It is believed to be very different from the well studied Johari-Goldstein process Ref.[@JG]. B.J. Berne R. Zangi, R. Zhou, Urea’s action on hydrophobic interactions, J. Am. Chem. Soc., [**131**]{}, 1535–1541, (2009). W. Kob and H.C. Andersen, Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture I: The van Hove correlation function Phys. Rev. E [**51**]{}, 4626 (1995). H. Vogel, [Z. Phys.]{} [**22**]{} 645 (1921), G.S. Fulcher, [J. Amer. Ceram. Soc.]{} [**8**]{} 339 (1925), D. Tammann [J. Soc. Glass Technol.]{} [**9**]{} 166 (1925). W. Kauzmann [Chem. Rev.]{} [**48**]{} 219 (1948). P. Mazur and J.J. Schmidt, Interactions of cooling velocity, temperature, and warming velocity on the survival of frozen and thawed yeast, Cryobiology [**5**]{} 1-17 (1968). S. Kwon, H.W. 
Cho, J. Kim, and B.J. Sung, Fractional Viscosity Dependence of Reaction Kinetics in Glass-Forming Liquids, Phys. Rev. Lett. [**119**]{}, 087801 (2017).
--- abstract: 'We introduce iposets—posets with interfaces—equipped with a novel gluing composition along interfaces and the standard parallel composition. We study their basic algebraic properties as well as the hierarchy of gluing-parallel posets generated from singletons by finitary applications of the two compositions. We show that not only series-parallel posets, but also interval orders, which seem more interesting for modelling concurrent and distributed systems, can be generated, but not all posets. Generating posets is also important for constructing free algebras for concurrent semirings and Kleene algebras that allow compositional reasoning about such systems.' author: - 'Uli Fahrenberg$^1$, Christian Johansen$^2$, Georg Struth$^3$, Ratan Badahur Thapa$^2$' bibliography: - 'mybib.bib' title: 'Generating Posets beyond $\N$' --- Introduction ============ This work is inspired by Tony Hoare’s programme of building graph models of concurrent Kleene algebra ($\CKA$) [@DBLP:journals/jlp/HoareMSW11] for real-world applications. $\CKA$ extends the sequential compositions, nondeterministic choices and unbounded finite iterations of imperative programs modelled by Kleene algebra into concurrency, adding operations of parallel composition and iteration, and a weak interchange law for the sequential-parallel interaction. Such algebras have a long history in concurrency theory, dating back at least to Winkowski [@DBLP:journals/ipl/Winkowski77]. Commutative Kleene algebra—the parallel part of $\CKA$—has been investigated by Pilling and Conway [@Conway71]. A double semiring with weak interchange—$\CKA$ without iteration—has been introduced by Gischer [@DBLP:journals/tcs/Gischer88]; its free algebras have been studied by Bloom and Ésik [@DBLP:journals/tcs/BloomE96a]. $\CKA$, like Gischer’s concurrent semiring, has both interleaving and true concurrency models, e.g. shuffle as well as pomset languages. 
Series-parallel pomset languages, which are generated from singletons by finitary applications of sequential and parallel compositions, form free algebras in this class [@DBLP:journals/corr/LaurenceS17; @DBLP:conf/esop/KappeB0Z18] (at least when parallel iteration is ignored). The inherent compositionality of algebra is thus balanced by the generative properties of this model. Yet despite this and other theoretical work, applications of $\CKA$ remain rare. One reason is that series-parallel pomsets are not expressive enough for many real-world applications: even simple producer-consumer examples cannot be modelled [@DBLP:journals/tcs/LodayaW00]. *Tests*, which are needed for the control structure of concurrent programs and as assertions, are hard to capture in models of $\CKA$ (see [@DBLP:journals/jlp/JipsenM16] and its discussion in [@DBLP:conf/concur/KappeBRSWZ19]). Finally, it remains unclear how modal operators could be defined over graph models akin to pomset languages, which is desirable for concurrent dynamic algebras and logics beyond alternating nondeterminism [@DBLP:journals/jacm/Peleg87; @DBLP:journals/tocl/FurusawaS15]. A natural approach to generating more expressive pomset languages is to “cut across” pomsets in more general ways when (de)composing them. This can be achieved by (de)composing along interfaces, and this idea can be traced back again to Winkowski [@DBLP:journals/ipl/Winkowski77]; see also [@DBLP:books/daglib/0030804; @DBLP:conf/birthday/FioreC13; @DBLP:journals/corr/Mimram15] for interface-based compositions of graphs and posets, or [@DBLP:conf/RelMiCS/HoareSMSVZO14; @DBLP:conf/mpc/MollerH15; @DBLP:conf/utp/MollerHMS16] for recent interface-based graph models for $\CKA$. As a side effect, interfaces may yield notions of tests or modalities. When they consist of events, cutting across them presumes that they extend in time and thus form intervals. 
Interval orders [@Wiener14; @journals/mpsy/Fishburn70] of events with duration have been applied widely in partial order semantics of concurrent and distributed systems [@DBLP:journals/dc/Lamport86; @DBLP:journals/jacm/Lamport86a; @DBLP:conf/parle/GlabbeekV87; @DBLP:conf/ifip2/Glabbeek90; @DBLP:journals/dc/Vogler91; @DBLP:books/sp/Vogler92; @DBLP:journals/tcs/JanickiK93] and the verification of weak memory models [@DBLP:journals/toplas/HerlihyW90], yet generating them remains an open problem [@DBLP:journals/iandc/JanickiY17]. Our main contribution lies in a new class and algebra of posets with interfaces (*iposets*) based on these ideas. We introduce a new gluing composition that acts like standard serial po(m)set composition outside of interfaces, yet glues together interface events, thus composing events that did not end in one component with those that did not start in the other one. Our definitions are categorical so that isomorphism classes of posets are considered ab initio. Their decoration with labels is then trivial, so that we may focus on posets instead of pomsets. Our main technical results concern the hierarchy of gluing-parallel posets generated by finitary applications of this gluing composition and the standard parallel composition of po(m)sets, starting from singleton iposets.[^1] It is obvious that all series-parallel pomsets can be generated; in addition, all interval orders are captured at the second alternation level of the hierarchy. Beyond that, we show that the gluing-parallel hierarchy does not collapse and that posets with certain zigzag-shaped induced subposets are excluded. Yet a precise characterisation of the generated (i)posets remains open. Series-parallel posets, by comparison, exclude precisely those posets with induced $\N$-shaped subposets; interval orders exclude precisely those with induced subposets $\twotwo$, which makes the two classes incomparable. 
Iposets thus retain at least the pleasant compositionality properties of series-parallel pomsets and the wide applicability of interval orders in concurrency and distributed computing. In addition, we establish a bijection between isomorphism classes of interval orders and certain equivalence classes of interval sequences [@DBLP:conf/ifip2/Glabbeek90], and we study the basic algebraic properties of iposets, including weak interchange laws and a Levi lemma. The relationship between gluing-parallel ipo(m)set languages and $\CKA$ is left for another article. Posets and Series-Parallel Posets {#se:posets} ================================= A *poset* $(P,\mathord{\le})$ is a set $P$ equipped with a *partial order* $\mathord\le$: a reflexive, transitive, antisymmetric relation $\mathord\le$ on $P$. A *morphism* of posets $P$ and $Q$ is an order-preserving function $f: P\to Q$, that is, $x\le_P y$ implies $f( x)\le_Q f( y)$. Posets and their morphisms define the category $\Pos$. A poset is *linear* if each pair of elements is comparable with respect to its order. We write $\mathord<$ for the strict part of $\le$. We write $[ n]$, for $n\ge 1$, for the *discrete $n$-poset* $(\{ 1,\dotsc, n\},\mathord{\le})$, which satisfies $i\le j\Leftrightarrow i= j$. Additionally, $[ 0]= \emptyset$. The isomorphisms in $\Pos$ are *order bijections*: bijective functions $f: P\to Q$ for which $x\le_P y\Leftrightarrow f( x)\le_Q f( y)$. We write $P\cong Q$ if posets $P$ and $Q$ are isomorphic. We generally consider posets up to isomorphism and assume, moreover, that all posets are finite. Concurrency theory often considers (isomorphism classes of) posets with points labelled by letters from some alphabet, which represent actions of some concurrent system. These are known as *partial words* or *pomsets*. As we are mainly interested in structural aspects of concurrency, we ignore such labels. 
Series-parallel posets form a well investigated class that can be generated from the singleton poset by finitary applications of two compositions. Their labelled variants generalise rational languages to concurrency. For arbitrary posets, these compositions are defined as follows. Let $P_1=( P_1, \mathord{ \le_1})$ and $P_2=( P_2, \mathord{ \le_2})$ be posets. 1. Their *serial composition* is the poset $P_1\pomser P_2=( P_1\sqcup P_2, \mathord\le_1\cup \mathord\le_2\cup P_1\times P_2)$. 2. Their *parallel composition* is the poset $P_1\otimes P_2=( P_1\sqcup P_2,\mathord\le_1\cup \mathord\le_2)$. Here, $\sqcup$ denotes the disjoint union (coproduct) of sets. We generalise serial composition to a gluing composition in Section \[se:iposets\], after equipping posets with interfaces. Serial and parallel composition respect isomorphism, and $[ n+ m]$ is isomorphic to $[ n]\otimes[ m]$ with isomorphism $\varphi_{ n, m}:[ n+ m]\to[ n]\otimes[ m]$ given by $$\varphi_{ n, m}( i)= \begin{cases} i_{[ n]} &\text{if } i\le n\,, \\ ( i- n)_{[ m]} &\text{if } i> n\,. \end{cases}$$ By definition, a poset is *series-parallel* (an *sp-poset*) if it is either empty or can be obtained from the singleton poset by applying the serial and parallel compositions a finite number of times. It is well known [@DBLP:journals/siamcomp/ValdesTL82; @DBLP:journals/fuin/Grabowski81] that a poset is series-parallel iff it does not contain the induced subposet $\N= \pomset{\cdot \ar[r] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot}$.[^2] Sp-po(m)sets form bi-monoids with respect to serial and parallel composition, and with the empty poset as shared unit—in fact the free algebras in this class. Compositionality of the recursive definition of sp-po(m)sets is thus reflected by the compositionality of their algebraic properties, which is often considered a desirable property of concurrent systems [@DBLP:books/sp/Vogler92].
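These definitions are easy to animate. The following Python sketch (all names are ours, not from the literature) represents a finite poset by its point set and its transitively closed set of strict-order pairs, implements the two compositions, and decides series-parallelity via the induced-$\N$ characterisation just cited:

```python
from itertools import permutations

# A finite poset is a pair (points, lt): a set of points and the set of
# pairs (x, y) with x < y, assumed transitively closed.

def serial(p1, p2):
    """Serial composition: every point of p1 precedes every point of p2.
    The point sets are assumed disjoint."""
    (pts1, lt1), (pts2, lt2) = p1, p2
    return (pts1 | pts2, lt1 | lt2 | {(x, y) for x in pts1 for y in pts2})

def parallel(p1, p2):
    """Parallel composition: disjoint union, no order across components."""
    (pts1, lt1), (pts2, lt2) = p1, p2
    return (pts1 | pts2, lt1 | lt2)

def is_series_parallel(poset):
    """A poset is series-parallel iff it contains no induced subposet
    N = {a < b, c < b, c < d} with all remaining pairs incomparable."""
    pts, lt = poset
    for a, b, c, d in permutations(pts, 4):
        if ((a, b) in lt and (c, b) in lt and (c, d) in lt
                and not {(a, c), (c, a), (a, d), (d, a), (b, d), (d, b)} & lt):
            return False
    return True
```

On the poset $\N$ itself the test fails, while every poset built from singletons with `serial` and `parallel` passes.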
Yet sp-posets are, in fact, too compositional for many applications: even simple consumer-producer problems inevitably generate $\N$’s [@DBLP:journals/tcs/LodayaW00], as shown in Fig. \[fi:prodcon\], which contains the $\N$ spanned by $c_1$, $c_2$, $p_2$, and $p_3$ as an induced subposet, among others.

[Figure \[fi:prodcon\]: producer-consumer poset with producer events $p_1, p_2, p_3, p_4, \dotsc$ above consumer events $c_1, c_2, c_3, c_4, \dotsc$]

Interval orders and interval sequences {#S:interval-orders} ====================================== Interval orders [@Wiener14; @journals/mpsy/Fishburn70] form another class of posets that are ubiquitous in concurrent and distributed computing. Intuitively, they are isomorphic to sets of intervals on the real line that are ordered whenever they do not overlap. An *interval order* is a relational structure $(P,<)$ with $<$ irreflexive such that $w< y$ and $x< z$ imply $w< z$ or $x< y$, for all $w,x,y,z\in P$. Transitivity of $<$ follows. An alternative characterisation is that interval orders are precisely those posets that do not contain the induced subposet $\twotwo= \pomset{\cdot \ar[r] & \cdot \\ \cdot \ar[r] & \cdot}$. The interval intuition is captured by Fishburn’s theorem [@journals/mpsy/Fishburn70], which implies that a finite poset $P$ is an interval order iff it has an *interval representation*: a pair of functions $b,e:P\to Q$ into some linear order $(Q,<_Q)$ such that $b(x)<_Q e(x)$, for all $x\in P$, and $x<_P y \Leftrightarrow e(x)<_Q b(y)$, for all $x,y\in P$. By the first condition, pairs $(b(x),e(x))$ correspond to intervals $I(x)=[b(x),e(x)]$ in $Q$; by the second condition, $x<_P y$ iff $I( x)$ lies entirely before $I(y)$ in $Q$. We write $\irep(P)$ for the set of interval representations of $P$.
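Both the axiom and the interval intuition can be checked mechanically. In the sketch below (our names and encoding, assumed for illustration), a poset is given by its set of strict-order pairs, and an interval representation by numeric endpoint pairs; `order_of` recovers the induced interval order:

```python
def is_interval_order(lt):
    """Defining axiom: w < y and x < z imply w < z or x < y.
    lt is the set of strict-order pairs of a finite poset."""
    return all((w, z) in lt or (x, y) in lt
               for (w, y) in lt for (x, z) in lt)

def order_of(intervals):
    """Induced order of an interval representation: x < y iff I(x) lies
    entirely before I(y). intervals maps each point to a pair (b, e)."""
    return {(x, y) for x, (_, ex) in intervals.items()
            for y, (by, _) in intervals.items() if ex < by}

def interval_sequence(intervals):
    """Closed interval sequence: all 2|P| endpoints, sorted along the
    line and tagged 'b' or 'e'. Endpoints are assumed pairwise distinct."""
    events = [(t, kind, x) for x, be in intervals.items()
              for kind, t in zip('be', be)]
    return [(kind, x) for _, kind, x in sorted(events)]
```

For instance, the representation $I(c)=[0,1]$, $I(a)=[0.5,2.5]$, $I(d)=[2,4]$, $I(b)=[3,5]$ induces the poset $\N$, which is an interval order, whereas the order relation of $\twotwo$ fails the axiom.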
Each representation can be rearranged such that all endpoints of intervals are distinct [@GolumbicT04 Lemma 1.5]. We henceforth assume that all interval representations have this property. It then holds that $|Q|=2|P|$, and we can fix $Q$ as the target type of any interval representation of $P$. Finally, with the relation $\sqsubset$ on the set of maximal antichains of a poset $P$ given by $$A\sqsubset B \Leftrightarrow A\neq B \wedge (\exists x\in A\setminus B.\ \exists y\in B\setminus A.\ x< y),$$ it has been shown that $P$ is an interval order iff $\sqsubset$ is a strict linear order [@book/Fishburn85]. Interval orders also occur implicitly in the ST-traces of Petri nets [@DBLP:conf/ifip2/Glabbeek90]. In a pure order-theoretic setting, these are *interval sequences*, that is, sequences of $b(x)$ and $e(x)$, with $x$ from some finite set $P$, in which each $b(x)$ occurs exactly once and each $e(x)$ at most once and only after the corresponding $b(x)$. An interval sequence is *closed* if each $e(x)$ occurs exactly once [@DBLP:conf/ifip2/Glabbeek90; @DBLP:books/sp/Vogler92]. An *interval trace* [@DBLP:journals/iandc/JanickiY17] is an equivalence class of interval sequences modulo the relations $b(x)b(y) \approx b(y)b(x)$ and $e(x)e(y) \approx e(y)e(x)$ for all $x,y\in P$. We write $\approx^\ast$ for the congruence generated by $\approx$ on interval sequences. We identify interval sequences and interval traces with the Hasse diagrams of their linear orders over $Q$. Let $P$ be an interval order and $(b,e)\in \irep(P)$. Then $(Q, <_Q)$ is a closed interval sequence. The proof is trivial. We write $\sigma_{(b,e)}(P)$ for the interval sequence of interval order $P$ and $(b,e)\in \irep(P)$, and $\Sigma(P)$ for the set of all interval sequences of interval representations of $P$. \[P:irep1\] If $\sigma\in \Sigma(P)$ and $\sigma\approx^\ast \sigma'$, then $\sigma'\in \Sigma(P)$. We show that $\sigma\in \Sigma(P)$ and $\sigma\approx \sigma'$ imply $\sigma'\in \Sigma(P)$.
Suppose that $\sigma=\sigma_1b(x)b(y)\sigma_2$ and $\sigma'= \sigma_1b(y)b(x)\sigma_2$ and that $(b,e)\in \irep(P)$ generates $\sigma$. Then $(b',e)$ with $$b'(z) = \begin{cases} b(y), & \text{ if } z = x,\\ b(x), & \text{ if } z = y,\\ b(z), & \text{ otherwise} \end{cases}$$ is in $\irep(P)$, as $b'(x) <_Q e(x)$, $b'(y) <_Q e(y)$ and, for all $v,w\in P$, $v <_P w \Leftrightarrow e(v) <_Q b(w)$ still holds. In addition, $(b',e)$ generates $\sigma'$. An analogous result for $\sigma=\sigma_1e(x)e(y)\sigma_2$ and $\sigma'= \sigma_1e(y)e(x)\sigma_2$ holds by opposition. The result for $\approx^\ast$ follows by a simple induction. \[P:irep2\] Let $P$ be an interval order and $(b,e),(b',e')\in \irep(P)$. Then $\sigma_{(b,e)}(P) \approx^\ast \sigma_{(b',e')}(P)$. Let $\prec_1$ and $\prec_2$ be the orderings of the interval sequences for $(b,e)$ and $(b',e')$ in $Q$. Then $b(x)\prec_1 e(x)$ and $b(x)\prec_2 e(x)$ for all $x\in P$, and $e(x)\prec_1 b(y) \Leftrightarrow e(x)\prec_2 b(y)$ for all $x,y\in P$. It follows that there is no $b(z)$ in $\prec_1$ or $\prec_2$ between the positions of $e(x)$ in $\prec_1$ and $\prec_2$ and, by opposition, there is no $e(z)$ in $\prec_1$ or $\prec_2$ between the positions of $b(x)$ in $\prec_1$ and $\prec_2$. But this means that the positions of $e(x)$ and $b(x)$ can be rearranged by $\approx^\ast$. \[P:irep\] If $P$ is an interval order and $(b,e)\in\irep(P)$, then $[\sigma_{(b,e)}(P)]_{\approx^\ast} = \Sigma(P)$. The mapping $\varphi$ defined by $\varphi( P)= [\sigma_{(b,e)}(P)]_{\approx^\ast}$ is a bijection. By Lemmas \[P:irep1\] and \[P:irep2\], and properties of interval representations. Posets with interfaces {#se:iposets} ====================== An element $s$ of poset $( P, \le)$ is *minimal* (*maximal*) if $v\not< s$ ($v\not> s$) holds for all $v\in P$. We write $P_{\min}$ ($P_{\max}$) for the sets of minimal (maximal) elements of $P$.
A *poset with interfaces (iposet)* consists of a poset $P$ together with two injective morphisms $$\label{eq:ipos} \xymatrix@C=1.5pc@R=0pc{ [ n] \ar[dr]^s && [ m] \ar[dl]_t \\ & P & }$$ such that $s[ n]\subseteq P_{\min}$ and $t[ m]\subseteq P_{\max}$. Injection $s:[ n]\to P$ represents the *source interface* of $P$ and $t:[ m]\to P$ its *target interface*. We write $(s,P,t):n\to m$ for the iposet $s:[ n]\to P\from[ m]: t$. Figure \[fi:iposets\] shows some examples of iposets. Elements of source and target interfaces are depicted as filled half-circles to indicate the unfinished nature of the events they represent.

[Figure \[fi:iposets\]: examples of iposets on a four-element poset with varying source and target interfaces]

Next we define a sequential gluing composition on iposets whose interfaces agree, and we adapt the standard parallel composition of posets to iposets. Let $(s_1,P_1,t_1):n\to m$ and $(s_2,P_2 ,t_2):\ell\to k$ be iposets. 1.
For $m=\ell$, their *gluing composition* is the iposet $(s_1,P_1\ipomconcat P_2,t_2):n\to k$ with $P_1\ipomconcat P_2 = \left(( P_1\sqcup P_2)_{/ t_1( i)= s_2( i)},\mathord\le_1\cup \mathord\le_2\cup( P_1\setminus t_1[ m])\times( P_2\setminus s_2[ m])\right)$. 2. Their *parallel composition* is the iposet $(s,P_1\otimes P_2,t):n+\ell\to m+k$ with $s=\left(s_1\otimes s_2\right)\circ \varphi_{ n, \ell}$ and $t=\left(t_1\otimes t_2\right)\circ \varphi_{ m, k}$. Parallel composition of iposets thus puts components “side by side”: it is the disjoint union of posets and interfaces. Gluing composition puts iposets “one after the other”, $P_1$ before $P_2$, but glues their interfaces together (and adds arrows from all points in $P_1$ that are not in its target interface to all points in $P_2$ that are not in its source interface). Figures \[fi:ndecomp\] and \[fi:iposetcomp\] show examples. The half-circles in source and target interfaces are glued to circles in the diagrams.

[Figure \[fi:ndecomp\]: gluing decompositions of the poset $\N$]

[Figure \[fi:iposetcomp\]: further examples of gluing compositions of iposets]

We define *identity iposets* $\id_n=( \id,[ n], \id): n\to n$, for $n\ge 0$. For convenience, we generalise this notation to other singleton posets with interfaces: for $k, \ell\le n$, we write $\idpos k \ell n$ for the iposet $(f_k^n,[n],f_\ell^n):k\to \ell$, where $f_k^n:[ k]\to[ n]$ is the (identity) injection $x\mapsto x$ (similarly for $f_\ell^n$). Hence $\id_n= \idpos n n n$. We write $\mcal S = \{ \idpos k \ell 1\mid k, \ell= 0, 1\}$ for the set of all singleton iposets.
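To make the gluing composition concrete, here is a Python sketch (our encoding, not from the paper): an iposet is a tuple of a point set, a set of strict-order pairs, and source and target interfaces given as tuples of points; the point sets of the two arguments are assumed disjoint:

```python
def glue(ip1, ip2):
    """Gluing composition of iposets. The i-th target point of ip1 is
    identified with the i-th source point of ip2, and arrows are added
    from all non-interface points of ip1 to all non-interface points of
    ip2: events finished on the left precede events started on the right."""
    pts1, lt1, src1, tgt1 = ip1
    pts2, lt2, src2, tgt2 = ip2
    assert len(tgt1) == len(src2), "interfaces must agree"
    ren = dict(zip(src2, tgt1))               # t1(i) = s2(i)
    r = lambda x: ren.get(x, x)
    pts2r = {r(x) for x in pts2}
    lt2r = {(r(x), r(y)) for (x, y) in lt2}
    cross = {(x, y) for x in pts1 - set(tgt1)      # finished in ip1
                    for y in pts2r - set(tgt1)}    # started in ip2
    return (pts1 | pts2r, lt1 | lt2r | cross,
            src1, tuple(r(x) for x in tgt2))
```

Gluing $\idpos 0 1 1$ with $\idpos 1 0 1$ merges the two half-events into a single event, while the opposite order produces a two-point chain; this is the non-commutativity of $\ipomconcat$ noted below.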
[Figure \[fi:otimesnotsymm\]: two parallel compositions that yield non-isomorphic posets after gluing with a third iposet]

Parallel composition need not be commutative, as the namings of interfaces in $P\otimes Q$ may differ from those in $Q\otimes P$. One can, however, rename interfaces using *symmetries*: iposets $(s,[ n],t):n\to n$ with $s$ and $t$ bijective. Figure \[fi:otimesnotsymm\] shows two parallel compositions where renaming of interfaces and gluing with another iposet yields non-isomorphic posets. Also, gluing and parallel composition need not satisfy an interchange law: $$(\idpos 0 0 1 \otimes \idpos 0 0 1) \ipomconcat (\idpos 0 0 1 \otimes \idpos 0 0 1) = \pomset{\cdot \ar[r] \ar[dr] & \cdot \\ \cdot\ar[r] \ar[ur] & \cdot} \neq \pomset{\cdot \ar[r] & \cdot \\ \cdot\ar[r] & \cdot} =(\idpos 0 0 1 \ipomconcat \idpos 0 0 1) \otimes (\idpos 0 0 1\ipomconcat \idpos 0 0 1)\,.$$ Hence iposets do *not* form (strict) monoidal categories, or even PROPs, because $\otimes$ is not a tensor. The situation differs from gluing compositions where interfaces of iposets are defined by *all* minimal and maximal elements [@DBLP:journals/ipl/Winkowski77], and also from sequential compositions of digraphs with “partial” interfaces similar to ours, where interface points glue arrows together and disappear in these compositions [@DBLP:conf/birthday/FioreC13]. Both of these give rise to a PROP.
Gluing composition, of course, is not commutative either: $$\idpos 0 1 1\ipomconcat \idpos 1 0 1= \idpos 0 0 1 = \pomset{\cdot }\neq \pomset{\cdot \ar[r] & \cdot}=\idpos 1 0 1\ipomconcat \idpos 0 1 1$$ \[pr:ipos-comp\] Iposets form a small category with natural numbers as objects, iposets $( s, P, t): n\to m$ as morphisms, $\ipomconcat$ as composition, and identities $\id_n$. Checking associativity of $\ipomconcat$ and the existence of units is routine, as is the proof of the next proposition. \[pr:ops-assoc\] Iposets form a monoid with composition $\otimes$ and unit $\id_0$. A *morphism* of iposets is a commuting diagram $$\label{eq:morphipos} \vcenter{ \xymatrix{ [ n] \ar[r]^s \ar[d]_\nu & P \ar[d]_f & [ m] \ar[l]_t \ar[d]^\mu \\ [ n'] \ar[r]_{ s'} & P' & [ m'] \ar[l]^{ t'} }}$$ where $\nu$ and $\mu$ are strictly order preserving with respect to $<_\Nat$ and $f$ is an order morphism. Intuitively, iposet morphisms thus preserve interfaces and their order in $\Nat$. Let $\iPos$ denote the so-defined category. An iposet morphism $(\nu,f,\mu)$ is an *isomorphism* if $\nu$, $f$ and $\mu$ are order isomorphisms. Hence $n=n'$, $m=m'$, $\nu=\id:n\to n$, and $\mu=\id:m\to m$ in diagram \[eq:morphipos\]. As a consequence, we note that iposets which are related by a symmetry $(s,[ n],t):n\to n$ need not be isomorphic. We write $P\cong Q$ if there exists an isomorphism $\varphi:P\to Q$. The following lemma shows that the two compositions respect isomorphism. Let $P, P', Q, Q'$ be iposets. Then $P\cong P'$ and $Q\cong Q'$ imply $P\otimes Q\cong P'\otimes Q'$ and $P\ipomconcat Q\cong P'\ipomconcat Q'$. Let $\varphi: P\to P'$ and $\psi: Q\to Q'$ be (the poset components of) isomorphisms.
Define the functions $\varphi\otimes \psi: P\sqcup Q\to P'\sqcup Q'$ and $\varphi\ipomconcat \psi:( P\sqcup Q)_{/t_P(i)=s_Q(i)}\to (P'\sqcup Q')_{/t_{P'}(i)=s_{Q'}(i)}$ as $$(\varphi \mathop{\Box} \psi)( x)= \begin{cases} \varphi( x) &\text{if } x\in P\,, \\ \psi( x) &\text{if } x\in Q\,, \end{cases}$$ for $\Box \in \{\otimes,\ipomconcat\}$. First, $\varphi\otimes\psi$ is obviously an isomorphism. Second, $\varphi\ipomconcat\psi$ is well-defined because $\varphi\circ t_P( i)= \psi\circ s_Q( i)$ for all $i\in[ m]$, and easily seen to be an isomorphism as well. We write $P\preceq Q$ if there is a bijective (on points) morphism $\varphi:Q\to P$ between iposets $P$ and $Q$. Intuitively, $P\preceq Q$ iff $P$ has more arrows and is therefore less parallel than $Q$, while interfaces are preserved. Similar relations on posets and pomsets, sometimes called *subsumption*, are well studied [@DBLP:journals/fuin/Grabowski81; @DBLP:journals/tcs/Gischer88]. In particular, $\preceq$ is a preorder on (finite) iposets and a partial order up to isomorphism. \[P:lax-interchange\] For iposets $P,P',Q,Q'$, the following lax interchange law holds: $$(P\otimes P') \ipomconcat (Q\otimes Q') \preceq (P\ipomconcat Q)\otimes (P'\ipomconcat Q')$$ Let $P_\ell = (P\otimes P') \ipomconcat (Q\otimes Q')$ and $P_r = (P\ipomconcat Q)\otimes (P'\ipomconcat Q')$. First, $P_\ell= ( P\sqcup Q)_{/ t_P\equiv s_Q}\sqcup( P'\sqcup Q')_{/ t_{P'}\equiv s_{Q'}}=( P\sqcup Q\sqcup P'\sqcup Q')_{/ t_P\equiv s_Q,\, t_{P'}\equiv s_{Q'}} =P_r$, by definition of $\otimes$. Hence both posets have the same points, and we may choose $\varphi:P_r\to P_\ell$ to be the identity. It remains to show that $\varphi$ is order preserving, which means that every arrow in $P_r$ must be in $P_\ell$. Hence suppose $x\le_{P_r} y$, that is, $x\le_{P\ipomconcat Q} y$ or $x\le _{P'\ipomconcat Q'} y$.
In the first case, if $x\le_P y$ or $x\le_Q y$, then $x\le_{P\otimes P'} y$ or $x\le_{Q\otimes Q'} y$ and therefore $x\le_{P_\ell} y$; and if $x\in P\setminus t_P$ and $y\in Q\setminus s_Q$, then $x\in P\sqcup P'\setminus t_{ P\otimes P'}$ and $y\in Q\sqcup Q'\setminus s_{ Q\otimes Q'}$ and therefore $x\le_{P_\ell} y$, too. The second case is symmetric. Thus, in any case, $x\le_{P_\ell} y$. In sum, the algebra of iposets is thus similar to concurrent monoids [@DBLP:journals/jlp/HoareMSW11], but $\ipomconcat$ is a partial operation with many units $\id_k$. As $\otimes$ is not a tensor, the categorical structure of iposets is somewhat unusual and deserves further exploration. \[pr\_iposets\_generalize\] \[pr:posiposadj\] $\Pos$ embeds into $\iPos$ as iposets with both interfaces $[0]$, and likewise for morphisms. The so-defined inclusion functor $J: \Pos\to \iPos$ is fully faithful and left adjoint to the forgetful functor $F: \iPos\to \Pos$ that maps $(s,P,t)$ to $P$, hence $\Pos$ is coreflective in $\iPos$. Under $F$, gluing composition of iposets becomes serial composition of posets, and parallel composition of iposets becomes that of posets (hence, commutative). It is clear that $J$ is a functor. It is full because any morphism $\tilde f$ from $P: 0\to 0$ to $Q: 0\to 0$ in $\iPos$ must have the form $( \emptyset, f, \emptyset)= J f$ for some $f$ in $\Pos$. It is faithful because $J f = ( \emptyset, f, \emptyset)=( \emptyset, g, \emptyset)= J g$ implies $f= g$. For $P\in \Pos$ and $\tilde Q\in \iPos$, $J$ induces a natural bijection $J: \Pos( P, F \tilde Q)\cong \iPos( J P, \tilde Q)$, hence $J$ and $F$ are indeed adjoint. The last claims about the operations are clear. Further Properties of Iposets ============================= We now derive additional algebraic properties of iposets, before turning to the set of iposets generated by gluing and parallel composition from singleton iposets. 
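As a first sanity check, the subsumption preorder $\preceq$ of the previous section can be decided by brute force on small instances. The sketch below (our names, interfaces ignored) searches for a bijection that maps every arrow of $Q$ to an arrow of $P$; the usage lines verify one instance of the lax interchange law, with the fully glued poset on the left and the $\twotwo$-shaped parallel composition on the right:

```python
from itertools import permutations

def subsumes(p, q):
    """p ⪯ q: some bijection from q's points to p's points maps every
    arrow of q to an arrow of p (p is 'more sequential' than q)."""
    (pts_p, lt_p), (pts_q, lt_q) = p, q
    if len(pts_p) != len(pts_q):
        return False
    qs = sorted(pts_q)
    return any(all((f[x], f[y]) in lt_p for (x, y) in lt_q)
               for f in (dict(zip(qs, img))
                         for img in permutations(sorted(pts_p))))
```

Here the left-hand poset has the complete bipartite order of $(\idpos 0 0 1 \otimes \idpos 0 0 1)\ipomconcat(\idpos 0 0 1 \otimes \idpos 0 0 1)$, and subsumption holds in one direction only.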
For an iposet $P$ with order relation $\le$ we write $\mathord{\para}= \mathord{\not\le}\cap \mathord{\not\ge}$. Hence $x\para y$ iff $x$ and $y$ are unrelated and therefore *independent*. In addition to the lax interchange in Lemma \[P:lax-interchange\], we prove an equational interchange law which shows that the equational theory of $\iPos$ as given by the bimonoidal laws in Propositions \[pr:ipos-comp\] and \[pr:ops-assoc\] is not free. The lemmas further below then show that this law is the *only* non-trivial additional identity. \[le:interchange-eq\] For all iposets $P$, $Q$ and $k, \ell\in\{ 0, 1\}$, $$( \idpos k 1 1\otimes P)\ipomconcat( \idpos 1 \ell 1\otimes Q)= \idpos k \ell 1\otimes( P\ipomconcat Q)\,.$$ The interface between $\idpos k 1 1$ and $\idpos 1 \ell 1$ forces these iposets to be glued separately to the rest in the gluing composition $( \idpos k 1 1\otimes P)\ipomconcat( \idpos 1 \ell 1\otimes Q)$. On the one hand, it follows that singleton iposets in $\mcal S$ do not interfere with compositions. On the other hand, Lemma \[le:interchange-eq\] shows that decompositions need not be unique. The next lemma shows a kind of converse: if an iposet can be decomposed by $\ipomconcat$ and also by $\otimes$, then all but one of the components must be in $\mcal S$. Henceforth, let $\mcal C_1= \left\{ P_1\otimes\dotsm\otimes P_n\bigmid P_1,\dotsc, P_n\in\mcal S \right\}$ denote the set of multisets-with-interfaces, that is, iposets with discrete order. \[le:decomp\] Let $P= P_1\otimes P_2= Q_1\ipomconcat Q_2$ such that $P_1\ne \id_0$, $P_2\ne \id_0$, and $Q_1\ne \idpos k n n$, $Q_2\ne \idpos n k n$ for any $k\le n$. Then $P_1\in \mcal C_1$ or $P_2\in \mcal C_1$. Suppose $P_1\notin \mcal C_1$ and $P_2\notin \mcal C_1$. Then $P$ contains a $\twotwo$: there are $w, x\in P_1$ and $y, z\in P_2$ for which $w<_P x$, $y<_P z$, $w\para_P y$, $w\para_P z$, $x\para_P y$, and $x\para_P z$. If $w,y\notin Q_2$, then $w,y\in Q_1\setminus t_{ Q_1}$.
As $Q_2\ne \idpos n k n$ for any $k\le n$, there must be an element $v\in Q_2\setminus s_{ Q_2}$. But then $w\le_P v$ and $y\le_P v$, which yields arrows between $w\in P_1$ and $y\in P_2$ that contradict $P=P_1 \otimes P_2$. A dual argument rules out that $x,z\notin Q_1$. It follows that $w\in Q_2$ or $y\in Q_2$. Assume, without loss of generality, that $w\in Q_2$. Then $x\in Q_2\setminus s_{Q_2}$ because $w\le_{P_1} x$. Now if also $y\in Q_2$, then by the same argument, $z\in Q_2\setminus s_{ Q_2}$. Hence $Q_2$ contains two different points which are not in its starting interface; and as $Q_1\setminus t_{ Q_1}$ is non-empty, this again establishes a connection between $x\in P_1$ and $z\in P_2$ which cannot exist. Hence $y\notin Q_2$, but then $y\in Q_1\setminus t_{ Q_1}$, so that $y\le_P x$, which contradicts $x\para_P y$. The next lemma generalises Levi’s lemma for words [@journals/bcms/Levi44]. \[le:Levi\] Let $P\mathop{\Box} Q= U\mathop{\Box} V$ for $\Box\in \{\ipomconcat,\otimes\}$. Then there is an $R$ so that either $P= U\mathop{\Box }R$ and $R\mathop{\Box} Q= V$, or $U= P\mathop{\Box} R$ and $R\mathop{\Box} V= Q$. The proof for $\otimes$ is trivial: If $P\otimes Q=U\otimes V$, then this iposet is partitioned into three components according to $P\sqcup Q$ and $U\sqcup V$. If the decomposition of $U$ and $V$ happens within $P$, then there is an $R$ such that $P=U\otimes R$ and $R\otimes Q=V$. Otherwise, if it happens within $Q$, then there is an $R$ such that $U=P\otimes R$ and $R\otimes V=Q$. Finally, if $P=U$ and $Q=V$, there is nothing to show. The proof for $\ipomconcat$ is similar, but more tedious due to gluing. It is instructive to find the two cases in the decomposition of $\N$ in Figure \[fi:ndecomp\]. Levi’s lemma is an interpolation property: every $P\mathop{\Box} Q= U\mathop{\Box} V$ has a common factorisation—either $U\mathop{\Box} R\mathop{\Box} Q$ or $P\mathop{\Box} R\mathop{\Box} V$.
Hence top-level decompositions with respect to either composition agree up to associativity (and unit laws). The three lemmas in this section are helpful for characterising the iposets generated by $\ipomconcat$ and $\otimes$ from singletons. This is the subject of the next section. Generating Iposets {#se:generate} ================== Recall that $\mcal S$ is the set of *singleton* iposets. It contains the four iposets $\idpos 0 0 1$, $\idpos 0 1 1$, $\idpos 1 0 1$ and $\idpos 1 1 1$, that is, $$[ 0]\to[ 1]\from [ 0]\,, \qquad [ 0]\to[ 1]\from[ 1]\,, \qquad [ 1]\to[ 1]\from [ 0]\,, \qquad [ 1]\to[ 1]\from[ 1]\,,$$ with mappings uniquely determined. We are interested in the sets of iposets generated from singletons using $\ipomconcat$ and $\otimes$. Note that, strictly speaking, $\idpos 0 0 1$ should not count as a generator, because by Lemma \[le:interchange-eq\] it is equal to $\idpos 0 1 1\ipomconcat \idpos 1 0 1$. The set of *gluing-parallel* iposets (*gp-iposets*) is the smallest set that contains the empty iposet $\id_0$ and the singleton iposets in $\mcal S$ and is closed under gluing and parallel composition. \[P:ipos-free\] The gp-iposets are generated freely by $\mcal S$ in the variety of algebras satisfying the equations of Propositions \[pr:ipos-comp\] and \[pr:ops-assoc\] and Lemma \[le:interchange-eq\]. Suppose $(A,\ipomconcat,\otimes,(1_i)_{i\ge 0})$ is any algebra satisfying the equations of Propositions \[pr:ipos-comp\] and \[pr:ops-assoc\] and Lemma \[le:interchange-eq\] and let $\varphi:\mcal S\to A$ be any function. We need to show that $\varphi$ extends to a unique homomorphism $\hat{\varphi}$. We can generate any $\id_n$ as a parallel composition of copies of $\id_1$. We set $\hat{\varphi}(\id_i)= 1_i$ for all $i\ge 0$, and $\hat{\varphi}(p) = \varphi(p)$ for every other singleton $p\in \mcal S$. For complex iposets we proceed by induction on the number of elements, assuming that the homomorphism laws hold for iposets with at most $n$ elements.
If the top composition of the size $n+1$ iposet is $\ipomconcat$, then we use Levi’s lemma to factorise with respect to $\ipomconcat$ and use associativity of $\ipomconcat$ to establish the homomorphism property of $\hat{\varphi}$. For $\otimes$ we proceed likewise. Finally, if the top composition is ambiguous, then the decomposition lemma forces the configuration in which the interchange lemma can be applied, yielding a parallel composition of the same size. This extension is unique, as it is forced by the construction. Now we define hierarchies of iposets generated from $\mcal S$. (If $\idpos 0 0 1$ were removed from $\mcal S$, the hierarchy would be different only for fewer than two alternations of $\ipomconcat$ and $\otimes$.) For any $\mcal Q\subseteq \iPos$ and $\mathop{\Box}\in\{\otimes,\ipomconcat\}$, let $$\begin{aligned} \mcal Q^{\mathop{\Box}} = \{ P_1\mathop{\Box}\dotsm\mathop{\Box} P_n\mid n\in \Nat, P_1,\dotsc, P_n\in \mcal Q\}\ .\end{aligned}$$ Then define $\mcal C_0= \mcal D_0= \mcal S$ and, for all $n\in \Nat$, $$\mcal C_{ 2 n+ 1}= \mcal C_{ 2 n}^\otimes\,, \qquad \mcal D_{ 2 n+ 1}= \mcal D_{ 2 n}^\ipomconcat\,, \qquad \mcal C_{ 2 n+ 2}= \mcal C_{ 2 n+ 1}^\ipomconcat\,, \qquad \mcal D_{ 2 n+ 2}= \mcal D_{ 2 n+ 1}^\otimes$$ (this agrees with the $\mcal C_1$ notation used earlier). Finally, let $$\bar{ \mcal S}\defeq \bigcup_{ n\ge 0} \mcal C_n= \bigcup_{ n\ge 0} \mcal D_n$$ be the set of all iposets generated from $\mcal S$ by application of $\otimes$ and $\ipomconcat$. \[le:trivial\_hierarchy\_inclusions\] For all $n\in \Nat$, $\mcal C_n\cup \mcal D_n\subseteq \mcal C_{ n+ 1}\cap \mcal D_{ n+ 1}$. We need to check the inclusions $\mcal C_n\subseteq \mcal C_{ n+ 1}$, $\mcal D_n\subseteq \mcal D_{ n+ 1}$, $\mcal C_n\subseteq \mcal D_{ n+ 1}$ and $\mcal D_n\subseteq \mcal C_{ n+ 1}$. The first two are trivial by construction.
For the third one, note that $\mcal C_0\subseteq \mcal C_{0}^{\ipomconcat}= \mcal S^\ipomconcat= \mcal D_{0}^{\ipomconcat} = \mcal D_{1}$. Since $\mcal C_{n}$ is constructed from $\mcal C_{0}$ by the same alternations of $\otimes$ and $\ipomconcat$ as $\mcal D_{n+1}$ is constructed from $\mcal D_{1}$, the inclusion holds. The proof of the fourth inclusion is similar. \[th:iorder\] An iposet is in $\mcal C_2$ iff it is an interval order. Suppose $P\ipomconcat Q\in \mcal C_2$. It is clear that all elements of $\mcal C_1$ are interval orders, so we will be done once we can show that the gluing composition of two interval orders is an interval order. This is shown exactly as in the proof of Lemma \[le:decomp\]: if $P\ipomconcat Q$ contains a $\twotwo$, then so does $P$ or $Q$. Yet we also give a direct construction: Let $\sigma_P$ be the interval sequence for interval representation $(b_P,e_P)$ of $P:n\to m$ and $\sigma_Q$ the interval sequence for interval representation $(b_Q,e_Q)$ of $Q:m\to k$. Then concatenate $\sigma_P$ and $\sigma_Q$, rename $b_P$, $b_Q$ as $b$ and $e_P$, $e_Q$ as $e$, delete $e(t_P(i))$ and $b(s_Q(i))$, and replace $e(s_Q(i))$ with $e(t_P(i))$ for each $i\in [m]$. This yields the interval sequence for an interval representation $(b,e)$ of $P\ipomconcat Q$, and $P\ipomconcat Q$ is therefore an interval order. Figure \[fi:intcomp\] gives an example.
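The sequence manipulation in this direct construction can be sketched as follows (our encoding: a closed interval sequence is a list of pairs $(k, x)$ with $k\in\{b,e\}$, and `tgt`, `src` list the interface events of the two factors in matching order):

```python
def glue_sequences(seq_p, tgt, seq_q, src):
    """Glue two closed interval sequences: concatenate, delete e(t_P(i))
    and b(s_Q(i)) for each interface index i, and rename the glued
    events of the second factor to their partner events of the first."""
    ren = dict(zip(src, tgt))
    left = [(k, x) for (k, x) in seq_p if not (k == 'e' and x in tgt)]
    right = [(k, ren.get(x, x)) for (k, x) in seq_q
             if not (k == 'b' and x in src)]
    return left + right
```

In the usage below (hypothetical events `w`, `x`, `y`, `z`), the glued event spans both halves: the remaining event of the first factor precedes those of the second, while the glued event overlaps them, exactly as the gluing composition prescribes.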
[Figure \[fi:intcomp\]: gluing composition of two interval orders with events $a,\dotsc,e$ and $f,\dotsc,i$, together with the corresponding interval representations]

For the backward direction, let $P$ be an interval order and $A_P$ its set of maximal antichains. Then $A_P$ is totally ordered by the relation $\sqsubset$ defined in Section \[S:interval-orders\]. Now write $A_P=\{ P_1,\dotsc, P_k\}$ such that $P_i\sqsubset P_j$ for $i< j$. Then each $P_i$ is an element of $\mcal S^\otimes$. Write $s_1:[ n_1]\to P\from[ n_{ k+ 1}]: t_k$ for the sources and targets of $P$. For $i= 2,\dotsc, k$, let $[ n_i]= P_{ i- 1}\cap P_i$ be the overlap and $s_i:[ n_i]\hookrightarrow P_i$, $t_{ i- 1}:[ n_i]\hookrightarrow P_{ i- 1}$ the inclusions. Together with $s_1$ and $t_k$ this defines iposets $s_i:[ n_i]\to P_i\from[ n_{ i+ 1}]: t_i$. (Note that $s_1:[ n_1]\to P_1$ because $P_1$ is the minimal element in $A_P$; similarly for $t_k:[ n_{ k+ 1}]\to P_k$.) It is clear that $P= P_1\ipomconcat\dotsm\ipomconcat P_k$; see also [@DBLP:conf/apn/Janicki18 Prop. 2]. In order to compare with series-parallel posets, we construct a similar hierarchy for these. Let $\mcal T_0= \mcal U_0= \mcal S_0=\{ \idpos 0 0 1\}$ and, for all $n\in \Nat$, $$\mcal T_{ 2 n+ 1}= \mcal T_{ 2 n}^\otimes\,, \qquad \mcal U_{ 2 n+ 1}= \mcal U_{ 2 n}^\ipomconcat\,, \qquad \mcal T_{ 2 n+ 2}= \mcal T_{ 2 n+ 1}^\ipomconcat\,, \qquad \mcal U_{ 2 n+ 2}= \mcal U_{ 2 n+ 1}^\otimes\,.$$ Then, noting that any element of any $\mcal T_n$ or $\mcal U_n$ has empty interfaces and that for iposets with empty interfaces, $\ipomconcat$ is serial composition, we see that $$\bar{ \mcal S}_0\defeq \bigcup_{ n\ge 0} \mcal T_n= \bigcup_{ n\ge 0} \mcal U_n$$ is the set of series-parallel posets.
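Returning to the backward direction of the theorem: the antichain decomposition can be replayed mechanically. The sketch below (our names) enumerates the maximal antichains of a poset and sorts them; it uses the existential reading of $\sqsubset$ (some element of $A\setminus B$ lies below some element of $B\setminus A$), which is a linear order on the maximal antichains of an interval order:

```python
from functools import cmp_to_key
from itertools import combinations

def maximal_antichains(pts, lt):
    """All maximal antichains of a finite poset with strict pairs lt."""
    comp = lt | {(y, x) for (x, y) in lt}
    anti = [set(c) for n in range(1, len(pts) + 1)
            for c in combinations(sorted(pts), n)
            if all(p not in comp for p in combinations(c, 2))]
    return [a for a in anti if not any(a < b for b in anti)]

def antichain_sequence(pts, lt):
    """The sequence A_1 ⊏ ... ⊏ A_k of maximal antichains of an interval
    order; gluing them along their overlaps recovers the order,
    witnessing membership in C_2."""
    def cmp(a, b):
        below = any((x, y) in lt for x in a - b for y in b - a)
        return -1 if below else 1
    return sorted(maximal_antichains(pts, lt), key=cmp_to_key(cmp))
```

For the interval order $\N$ this yields the sequence $\{a,c\}\sqsubset\{a,d\}\sqsubset\{b,d\}$, whose gluings along the overlaps $\{a\}$ and $\{d\}$ recover $\N$.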
Note that $\mcal T_n\subseteq \mcal C_n$ and $\mcal U_n\subseteq \mcal D_n$ for all $n$, hence also $\bar{ \mcal S}_0\subseteq \bar{ \mcal S}$. Now $\bar{ \mcal S}_0$ contains precisely the $\N$-free posets whereas $\N$ is an interval order. Hence $\N\in \mcal C_2$, implying the next lemma. On the other hand, we will see below that $\bar{ \mcal S}_0\not\subseteq \mcal C_n$ for any $n$. $\mcal C_2\not\subseteq \bar{ \mcal S}_0$. $\mcal C_1\cup \mcal D_1 \subsetneq \mcal C_{2} \cap \mcal D_{2}$, i.e. there is an iposet with two non-trivial different decompositions. Directly from Lemma \[le:interchange-eq\]. Next we show that the $\mcal C_n$ hierarchy is infinite, by exposing a sequence of witnesses for $\mcal C_{ 2 n- 1}\subsetneq \mcal C_{ 2 n}$ for all $n\ge 1$. Let $Q= \idpos 0 0 1$, $P_1= Q\ipomconcat Q$, and for $n\ge 1$, $P_{ n+ 1}= Q\ipomconcat( P_n\otimes P_n)$. Note that all these are series-parallel posets. Graphically: $$\begin{gathered} P_1= \pomset{ \cdot \ar[r] & \cdot} \qquad P_2= \pomset{ & \cdot \ar[r] & \cdot \\ \cdot \ar[ur] \ar[dr] \\ & \cdot \ar[r] & \cdot} \qquad P_3= \pomset{ & & \cdot \ar[r] & \cdot \\ & \cdot \ar[ur] \ar[dr] \\ & & \cdot \ar[r] & \cdot \\ \cdot \ar[uur] \ar[ddr] \\ & & \cdot \ar[r] & \cdot \\ & \cdot \ar[ur] \ar[dr] \\ & & \cdot \ar[r] & \cdot} \quad \dotsc\end{gathered}$$ \[le:bintree\] $P_n\in \mcal C_{ 2 n}\setminus \mcal C_{ 2 n- 1}$ for all $n\ge 1$. By induction. For $n= 1$, $P_1\notin \mcal C_1$, but $Q\in \mcal C_0\subseteq \mcal C_1$ and hence $P_1= Q\ipomconcat Q\in \mcal C_2= \mcal C_1^\ipomconcat$. Now for $n\ge 1$, suppose $\mcal C_{ 2 n- 1}\not\ni P_n\in \mcal C_{ 2 n}$. We use Lemma \[le:decomp\] to show that $P_n\otimes P_n\in \mcal C_{ 2 n+ 1}\setminus \mcal C_{ 2 n}$: Obviously $P_n\otimes P_n\in \mcal C_{ 2 n+ 1}= \mcal C_{ 2 n}^\otimes$. If $P_n\otimes P_n\in \mcal C_{ 2 n}= \mcal C_{ 2 n- 1}^\ipomconcat$, then $P_n\otimes P_n= Q_{1}\ipomconcat\dotsm\ipomconcat Q_k$ for some $Q_1,\dotsc, Q_k\in \mcal C_{ 2 n- 1}$.
Yet $P_n\notin \mcal C_1$, which contradicts Lemma \[le:decomp\]. Now to $P_{ n+ 1}= Q\ipomconcat( P_n\otimes P_n)$. Trivially, $P_{ n+ 1}\in \mcal C_{ 2 n+ 2}= \mcal C_{ 2 n+ 1}^\ipomconcat$. Suppose $P_{ n+ 1}\in \mcal C_{ 2 n+ 1}= \mcal C_{ 2 n}^\otimes$. $P_{ n+ 1}$ is connected, hence not a parallel product, so that $P_{ n+ 1}$ must already be in $\mcal C_{ 2 n}= \mcal C_{ 2 n- 1}^\ipomconcat$ and therefore $P_{ n+ 1}= R_1\ipomconcat R_2$. Then, by Levi’s lemma, there is an iposet $S$ such that either $Q= R_1\ipomconcat S$ and $S\ipomconcat( P_n\otimes P_n)= R_2$ or $R_1= Q\ipomconcat S$ and $S\ipomconcat R_2= P_n\otimes P_n$. In the second case, $S\ipomconcat R_2= P_n\otimes P_n$, which again contradicts Lemma \[le:decomp\]; in the first case, both $R_1$ and $S$ must be single points (with suitable interfaces), so that either $R_1= \idpos 0 1 1$ and $R_2= P_{ n+ 1}$ (with an extra starting interface) or $R_1= Q$ and $R_2= P_n\otimes P_n$. This shows that $P_{ n+ 1}= Q\ipomconcat( P_n\otimes P_n)$ is the only non-trivial $\ipomconcat$-decomposition of $P_{ n+ 1}$. Thus $P_n\in \mcal C_{ 2 n- 1}$, a contradiction, and therefore $P_{ n+ 1}\notin \mcal C_{ 2 n+ 1}$. $\mcal C_{ 2 n- 1}\subsetneq \mcal C_{ 2 n}$ for all $n\ge 1$, hence the $\mcal C_n$ hierarchy does not collapse, and neither does the $\mcal D_n$ hierarchy. The last statement follows from $\mcal D_{ 2 n- 2}\subseteq \mcal C_{ 2 n- 1}\subsetneq \mcal C_{ 2 n}\subseteq \mcal D_{ 2 n+ 1}$. For all $n\in \Nat$, $\bar{ \mcal S}_0\not\subseteq \mcal C_n$ and $\bar{ \mcal S}_0\not\subseteq \mcal D_n$. As we have already noted above, $P_n\in \bar{ \mcal S}_0$ for all $n$, which together with Lemma \[le:bintree\] implies the first statement. The second follows from $\mcal C_n\subseteq \mcal D_{ n+ 1}$.
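The witnesses $P_n$ are straightforward to generate mechanically. A minimal Python sketch (the pair representation and helper names are ours; since all interfaces involved are empty, $\ipomconcat$ is plain serial composition here). The sizes satisfy $| P_{ n+ 1}|= 2| P_n|+ 1$, i.e. $| P_n|= 3\cdot 2^{ n- 1}- 1$:

```python
def _tag(P, i):
    """Disjoint copy of a poset P = (points, less), labelled by side i."""
    pts, less = P
    return {(x, i) for x in pts}, {((a, i), (b, i)) for a, b in less}

def parallel(P, Q):
    """Parallel composition: disjoint union."""
    (p0, l0), (p1, l1) = _tag(P, 0), _tag(Q, 1)
    return p0 | p1, l0 | l1

def serial(P, Q):
    """Serial composition: every point of P below every point of Q.

    The relation stays transitively closed, since all cross pairs are added.
    """
    (p0, l0), (p1, l1) = _tag(P, 0), _tag(Q, 1)
    return p0 | p1, l0 | l1 | {(a, b) for a in p0 for b in p1}

Q1 = ({0}, set())                      # the one-point poset Q
P = serial(Q1, Q1)                     # P_1 = Q * Q
sizes, relations = [len(P[0])], [len(P[1])]
for _ in range(2):                     # build P_2 and P_3
    P = serial(Q1, parallel(P, P))
    sizes.append(len(P[0]))
    relations.append(len(P[1]))
```

Running this gives point counts 2, 5, 11 for $P_1$, $P_2$, $P_3$, matching the pictures above.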
We have seen that the $\mcal C_n$ and $\mcal D_n$ hierarchies are properly infinite and that they contain the set of sp-posets only in the limit $\bar{ \mcal S}= \bigcup_{ n\ge 0} \mcal C_n= \bigcup_{ n\ge 0} \mcal D_n$. Finally, we turn to the question of characterising this limit $\bar{ \mcal S}$ geometrically. Recalling that a poset is series-parallel iff it does not contain an induced subposet isomorphic to $\N$, we would like a similar characterisation using forbidden subposets for the gp-(i)posets. We expose five such forbidden subposets, but leave the question of whether there are others to future work. Define the following five posets on six points: $$\begin{gathered} \NN= \pomset{ \cdot \ar[r] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot} \qquad \NPLUS = \pomset{ & \cdot \ar[r] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot} \qquad \NMINUS = \pomset{ & \cdot \ar[r] & \cdot \\ & \cdot \ar[r] \ar[ur] & \cdot \\ \cdot \ar[r] & \cdot \ar[ur]} \\ \TC= \pomset{ \cdot \ar[r]\ar[ddr] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot \\ \cdot \ar[r] \ar[ur] & \cdot} \qquad \LN= \pomset{ \cdot\ar[r] & \cdot\ar[r] & \cdot \\ \cdot\ar[r]\ar[urr] & \cdot\ar[r] & \cdot}\end{gathered}$$ \[pr:forbidden\] If $P\in \bar{ \mcal S}$, then $P$ does not contain $\NN$, $\NPLUS$, $\NMINUS$, $\TC$, or $\LN$ as induced subposets. We only show the proof for $\NN$; the others are very similar and are left to the reader. We can assume that $P$ is connected. We use structural induction, noting that all $P\in \mcal S$ are $\NN$-free, so it remains to show that $P\ipomconcat Q$ is $\NN$-free whenever $P$ and $Q$ are. By contraposition, suppose $P\ipomconcat Q$ contains the induced sub-$\NN$ $\smash{\!\pomset{ a\ar[r] & b \\ c\ar[r]\ar[ur] & d \\ e\ar[r]\ar[ur] & f}\!}$. Then we show that either $P$ or $Q$ also contains an induced sub-$\NN$. Assume first that $a\in Q$. Then $a\le_Q b$, hence also $b\in Q$, but $b\notin Q_{\min}$, that is, $b\notin s_Q$.
Now $e\not\le_{P\ipomconcat Q} b$, which forces $e\in t_P$ and therefore $e\in Q$. This in turn implies that $d, f\in Q$ and in particular $e\le_Q f$. Thus $f\notin Q_{\min}$ and therefore $f\notin s_Q$, which forces $c\in t_P$ and therefore $c\in Q$. This shows that $\NN$ lies entirely in $Q$. Finally assume that $a\notin Q$. Then $a\in P\setminus t_P$, and as $a\not\le_{P\ipomconcat Q} d$ and $a\not\le_{P\ipomconcat Q} f$, we must have $d, f\in s_Q$ and therefore $d, f\in P$. This forces $c, e\in P$ and in particular $e\le_P f$. Thus $e\notin P_{\max}$, whence $e\notin t_P$. This in turn forces $b\in s_Q$ and therefore $b\in P$. This shows that $\NN$ lies entirely in $P$. Experiments {#se:experi} =========== We have encoded most of the constructions in this paper with Python to experiment with gluing-parallel (i)posets. Notably, Proposition \[pr:forbidden\] is, in part, a result of these experiments.[^3] Our prototype is rather inefficient, which explains why some numbers are “n.a.”, i.e. not available, in Table \[ta:numposets\]. Using procedures to generate non-isomorphic posets of different types, we have used our software to verify that 1. all posets on five points are in $\bar{ \mcal S}$, i.e. gp-posets; 2. $\NN$, $\NPLUS$, $\NMINUS$, $\TC$, and $\LN$ are the only six-point posets that are not in $\bar{ \mcal S}$. We provide tables of gluing-parallel decompositions of posets in the appendix to prove these claims. We have also used our software to count non-isomorphic posets and iposets of different types, see Table \[ta:numposets\]. We note that $\mathsf{P}$ and $\mathsf{SP}$ are sequences no. A000112 and A003430, respectively, in the On-Line Encyclopedia of Integer Sequences (OEIS).[^4] Sequences $\mathsf{GPC}$, $\mathsf{SIP}$, $\mathsf{IP}$, and $\mathsf{GPI}$ are unknown to the OEIS.
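Such claims ultimately rest on induced-subposet tests, which are simple to prototype (a naive Python sketch, not the actual software referenced in the footnote; names are ours):

```python
from itertools import combinations, permutations

def has_induced(points, less, pat_points, pat_less):
    """Does (points, less) contain an induced subposet isomorphic to the pattern?

    Brute force over all injections of the pattern; fine for up to ~7 points.
    """
    pp = list(pat_points)
    for sub in combinations(list(points), len(pp)):
        for img in permutations(sub):
            f = dict(zip(pp, img))
            if all(((f[a], f[b]) in less) == ((a, b) in pat_less)
                   for a in pp for b in pp if a != b):
                return True
    return False

# the pattern N: 0 < 1, 2 < 1, 2 < 3
N_pts, N_less = range(4), {(0, 1), (2, 1), (2, 3)}
# a 4-chain has no induced N (every pair of its points is comparable)
chain_less = {(i, j) for i in range(4) for j in range(4) if i < j}
in_N = has_induced(N_pts, N_less, N_pts, N_less)
in_chain = has_induced(range(4), chain_less, N_pts, N_less)
```

The six-point patterns $\NN,\dotsc,\LN$ are tested the same way, with their strict-order pairs in place of `N_less`.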
  $n$   $\mathsf{P}( n)$   $\mathsf{SP}( n)$   $\mathsf{GP}( n)$   $\mathsf{GPC}( n)$   $\mathsf{SIP}( n)$   $\mathsf{IP}( n)$   $\mathsf{GPI}( n)$
  ----- ------------------ ------------------- ------------------- -------------------- -------------------- ------------------- --------------------
  0     1                  1                   1                   1                    1                    1                   1
  1     1                  1                   1                   1                    2                    4                   4
  2     2                  2                   2                   1                    5                    17                  16
  3     5                  5                   5                   3                    16                   86                  74
  4     16                 15                  16                  10                   66                   532                 419
  5     63                 48                  63                  44                   350                  n.a.                2980
  6     318                167                 313                 233                  n.a.                 n.a.                26566

  : Different types of posets with $n$ points: all posets; sp-posets; gp-posets; (weakly) connected gp-posets; iposets with starting interfaces only; iposets; gp-iposets.[]{data-label="ta:numposets"}

The single iposet on two points which is not gluing-parallel is the symmetry $[ 2]: 2\to 2$ with $s( 1)= 1$, $s( 2)= 2$, $t( 1)= 2$, and $t( 2)= 1$. The prefix of $\mathsf{GP}$ we were able to compute equals the corresponding prefix of sequence no. A079566 in the OEIS,$^{ \ref{OEISlabels}}$ which counts the number of connected (undirected) graphs which have no induced 4-cycle $C_4$. We leave it to the reader to ponder upon the relation between gp-posets and $C_4$-free connected graphs.

[^1]: There is only one singleton poset, but with interfaces, there are *four* singleton iposets.

[^2]: This means that there is no injection $f$ from $\N$ satisfying $x\le y\Leftrightarrow f(x)\le f(y)$.

[^3]: Our software is available at <http://www.lix.polytechnique.fr/~uli/posets/>

[^4]: \[OEISlabels\] See <http://oeis.org/A000112>, [oeis.org/A003430](oeis.org/A003430), and [oeis.org/A079566](oeis.org/A079566).
--- abstract: | The problem of photon creation from vacuum due to the nonstationary Casimir effect in an ideal one-dimensional Fabry–Perot cavity with vibrating walls is solved in the resonance case, when the frequency of vibrations is close to the frequency of some unperturbed electromagnetic mode: $\omega_w=p(\pi c/L_0)(1+\delta)$, $|\delta|\ll 1$, $p=1,2,\ldots$ ($L_0$ is the mean distance between the walls). An explicit analytical expression for the total energy in all the modes shows an exponential growth if $|\delta|$ is less than the dimensionless amplitude of vibrations $\varepsilon\ll 1$, the increment being proportional to $p\sqrt{\varepsilon^2-\delta^2}$. The rate of photon generation from vacuum in the $(j+ps)$th mode goes asymptotically to a constant value $cp^2\sin^2(\pi j/p)\sqrt{\varepsilon^2-\delta^2} /[\pi L_0 (j+ps)]$, the numbers of photons in the modes with indices $p,2p,3p,\ldots$ being the integrals of motion. The total number of photons in all the modes is proportional to $p^3(\varepsilon^2-\delta^2) t^2$ in the short-time and in the long-time limits. In the case of strong detuning $|\delta|>\varepsilon$ the total energy and the total number of photons generated from vacuum oscillate with the amplitudes decreasing as $(\varepsilon/\delta)^2$ for $\varepsilon\ll|\delta|$. The special cases of $p=1$ and $p=2$ are studied in detail. address: | Departamento de Física, Universidade Federal de São Carlos,\ Via Washington Luiz km 235, 13565-905 São Carlos, SP, Brazil author: - 'V V Dodonov[^1] [^2]' title: Resonance photon generation in a vibrating cavity --- Introduction ============ Fifty years ago Casimir [@Cas] showed that the presence of boundaries changes the ground state of the electromagnetic field, leading to nontrivial quantum effects like the [*Casimir force*]{} (see also \[2-4\]). 
In recent years, the attention of many authors \[5-40\] has been attracted to the [*nonstationary modifications*]{} of the Casimir effect in the case of [*moving boundaries*]{} (a detailed list of publications before 1995 was given in [@DKPR]). The present article is devoted to the special case of the [*nonstationary Casimir effect*]{} (NSCE), namely, to the effect of [*photon creation from vacuum*]{} in an ideal one-dimensional cavity (a model of the Fabry–Perot interferometer) with [*vibrating*]{} boundaries. As was understood recently \[10,14-23\], even though the maximal velocity of the boundary achievable under laboratory conditions is very small in comparison with the speed of light, a gradual accumulation of the small changes in the quantum state of the field could finally result in a significant observable effect, if the boundaries of a cavity perform small oscillations at a frequency $\omega_w$ which is an integer multiple of the unperturbed eigenfrequency of the fundamental electromagnetic mode $\omega_1=\pi c/L_0$ (where $L_0$ is the mean distance between the walls): $\omega_w=p\omega_1$, $p=1,2,\ldots$ (remember that the spectrum of the electromagnetic modes is equidistant in the case involved: the unperturbed frequency of the $p$th mode equals $\omega_p=p\omega_1$). The time evolution of the field in the short time limit $\varepsilon\omega_1 t\ll 1$ (where $\varepsilon\ll 1$ is the ratio of the amplitude of vibrations to $L_0$) was considered in [@Sarkar; @DKM90] in the framework of Moore’s approach [@Moore] and in [@Calucci; @Law; @Ji; @Plun; @Ji98] in the framework of the “instantaneous basis” method (IBM) described in section 2. The asymptotical solutions to Moore’s equation in the case $\varepsilon\omega_1 t\gg 1$ were obtained in [@DK92; @DKN93; @Klim], and more general solutions were found in [@LawPRL; @Cole; @Dal]. A detailed study of the problem in the framework of the IBM was given in [@DKPR] for $p=2$ and in [@D96] for $p=1$.
The [*short-time*]{} limit $\varepsilon\omega_1 t\ll 1$ for an arbitrary integer value of $p$ was considered in [@Ji]. However, in all the cited papers the solutions were found under the condition of the [*strict resonance*]{} $\omega_w=\omega_p$ between the mechanical and electromagnetic oscillations (except for the recent article [@D98], where a detuned three-dimensional cavity with a nondegenerate spectrum was considered). Evidently, such a condition is an idealization. The aim of the present paper is to study the case of a [*nonzero*]{} (although small) detuning between the frequencies of the mechanical and field modes: $$\omega_w=p\omega_1(1+\delta), \quad |\delta|\ll 1 \label{omegw}$$ for any integer $p=1,2,\ldots$, thus generalizing the results of [@DKPR; @D96; @Ji]. It will be shown that photons can be created from vacuum provided the dimensionless detuning parameter does not exceed the dimensionless amplitude of the wall vibrations; otherwise the total number of photons generated inside the cavity exhibits small oscillations and goes periodically to zero. The plan of the paper is as follows. In section 2 we give general formulae related to the field quantization in a cavity with moving boundaries and derive the simplified “reduced equations” in the resonance case. A simple explicit analytical expression for the total energy of the field in all the modes is found in section 3. Section 4 is devoted to the “semi-resonance” case $p=1$ when the frequency of the wall is close to the fundamental frequency of the field. Under this condition new photons are not created, but the total energy of all the field modes increases exponentially with time above the threshold or oscillates in the case of a large detuning. The generic resonance case of an arbitrary $p\ge2$ is analysed in section 5 and the simplifications in the case $p=2$ are considered in section 6. A brief discussion of the results is given in section 7.
Some details of calculations are given in the appendix. Field quantization and reduced equations in the resonance case ============================================================== Following the scheme of the field quantization in a cavity with time-dependent boundary conditions first proposed by Moore [@Moore], we consider a cavity formed by two infinite ideal plates moving in accordance with the prescribed laws $$x_{left}(t)=u(t), \quad x_{right}(t)=u(t)+L(t)$$ where $L(t)>0$ is the time dependent length of the cavity. Taking into account only the electromagnetic modes whose vector potential is directed along $z$-axis (“scalar electrodynamics”), one can write down the field operator [*in the Heisenberg representation*]{} $\hat {A}(x,t)$ at $t\le 0$ (when both the plates were at rest at the positions $x_{left}=0$ and $x_{right}=L_0$) as (we assume $c=\hbar=1$) $$\hat {A}_{in}=2\sum_{n=1}^{\infty}\frac 1{\sqrt {n}}\sin\frac { n\pi x}{L_0}\hat b_n\exp\left(-i\omega_nt\right)+\mbox{h.c.} \label{Ast}$$ where $\hat {b}_n$ means the usual annihilation photon operator and $\omega_n=\pi n/L_0$. The choice of coefficients in equation (\[Ast\]) corresponds to the standard form of the field Hamiltonian $$\hat {H}\equiv\frac 1{8\pi}\int_0^{L_0}\mbox{d}x\,\left [\left(\frac {\partial A}{\partial t}\right)^2+\left(\frac { \partial A}{\partial x}\right)^2\right] =\sum_{n=1}^{\infty}\omega_n\left(\hat b^{\dag}_n\hat b_n+\frac 12\right).
\label{Ham}$$ For $t>0$ the field operator can be written as $$\hat {A}(x,t)=2\sum_{n=1}^{\infty}\frac 1{\sqrt {n}}\left[ \hat b_n\psi^{(n)}(x,t)\,+\,\mbox{h.c.}\,\right].$$ To find the explicit form of functions $\psi^{(n)}(x,t)$, $n=1,2,\ldots$, one should take into account that the field operator must satisfy i\) the wave equation $$\frac {\partial^2A}{\partial t^2}\,-\frac {\partial^ 2A}{\partial x^2}=0, \label{we}$$ ii\) the boundary conditions $$A(u(t),t)=A(u(t)+L(t),t)=0, \label{boundcon}$$ iii\) the initial condition (\[Ast\]), which is equivalent to $$\psi^{(n)}\left(x,t<0\right)=\sin\frac {n\pi x}{L_0} \exp\left(-i\omega_nt\right). \label{init}$$ Following the approach of Refs. [@Calucci; @Law; @Law-new] we expand the function $\psi^{(n)}(x,t)$ in a series with respect to the [*instantaneous basis*]{}: $$\psi^{(n)}(x,t>0)=\sum_{k=1}^{\infty} Q_k^{(n)}(t)\sqrt {\frac { L_0}{L(t)}}\sin\left(\frac {\pi k[x-u(t)]}{L(t)}\right), \quad n=1,2,\ldots \label{psit}$$ with the initial conditions $$Q_k^{(n)}(0)=\delta_{kn},\quad\dot {Q}_k^{(n)}(0)=-i\omega_n\delta_{kn}, \quad k,n=1,2,\ldots$$ This way we satisfy automatically both the boundary conditions (\[boundcon\]) and the initial condition (\[init\]). Putting expression (\[psit\]) into the wave equation (\[we\]), one can arrive after some algebra at an infinite set of coupled differential equations [@Plun; @Ji98] ($k,n=1,2,\ldots$) $$\ddot {Q}_k^{(n)}+\omega_k^2(t)Q_k^{(n)} =2\sum_{j=1}^{\infty} g_{kj}(t)\dot {Q}_j^{(n)}+ \sum_{j=1}^{\infty} \dot{g}_{kj}(t) Q_j^{(n)} +{\cal O}\left(g_{kj}^2\right), \label{Qeq}$$ where $$\omega_k(t)= {k\pi}/{L(t)}$$ and the time dependent antisymmetric coefficients $g_{kj}(t)$ read ($j\neq k$) $$g_{kj}=-g_{jk}=(-1)^{k-j}\frac {2kj \left(\dot {L} +\dot {u}\epsilon_{kj}\right)}{\left(j^2-k^2\right)L(t)}, \quad \epsilon_{kj}= 1-(-1)^{k-j}. \label{gkj}$$ For $u=0$ (the left wall at rest) the equations like (\[Qeq\])-(\[gkj\]) were derived in [@Calucci; @Law-new]. 
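The coefficients (\[gkj\]) are simple to tabulate. A small Python sketch (variable names ours) evaluating $g_{kj}$ for given $L$, $\dot L$, $\dot u$ and confirming the antisymmetry $g_{kj}=-g_{jk}$:

```python
def g(k, j, L, Ldot, udot=0.0):
    """Coupling coefficient g_kj of the mode equations, for k != j."""
    if k == j:
        return 0.0
    eps = 1 - (-1) ** (k - j)          # epsilon_kj: 0 for even k - j, 2 for odd
    return (-1) ** (k - j) * 2 * k * j * (Ldot + udot * eps) / ((j * j - k * k) * L)

# antisymmetry check for L = 1, Ldot = 0.01, udot = 0.003
pairs = [(g(k, j, 1.0, 0.01, 0.003), g(j, k, 1.0, 0.01, 0.003))
         for k in range(1, 7) for j in range(1, 7) if k != j]
max_asym = max(abs(a + b) for a, b in pairs)
```

Note that $\dot u$ enters only through $\epsilon_{kj}$, i.e. only for mode pairs of different parity, which is the source of the interference effects discussed below.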
If the wall comes back to its initial position $L_0$ after some interval of time $T$, then the right-hand side of equation (\[Qeq\]) disappears, so at $t>T$ one gets $$Q_k^{(n)}(t)=\xi_k^{(n)}e^{-i\omega_kt}+\eta_k^{(n)}e^{i\omega_kt}, \quad k,n=1,2,\ldots \label{ksi}$$ $\xi_k^{(n)}$ and $\eta_k^{(n)}$ being some constant coefficients. Consequently, at $t>T$ the initial annihilation operators $\hat {b}_n$ cease to be “physical”, due to the contribution of the terms with “incorrect signs” in the exponentials $\exp(i\omega_kt)$. Introducing a new set of “physical” operators $\hat {a}_m$ and $\hat {a}_m^{\dag}$, which result at $t>T$ in relations such as (\[Ast\]) and (\[Ham\]), but with $\hat {a}_m$ instead of $\hat {b}_m$, one can easily check that the two sets of operators are related by means of the Bogoliubov transformation $$\hat {a}_m=\sum_{n=1}^{\infty} \left(\hat b_n\alpha_{nm}+\hat b_ n^{\dag}\beta_{nm}^{*}\right), \quad m=1,2,\ldots \label{Bogol}$$ with the coefficients $$\alpha_{nm}=\sqrt {\frac mn}\xi_m^{(n)},\qquad\beta_{ nm}=\sqrt {\frac mn}\eta_m^{(n)}.\label{al-ksi}$$ The unitarity of the transformation (\[Bogol\]) implies the following constraints: $$\begin{aligned} &&\sum_{m=1}^{\infty}\left(\alpha_{nm}^*\alpha_{km} - \beta_{nm}^*\beta_{km}\right) = \sum_{m=1}^{\infty} \frac{m}{n} \left(\xi_{m}^{(n)*}\xi_{m}^{(k)} - \eta_{m}^{(n)*}\eta_{m}^{(k)} \right) =\delta_{nk} \label{cond1} \\[2mm] &&\sum_{n=1}^{\infty}\left(\alpha_{nm}^*\alpha_{nj} - \beta_{nm}^*\beta_{nj}\right) = \sum_{n=1}^{\infty}\frac{m}{n} \left(\xi_{m}^{(n)*}\xi_{j}^{(n)} - \eta_{m}^{(n)*}\eta_{j}^{(n)} \right) =\delta_{mj} \label{cond2}\\[2mm] &&\sum_{n=1}^{\infty}\left(\beta_{nm}^*\alpha_{nk} - \beta_{nk}^*\alpha_{nm}\right) = \sum_{n=1}^{\infty}\frac{1}{n} \left(\eta_{m}^{(n)*}\xi_{k}^{(n)} - \eta_{k}^{(n)*}\xi_{m}^{(n)} \right) =0 \label{cond3}\end{aligned}$$ The mean number of photons in the $m$th mode equals the average value of the operator $\hat {a}_m^{\dag}\hat {a}_m$ in the 
initial state $|{\rm in}\rangle$ (remember that we use the Heisenberg picture), since just this operator has a physical meaning at $t>T$: $$\begin{aligned} &&{\cal N}_m \equiv \langle {\rm in}|\hat {a}_m^{\dag} \hat {a}_m|{\rm in}\rangle \nonumber\\[2mm] &&=\sum_n|\beta_{nm}|^2 +\sum_{n,k}\left[\left(\alpha_{nm}^*\alpha_{km} +\beta_{nm}^*\beta_{km}\right)\langle\hat {b}_n^{\dag}\hat {b}_k\rangle + 2 {\rm Re}\left(\beta_{nm}\alpha_{km}\langle\hat {b}_n\hat {b}_k \rangle\right)\right] \nonumber\\[2mm] &&= \sum_{n=1}^{\infty}\frac{m}{n}|\eta_m^{(n)}|^2 + \sum_{n,k=1}^{\infty} \frac{m}{\sqrt{nk}} \left(\xi_{m}^{(n)*}\xi_{m}^{(k)} + \eta_{m}^{(n)*}\eta_{m}^{(k)} \right) \langle\hat {b}_n^{\dag}\hat {b}_k\rangle \nonumber\\[2mm] &&+ 2{\rm Re}\sum_{n,k=1}^{\infty} \frac{m}{\sqrt{nk}} \eta_{m}^{(n)}\xi_{m}^{(k)}\langle\hat {b}_n\hat {b}_k \rangle . \label{number}\end{aligned}$$ The first sum in the right-hand sides of the relations above describes the effect of the photon creation from vacuum due to the NSCE, while the other sums are different from zero only in the case of a nonvacuum initial state of the field. To find the coefficients $\xi_k^{(n)}$ and $\eta_k^{(n)}$ one has to solve an infinite set of coupled equations (\[Qeq\]) ($k=1,2,\ldots$) with time-dependent coefficients, moreover, each equation also contains an infinite number of terms. 
However, the problem can be essentially simplified, if the walls perform small oscillations at the frequency $\omega_w$ close to some unperturbed field eigenfrequency: $$L(t)=L_0\left(1+\varepsilon_L \sin\left[p\omega_1(1+\delta)t\right] \right), \quad u(t)=\varepsilon_u L_0\sin\left[p\omega_1(1+\delta)t +\varphi\right].$$ Assuming $|\varepsilon_L|,|\varepsilon_u|\sim \varepsilon\ll 1$, it is natural to look for the solutions of equation (\[Qeq\]) in the form similar to (\[ksi\]), $$Q_k^{(n)}(t)=\xi_k^{(n)}e^{-i\omega_k(1+\delta)t} +\eta_k^{(n)}e^{i\omega_k(1+\delta)t}, \label{ksitime}$$ but now we allow the coefficients $\xi_k^{(n)}$ and $\eta_k^{(n)}$ to be [*slowly varying functions of time*]{}. The further procedure is well known in the theory of parametrically excited systems \[41-43\]. First we put expression (\[ksitime\]) into equation (\[Qeq\]) and neglect the terms $\ddot{\xi },\ddot{\eta}$ (having in mind that $\dot{\xi },\dot{\eta} \sim\varepsilon$, while $\ddot{\xi },\ddot{\eta}\sim\varepsilon^2$), as well as the terms proportional to $\dot{L}^2\sim\dot{u}^2\sim\varepsilon^2$. Multiplying the resulting equation for $Q_k$ by the factors $\exp\left[i\omega_k(1+\delta)t\right]$ and $\exp\left[-i\omega_k(1+\delta)t\right]$ and performing averaging over fast oscillations with the frequencies proportional to $\omega_k$ (since the functions $\xi ,\eta$ practically do not change their values at the time scale of $2\pi /\omega_k$) one can verify that only the terms with the difference $j-k=\pm p$ survive in the right-hand side. Consequently, for [*even*]{} values of $p$ the term $\dot{u}$ in $g_{kj}(t)$ does not make any contribution to the simplified equations of motion, thus only the rate of change of the cavity length $\dot{L}/L_0$ is important in this case. On the contrary, if $p$ is an [*odd*]{} number, then the field evolution depends on the velocity of the [*centre of the cavity*]{} $v_c=\dot{u}+\dot{L}/2$ and does not depend on $\dot{L}$ alone. 
These [*interference effects*]{} were discussed recently (in the short time limit $\varepsilon\omega_1 t\ll 1$) in [@Ji98] (see also [@Lamb]). We assume hereafter that $u=0$ (i.e. that the left wall is at rest), since this assumption does not change anything if $p$ is an even number, whereas one should simply replace $\dot{L}/L_0$ by $2v_c/L_0$ if $p$ is an odd number. The final equations for the coefficients $\xi_k^{(n)}$ and $\eta_k^{(n)}$ contain only three terms with simple [*time independent*]{} coefficients in the right-hand sides: $$\begin{aligned} \frac {\mbox{d}}{\mbox{d}\tau}\xi_k^{(n)}&=& (-1)^p\left[(k+p)\xi_{k+p}^{(n)}- (k-p)\xi_{k-p}^{(n)}\right] +2i\gamma k \xi_{k}^{(n)} , \label{pksik}\\ \frac {\mbox{d}}{\mbox{d}\tau}\eta_k^{(n)}&=& (-1)^p\left[(k+p)\eta_{k+p}^{(n)} -(k-p)\eta_{k-p}^{(n)}\right] - 2i\gamma k \eta_{k}^{(n)}. \label{petak}\end{aligned}$$ The dimensionless parameters $\tau$ (a “slow” time) and $\gamma$ read ($\varepsilon\equiv\varepsilon_L$) $$\tau =\frac 12\varepsilon\omega_1t, \qquad \gamma=\delta/\varepsilon. \label{tau}$$ The initial conditions are $$\xi_k^{(n)}(0)=\delta_{kn},\qquad\eta_k^{(n)}(0)=0. \label{ini}$$ Note, however, that uncoupled equations (\[pksik\])-(\[petak\]) hold only for $k\ge p$. This means that they describe the evolution of [*all*]{} the Bogoliubov coefficients only if $p=1$. Then [*all*]{} the functions $\eta_k^{(n)}(t)$ are [*identically equal to zero*]{} due to the initial conditions (\[ini\]), consequently, no photon can be created from vacuum. If $p\ge 2$, we have $p-1$ pairs of [*coupled*]{} equations for the coefficients with lower indices $1\le k\le p-1$ $$\begin{aligned} \frac {\mbox{d}}{\mbox{d}\tau}\xi_k^{(n)}&=& (-1)^p\left[(k+p)\xi_{k+p}^{(n)}- (p-k)\eta_{p-k}^{(n)}\right] +2i\gamma k \xi_{k}^{(n)} , \label{pksikin}\\ \frac {\mbox{d}}{\mbox{d}\tau}\eta_k^{(n)}&=& (-1)^p\left[(k+p)\eta_{k+p}^{(n)} -(p-k)\xi_{p-k}^{(n)}\right] - 2i\gamma k \eta_{k}^{(n)}.
\label{petakin}\end{aligned}$$ In this case some functions $\eta_{k}^{(n)}(t)$ are not equal to zero at $t>0$, thus we have the effect of photon creation from the vacuum. It is convenient to introduce a new set of coefficients $\rho_k^{(n)}$, whose lower indices run over all integers from $-\infty$ to $\infty$: $$\rho_k^{(n)}=\left\{ \begin{array}{ll} \xi_k^{(n)}\,, & k>0\\ 0\,, & k=0\\ -\eta_{-k}^{(n)}\,, & k<0 \end{array}\right. \label{defrho}$$ Then one can verify that equations (\[pksik\])-(\[petak\]) and (\[pksikin\])-(\[petakin\]) can be combined into a [*single*]{} set of equations ($k=\pm 1, \pm 2, \ldots$) $$\frac {\mbox{d}}{\mbox{d}\tau}\rho_k^{(n)}= (-1)^p\left[(k+p)\rho_{k+p}^{(n)}- (k-p)\rho_{k-p}^{(n)}\right] +2i\gamma k \rho_{k}^{(n)} \label{prhok}$$ with the initial conditions ($n=1,2,\ldots$) $$\rho_k^{(n)}(0)=\delta_{kn}. \label{inirho}$$ A remarkable feature of the set of equations (\[prhok\]) is that its solutions satisfy [*exactly*]{} the unitarity conditions (\[cond1\])-(\[cond3\]) (although the coefficients $\xi_k^{(n)}$ and $\eta_k^{(n)}$ introduced via equation (\[ksitime\]) have additional phase factors in comparison with the coefficients defined in equation (\[ksi\]), these phases do not affect the identities concerned), which can be rewritten as $$\begin{aligned} && \sum_{m=-\infty}^{\infty} m\rho_{m}^{(n)*}\rho_{m}^{(k)} =n\delta_{nk}\,, \quad n,k=1,2,\ldots \label{rhocond1} \\[2mm] && \sum_{n=1}^{\infty}\frac{m}{n} \left[\rho_{m}^{(n)*}\rho_{j}^{(n)} - \rho_{-m}^{(n)*}\rho_{-j}^{(n)} \right] =\delta_{mj}\,, \quad m,j=1,2,\ldots \label{rhocond2}\\[2mm] && \sum_{n=1}^{\infty}\frac{1}{n} \left[\rho_{m}^{(n)*}\rho_{-j}^{(n)} - \rho_{j}^{(n)*}\rho_{-m}^{(n)} \right] =0\,, \quad m,j=1,2,\ldots \label{rhocond3}\end{aligned}$$ For example, calculating the derivative $I=(d/d\tau)\sum_{m=-\infty}^{\infty}\, m\rho_{m}^{(n)*}\rho_{m}^{(k)}$ with the aid of equation (\[prhok\]) and its complex conjugated counterpart one can easily verify that $I=0$.
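The conservation of $I$ can also be observed numerically. The Python sketch below (the truncation at $|k|\le K$ and the RK4 stepper are our own choices) integrates (\[prhok\]) from the initial conditions (\[inirho\]) for $n=1$, $p=2$, and evaluates the invariant of (\[rhocond1\]) together with the total weight of the negative-index components, which is nonzero precisely when some $\eta_k^{(n)}\neq 0$, i.e. when photons are created:

```python
def rhs(rho, p, gamma, K):
    """Right-hand side of the reduced equations; rho[K + k] stores rho_k, |k| <= K."""
    sgn = (-1) ** p
    def get(k):
        return rho[K + k] if 0 < abs(k) <= K else 0.0   # rho_0 = 0, truncate |k| > K
    return [0.0 if k == 0 else
            sgn * ((k + p) * get(k + p) - (k - p) * get(k - p))
            + 2j * gamma * k * get(k)
            for k in range(-K, K + 1)]

def step(rho, h, p, gamma, K):
    """One classical RK4 step of size h."""
    axpy = lambda v, c, w: [a + c * b for a, b in zip(v, w)]
    k1 = rhs(rho, p, gamma, K)
    k2 = rhs(axpy(rho, h / 2, k1), p, gamma, K)
    k3 = rhs(axpy(rho, h / 2, k2), p, gamma, K)
    k4 = rhs(axpy(rho, h, k3), p, gamma, K)
    return [r + h / 6 * (a + 2 * b + 2 * c + d)
            for r, a, b, c, d in zip(rho, k1, k2, k3, k4)]

K, p, gamma, tau, steps = 60, 2, 0.3, 0.3, 500
rho = [0.0] * (2 * K + 1)
rho[K + 1] = 1.0                                  # rho_k(0) = delta_{k1}, i.e. n = 1
for _ in range(steps):
    rho = step(rho, tau / steps, p, gamma, K)

invariant = sum(k * abs(rho[K + k]) ** 2 for k in range(-K, K + 1))  # should stay = n = 1
created = sum(m * abs(rho[K - m]) ** 2 for m in range(1, K + 1))     # weight of eta-part
```

At this small slow time the high modes are barely populated, so the truncation error is negligible and the invariant stays at $n=1$ to the accuracy of the integrator.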
Then the value of the right-hand side of (\[rhocond1\]) is a consequence of the initial conditions (\[inirho\]). The identities (\[rhocond2\]) and (\[rhocond3\]) can be verified in a similar way, if one uses instead of (\[prhok\]) the recurrence relations between the coefficients $\rho_{m}^{(n)}$ with the same lower index $m$ but with different [*upper*]{} indices derived in section \[generic\]. Due to the initial conditions (\[inirho\]) the solutions to (\[prhok\]) satisfy the relation $$\rho_{j+mp}^{(k+np)}\equiv 0 \quad {\rm if}\ j\neq k \label{identrho}$$ $$j,k=0,1,\ldots,p-1, \quad m=0,\pm 1,\pm 2, \ldots\,, \quad n=0, 1, 2, \ldots$$ Consequently, the nonzero coefficients $\rho_m^{(n)}$ form $p$ independent subsets $$y_k^{(q,j)}\equiv\rho_{j+kp}^{(j+qp)} \label{subsets}$$ $$j=0,1,\ldots,p-1, \quad q=0,1,2,\ldots\,, \quad k=0,\pm 1,\pm 2, \ldots$$ The subset $y_k^{(q,0)}$ is distinguished, because $y_k^{(q,0)}\equiv 0$ for $k\le 0$ and the upper index $q$ begins at $q=1$. This subset is considered in detail in section \[semi\]. The generic case is studied in section \[generic\]. Total energy and the rate of photon generation ============================================== It is remarkable that to calculate the total energy of the field (normalized by $\hbar\omega_1$) $${\cal E}(\tau)\equiv \sum_m m{\cal N}_m(\tau)$$ one does not need explicit expressions of the coefficients $\rho_m^{(n)}(\tau)$.
Calculating the first and the second derivatives of ${\cal E}(\tau)$ with the aid of the relations (\[defrho\])-(\[rhocond3\]) one can obtain a simple differential equation (see \[Etot\]) $$\ddot{\cal E}=4p^2a^2{\cal E} +4p^2\gamma^2{\cal E}(0) + \frac{p^2}{6}(p^2-1) +2p^2\gamma\sigma {\rm Im}({\cal G}) \label{eqEtot}$$ where $$a= \sqrt{1-\gamma^2}\;, \quad \sigma=(-1)^p\,, \label{def-a}$$ $${\cal G} = 2\sum_{n=1}^{\infty} \sqrt{n(n+p)}\langle\hat {b}_n^{\dag}\hat {b}_{n+p}\rangle +\sum_{n=1}^{p-1}\sqrt{n(p-n)}\langle\hat {b}_n\hat {b}_{p-n}\rangle \label{defcalG}$$ (if $p=1$, the last sum in (\[defcalG\]) should be replaced by zero). The quantum averaging is performed over the initial state of the field (no matter pure or mixed). The initial value of the total energy is ${\cal E}(0)= \sum_{n=1}^{\infty} n \langle\hat {b}_n^{\dag}\hat {b}_{n}\rangle $, whereas the initial value of the first derivative $\dot{\cal E}(\tau)$ reads (see \[Etot\]) $$\dot{\cal E}(0)=-p\sigma{\rm Re}({\cal G}) \label{inconE}$$ Consequently, the solution to equation (\[eqEtot\]) can be expressed as $$\begin{aligned} {\cal E}(\tau)&=& {\cal E}(0) +\frac{2\sinh^2(pa\tau)}{a^2} \left[{\cal E}(0) +\frac{p^2-1}{24} +\frac{\gamma\sigma }{2} {\rm Im}({\cal G})\right] \nonumber\\ &-&\sigma{\rm Re}({\cal G}) \frac{\sinh(2pa\tau)}{2a}. \label{ansEtot}\end{aligned}$$ We see that the total energy increases exponentially at $\tau\to\infty$, provided $\gamma< 1$. In the special case $\gamma=0$ such asymptotical behaviour of the total energy was obtained also in the frameworks of other approaches in [@LawPRL; @Law-new; @Cole; @Mep]. Here we have found the explicit dependence of the total energy on time in the whole interval $0\le \tau<\infty$, as well as a nontrivial dependence on the initial state of field, which is contained in the constant parameter ${\cal G}$. This parameter is equal to zero for initial Fock or thermal states of the field. 
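Before discussing the role of ${\cal G}$ further, note that the closed form (\[ansEtot\]) is easy to sanity-check numerically. The sketch below (parameter values arbitrary, $\gamma<1$) compares a central second difference of the vacuum-state energy, ${\cal E}(0)={\cal G}=0$, with the right-hand side of (\[eqEtot\]):

```python
import math

def E_vac(tau, p, gamma):
    """Closed-form total energy for the initial vacuum state, gamma < 1:
    E = (p^2 - 1) / (12 a^2) * sinh^2(p a tau), with a^2 = 1 - gamma^2."""
    a2 = 1 - gamma ** 2
    return (p * p - 1) / (12 * a2) * math.sinh(p * math.sqrt(a2) * tau) ** 2

# check E'' = 4 p^2 a^2 E + p^2 (p^2 - 1) / 6 by central differences
p, gamma, h = 3, 0.5, 1e-4
a2 = 1 - gamma ** 2
residuals = []
for tau in (0.1, 0.7, 1.3):
    second = (E_vac(tau + h, p, gamma) - 2 * E_vac(tau, p, gamma)
              + E_vac(tau - h, p, gamma)) / h ** 2
    target = 4 * p * p * a2 * E_vac(tau, p, gamma) + p * p * (p * p - 1) / 6
    residuals.append(abs(second - target) / target)
```

The residuals vanish to finite-difference accuracy at every $\tau$, as they must, since the identity $2p^2a^2\cdot(p^2-1)/(12a^2)=p^2(p^2-1)/6$ makes the closed form an exact solution.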
However, in a generic case ${\cal G}$ is different from zero, and it can affect significantly the total energy, if ${\cal E}(0)\gg 1$. Consider, for example, the case $p=2$. If initially the first mode ($n=1$) was in the coherent state $|\alpha\rangle$ with $\alpha=|\alpha|e^{i\phi}$, $|\alpha|\gg 1$, and all other modes were not excited, then ${\cal E}(0)=|\alpha|^2$, ${\cal G}=\alpha^2$, so for $\tau\gg 1$ and $\gamma=0$ (exact resonance) we have ${\cal E}(\tau\gg 1)\approx \frac14 |\alpha|^2 e^{4\tau} \left[2-\cos(2\phi)\right]$. The maximal value of the energy in this case is three times bigger than the minimal one, depending on the phase $\phi$. According to (\[ansEtot\]), the initial stage of the evolution does not depend on the detuning parameter $\gamma$ for all states which yield Im$({\cal G})=0$, since at $\tau\to 0$ one has $${\cal E}(\tau) \approx {\cal E}(0) -\sigma{\rm Re}({\cal G}) p\tau +2\left[{\cal E}(0) +\frac{p^2-1}{24} +\frac{\gamma\sigma }{2} {\rm Im}({\cal G})\right] (p\tau)^2 \label{Etot-0}$$ Formula (\[Etot-0\]) is [*exact*]{} in the case of $\gamma=1$. If $\gamma>1$, then one should replace each function $\sinh(ax)/a$ in (\[ansEtot\]) by its trigonometrical counterpart $\sin(\tilde{a}x)/\tilde{a}$, where $$\tilde{a}= \sqrt{\gamma^2-1} \label{def-tilda}$$ In this case the total energy [*oscillates*]{} in time with the period $\pi/(p\tilde{a})$, returning to the initial value at the end of each period. For a large detuning $\gamma\gg 1$ the amplitude of oscillations decreases as $\gamma^{-1}$ if Re${\cal G}\neq 0$ and as $\gamma^{-2}$ otherwise. For the initial vacuum state of field we have $${\cal E}^{(vac)}(\tau)= \frac{p^2-1}{12a^2}\sinh^2(pa\tau)\,. 
\label{Etotvac}$$ The total number of photons in all the modes equals ${\cal N}={\cal N}^{(vac)}+{\cal N}^{(cav)}$, where $${\cal N}^{(vac)}=\sum_{m,n=1}^{\infty}\frac{m}{n}|\eta_m^{(n)}|^2 \label{Nvac}$$ is the total number of photons generated from vacuum, and the sum $${\cal N}^{(cav)}= {\cal N}(0) + 2\sum_{m,n,k=1}^{\infty} \frac{m}{\sqrt{nk}} \left[ \eta_{m}^{(n)*}\eta_{m}^{(k)} \langle\hat {b}_n^{\dag}\hat {b}_k\rangle + {\rm Re}\left( \eta_{m}^{(n)}\xi_{m}^{(k)} \langle\hat {b}_n\hat {b}_k \rangle\right)\right] \label{Ncav}$$ describes the influence of the initial state of the field (to obtain the formula (\[Ncav\]) one should take into account the identity (\[cond1\])). Differentiating (\[Nvac\]) and (\[Ncav\]) with respect to $\tau$ and performing the summation over $m$ with the help of equations (\[pksik\])-(\[petakin\]) or (\[prhok\]) one can obtain the formulae $$\frac {\mbox{d}{\cal N}^{(vac)}}{\mbox{d}\tau}= 2\sigma{\rm Re}\sum_{n=1}^{\infty}\frac 1n \sum_{m=1}^p m(p-m) \rho_{-m}^{(n)*}(\tau )\rho_{p-m}^{(n)}(\tau ) \label{ratetot}$$ $$\begin{aligned} \frac{d{\cal N}^{(cav)}}{d\tau}&=& 2\sigma\sum_{n,k=1}^{\infty} \frac{\langle\hat {b}_n^{\dag}\hat {b}_k\rangle } {\sqrt{nk}} \sum_{m=1}^p m(p-m) \left[ \rho_{-m}^{(n)*}\rho_{p-m}^{(k)}+ \rho_{-m}^{(k)}\rho_{p-m}^{(n)*}\right]\nonumber\\ &-& 2\sigma{\rm Re}\sum_{n,k=1}^{\infty} \frac{\langle\hat{b}_n\hat{b}_k\rangle} {\sqrt{nk}} \sum_{m=1}^p m(p-m) \left[ \rho_{-m}^{(n)}\rho_{m-p}^{(k)}+ \rho_{m}^{(n)}\rho_{p-m}^{(k)}\right]. \label{extradot}\end{aligned}$$ Consequently, to calculate the total number of photons one has to know the coefficients $\eta_{m}^{(n)}$ and $\xi_{m}^{(n)}$ with the lower indices $m=1,2,\ldots,p-1$. “Semi-resonance” case ($p=1$) {#semi} ============================= Let us start calculating the Bogoliubov coefficients with the “semi-resonance” case $p=1$. 
It is distinguished, since all the coefficients $\eta_k^{(n)}(t)$ are equal to zero, and the total number of photons is conserved. In this specific case one has to solve the set of equations ($k,n=1,2,\ldots$) $$\frac {\mbox{d}}{\mbox{d}\tau}\xi_k^{(n)}= (k-1)\xi_{k-1}^{(n)} - (k+1)\xi_{k+1}^{(n)} +2i\gamma k \xi_{k}^{(n)} \label{1ksik}$$ with the initial conditions $\xi_k^{(n)}(0)=\delta_{kn}$. To get rid of the infinite number of equations we introduce the [*generating function*]{} $$X^{(n)}(z,\tau)=\sum_{k=1}^{\infty}\xi_k^{(n)}(\tau )z^k \label{1defX}$$ where $z$ is an auxiliary variable. Using the relation $kz^k=z(\mbox{d}z^k/\mbox{d}z)$ one obtains the first-order partial differential equation $$\frac{\partial X^{(n)}}{\partial\tau}=\left(z^2-1 +2i\gamma z\right) \frac{\partial X^{(n)}}{\partial z}+\xi_1^{(n)}(\tau) \label{eqG}$$ whose solution satisfying the initial condition $X^{(n)}(z,0)=z^n$ reads $$X^{(n)}(z,\tau)=\left[\frac{z g(\tau) -S(\tau)} {g^*(\tau)- zS(\tau)}\right]^n +\int_0^{\tau} \xi_1^{(n)}(x)\,\mbox{d}x \label{solG}$$ where $$S(\tau)=\sinh(a\tau)/a\,, \quad g(\tau)= \cosh(a\tau) + i\gamma S(\tau)\,. \label{def-gS}$$ Differentiating (\[solG\]) with respect to $z$ at $z=0$ we find $$\xi_1^{(n)}(\tau)=\frac{n[-S(\tau)]^{n-1}} {[g^*(\tau)]^{n+1}}. \label{sol-1n}$$ Putting this expression into the integral in the right-hand side of equation (\[solG\]) we arrive at the final form of the generating function $$X^{(n)}(z,\tau)=\left[\frac{z g(\tau) -S(\tau)} {g^*(\tau)- zS(\tau)}\right]^n -\left[\frac{ -S(\tau)}{g^*(\tau)}\right]^n \label{solGfin}$$ which automatically satisfies the necessary boundary condition $X^{(n)}(0,\tau)=0$. The right-hand side of (\[solGfin\]) can be expanded into a power series in $z$ with the aid of the formula ([@Bateman], vol. 3, section 19.6, equation (16)) $$(1-t)^{b-c}(1-t+xt)^{-b}=\sum_{m=0}^\infty \frac{t^m}{m!} (c)_m F(-m,b;c;x),$$ where $F(a,b;c;x)$ denotes the Gauss hypergeometric function, and $(c)_k\equiv \Gamma(c+k)/\Gamma(c)$.
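The expression (\[sol-1n\]) can also be verified directly by integrating the truncated system (\[1ksik\]) numerically. A minimal sketch (the detuning $\gamma=0.3$, the initial mode $n=3$, the truncation order and the step size are arbitrary choices made for the illustration):

```python
import math

gamma = 0.3
a = math.sqrt(1.0 - gamma ** 2)
K = 60                        # truncation order of the mode ladder
n0 = 3                        # initially excited upper index n
tau_max, steps = 0.4, 4000

def deriv(xi):
    """Right-hand side of (1ksik):
    d xi_k/d tau = (k-1) xi_{k-1} - (k+1) xi_{k+1} + 2 i gamma k xi_k."""
    out = [0j] * K
    for k in range(1, K + 1):
        left = xi[k - 2] if k >= 2 else 0j
        right = xi[k] if k < K else 0j
        out[k - 1] = (k - 1) * left - (k + 1) * right + 2j * gamma * k * xi[k - 1]
    return out

xi = [0j] * K
xi[n0 - 1] = 1.0 + 0j         # initial condition xi_k(0) = delta_{kn}
h = tau_max / steps
for _ in range(steps):        # classical fourth-order Runge-Kutta scheme
    k1 = deriv(xi)
    k2 = deriv([x + 0.5 * h * d for x, d in zip(xi, k1)])
    k3 = deriv([x + 0.5 * h * d for x, d in zip(xi, k2)])
    k4 = deriv([x + h * d for x, d in zip(xi, k3)])
    xi = [x + h * (d1 + 2 * d2 + 2 * d3 + d4) / 6.0
          for x, d1, d2, d3, d4 in zip(xi, k1, k2, k3, k4)]

S = math.sinh(a * tau_max) / a
gstar = math.cosh(a * tau_max) - 1j * gamma * S
exact = n0 * (-S) ** (n0 - 1) / gstar ** (n0 + 1)   # equation (sol-1n)
err = abs(xi[0] - exact)
```

The agreement is limited only by the truncation order and the integration step; the same run also reproduces the higher coefficients $\xi_m^{(n)}$ extracted from the generating function.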
In turn, the function $(c)_m F(-m,b;c;x)$ with an integer $m$ is reduced to the Jacobi polynomial in accordance with the formula ([@Bateman], vol. 2, section 10.8, equation (16)) $$(c)_m F(-m,b;c;x)=m!(-1)^m P_m^{(b-m-c,\,c-1)}(2x-1).$$ Consequently, $$(1-t)^{b-c}(1-t+xt)^{-b}=\sum_{m=0}^\infty (-t)^m P_m^{(b-m-c,\,c-1)}(2x-1) \label{genBatJac}$$ and the coefficient $\xi_m^{(n)}(\tau)$ reads $$\xi_m^{(n)}(\tau)=(-\kappa)^{n-m}\lambda^{n+m} P_m^{(n-m,\,-1)}\left(1-2\kappa^2\right) \label{sol-mnJac}$$ where $$\begin{aligned} \kappa(\tau)&=&\frac{S}{\sqrt{gg^*}} \equiv \frac{S(\tau)} {\sqrt{1+S^2(\tau)}} \label{def-kap}\\ \lambda(\tau)&=&\sqrt{g(\tau)/g^*(\tau)}\equiv \sqrt{1-\gamma^2\kappa^2}+i\gamma\kappa, \quad |\lambda|=1. \label{def-lam}\end{aligned}$$ The form (\[sol-mnJac\]) is useful for $n\ge m$. To find a convenient formula in the case of $n\le m$ we introduce the [*two-dimensional*]{} generating function $$\begin{aligned} &&X(\tau,z,y)=\sum_{m=1}^\infty\sum_{n=1}^\infty z^m y^n \xi_m^{(n)}(\tau)=\sum_{n=1}^\infty X^{(n)}(z,\tau)y^n \nonumber\\ &&= \frac{ yz}{[g^*(\tau)+yS(\tau)] [g^*(\tau) -g(\tau)yz+ S(\tau)(y-z)]}. \label{G}\end{aligned}$$ The coefficient at $z^m$ in (\[G\]) yields another one-dimensional generating function $$X_{m}(\tau,y)=\sum_{n=1}^\infty y^n \xi_m^{(n)}(\tau) = y\frac{[g(\tau)y+S(\tau)]^{m-1}} {[g^*(\tau) +yS(\tau)]^{m+1}}. \label{Gm}$$ Then equation (\[genBatJac\]) results in the expression $$\xi_m^{(n)}= (1-\kappa^2)\kappa^{m-n}\lambda^{n+m} P_{n-1}^{(m-n,\,1)} \left(1-2\kappa^2\right). \label{sol-nmJac}$$ Note that the functions $S(\tau)$, $\cosh(a\tau)$ and $\kappa(\tau)$ are real for any value of $\gamma$. For $\gamma>1$ it is convenient to use instead of (\[def-gS\]) the equivalent expressions in terms of the trigonometrical functions: $$\tilde{S}(\tau)=\sin(\tilde{a}\tau)/\tilde{a}\,, \quad \tilde{g}(\tau)= \cos(\tilde{a}\tau) + i\gamma \tilde{S}(\tau)\,. 
\label{def-tilgS}$$ In the special case $\gamma=1$ one has $S(\tau)=\tau$ and $g(\tau)=1 + i\tau$. In particular, $$\xi_m^{(n)}(\tau;\gamma=1) = \frac{\tau^{m-n}(1+i\tau)^{n-1}} {(1-i\tau)^{m+1}} P_{n-1}^{(m-n,\,1)} \left(\frac{1-\tau^2}{1+\tau^2}\right). \label{1a0}$$ The knowledge of the two-dimensional generating function enables to verify the unitarity condition (\[cond2\]). Consider the product $X^*(\tau,z_1,y_1)X(\tau,z_2,y_2)$, which is a four–variable generating function for the products $\xi_m^{(n)*}\xi_l^{(k)}$. Taking $y_1=\sqrt{u}\exp(i\varphi)$, $y_2^*=\sqrt{u}\exp(-i\varphi)$ and integrating over $\varphi$ from $0$ to $2\pi$ one obtains a three–variable generating function $\sum z_1^{*m} z_2^l u^n \xi_m^{(n)*}\xi_l^{(n)}$. Dividing it by $u$ and integrating the ratio over $u$ from $0$ to $1$ one arrives finally at the relation $$\sum_{n,m,l=1}^{\infty} z_1^{*m} z_2^l \frac1n \xi_m^{(n)*} \xi_l^{(n)}=-\ln\left(1-z_1^* z_2\right)=\sum_{k=1}^{\infty} \frac1k \left(z_1^* z_2\right)^k, \label{ident-2}$$ which is equivalent to the special case of (\[cond2\]) for $\eta_{m}^{(k)}\equiv 0$: $$\sum_{n}\; \frac1n \xi_{m}^{(n)*}(\tau)\xi_{j}^{(n)}(\tau) \equiv \frac1m \delta_{mj}. \label{cond2-1}$$ Suppose that initially there was a single excited mode labeled with an index $n$. Due to the linearity of the process one may assume that the mean number of photons in this mode was $\nu_n=1$. Then the mean occupation number of the $m$-th mode at $\tau>0$ equals $${\cal N}_m^{(n)}=\frac{m}{n}\left[\xi_m^{(n)}\right]^2 = \frac{m}{n}\left[(1-\kappa^2) \kappa^{m-n} P_{n-1}^{(m-n,\,1)}\left( 1-2\kappa^2 \right)\right]^2 \label{num-nm}$$ where $\kappa$ is given by (\[def-kap\]). Although formula (\[num-nm\]) seems asymmetric with respect to the indices $m$ and $n$, actually the relation $${\cal N}_m^{(n)}={\cal N}_n^{(m)} \label{nm}$$ holds. To prove it we calculate the generating function $$Q(u,v)\equiv \sum_{m,n=1}^{\infty}v^m u^n {\cal N}_m^{(n)}. 
\label{defgen-N}$$ It is related to the function $X(\tau,z,y)$ (\[G\]) as follows $$Q(u,v)= v\frac{d}{dv}\int_0^{u}dr\int_0^{2\pi}\int_0^{2\pi} \frac{d\varphi d\psi}{(2\pi)^2} X\left(\sqrt{r} e^{i\varphi},\sqrt{v} e^{i\psi}\right) X^*\left(\sqrt{r} e^{i\varphi},\sqrt{v} e^{i\psi}\right).$$ Having performed all the calculations we arrive at the expression $$2Q(u,v)= \frac{1+uv -\kappa^2(u+v)} {\left\{ \left[1+uv -\kappa^2(u+v)\right]^2 -4uv(1-\kappa^2)^2\right\}^{1/2}} -1. \label{gen-N}$$ Then (\[nm\]) is a consequence of the relation $Q(u,v)=Q(v,u)$. The initial stage of the evolution of ${\cal N}_m^{(n)}(\tau)$ does not depend on the detuning parameter $\gamma$, since the principal term of the expansion of (\[num-nm\]) with respect to $\tau$ yields $${\cal N}_{n\pm q}^{(n)}(\tau\to 0)= \frac{n\pm q}{n} \left[\frac{n(n\pm 1)\ldots(n\pm q \mp 1)}{q!}\right]^2\tau^{2q}.$$ However, the further evolution is sensitive to the value of $\gamma$. If $\gamma< 1$, then the function ${\cal N}_m^{(n)}(\tau)$ has many maxima and minima (especially for large values of $m$ and $n$), but finally it decreases asymptotically as $mna^4/\cosh^4(a\tau)$. On the contrary, if $\gamma>1$, then the function ${\cal N}_m^{(n)}(\tau)$ is periodic with the period $\pi/\tilde{a}$, and it turns into zero for $\tau=k\pi/\tilde{a}$, $k=1,2,\ldots$ (except for the case $m=n$). The magnitude of the coefficient ${\cal N}_m^{(n)}(\tau)$ decreases approximately as $\gamma^{-2|m-n|}$ for $\gamma\gg 1$. In the special case of a cavity filled with [*high-temperature thermal radiation*]{}, the initial distribution over modes reads $\nu_n(T)=T/n$, constant $T$ being proportional to the temperature. Then ${\cal N}_m^{\{T\}}=\sum_{n}\nu_n(T) {\cal N}_m^{(n)}$.
This sum is nothing but $T$ multiplied by the coefficient at $v^m$ in the Taylor expansion of the function $$\tilde{Q}(v)=\int_0^1\frac{\mbox{d}u}{u}Q(u,v)= \ln\frac{1-v\kappa^2(\tau)}{1-v}.$$ Thus we have $${\cal E}_m^{\{T\}}=m{\cal N}_m^{\{T\}}= T\left(1-[\kappa(\tau)]^{2m}\right).$$ We see that the resonance vibrations of the wall cause an effective cooling of the lowest electromagnetic modes (provided $|\gamma|<1$). The total number of quanta and the total energy in this example are formally infinite, due to the equipartition law of the classical statistical mechanics. In reality both these quantities are finite, since $\nu_n(T) < T/n$ at $n\to\infty$ due to the quantum corrections. Other initial conditions in the special case of the [*exact*]{} resonance ($\gamma=0$) were considered in [@D96]. The total energy depends on time according to equation (\[ansEtot\]) with $p=1$. An infinite growth of the energy of a classical string whose ends oscillate at the frequency close to $\omega_1$ in the case of finite amplitude and detuning ($\varepsilon\sim\delta\sim {\cal O}(1)$) was considered in [@Dit]. Generic resonance case $p\ge 2$ {#generic} =============================== Now we turn to calculating the nonzero Bogoliubov coefficients $y_m^{(n,j)}(\tau)$ (\[subsets\]) in the generic case $p\ge 2$. One can easily verify that in the distinguished case $j=0$ the functions $y_m^{(n,0)}(\tau)$ with $m\ge 1$ are given by the formulae for $\xi_m^{(n)}(\tau)$ found in the preceding section, provided one replaces $\tau$ by $\sigma p\tau$ and $\gamma$ by $\sigma\gamma$ (remember that $\sigma\equiv(-1)^p$), whereas $y_m^{(n,0)}(\tau)\equiv 0$ for $m\le 0$. 
In the generic case $j\neq 0$ it is reasonable to introduce a generating function in the form of the [*Laurent series*]{} of an auxiliary variable $z$ $$R^{(n,j)}(z,\tau)=\sum_{m=-\infty}^{\infty}y_m^{(n,j)}(\tau)z^m \label{defR}$$ since the lower index of the coefficient $y_m^{(n,j)}$ runs over all integers from $-\infty$ to $\infty$. One can verify that the function (\[defR\]) satisfies the [*homogeneous*]{} equation $$\frac{\partial R^{(n,j)}}{\partial\tau}=\left[\sigma\left( \frac1z -z\right) +2i\gamma \right] \left(j+pz\frac{\partial }{\partial z}\right)R^{(n,j)}. \label{eqR}$$ The solution to (\[eqR\]) satisfying the initial condition $R^{(n,j)}(z,0)=z^n$ reads $$R^{(n,j)}(z,\tau)=z^{-j/p}\left[\frac{z g(p\tau) +\sigma S(p\tau)}{g^*(p\tau)+ z\sigma S(p\tau)}\right]^{n+j/p} \label{solR}$$ where the functions $S(\tau)$ and $g(\tau)$ were defined in (\[def-gS\]). The coefficients of the Laurent series (\[defR\]) can be calculated with the aid of the Cauchy formula $$y_m^{(n,j)}(\tau)=\frac{1}{2\pi i}\oint_{\cal C}\frac{dz}{z^{m+1}} R^{(n,j)}(z,\tau) \label{Cauchy}$$ where the closed curve ${\cal C}$ rounds the point $z=0$ in the complex plane in the counterclockwise direction. Making a scale transformation one can reduce the integral (\[Cauchy\]) with the integrand (\[solR\]) to the integral representation of the Gauss hypergeometric function ([@Bateman], vol 1, section 2.1.3) $$F(a,b;c;x)=\frac{-i\Gamma(c)\exp(-i\pi b)}{2\sin(\pi b)\Gamma(c-b)\Gamma(b)} \oint_{1}^{(0+)}\frac{t^{b-1}(1-t)^{c-b-1}}{(1-tx)^a}dt, \label{intpred}$$ where ${\rm Re}(c-b)>0$, $b\neq 1,2,3,\ldots$, and the integration contour begins at the point $t=1$ and passes around the point $t=0$ in the positive direction. 
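Before evaluating this integral, it is straightforward to confirm numerically that (\[solR\]) obeys (\[eqR\]). A minimal sketch (the values $p=3$, $j=2$, $n=1$, $\gamma=0.2$ and the sample point are arbitrary illustrative choices; derivatives are taken by central differences):

```python
import math

# Arbitrary sample parameters for the check
p, j, n, gamma = 3, 2, 1, 0.2
sigma = (-1) ** p                     # sigma = (-1)^p = -1 here
a = math.sqrt(1.0 - gamma ** 2)

def R(z, tau):
    """Generating function (solR):
    R = z^{-j/p} [ (z g + sigma S)/(g* + z sigma S) ]^{n+j/p}."""
    x = p * tau
    S = math.sinh(a * x) / a
    g = complex(math.cosh(a * x), gamma * S)
    U = (z * g + sigma * S) / (g.conjugate() + z * sigma * S)
    return z ** (-j / p) * U ** (n + j / p)

z0, tau0, h = 0.5 + 0.3j, 0.25, 1e-5
dR_dtau = (R(z0, tau0 + h) - R(z0, tau0 - h)) / (2.0 * h)
dR_dz = (R(z0 + h, tau0) - R(z0 - h, tau0)) / (2.0 * h)
rhs = (sigma * (1.0 / z0 - z0) + 2j * gamma) * (j * R(z0, tau0) + p * z0 * dR_dz)
resid = abs(dR_dtau - rhs)            # vanishes up to O(h^2)
```

(The principal branches of the fractional powers are mutually consistent in a neighbourhood of the chosen point, which is all that is needed for this local check.)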
After some algebra one can obtain the expression $$\begin{aligned} y_m^{(n,j)}&=& -\,\frac{\Gamma\left(-m-j/p\right)\Gamma\left(1+n+j/p\right) \sin\left[\pi\left(m+j/p\right)\right]} {\pi\Gamma\left(1+n-m\right) } \nonumber\\ &\times&(\sigma\kappa)^{n-m}\lambda^{m+n+2j/p} F\left(n+j/p\,,\,-m -j/p\,;\, 1+n-m\,;\, \kappa^2\right). \label{solrhogen}\end{aligned}$$ We assume hereafter $\kappa\equiv\kappa(p\tau)$ and $\lambda\equiv\lambda(p\tau)$, the functions $\kappa(x)$ and $\lambda(x)$ being defined as in (\[def-kap\]) and (\[def-lam\]). Using the known formula $$\Gamma(-z)\sin(\pi z)=-\pi/\Gamma(z+1) \label{gammapm}$$ one can eliminate the gamma-function of a negative argument: $$\begin{aligned} y_m^{(n,j)}&=& \frac{\Gamma\left(1+n+j/p\right) (\sigma\kappa)^{n-m}\lambda^{m+n+2j/p}} {\Gamma\left(1+m+j/p\right)\Gamma\left(1+n-m\right) } \nonumber\\ &\times& F\left(n+j/p\,,\,-m -j/p\,;\, 1+n-m\,;\, \kappa^2\right). \label{solrhogen1}\end{aligned}$$ The form (\[solrhogen1\]) gives an explicit expression for the coefficient $\xi_{j+pm}^{(j+pn)}$ with $0\le m\le n$. Moreover, it clearly shows the fulfilment of the initial condition $y_m^{(n,j)}(\tau=0)=\delta_{mn}$. Transforming the hypergeometric function with the aid of the formula [@Bateman; @Abram] $$\lim_{c\to -n}\frac{F(a,b;c;x)}{\Gamma(c)}= \frac{(a)_{n+1}(b)_{n+1}x^{n+1}}{(n+1)!} F(a+n+1,b+n+1;n+2;x)$$ ($n=0,1,2,\ldots$) and the identity (\[gammapm\]) one obtains an equivalent expression $$\begin{aligned} y_m^{(n,j)}&=& \frac{ \Gamma\left(m+j/p\right) (-\sigma\kappa)^{m-n}\lambda^{m+n+2j/p}} {\Gamma\left(n+j/p\right) \Gamma\left(1+m-n\right) } \nonumber\\ &\times& F\left(m+j/p\,,\,-n -j/p\,;\, 1+m-n\,;\, \kappa^2\right) \label{solrhogen2}\end{aligned}$$ which gives a convenient form of the coefficient $\xi_{j+pm}^{(j+pn)}$ for $m\ge n$. 
Formula (\[solrhogen\]) with negative values of the lower index gives an explicit expression for the nonzero coefficients $\eta_{pk-j}^{(pn+j)}$ ($k\ge 1,n\ge 0$): $$\begin{aligned} \eta_{pk-j}^{(pn+j)}&=& -\,\frac{\Gamma\left(k-j/p\right)\Gamma\left(1+n+j/p\right) \sin\left[\pi\left(k-j/p\right)\right]} {\pi\Gamma\left(1+n+k\right) } \nonumber\\ &\times&(\sigma\kappa)^{n+k}\lambda^{n-k+2j/p} F\left(n+j/p\,,\,k -j/p\,;\, 1+n+k\,;\, \kappa^2\right). \label{solrhogen3}\end{aligned}$$ Note that the expressions (\[solrhogen1\])-(\[solrhogen3\]) are valid for $j=0$, too. In this case they coincide with the formulae obtained in the preceding section. The formulae (\[solrhogen1\])-(\[solrhogen3\]) immediately give the short-time behaviour of the Bogoliubov coefficients at $\tau\to 0$: it is sufficient to put $\kappa\approx p\tau$, $\lambda\approx 1$ and to replace the hypergeometric functions by $1$. In this limit the detuning parameter $\gamma$ drops out of the expressions (in the leading terms of the Taylor expansions). At $\tau\to\infty$ we have the following asymptotics of the functions $\kappa(p\tau)$ and $\lambda(p\tau)$ (if $\gamma\le 1$) $$\kappa\approx 1-\frac12 S^{-2}(p\tau) \to 1, \quad \lambda\to a+i\gamma, \quad \tau\to\infty .$$ Then equation (\[solrhogen\]) together with the known asymptotics of the hypergeometric function $F(a,b;a+b+1;1-x)$ at $x\ll 1$ [@Bateman; @Abram] $$F(a,b;a+b+1;1-x)=\frac{\Gamma(a+b+1)}{\Gamma(a+1)\Gamma(b+1)} \left[1+abx\ln(x) +{\cal O}(x)\right] \label{F1}$$ lead to the asymptotical expression for the Bogoliubov coefficients $$\begin{aligned} y_m^{(n,j)}(\tau\gg 1)&=& \frac{\sin[\pi(m+j/p)]}{\pi(m+j/p)} (a+i\gamma)^{m+n+2j/p}\sigma^{n-m} \nonumber\\ &\times&\left[ 1+{\cal O}\left( \frac{mn}{S^2}\ln S\right)\right] \label{asxieta}\end{aligned}$$ For $\gamma<1$ the correction has an order $mn\tau\exp(-2ap\tau)$, while for $\gamma=1$ it has an order $mn\ln(\tau)/\tau^2$. 
One can verify that the generating function (\[solR\]) satisfies the recurrence relation $$\frac{\partial R^{(q,j)}}{\partial\tau} =(j+qp)\left\{\sigma\left[ R^{(q-1,j)} -R^{(q+1,j)}\right] +2i\gamma R^{(q,j)}\right\} \label{recR}$$ Its immediate consequence is an analogous relation for the Bogoliubov coefficients with the same lower indices: $$\frac{d }{d\tau}\rho_m^{(n)} = n\left\{\sigma\left[ \rho_m^{(n-p)} -\rho_m^{(n+p)}\right] +2i\gamma \rho_m^{(n)}\right\}. \label{recrho}$$ Equation (\[recrho\]) is valid for $n>p$ (when $q\ge 1$ and $j\ge 1$ in (\[recR\])), since the coefficients $\rho_m^{(n)}$ are not defined when $n< 0$. However, using the chain of identities $$\begin{aligned} &&R^{(-1,j)}(z)= z^{-j/p}\left[\frac{S+gz}{g^*+ Sz}\right]^{j/p-1} =\frac1z\left(\frac1z\right)^{j/p-1}\left[\frac{S +g^*/z}{g+ S/z}\right] ^{1-j/p} \\ &&= \frac1z\left[R^{(0,p-j)}(1/z^*)\right]^* =\frac1z \sum_{k=-\infty}^{\infty} y_k^{(0,p-j)*}\left(\frac1z\right)^k =\sum_{k=-\infty}^{\infty} y_{-k-1}^{(0,p-j)*}z^k\end{aligned}$$ one can obtain the first $p-1$ recurrence relations $$\frac{d }{d\tau}\rho_m^{(n)} = n\left\{\sigma\left[ \rho_{-m}^{(p-n)*} -\rho_m^{(p+n)}\right] +2i\gamma \rho_m^{(n)}\right\}, \quad n=1,2,\ldots,p-1. \label{recrho1}$$ To treat the special case $n=p$ (it corresponds to the distinguished subset with $j=0$) one should take into account that $R^{(0,0)}(z)\equiv 1$, which means formally that $\rho_m^{(0)}=\delta_{m0}$. So the last recurrence relation reads $$\frac{d }{d\tau}\rho_m^{(p)} = p\left\{-\sigma\rho_m^{(2p)} +2i\gamma \rho_m^{(p)}\right\}, \quad m\ge 1$$ (remember that $\rho_m^{(p)}\equiv 0$ for $m\le 0$). Now one can verify that the unitarity conditions (\[rhocond2\])-(\[rhocond3\]) are the consequences of the equations (\[recrho\]) and (\[recrho1\]).
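The closed form (\[solrhogen1\]) can be tested against these recurrence relations numerically. In the $(n,j)$ labelling of the subsets (\[subsets\]), equation (\[recrho\]) reads $\mbox{d}y_m^{(n,j)}/\mbox{d}\tau=(j+pn)\left\{\sigma\left[y_m^{(n-1,j)}-y_m^{(n+1,j)}\right]+2i\gamma y_m^{(n,j)}\right\}$. A minimal sketch (the values $p=3$, $j=1$, $\gamma=0.25$, the indices and the sample point are arbitrary; the hypergeometric series converges since $\kappa^2<1$):

```python
import math

p, j, gamma = 3, 1, 0.25
sigma = (-1) ** p
a = math.sqrt(1.0 - gamma ** 2)

def hyp2f1(al, be, c, x, terms=80):
    """Gauss hypergeometric series F(al, be; c; x), convergent for |x| < 1."""
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (al + k) * (be + k) / ((c + k) * (k + 1.0)) * x
        s += t
    return s

def y(m, n, tau):
    """Closed form (solrhogen1), valid for 0 <= m <= n."""
    x = p * tau
    S = math.sinh(a * x) / a
    kap = S / math.sqrt(1.0 + S * S)
    lam = complex(math.sqrt(1.0 - (gamma * kap) ** 2), gamma * kap)
    pref = (math.gamma(1 + n + j / p)
            / (math.gamma(1 + m + j / p) * math.gamma(1 + n - m)))
    return (pref * (sigma * kap) ** (n - m) * lam ** (m + n + 2.0 * j / p)
            * hyp2f1(n + j / p, -m - j / p, 1 + n - m, kap * kap))

# Central-difference check of the recurrence at a sample point
m, n, tau, h = 1, 2, 0.2, 1e-5
lhs = (y(m, n, tau + h) - y(m, n, tau - h)) / (2.0 * h)
rhs = (j + p * n) * (sigma * (y(m, n - 1, tau) - y(m, n + 1, tau))
                     + 2j * gamma * y(m, n, tau))
resid = abs(lhs - rhs)
```

The residual is limited only by the finite-difference step, which supports the algebra leading from (\[solR\]) to (\[solrhogen1\]).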
Differentiating the “vacuum” part of sum (\[number\]) with respect to $\tau$ and performing the summation over the upper index $n$ with the aid of (\[recrho\])-(\[recrho1\]) (remembering that the coefficients $\rho_m^{(n)}$ are different from zero provided the difference $n-m$ is a multiple of $p$) one can obtain the formula for the photon generation rate from vacuum in each mode ($0\le j\le p-1$, $q=0,1,2,\ldots$) $$\begin{aligned} &&\frac{d}{d\tau}{\cal N}_{j+pq}^{(vac)}= -2\sigma(j+pq){\rm Re}\left[\xi_{j+pq}^{(j)}\eta_{j+pq}^{(p-j)}\right] \nonumber\\[2mm] &&=2 p\sqrt{1-\gamma^2\kappa^2}\,\frac{\sin(\pi j/p)\Gamma(q+j/p) \Gamma(1+q+j/p)\Gamma(2-j/p)}{\pi \Gamma(j/p)\Gamma(q+1)\Gamma(q+2)} \kappa^{2q+1} \nonumber\\[2mm] &&\times F\left(q+j/p\,,\,-j/p\,;\,1+q\,;\,\kappa^2\right) F\left(q+j/p\,,\,1-j/p\,;\,2+q\,;\,\kappa^2\right) \label{ratejps}\end{aligned}$$ We see that there is no photon creation in the modes with numbers $p,2p,\ldots$. At $\tau\ll 1$ we have $\dot{\cal N}_{j+pq}^{(vac)}\sim \tau^{2q+1}$. In the long-time limit the photon generation rate tends to the constant value (if $\gamma<1$) $$\frac{d}{d\tau}{\cal N}_{j+pq}^{(vac)}= \frac{2ap^2 \sin^2(\pi j/p)}{\pi^2 (j+pq)} \left[1+{\cal O}\left(\frac{pq}{S^2}\ln S\right)\right], \quad ap\tau\gg 1 \label{asrate}$$ For $q\gg 1$ and for a fixed value of $\kappa$ one can simplify the right-hand side of (\[ratejps\]) using Stirling’s formula for the Gamma-functions and the easily verified asymptotical formula $$F(a,b;c;z)\approx (1-az/c)^{-b}, \quad a,c\gg 1\, .$$ In this case $$\frac{d}{d\tau}{\cal N}_{j+pq}^{(vac)}\approx 2 p \sqrt{1-\gamma^2\kappa^2}\, \frac{\sin(\pi j/p)\Gamma(2-j/p)\kappa^{2q+1}} {\pi\Gamma(j/p)q^{2(1-j/p)}\left(1-\kappa^2\right)^{1-2j/p}}, \quad q\gg 1. \label{asratebigq}$$ In particular, if $q\gg S^2(p\tau)\gg 1$, then $$\frac{d}{d\tau}{\cal N}_{j+pq}^{(vac)}\approx 2 pa \, \frac{\sin(\pi j/p)\Gamma(2-j/p)\left(S^2/q\right)^{2(1-j/p)} } {\pi\Gamma(j/p) S^{2}} \exp\left(-q/S^2\right).
\label{asratebigqS}$$ Comparing (\[asrate\]) and (\[asratebigqS\]) one can conclude that the number of the effectively excited modes (i.e. the modes with a time independent photon generation rate) increases in time exponentially, approximately as $S^2(\tau)/\ln S(\tau)$. Differentiating equation (\[ratetot\] ) once again over $\tau$ one can perform the summation over the upper index $n$ with the aid of equations (\[recrho\])-(\[recrho1\]) to obtain a closed expression for the [*second derivative*]{} of the total number of “vacuum” photons $$\begin{aligned} &&\frac{d^2}{d\tau^2}{\cal N}^{(vac)}= 2{\rm Re}\sum_{m=1}^{p-1} m(p-m)\left[\xi_{m}^{(m)}\xi_{p-m}^{(p-m)} +\eta_{m}^{(p-m)*}\eta_{p-m}^{(m)*}\right] \nonumber\\[2mm] &&=2 \sum_{m=1}^{p-1} m(p-m)\left\{m(p-m)\left[\frac{\kappa}{p} F\left(\frac{m}{p}\,,\,1-\frac{m}{p}\,;\,2\,;\,\kappa^2\right)\right]^2 \right. \nonumber\\[2mm] &&\left.+ \left(1-2\gamma^2\kappa^2\right) F\left(\frac{m}{p}\,,\,-\frac{m}{p}\,;\,1\,;\,\kappa^2\right) F\left(\frac{m}{p}-1\,,\,1-\frac{m}{p}\,;\,1\,;\,\kappa^2\right)\right\} \label{ratetot2}\end{aligned}$$ In the short-time limit one obtains $$\ddot{\cal N}^{(vac)}=\frac13 p(p^2-1), \quad |ap\tau|\ll 1 \label{2dersmall}$$ In the long-time limit the formulae (\[gammapm\]), (\[F1\]) and $\sum_{m=1}^{p-1} \sin^2(\pi m/p)=p/2$ lead to another simple expression (provided $p\ge 2$) $$\ddot{\cal N}^{(vac)}=2a^2p^3/\pi^2, \quad ap\tau\gg 1, \quad a>0 \label{2derbig}$$ Consequently, the total number of photons created from vacuum due to NSCE increases in time quadratically both in the short-time and in the long-time limits (although with different coefficients). It is interesting to compare formula (\[2derbig\]) with the total rate of change of the number of “cavity” photons due to nonvacuum initial conditions. 
Using equation (\[extradot\]) and replacing the coefficients $\rho_m^{(n)}$ by their asymptotical values (\[asxieta\]) one can obtain the expression $$\begin{aligned} &&\frac{d{\cal N}^{(cav)}}{d\tau}= \frac{4ap^2}{\pi^2} \sum_{m=1}^{p-1}\sin^2(\pi m/p) \sum_{n,k=0}^{\infty} \frac{\sigma^{n+k}}{\sqrt{(m+pn)(m+pk)}}\nonumber\\[2mm] &&\times \left\{ \langle\hat {b}_{m+pn}^{\dag}\hat {b}_{m+pk}\rangle ( a+i\gamma)^{k-n} -\sigma{\rm Re}\left[\langle\hat{b}_{m+pn}\hat{b}_{m+pk}\rangle ( a+i\gamma)^{k+n+1} \right]\right\} \label{extradotas}\end{aligned}$$ which holds provided $ap\tau\gg 1$ and $a>0$. For the physical initial states the sum in the right-hand side of (\[extradotas\]) is finite. This is obvious if a finite number of modes was excited initially. But even if the cavity was initially in a high-temperature thermal state, so that $\langle\hat {b}_{n}^{\dag}\hat {b}_{k}\rangle=\delta_{nk}T/n$, $\langle\hat {b}_{n}\hat {b}_{k}\rangle=0$, the sum over $n,k$ yields a finite value $ T \sum_{n=0}^{\infty}\,(m+pn)^{-2} $. Consequently, the total number of “nonvacuum” photons increases in time [*linearly*]{} at $ap\tau\gg 1$, whereas the total number of quanta generated from vacuum increases [*quadratically*]{} in the long time limit. At the same time, the total “vacuum” and “nonvacuum” energies increase exponentially if $\gamma<1$ (see section 3). The origin of the difference in the behaviours of the total energy and the total number of photons becomes clear, if one looks at the asymptotical formulae (\[asrate\])-(\[asratebigqS\]). They show that the rate of photon generation in the $m$th completely excited mode decreases approximately as $1/m$ (excepting the modes whose numbers are multiples of $p$), so the stationary rate of the [*energy*]{} generation asymptotically almost does not depend on $m$. In turn, the number of the effectively excited modes increases in time exponentially. 
These two factors lead to the exponential growth of the total energy (see also [@Klim] in the special case $\gamma=0$). The “principal resonance” ($p=2$) ================================= Some formulae obtained in the preceding section can be simplified in the special case $p=2$. In this case there are two subsets of nonzero Bogoliubov coefficients. The first one consists of the coefficients with even upper and lower indices $\xi_{2k}^{(2q)}$ which are reduced to the coefficients $\xi_{k}^{(q)}$ of the “semi-resonance” case (since $\eta_{2k}^{(2q)}\equiv 0$, this subset does not contribute to the generation of new photons). The second subset is formed by the “odd” coefficients which can be written as \[$\kappa\equiv \kappa(2\tau)$\] $$\begin{aligned} \xi_{2m+1}^{(2n+1)}&=& \frac{\Gamma\left(n+3/2\right) \kappa^{n-m}\lambda^{m+n+1}} {\Gamma\left(m+3/2\right)\Gamma\left(1+n-m\right) } \nonumber\\ &\times& F\left(n+1/2\,,\,-m -1/2\,;\, 1+n-m\,;\, \kappa^2\right), \quad n\ge m \label{xinm}\end{aligned}$$ $$\begin{aligned} \xi_{2m+1}^{(2n+1)}&=& \frac{(-1)^{m-n} \Gamma\left(m+1/2\right) \kappa^{m-n}\lambda^{m+n+1}} {\Gamma\left(n+1/2\right) \Gamma\left(1+m-n\right) } \nonumber\\ &\times& F\left(m+1/2\,,\,-n -1/2\,;\, 1+m-n\,;\, \kappa^2\right), \quad m\ge n \label{ximn}\end{aligned}$$ $$\begin{aligned} \eta_{2k+1}^{(2n+1)}&=& \frac{(-1)^{k-1}\Gamma\left(k+1/2\right)\Gamma\left(n+3/2\right) \kappa^{n+k+1}\lambda^{n-k} } {\pi\Gamma\left(2+n+k\right) } \nonumber\\ &\times& F\left(n+1/2\,,\,k +1/2\,;\, 2+n+k\,;\, \kappa^2\right). 
\label{etank}\end{aligned}$$ All the “odd” coefficients can be expressed in terms of the complete elliptic integrals [@BrMar] $${\bf K}(\kappa )=\int_0^{\pi /2}\frac {\mbox{d}\alpha}{\sqrt {1 -\kappa^2\sin^2\alpha}}\, , \quad {\bf E}(\kappa )=\int_0^{\pi /2}\mbox{d}\alpha \sqrt {1-\kappa^2\sin^2\alpha}\,.$$ In particular, $$\xi_1^{(1)}=\frac{2}{\pi}\lambda(\kappa){\bf E}(\kappa), \quad \eta_1^{(1)}=\frac{2}{\pi\kappa}\left[\tilde{\kappa}^2{\bf K}(\kappa ) -{\bf E}(\kappa )\right], \label{xietell}$$ where $$\tilde{\kappa}\equiv\sqrt{1-\kappa^2}= \left[1 +S^2(2\tau)\right]^{-1/2}. \label{deftilkappa}$$ However, the analogous expressions for the coefficients $\xi_m^{(n)}$ and $\eta_m^{(n)}$ with $m,n>1$ appear rather cumbersome (they can be written as linear combinations of the functions ${\bf E}(\kappa )$ and ${\bf K}(\kappa )$ multiplied by some rational functions of $\kappa$ and $\tilde{\kappa}$), so we do not present them here. The photon generation rate from vacuum in the principal cavity mode ($m=1$) reads $$\frac {\mbox{d}{\cal N}_1^{(vac)}}{\mbox{d}\tau}= -2{\rm Re}\left[\eta_1^{(1)} \xi_1^{(1)}\right] =\frac {8\sqrt{1-\gamma^2\kappa^2}}{\pi^2\kappa} {\bf E}(\kappa )\left[{\bf E}(\kappa )- \tilde{\kappa}^2{\bf K}(\kappa )\right]. \label{rate1}$$ The total number of photons in the first mode can be obtained by integrating equation (\[rate1\]). Taking into account the relation (which follows from the definition (\[def-kap\]) with the argument $2\tau$) $$2\sqrt{1-\gamma^2\kappa^2}\,\mbox{d}\tau =\mbox{d}\kappa/\tilde{\kappa}^2 \label{dtau}$$ and the differentiation rules for the complete elliptic integrals $$\frac {\mbox{d}{\bf K}(\kappa )}{\mbox{d}\kappa}=\frac { {\bf E}(\kappa )}{\kappa\tilde{\kappa}^2}-\frac {{\bf K}(\kappa )}{ \kappa},\quad \frac {\mbox{d}{\bf E}(\kappa )}{\mbox{d}\kappa}=\frac { {\bf E}(\kappa )-{\bf K}(\kappa )}{\kappa} \label{difrul}$$ one can verify the following result: $${\cal N}_1^{(vac)}(\kappa )= \frac 2{\pi^2}{\bf K}(\kappa ) \left[2{\bf E}(\kappa) -\tilde{\kappa}^2{\bf K}(\kappa )\right] -\frac 12.
\label{num1EK}$$ Making the transformation [@Bateman; @Abram] $${\bf K}\left(\frac{1-\tilde{\kappa}}{1+\tilde{\kappa}}\right) =\frac{1+\tilde{\kappa}}{2}{\bf K}(\kappa), \quad {\bf E}\left(\frac{1-\tilde{\kappa}}{1+\tilde{\kappa}}\right) =\frac{{\bf E}(\kappa)+\tilde{\kappa}{\bf K}(\kappa)} {1+\tilde{\kappa}}$$ one can rewrite formulae (\[xietell\]) and (\[num1EK\]) in the form given in [@DKPR] for $\gamma=0$. Using the asymptotical expansions of the elliptic integrals at $\kappa\to 1$ [@Grad] $$\begin{aligned} {\bf K}(\kappa )&\approx&\ln\frac 4{\tilde{\kappa}} +\frac 14\left(\ln\frac 4{\tilde{\kappa}}-1\right)\tilde{\kappa}^ 2+\cdots \\ {\bf E}(\kappa )&\approx& 1+\frac 12\left(\ln\frac 4{\tilde{\kappa}}-\frac 12\right)\tilde{\kappa}^2 +\cdots\end{aligned}$$ one can obtain the formula $${\cal N}_1^{(vac)}(\tau\gg 1)=\frac {8a}{\pi^2}\tau + \frac4{\pi^2}\ln\left(\frac{2}{a}\right)-\frac 12 + {\cal O}\left(\tau e^{-4a\tau}\right), \quad a>0. \label{num1as}$$ In the special case of $\gamma=1$ one can obtain the expansion $${\cal N}_1^{(vac)}(\tau\gg 1)=\frac {4}{\pi^2}\ln\tau + \frac{12}{\pi^2}\ln2-\frac 12 + {\cal O}\left(\tau^{-2}\right)$$ If $\gamma>1$, the number of photons in the principal mode oscillates with the period $\pi/(2\tilde{a})$. For $\gamma\gg 1$ one can write $\kappa\approx\sin(2\tilde{a}\tau)/\tilde{a}$, i.e. $|\kappa|\ll 1$. 
In this case $${\cal N}_1^{(vac)}\approx \frac{\kappa^2}{4}\approx \frac{\sin^2(2\tilde{a}\tau)} {4\tilde{a}^2} \ll 1.$$ The second derivative of the total number of “vacuum” photons can be written as $$\begin{aligned} &&\frac {\mbox{d}^2{\cal N}^{(vac)}}{\mbox{d}\tau^2}= 2\left[{\rm Re}\left(\left[\xi_1^{(1)}\right]^2\right)+ \left|\eta_1^{(1)}\right|^2\right] \nonumber\\ &&= \frac 8{\pi^2\kappa^2} \left[\tilde{\kappa}^4{\bf K}^2(\kappa ) -2\tilde{\kappa}^2{\bf K}(\kappa ){\bf E}(\kappa) +\left(1+\kappa^2 -2\gamma^2\kappa^4\right){\bf E}^2(\kappa)\right] \label{sectot}\end{aligned}$$ In the limiting cases this formula yields $${\cal N}^{(vac)}(\tau\ll 1)\approx\tau^2$$ $${\cal N}^{(vac)}(\tau\gg 1)= 8a^2\tau^2/{\pi^2} +{\cal O}(\tau), \quad a>0.$$ If $\gamma\gg 1$, then $|\kappa|\ll 1$, but $\gamma^2\kappa^2\approx\sin^2(2\tilde{a}\tau)\sim {\cal O}(1)$. In this case the Taylor expansion of the expression (\[sectot\]) yields $\ddot{{\cal N}}^{(vac)}=2\cos(4\tilde{a}\tau) + {\cal O}(\gamma^{-2})$. Integrating this equation with account of the initial conditions $\dot{{\cal N}}^{(vac)}(0)={\cal N}^{(vac)}(0)=0$ one obtains $ {\cal N}^{(vac)}\approx {\cal N}_1^{(vac)}\approx \sin^2(2\tilde{a}\tau)/(4\tilde{a}^2)$. Discussion ========== Let us discuss briefly the main results of the paper. We have solved the problem of the photon generation due to the nonstationary Casimir effect in an ideal Fabry-Perot cavity with an equidistant spectrum, if the cavity walls perform small (quasi)resonance oscillations at the frequency $\omega_w=p(\pi c/L_0)(1+\delta)$, for any integer value of $p=1,2,\ldots$. Namely, we have found explicit analytical expressions for the Bogoliubov coefficients, the rate of photon production in each mode and the total energy in the case of an arbitrary (although small compared with $\omega_w$) detuning. These expressions are [*exact*]{} consequences of the reduced equations (\[prhok\]) or (\[recrho\])-(\[recrho1\]). 
One should remember, however, that the reduced equations arise after averaging the exact equation (\[Qeq\]) over fast oscillations and neglecting the second-order terms with respect to small parameters $\varepsilon$ and $\delta$. Consequently, the “true” functions ${\cal N}(t)$, ${\cal E}(t)$, etc. could differ from those given above in terms proportional to $\varepsilon^2$. But such a difference seems quite insignificant under realistic conditions. As was shown in [@D95; @DKPR], it is hardly possible to obtain the value of the dimensionless amplitude of the [*resonance*]{} wall vibrations $\varepsilon$ exceeding $10^{-8}$ in a laboratory. This means that the relative difference between the “true” magnitude of the photon generation rate (for example) and that given in section 5 could be of the order of $10^{-8}$ (or less) for $t<t_c\sim (\omega_1\varepsilon^2)^{-1}$. For $\omega_1\sim 10^{10}$ s$^{-1}$ the characteristic time $t_c$ is of the order of months or years, and even for the optical frequencies it is of the order of seconds (although it is unclear how to cause the wall to vibrate at an optical frequency with a sufficiently large amplitude). Another argument in favour of the solutions obtained is that these solutions satisfy [*exactly*]{} the Bogoliubov transformation unitarity conditions (\[cond1\])-(\[cond3\]). Note that the rate of photon generation from vacuum in some mode is proportional to $p^2\varepsilon$ (if $\gamma=0$), and the total generation rate is proportional to $p^3\varepsilon^2$. Actually, the dimensionless amplitude of the wall oscillations $\varepsilon$ is inversely proportional to the frequency, since it is determined by the maximal possible stresses inside the wall [@D95; @DKPR]. Thus we see that increasing the resonance frequency one could achieve, in principle, some amplification of the number of photons proportional to $p$.
It was shown in the previous studies \[10,14-23\] that the photon production from vacuum due to the NSCE [*could*]{} be observed under the condition of strict parametric resonance. Here it is demonstrated explicitly that the photons [*cannot*]{} be produced if the detuning $\delta$ exceeds the dimensionless amplitude $\varepsilon$. This result confirms once again the statement made in [@DKPR] that the NSCE could be observed only in the resonance regime, ruling out the nonresonance laws of motion of the wall. The requirements for a possible experiment turn out to be rather demanding (for example, for a principal frequency of about $10$ GHz the detuning should not exceed $100$ Hz for a time of at least $0.01$ s), but they do not seem to be absolutely unrealizable. Another source of difficulty is connected with the nonideality of real cavities. Until now there have been only a few attempts to take into account different losses in cavities with moving boundaries [@Lamb; @D98; @Lamb98], and this problem is still a challenge for theoreticians. Total energy {#Etot} ========== Using equations (\[number\]) and (\[defrho\]) one can express the total energy in all the modes as $${\cal E}= \sum_{n=1}^{\infty} \frac1n S^{(n)} + \sum_{n,k=1}^{\infty} \frac{\langle\hat {b}_n^{\dag}\hat {b}_k\rangle } {\sqrt{nk}}U_1^{(nk)} +{\rm Re}\sum_{n,k=1}^{\infty} \frac{\langle\hat {b}_n\hat {b}_k\rangle } {\sqrt{nk}}U_2^{(nk)}$$ where $$S^{(n)}=\sum_{m=1}^{\infty}m^2\left|\rho_{-m}^{(n)}\right|^2, \label{defSn}$$ $$U_1^{(nk)}=\sum_{m=-\infty}^{\infty}m^2\rho_{m}^{(n)*} \rho_{m}^{(k)}, \quad U_2^{(nk)}=-\sum_{m=-\infty}^{\infty}m^2\rho_{m}^{(n)}\rho_{-m}^{(k)} \label{defU12}$$ (to write $U_2^{(nk)}$ as a sum from $-\infty$ to $\infty$ one should take into account that the summand in the last sum of (\[number\]) is symmetrical with respect to $n$ and $k$).
Differentiating $U_1^{(nk)}$ with respect to $\tau$ and taking into account the equations (\[prhok\]) one can obtain after simple algebra the expression $$\frac{d}{d\tau}U_1^{(nk)}=-p(-1)^p \sum_{m=-\infty}^{\infty} m(m+p)\left[ \rho_{m}^{(n)*} \rho_{m+p}^{(k)} + \rho_{m}^{(k)}\rho_{m+p}^{(n)*}\right]. \label{dotU1}$$ Differentiating the above expression once more one obtains $$\ddot{U}_1^{(nk)}=4p^2 U_1^{(nk)} + 2i\gamma p^2(-1)^p\chi_1^{(nk)},$$ where $$\chi_1^{(nk)}= \sum_{m=-\infty}^{\infty} m(m+p)\left[ \rho_{m}^{(k)}\rho_{m+p}^{(n)*} -\rho_{m}^{(n)*} \rho_{m+p}^{(k)} \right].$$ Differentiating $\chi_1^{(nk)}$ one can verify that $$\dot\chi_1^{(nk)}=2i\gamma(-1)^p\dot{U}_1^{(nk)}.$$ Consequently, $$\chi_1^{(nk)}=2i\gamma(-1)^p U_1^{(nk)} +{\rm const},$$ where the additive constant is determined by the initial conditions. Finally we arrive at the equation $$\ddot{U}_1^{(nk)}=4p^2(1-\gamma^2)U_1^{(nk)} + 4p^2\gamma^2 U_1^{(nk)}(0) + 2i\gamma p^2(-1)^p \chi_1^{(nk)}(0)$$ with $$U_1^{(nk)}(0)=n^2\delta_{nk}, \quad \chi_1^{(nk)}(0)=nk\left[\delta_{k,n-p} -\delta_{n,k-p}\right].$$ Using the same scheme one can obtain analogous relations for the coefficient $U_2^{(nk)}$: $$\frac{d}{d\tau}U_2^{(nk)}= p(-1)^p \sum_{m=-\infty}^{\infty} m\rho_{m}^{(n)} \left[ (m+p) \rho_{-m-p}^{(k)} -(p-m) \rho_{p-m}^{(k)}\right], \label{dotU2}$$ $$\ddot{U}_2^{(nk)}=4p^2 U_2^{(nk)} - 2i\gamma p^2(-1)^p\chi_2^{(nk)},$$ $$\chi_2^{(nk)}= \sum_{m=-\infty}^{\infty} m\rho_{m}^{(n)} \left[ (m+p) \rho_{-m-p}^{(k)} +(p-m) \rho_{p-m}^{(k)}\right],$$ $$\dot\chi_2^{(nk)}=-2i\gamma(-1)^p\dot{U}_2^{(nk)},$$ $$\ddot{U}_2^{(nk)}=4p^2(1-\gamma^2)U_2^{(nk)} - 2i\gamma p^2(-1)^p \chi_2^{(nk)}(0),$$ $$\chi_2^{(nk)}(0)=nk\delta_{k,p-n} .$$ The calculation of the vacuum contribution to the total energy $${\cal E}^{(vac)} =\sum_{n=1}^{\infty} \frac1n S^{(n)}$$ is more involved, since the summation in (\[defSn\]) is performed now not from $-\infty$ to $\infty$, but over the coefficients $\rho_{m}^{(n)}$ with
[*negative*]{} indices $m$ only. Differentiating the sum (\[defSn\]) with respect to $\tau$ and using equations (\[prhok\]) we obtain $$\dot {S}^{(n)}= 2(-1)^p {\rm Re} \sum_{m=1}^{\infty} m^2\rho_{-m}^{(n)} \left[ (m+p) \rho_{-m-p}^{(n)*} +(p-m) \rho_{p-m}^{(n)*} \right]. \label{dotSn}$$ Differentiating the expression (\[dotSn\]) once again one can obtain after some algebra the equation $$\begin{aligned} \ddot {{\cal E}}^{(vac)}&=& 4p^2 {\cal E}^{(vac)} +4p(-1)^p\gamma\sum_{n=1}^{\infty} \frac1n \Phi^{(n)}\nonumber\\ &+& 2{\rm Re}\sum_{m=1}^p m(p-m)^2 \left[ m F_m +(m+p)G_m\right], \label{ddotsum}\end{aligned}$$ where $$\Phi^{(n)}={\rm Im} \sum_{m=1}^{\infty} m^2\rho_{-m}^{(n)} \left[ (p-m) \rho_{p-m}^{(n)*} - (m+p)\rho_{-m-p}^{(n)*} \right],$$ $$F_m= \sum_{n=1}^{\infty}\frac{1}{n} \left[\rho_{m}^{(n)*}\rho_{m}^{(n)} - \rho_{-m}^{(n)*}\rho_{-m}^{(n)} \right],$$ $$G_m= \sum_{n=1}^{\infty}\frac{1}{n} \left[\rho_{m+p}^{(n)*}\rho_{m-p}^{(n)} - \rho_{p-m}^{(n)*}\rho_{-m-p}^{(n)} \right].$$ Differentiating the function $\Phi^{(n)}$ with respect to $\tau$ and using again equations (\[prhok\]) one can verify that the derivative $d\Psi/d\tau$ of the combination $ \Psi\equiv \sum_{n=1}^{\infty}\;\frac1n\left[\Phi^{(n)} +p(-1)^p\gamma S^{(n)}\right] $ can be written in the form analogous to the last sum (from $1$ to $p$) of equation (\[ddotsum\]), but with the symbol Re replaced by Im. Since $F_m=1/m$ due to the identity (\[rhocond2\]) and $G_m=0$ due to (\[rhocond3\]), we have $d\Psi/d\tau=0$. Taking into account the initial conditions $\Phi^{(n)}(0)=S^{(n)}(0)=0$ one obtains $\Psi(\tau)= 0$. Combining all the terms giving the second derivative of ${\cal E}$ one finally arrives at equation (\[eqEtot\]), where the term $\frac16 p^2(p^2-1)$ is the value of the sum $2\sum_{m=1}^p \,m(p-m)^2$.
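As a consistency check (not in the original text), the value quoted for the last sum follows from the standard power sums: $$\sum_{m=1}^{p} m(p-m)^2 = p^2\sum_{m=1}^{p} m - 2p\sum_{m=1}^{p} m^2 + \sum_{m=1}^{p} m^3 = \frac{p^3(p+1)}{2} - \frac{p^2(p+1)(2p+1)}{3} + \frac{p^2(p+1)^2}{4} = \frac{p^2(p^2-1)}{12},$$ so that $2\sum_{m=1}^{p} m(p-m)^2 = \frac16 p^2(p^2-1)$, in agreement with the coefficient in equation (\[eqEtot\]).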
The initial value of the first derivative $\dot{\cal E}(\tau)$ is determined by the right-hand sides of equations (\[dotU1\]), (\[dotU2\]) and (\[dotSn\]) taken at $\tau=0$, when $\rho_m^{(n)}=\delta_{mn}$: $$\dot{\cal E}(0) = -2p\sigma\sum_{n=1}^{\infty} \sqrt{n(n+p)}{\rm Re}\langle\hat {b}_n^{\dag}\hat {b}_{n+p}\rangle -p\sigma\sum_{n=1}^{p-1}\sqrt{n(p-n)}{\rm Re} \langle\hat {b}_n\hat {b}_{p-n}\rangle .$$ Comparing this formula with (\[defcalG\]) we arrive at equation (\[inconE\]).

References {#references .unnumbered} ==========

[99]{}
Casimir H B G 1948 [*Proc. Kon. Ned. Wet.*]{} [**51**]{} 793
Plunien G, Müller B and Greiner W 1986 [*Phys. Rep.*]{} [**134**]{} 87
Milonni P W 1993 [*Quantum Vacuum*]{} (Boston: Academic)
Mostepanenko V M and Trunov N N 1997 [*The Casimir Effect and its Applications*]{} (Oxford: Clarendon)
Moore G T 1970 [*J. Math. Phys.*]{} [**11**]{} 2679
Fulling S A and Davies P C W 1976 [*Proc. Roy. Soc. London*]{} A [**348**]{} 393
Sarkar S 1988 in: [*Photons and Quantum Fluctuations*]{} eds Pike E R and Walther H (Bristol: Hilger) p 151
Dodonov V V, Klimov A B and Man’ko V I 1989 [*Phys. Lett.*]{} A [**142**]{} 511
Dodonov V V, Klimov A B and Man’ko V I 1990 [*Phys. Lett.*]{} A [**149**]{} 225
Dodonov V V and Klimov A B 1992 [*Phys. Lett.*]{} A [**167**]{} 309
Jaekel M T and Reynaud S 1992 [*Journal de Physique*]{} I [**2**]{} 149
Calucci G 1992 [*J. Phys. A: Math. Gen.*]{} [**25**]{} 3873
Barton G and Eberlein C 1993 [*Ann. Phys. (NY)*]{} [**227**]{} 222
Dodonov V V, Klimov A B and Nikonov D E 1993 [*J. Math. Phys.*]{} [**34**]{} 2742
Law C K 1994 [*Phys. Rev.*]{} A [**49**]{} 433
Law C K 1994 [*Phys. Rev. Lett.*]{} [**73**]{} 1931
Law C K 1995 [*Phys. Rev.*]{} A [**51**]{} 2537
Dodonov V V 1995 [*Phys. Lett.*]{} A [**207**]{} 126
Cole C K and Schieve W C 1995 [*Phys. Rev.*]{} A [**52**]{} 4405
Méplan O and Gignoux C 1996 [*Phys. Rev. Lett.*]{} [**76**]{} 408
Dodonov V V and Klimov A B 1996 [*Phys.
Rev.*]{} A [**53**]{} 2664
Dodonov V V 1996 [*Phys. Lett.*]{} A [**213**]{} 219
Lambrecht A, Jaekel M-T and Reynaud S 1996 [*Phys. Rev. Lett.*]{} [**77**]{} 615
Johnston H and Sarkar S 1996 [*J. Phys. A: Math. Gen.*]{} [**29**]{} 1741
Barton G and North C A 1996 [*Ann. Phys. (NY)*]{} [**252**]{} 72
Jáuregui R and Villarreal C 1996 [*Phys. Rev.*]{} A [**54**]{} 3480
Klimov A B and Altuzar V 1997 [*Phys. Lett.*]{} A [**226**]{} 41
Ji J-Y, Jung H-H, Park J-W and Soh K-S 1997 [*Phys. Rev.*]{} A [**56**]{} 4440
Golestanian R and Kardar M 1997 [*Phys. Rev. Lett.*]{} [**78**]{} 3421
Chizhov A V, Schrade G and Zubairy M S 1997 [*Phys. Lett.*]{} A [**230**]{} 269
Fu L-P, Duan C K and Guo G-C 1997 [*Phys. Lett.*]{} A [**234**]{} 163
Mundarain D F and Maia Neto P A 1998 [*Phys. Rev.*]{} A [**57**]{} 1379
Dalvit D A R and Mazzitelli F D 1998 [*Phys. Rev.*]{} A [**57**]{} 2113
Schützhold R, Plunien G and Soff G 1998 [*Phys. Rev.*]{} A [**57**]{} 2311
Janowicz M 1998 [*Phys. Rev.*]{} A [**57**]{} 4784
Ji J-Y, Jung H-H and Soh K-S 1998 [*Phys. Rev.*]{} A [**57**]{} 4952
Ji J-Y, Soh K-S, Cai R-G and Kim S P 1998 [*J. Phys. A: Math. Gen.*]{} [**31**]{} L457
Dodonov V V 1998 [*Phys. Lett.*]{} A [**244**]{} 517
Lambrecht A, Jaekel M-T and Reynaud S 1998 [*Eur. Phys. J.*]{} D [**3**]{} 95
Golestanian R and Kardar M 1998 [*Phys. Rev.*]{} A [**58**]{} 1713
Louisell W H 1960 [*Coupled Mode and Parametric Electronics*]{} (New York: Wiley)
Landau L D and Lifshitz E M 1969 [*Mechanics*]{} (Oxford: Pergamon Press)
Bogoliubov N N and Mitropolsky Y A 1985 [*Asymptotic Methods in the Theory of Non-Linear Oscillations*]{} (New York: Gordon & Breach)
[*Higher Transcendental Functions*]{} 1953 ed Erdélyi A (New York: McGraw-Hill)
Dittrich J, Duclos P and Šeba P 1994 [*Phys. Rev.*]{} E [**49**]{} 3535
[*Handbook of Mathematical Functions*]{} 1972 eds Abramowitz M and Stegun I A (New York: Dover)
Prudnikov A P, Brychkov Yu A and Marichev O I 1986 [*Integrals and Series.
Additional Chapters*]{} (Moscow: Nauka)
Gradshtein I S and Ryzhik I M 1994 [*Tables of Integrals, Series and Products*]{} (New York: Academic)

[^1]: On leave from Lebedev Physical Institute and Moscow Institute of Physics and Technology, Russia

[^2]: E-mail: vdodonov@power.ufscar.br
--- abstract: 'The word-stock of a language is a complex dynamical system in which words can be created, evolve, and become extinct. Even more dynamic are the short-term fluctuations in word usage by individuals in a population. Building on the recent demonstration that [*word niche*]{} is a strong determinant of future rise or fall in word frequency, here we introduce a model that allows us to distinguish persistent from temporary increases in frequency. Our model is illustrated using a $10^8$-word database from an online discussion group and a $10^{11}$-word collection of digitized books. The model reveals a strong relation between changes in word dissemination and changes in frequency. Aside from their implications for short-term word frequency dynamics, these observations are potentially important for language evolution as new words must survive in the short term in order to survive in the long term.' author: - 'Eduardo G. Altmann' - 'Zakary L. Whichard' - 'Adilson E. Motter' date: 'Received: date / Accepted: date' title: Identifying trends in word frequency dynamics --- Introduction {#intro} ============ Quantitative studies of natural languages have led to significant advances in the understanding of word statistics [@Manning; @Baayen2002] and language evolution [@Pagel2009; @Gell-Mann2011]. A comparatively less explored (albeit extremely important) problem concerns the dynamics of word usage. Some representative examples include the study of bursts and lulls in word recurrence in online communities [@PLoS1], distributions of $n$-grams in books written over the past 200 years [@Michel2010], and analysis of word content in Twitter posts to assess temporal changes in perceived happiness [@Dodds2011].
Language evolution and word statistics are related to word dynamics, as illustrated, for example, by early findings that word frequency itself is a correlate of word success at historical time scales [@Lieberman2007; @Pagel2007]. At shorter time scales, however, this relation is more subtle and remains far less understood. For time scales of just a few years, we have recently shown that word niche is a stronger determinant of future change in word frequency usage than the initial word frequency itself [@PLoS2]. The niche of a word was defined in terms of the number of people and topics making use of the word and quantified by dissemination coefficients $D^{(\cdot)}$. These measures were applied to large records of Usenet groups spanning approximately two decades, in which people are represented by Usenet users and topics are represented by the discussion threads. In particular, the results in [@PLoS2] show: ([*i*]{}) that the dissemination across users, $D^U (t_1)$, and threads, $D^T(t_1)$, at a time $t_1$ are both strongly positively correlated with the change in $\log$-frequency $\Delta \log f_{t_2,t_1} =\log_{10}f(t_2) -\log_{10}f(t_1)$ for $t_2-t_1$ of a few years; ([*ii*]{}) the changes in dissemination $\Delta D^U_{t_2,t_1} = D^U (t_2 ) - D^U (t_1)$ and $\Delta D^T_{t_2,t_1} = D^T (t_2 ) - D^T (t_1)$ are both negatively correlated with $\Delta \log f_{t_2,t_1}$ over the same time intervals. Here, we explore the relation between dissemination and frequency change using simple models for the population of word users. We interpret our results using data from two Usenet groups [@data]: the comp.os.linux.misc group, which is focused on Linux operating systems and has 28,903 users and 140,517 threads for the period 1993-08-12 through 2008-03-31, and the rec.music.hip-hop group, which is focused on hip-hop music and has 37,779 users and 94,074 threads for the period 1995-02-08 through 2008-03-31. 
In these datasets, each post represents a unit of text and is associated with a user and thread, while each thread itself is defined by the initial post and all replies. Examples of the variation of word frequency in these datasets are shown in Fig. \[fig1\]. Using our model and analysis of these datasets, we show that increase in frequency not accompanied by concurrent increase in the number of users is reflected as a decrease in $D^U$ and subsequent frequency fall. This, along with the observations ([*i*]{}) and ([*ii*]{}), illuminates the mechanistic difference between temporary and persistent frequency changes and helps explain why most frequency rises are just transient. We focus on modeling $D^U$, with the view that analogous results hold for $D^T$. We also explore signatures of this behavior over longer time scales by considering a digitized collection of over 2.4 million books published in English between 1820 and 2000 [@google_books]. In this case, the dissemination is considered across different books, which captures characteristics of both word users and topics. This dataset allows us to demonstrate that our observations are not unique to informal, Internet-based communications, and that they do in fact concern properties inherent to language change in general. We believe these results are timely as numerous studies are being carried out on statistical physics aspects of natural languages. Such studies have considered properties on scales ranging from individual letters [@Stephens2010] to thousands [@Montemurro2010] or even millions [@PLoS1] of words, and often benefit from concepts such as phase transitions [@scaling_pt1; @scaling_pt2] and techniques such as network representation [@netw1; @netw2; @netw3; @netw4]. In this context, increasing attention has been given to the modeling of language usage and language change (see, e.g., [@PLoS1; @Serrano2009; @Corral2009; @Sole2010; @Petersen2012; @Perc2012]). 
Our study of factors distinguishing persistent from temporary word frequency change contributes to this growing body of literature. Results ======= Dissemination Coefficient ------------------------- We define the coefficient of dissemination of each word $w$ across users as $$\label{eq.dissemination} D^U_w=\frac{U_w}{\tilde{U}(N_w)},$$ where $N_w$ is the number of occurrences of the word in the dataset, $U_w$ is the number of users whose posts include word $w$ at least once, and $\tilde{U}$ is the expected number of users predicted by a baseline model in which words are randomized across users and threads. Specifically, the baseline is defined from $\tilde{U}=\sum^{N_U}_{i=1}\tilde{U}_i$, where $N_U$ is the total number of users and $\tilde{U}_i$ is the probability that user $i$ would use word $w$ at least once if all words in the dataset are shuffled randomly while keeping fixed the sizes of the posts. The probability $\tilde{U}_i$ can be calculated as the complement of the probability that the user never uses the word: $ \tilde{U}_i=1-\prod_{j=0}^{N_w-1}\left[1-\frac{m_i}{N_A-j}\right], $ where $N_w$ is, as before, the number of occurrences of the word $w$, $m_i$ is the total number of words contributed by user $i$, and $N_A\equiv\sum_w N_w=\sum_i m_i$ is the total number of words in the dataset. In our datasets, $m_i/N_A\ll1$ and $f_w\equiv N_w/N_A\ll1$, which allows us to further simplify this expression to $ \tilde{U}_i \approx 1-e^{-f_w m_i}. $ This represents a Poissonian baseline model in which the probability of using word $w$ is given by the observed word frequency $f_w$. For the rest of the paper, we drop the index $w$ for simplicity. Therefore, the expected value of $D^U$ is $1$ for a word that is distributed randomly across all users. The main purpose of introducing this measure is to detect deviations from this random baseline. In particular, $D^U<1$ represents words that are clumped and hence used above average by a subset of all users.
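For concreteness, Eq. (\[eq.dissemination\]) and its Poissonian baseline can be sketched in a few lines of Python; the toy numbers below (ten users of $100$ words each, a word occurring $10$ times) are invented for illustration:

```python
import math

def dissemination_U(word_counts, user_sizes, N_A):
    """D^U = U / U~ for one word.
    word_counts: occurrences of the word per user (zeros allowed);
    user_sizes:  total number of words m_i contributed by each user;
    N_A:         total number of words in the dataset."""
    N_w = sum(word_counts)
    f = N_w / N_A                                   # observed frequency f_w
    U = sum(1 for c in word_counts if c > 0)        # actual number of users
    # Poissonian baseline: expected number of users after random shuffling
    U_tilde = sum(1 - math.exp(-f * m) for m in user_sizes)
    return U / U_tilde

# ten users, 100 words each; the same 10 occurrences either clumped or spread
sizes = [100] * 10
D_clumped = dissemination_U([10] + [0] * 9, sizes, N_A=1000)
D_spread  = dissemination_U([1] * 10,       sizes, N_A=1000)
# clumped usage gives D^U < 1; evenly spread usage gives D^U > 1
```

The two calls illustrate the interpretation given in the text: the clumped word yields $D^U\approx 0.16$, the evenly spread word $D^U\approx 1.58$.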
For example, the word “yep” shown in Fig. \[fig1\](b) has $D^U$ varying between $0.36$ and $0.90$ over different half-year windows. Clumping is in fact observed for most words in our datasets (89% of the words in the Linux group and 90% of the words in the hip-hop group). On the other hand, $D^U>1$ represents words that are over-disseminated and hence more evenly distributed across users than expected by chance. Greetings and expressions of gratitude, such as [*thanks*]{}, tend to be in this class. We refer to [@PLoS2] for more information about the distribution of $D^U$ for the Usenet datasets we consider. Statistical Model ----------------- We discuss a class of models that offer insights into how changes in $f$ are related to changes in $D^U$. This relation is key in discriminating between persistent and temporary word frequency growth. Assume that each user $i$ and word $w$ are characterized by two quantities: $m_i\in[0,\infty)$, which is the size of the user’s total contribution to the text in number of words, and $\nu_i\in[0,1]$, which is a fixed probability of using the word $w$ as opposed to any other word. To simplify the calculations, we assume $m$ to be a continuous variable. For each given word, a population of large size $N_U$ is then described by the joint probability density function $\rho(m,\nu)$ from which the relevant observable quantities can be calculated. In particular, within this model, the frequency of $w$ is given by $$\label{eq.f} f=\frac{N_w}{N_A}=\frac{\int_0^\infty dm\int_0^1 d\nu\,\, m \nu \rho(m,\nu)}{\int_0^\infty dm\,\, m\rho_m(m) },$$ where $\rho_m(m)\equiv \int_0^1 d\nu\,\, \rho(m,\nu)$.
Moreover, the expected fraction of users of word $w$ is $$\frac{U}{N_U}=1-\int_0^\infty dm \int_0^1d\nu \,\, \rho(m,\nu)e^{-m\nu},$$ and the baseline is $$\frac{\tilde{U}}{N_U}=\int_0^\infty dm \int_0^1 d\nu \,\, (1-e^{-f m})\rho(m,\nu)=1-(\mathcal{L}\rho_m)(f),$$ where the last term indicates the Laplace transform $(\mathcal{L}g)(y)\equiv\int_0^\infty dx\,\, g(x) e^{-xy}$. It follows from the ratio between the previous two equations that the dissemination $D^U$ is given by $$\label{eq.Dm} D^U=\frac{1-\int_0^\infty dm \int_0^1 d\nu \,\, \rho(m,\nu)e^{-m\nu}}{1-(\mathcal{L}\rho_m)(f)},$$ where $f$ is given by Eq. (\[eq.f\]). Therefore, given a probability distribution $\rho(m,\nu)$, Eq. (\[eq.Dm\]) provides a quantitative relation between frequency and dissemination. As we proceed to our analysis of pertinent implications, we note that the main assumption involved in this derivation is that users behave independently. That is, the size of their contributions as well as their individual word frequencies are independent of those of the other users. Nevertheless, this description is still quite general as it allows for an arbitrary relation between $m$ and $\nu$. Examples -------- [*Example 1:*]{} Assume that with respect to a word $w$ each user belongs to one of two distinct groups. In the first group, formed by a fraction $0\le q\le 1$ of the population, the users use the word with fixed frequency $\nu=\nu^*$. In the second group, formed by the complementary fraction $1-q$ of individuals, the users use the word with a negligible frequency ($\nu=0^+$). For simplicity we consider that all users contribute the same amount to the text, say $m^*$ words. Under these conditions, we have $$\label{eq.rho-ex1} \rho(m,\nu)=\delta(m-m^*)[q\delta(\nu-\nu^*)+(1-q)\delta(\nu - 0^+)].$$ In this case, Eq. (\[eq.f\]) results in the simple relation $$f=\nu^* q \label{eq.fnip}$$ and Eq. 
(\[eq.Dm\]) leads to $$\label{eq.Dnip} D^U = q \frac{1-e^{-m^*\nu^*}}{1-e^{-m^*\nu^* q}},$$ where the term $m^* \nu^*$ corresponds to the average number of times each user uses the word $w$. Word usage changes over time not only in frequency but also in dissemination. While the frequency in Eq. (\[eq.fnip\]) grows linearly with both $\nu^*$ and $q$, the dissemination coefficient in Eq. (\[eq.Dnip\]) increases with $q$ but decreases with $m^* \nu^*$. To understand the significance of this, we examine the two different scenarios shown in Fig. \[fig2\](a). In the first scenario, the frequency $\nu^*$ remains fixed but the fraction $q$ of the population using the word changes over time; for increasing $q$, this represents a situation in which the overall frequency $f$ increases because the word is used by more individuals. In the second scenario, the frequency $\nu^*$ changes, while the fraction $q$ of users of the word remains fixed; for increasing $\nu^*$, this corresponds to a case in which the frequency $f$ rises simply because the word is used more repetitively by the same individuals. It is then clear that an increase in either $\nu^*$ or $q$ leads to an increase in the overall frequency ($\Delta \log f>0$), but increase in $\nu^*$ without a concurrent increase in $q$ leads to a decrease in dissemination ($\Delta D^U <0$) even though the number of adopters of the word does not decrease. On the other hand, an increase in $q$, and hence in the number of actual users of the word, causes both frequency and dissemination to increase. Given that $D^U(t_1)$ is strongly positively correlated with $\Delta \log f_{t_2,t_1}$ [@PLoS2], it is clear that the first scenario may lead to sustainable growth in frequency while the second may not. These conclusions do not depend sensitively on the assumption that the users contribute the same amount to the text. For example, replacing Eq. 
(\[eq.rho-ex1\]) with $ \rho(m,\nu)=\rho_m(m)[q\delta(\nu-\nu^*)+(1-q)\delta(\nu-0^+)] $ leads to the same relation for the frequency and to a slightly less explicit expression for the dissemination, $$D^U = q\frac{1-\int_0^\infty dm\,\, \rho_m (m) e^{-m\nu^*}}{1-\int_0^\infty dm\,\, \rho_m(m) e^{-m\nu^*q }},$$ which is qualitatively similar to Eq. (\[eq.Dnip\]) if the distribution $\rho_m(m)$ is peaked around a certain average $m^*$. [*Example 2:*]{} In the example above the variables $m$ and $\nu$ are assumed to be independent, i.e., $\rho(m,\nu)=\rho_m(m)\rho_\nu(\nu)$, meaning that the probability of using the word $w$ is independent of the size of the contribution of the user. More generally, this case leads to $$\label{eq.find} f=\int_0^1 d\nu \,\, \nu \rho_\nu(\nu),$$ and $$\label{eq.Dmi} D^U=\frac{1-\int_0^1 d\nu \,\, \rho_\nu(\nu)(\mathcal{L}\rho_m)(\nu)}{1-(\mathcal{L}\rho_m)(f)}.$$ We have previously observed that $\rho_m(m)$ follows a log-normal distribution for the datasets considered here [@PLoS2]. In addition, by considering words of sufficiently high frequency to generate reliable statistics, we suggest that $\rho_{\nu}(\nu)\arrowvert_{\nu>0}$ too can be approximated by a log-normal distribution. Therefore, we consider the case in which $\ln m$ is a normal distribution with average $\langle \ln m \rangle$ and standard deviation $\sigma_{\ln m}$ for the whole population, and $\ln \nu$ is a normal distribution with average $\langle \ln \nu \rangle$ and standard deviation $\sigma_{\ln \nu}$ for a fraction $q$ of the population. Here, $q$ represents the fraction of users with $\nu>0$ and hence a non-negligible probability of using the word under consideration; the remaining fraction $1-q$ of users do not use the word and are assigned $\nu=0^+$. Figure \[fig2\](b) shows a realization of this model for a choice of parameters representative of those in our datasets. 
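The two scenarios of Example 1 (Eqs. (\[eq.fnip\]) and (\[eq.Dnip\])) can be reproduced numerically; the parameter values below are illustrative only:

```python
import math

def f_model(q, nu):
    """Overall frequency in Example 1, Eq. (eq.fnip): f = nu* q."""
    return nu * q

def D_model(q, nu, m=100.0):
    """Dissemination in Example 1, Eq. (eq.Dnip):
    D^U = q (1 - e^{-m* nu*}) / (1 - e^{-m* nu* q})."""
    a = m * nu                   # average number of uses per adopter, m* nu*
    return q * (1 - math.exp(-a)) / (1 - math.exp(-a * q))

# Scenario 1: more adopters (q grows, nu* fixed) -> f and D^U both increase.
# Scenario 2: the same adopters use the word more (nu* grows, q fixed)
#             -> f increases while D^U decreases.
```

With $m^*=100$, for instance, $D^U(q=0.5,\nu^*=0.01)\approx 0.80$ exceeds both $D^U(q=0.25,\nu^*=0.01)\approx 0.71$ and $D^U(q=0.5,\nu^*=0.03)\approx 0.61$, matching the behaviour shown in Fig. \[fig2\](a).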
As in the previous example, in the scenario in which the number of users is varied by controlling $q$, the resulting changes in the overall frequency are accompanied by concordant changes in dissemination ($\Delta \log f \times \Delta D^U >0$); conversely, in the scenario in which the frequency is varied for a fixed number of users (now by controlling $\langle \ln \nu \rangle$), the changes in the overall frequency are accompanied by opposing changes in dissemination ($\Delta \log f \times \Delta D^U <0$). As already mentioned, owing to the positive correlations between dissemination and subsequent frequency changes [@PLoS2], the first of these two scenarios will generally lead to more sustainable changes in frequency. This implication is demonstrated explicitly in the next section. Empirical Observations ---------------------- To test the behavior of words in real datasets, we performed additional analysis in the Linux and hip-hop Usenet groups [@data]. Motivated by Fig. \[fig2\], we focus on concurrent changes of both $\log f$ and $D^U$. Specifically, we measured $\Delta \log f_{t_2, t_1}$ and $\Delta D^U_{t_2, t_1}$ for non-overlapping half-year windows centered at times $t_1$ and $t_2 = t_1+\Delta t$ years. We consider all words in the intermediate frequency range $10^{-7} \lessapprox f < 3\times 10^{-4}$, for which $D^U$ has been observed not to depend strongly on $f$. This independence facilitates analysis of the separate influence of frequency and dissemination on frequency change. In order to avoid floor effects on extremely low-frequency words and ceiling effects on extremely high-frequency ones, this was implemented by only selecting words that appear more than $5$ times in both windows and with a frequency no larger than $3\times 10^{-4}$ in any window. In our analysis, words are strings composed only of the symbols $``a-z,',-''$ and are subjected to no additional lemmatization (we refer to [@PLoS2] for the filtering of spam in our datasets).
Taking all windows into account, $32,795$ unique words passed these criteria for the Linux group and $27,869$ for the hip-hop group, corresponding to more than $40\%$ of the whole text in each case; the whole text consists of $7.2\times10^7$ and $5.3\times10^7$ word occurrences, respectively. Figure \[fig3\] shows $\Delta \log f$ and $\Delta D^U$ for $t_1=$ 1998-01-01 and $\Delta t=2$ years. The distribution of words in each scatter plot is centered around the origin and spread over all quadrants. However, the distribution is clearly biased towards the second and fourth quadrants. This is a manifestation of the negative correlations that dominate the relation between frequency change and dissemination change. The tendency of $\Delta \log f$ and $\Delta D^U$ to vary in opposite directions is also evident from the running median in $\Delta \log f$ as a function of $\Delta D^U$ (Fig. \[fig3\], continuous lines). In view of the properties of the statistical model in Fig. \[fig2\], this indicates that, for most words exhibiting a significant variation in overall frequency, the observed variation occurs due to a change in the usage rate among existing users of the word rather than a change in the number of individuals adopting the word. To verify the generality of these observations, we consider the values of the running median at $\Delta D^U=\pm 0.5$ as a quantitative indicator of the general relation between $\Delta \log f$ and $\Delta D^U$. As shown in Fig. \[fig4\](a, b), this indicator does not change substantially when we vary the position $t_1$ of the initial window. This implies that the conclusions drawn from Fig. \[fig3\] are in fact typical in our datasets for changes in frequency and dissemination over the time scale of a few years. Moreover, Fig. \[fig4\](c, d) shows that similar robustness is also observed when we vary this time scale, represented by the time $\Delta t$ between the initial and final window.
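The running-median indicator can be sketched as follows; the window half-width and grid size are arbitrary choices for illustration, not those used in the figures:

```python
from statistics import median

def running_median(xs, ys, half_width=0.1, n_centers=21):
    """Median of ys inside a sliding window over xs.
    Returns a list of (center, median) pairs; median is None where
    the window contains no points."""
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * i / (n_centers - 1) for i in range(n_centers)]
    meds = []
    for c in centers:
        window = [y for x, y in zip(xs, ys) if abs(x - c) <= half_width]
        meds.append((c, median(window) if window else None))
    return meds
```

Applied to anti-correlated data (e.g. $y=-x$), the running median decreases with $x$, which is the signature of the negative $\Delta \log f$ versus $\Delta D^U$ relation described above.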
The values of the median of $\Delta \log f$ at $\Delta D^U=\pm 0.5$ increase slightly for large $\Delta t$, but this can be attributed in part to the criterion $N_w> 5$, which has the effect of selecting against negative frequency changes and does so more strongly as the time between the windows is increased. The distance between these values is therefore a more informative measure than the values themselves, and this measure does not change substantially with $\Delta t$. In all cases, the median of $\Delta \log f$ at $\Delta D^U= - 0.5$ is significantly larger than at $\Delta D^U= + 0.5$, confirming that large short-term variations in frequency and dissemination tend to oppose each other. Nevertheless, for given $t_1$ and $\Delta t$, a significant number of individual words do exhibit variations in frequency and dissemination that are concurrently increasing or decreasing, as illustrated in Fig. \[fig3\]. Finally, we demonstrate that frequency changes for which $\Delta \log f\times\Delta D^U>0$ are indeed more persistent than those for which $\Delta \log f\times\Delta D^U<0$. Figure \[fig5\](a, b) illustrates this point by showing for $t_1=$ 1998-01-01 how a change in $\log f$ acquired over $\Delta t=2$ years sustains itself after $2$ more years according to the quadrant the word belongs to in the representation of Fig. \[fig3\]. The running medians (Fig. \[fig5\](a, b), dotted and continuous lines) indicate that the words belonging to the first quadrant ($\Delta \log f>0$, $\Delta D^U>0$) exhibit a larger increase in frequency after $\Delta t +2$ years than the words in the second quadrant ($\Delta \log f>0$, $\Delta D^U<0$). Likewise, although to a smaller extent, the words belonging to the third quadrant ($\Delta \log f<0$, $\Delta D^U<0$) tend to exhibit a larger final decrease in frequency than the words in the fourth quadrant ($\Delta \log f<0$, $\Delta D^U>0$). As shown in Fig. 
\[fig5\](c, d), for both the Linux and the hip-hop datasets, these systematic differences are statistically significant and continue to exist when $t_1$ is varied. Confirmation over Longer and Larger Scales ------------------------------------------ We consider the Google Books Ngram corpora of English-language publications over the period 1820-2000, which include a total of 2,424,241 books [@google_books]. Starting with the raw data, we performed an initial cleaning to remove non-words. We focused on words formed by any combination of letters, apostrophes, and internal hyphens, containing at least $3$ letters and fewer than $50$ characters. Within this dataset, upper- and lower-case letters are treated as different words, but it can be argued that distinguishing case has little impact on our results. This leads to a dataset of $1.7 \times10^{11}$ words. Within this set, we study the dissemination properties of words with average frequency in the interval $10^{-8} < f < 10^{-4}$, which results in $6.8\times10^{10}$ words and 632,912 unique words. In calculations of the dissemination coefficient, we further limit ourselves to words with a frequency of at least $10^{-7}$ within the corresponding year, which implies at least 10 occurrences of each selected word even for the years with the smallest number of books. We consider the dissemination across books, with the associated dissemination coefficient $D_{w}^B$ given by $$D_{w}^B= \frac{B_{w}}{\tilde{B}(N_{w})},$$ where the actual number of books using the word, $B_{w}$, and the expected number predicted by the baseline model, $\tilde{B}(N_{w})$, are defined and calculated analogously to $U_{w}$ and $\tilde{U}(N_{w})$ in the user dissemination coefficient in Eq. (\[eq.dissemination\]). All calculations of the dissemination coefficient $D_{w}^B$ are performed over time windows of one year.
Because no information is available in the database about the length of individual books, in estimating $\tilde{B}(N_{w})$ we have approximated the length of the books by their average length. We focus on books published no earlier than 1820 to avoid conflation of the now obsolete long “s” with “f”, which were not distinguished in the digitization process. Our choice of the period 1820-2000 is further motivated by the need to avoid years with extremely small and extremely large number of digitized books. Figure \[fig6\] shows a summary of the empirical observations in this dataset. As in the case of the Usenet groups, the frequency change is negatively correlated with the dissemination change. This is illustrated both by considering a fixed $\Delta t=10$ years for $t_1$ varying from 1820 to 1990 (Fig. \[fig6\](a)) and by considering a fixed $t_1=1820$ for $\Delta t$ varying from $10$ to $180$ years (Fig. \[fig6\](b)). Over these long time scales, there are some systematic changes in $\Delta \log f$ both as a function of $t_1$ and as a function of $\Delta t$. But these changes may be partially due to the heterogeneity of the dataset. For example, because recent years have a larger number of books (and hence of words), the smaller $\Delta \log f$ for $\Delta D^B=0.5$ for more recent $t_2$ may be in part due to the fact that statistical fluctuations are less likely to push infrequent words below the frequency threshold $10^{-7}$ in recent years than in early years. More important, even when we consider frequency change over relatively long time intervals, the sign of the accompanying change in dissemination is a determinant factor for subsequent changes in frequency. This is illustrated in Fig. \[fig6\](c) for frequency changes over 20 years as determined by the frequency and dissemination changes over the first 10 years. 
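The quadrant-based persistence analysis described above amounts to grouping words by the signs of their early frequency and dissemination changes and comparing the median of the later frequency change across groups. A minimal sketch, with illustrative function and variable names of our own:

```python
from statistics import median

def quadrant_medians(dlogf_early, dD, dlogf_late):
    """Group words by the quadrant of their early (dlog f, dD) change and
    return the median of the later frequency change for each non-empty
    quadrant.  All three arguments are sequences with one entry per word."""
    quads = {
        "1Q (+f, +D)": lambda f, d: f > 0 and d > 0,
        "2Q (+f, -D)": lambda f, d: f > 0 and d < 0,
        "3Q (-f, -D)": lambda f, d: f < 0 and d < 0,
        "4Q (-f, +D)": lambda f, d: f < 0 and d > 0,
    }
    out = {}
    for name, in_quad in quads.items():
        later = [l for f, d, l in zip(dlogf_early, dD, dlogf_late)
                 if in_quad(f, d)]
        if later:  # skip empty quadrants
            out[name] = median(later)
    return out
```

Comparing, e.g., the 1Q and 2Q medians then tests whether rises accompanied by increasing dissemination persist better than rises accompanied by decreasing dissemination.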
These empirical observations corroborate the conclusion that word dissemination plays a central role in the future rise and fall of word frequency, even over long times and large social scales. Outlook ======= Our demonstration that [*word frequency dynamics*]{} can be statistically related to simple aspects of the [*users’ dynamics*]{} opens new opportunities for the study of language dynamics in online communities. Several aspects of language dynamics have been traditionally addressed by tacitly assuming a homogeneous and essentially passive population of users. This includes, for example, the long-term lexical evolution and its dependency on word frequency [@Pagel2009]. This study, on the other hand, points to the importance of the medium in which the word is used, including its dynamics and heterogeneities, which determine the niche of the word [@PLoS2]. Our results clearly show that short-term frequency changes in which the increase (decrease) in frequency is accompanied by a concurrent increase (decrease) in dissemination are less prevalent but far more persistent over the longer term. While we have focused mainly on the dissemination of a word across users, quantitatively described by the coefficient $D^U$, similar results hold for dissemination across topics ($D^T$), which is another important aspect of the word niche. These different dimensions manifest themselves in the dissemination across documents in formal writing, which is both topic and author dependent, as observed in our analysis of the dissemination coefficient $D^B$ for digitized books. In online discussion groups and other informal settings, because word usage reflects one’s social identity, it is likely that the words actually used by people depend more strongly on their social network than on the words they know.
Future research may thus provide further insight into word usage dynamics by accounting for the possible influence of the underlying social network dynamics, and point to new directions within the growing body of literature on cognitively and socially informed models of language [@Hruschka2009]. Finally, we suggest that dissemination coefficients and the notion of niche itself can be extended to address factors contributing to success and failure in the spread of norms, propagation of information cascades, diffusion of innovation, and other processes that compete for adopters [@RMP_2009]. There are processes, such as the dynamics of fashions and fads, in which an eventual widespread dissemination inhibits further adoption—a representative example being the selection of baby names [@Kessler2012]. But because the initial adoption grows by imitation, even the rise of a fashion seems to depend critically on the positive feedback of dissemination [@Zanette2012]. In these contexts, and in the dynamics of word usage too, another topic for future research concerns the impact of spatial patterns of dissemination (which is a major determinant in the survival of species and groups of species in ecological systems [@Foote2008; @Wilson2004; @Meyer1996]) and their interactions with other dissemination measures. We thank Janet Pierrehumbert for discussions during preliminary stages of the project. This work was supported by the Northwestern University Institute on Complex Systems (E.G.A.), the Max Planck Institute for the Physics of Complex Systems (E.G.A.), and a Sloan Research Fellowship (A.E.M.). [10]{} Manning, C.D., Schuetze, H.: Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge MA (1999) Baayen, R.H.: Word Frequency Distributions. Springer, Berlin (2002) Pagel, M.: Human language as a culturally transmitted replicator. Nat. Rev. Genet. [**10**]{}, 405-415 (2009) Gell-Mann, M., Ruhlen, M.: The origin and evolution of word order. Proc. 
Natl Acad. Sci. [**108**]{}, 17290-17295 (2011) Altmann, E.G., Pierrehumbert, J.B., Motter, A.E.: Beyond word frequency: Bursts, lulls, and scaling in the temporal distributions of words. PLoS ONE [**4**]{}(11), e7678 (2009) Michel, J.-B. [*et al.*]{}: Quantitative analysis of culture using millions of digitized books. Science [**331**]{}, 176-182 (2010) Dodds, P.S., Harris, K.D., Kloumann, I.M., Bliss, C.A., Danforth, C.M.: Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PLoS ONE [**6**]{}(12), e26752 (2011) Lieberman, E., Michel, J.-B., Jackson, J., Tang, T., Nowak, M.A.: Quantifying the evolutionary dynamics of language. Nature [**449**]{}, 713-716 (2007) Pagel, M., Atkinson, A., Meade, A.: Frequency of word-use predicts rates of lexical evolution throughout Indo-European history. Nature [**449**]{}, 717-720 (2007) Altmann, E.G., Pierrehumbert, J.B., Motter, A.E.: Niche as a determinant of word fate in online groups. PLoS ONE [**6**]{}(5), e19009 (2011) The Usenet Archives, available at http://groups.google.com The Google Books Ngram Corpora, available at http://books.google.com/ngrams/datasets Stephens, G.J., Bialek, W.: Statistical mechanics of letters in words. Phys. Rev. E [**81**]{}, 066119 (2010) Montemurro, M., Zanette, D.H.: Towards the quantification of the semantic information encoded in written language. Adv. Compl. Sys. [**13**]{}, 135-153 (2010) Ferrer i Cancho, R., Solé, R.V.: Least effort and the origins of scaling in human language. Proc. Natl Acad. Sci. USA [**100**]{}, 788-791 (2003) Prokopenko, M., Ay, N., Obst, O., Polani, D.: Phase transitions in least-effort communications. J. Stat. Mech. [**2010**]{}(11), P11025 (2010) Ferrer i Cancho, R., Solé, R.V.: The small world of human language. Proc. R. Soc. Lond. B [**268**]{}, 2261-2265 (2001) Dorogovtsev, S.N., Mendes, J.F.F.: Language as an evolving word web. Proc. R. Soc. Lond.
B [**268**]{}, 2603-2606 (2001) Motter, A.E., de Moura, A.P.S., Lai, Y.-C., Dasgupta, P.: Topology of the conceptual network of language. Phys. Rev. E [**65**]{}, 065102(R) (2002) Sigman, M., Cecchi, G.A.: Global organization of the Wordnet lexicon. Proc. Natl Acad. Sci. USA [**99**]{}, 1742-1747 (2002) Serrano, M.A., Flammini, A., Menczer, F.: Modeling statistical properties of written text. PLoS ONE [**4**]{}(4), e537 (2009) Corral, R., Ferrer-i-Cancho, R., Boleda, G., Diaz-Guilera, A.: Universal complex structures in written language. preprint arXiv:0901.2924v1 \[physics.soc-ph\] (2009) Solé, R.V., Corominas-Murtra, B., Fortuny, J.: Diversity, competition, extinction: the ecophysics of language change. J. R. Soc. Interface [**7**]{}, 1647-1664 (2010) Petersen, A.M., Tenenbaum, J., Havlin, S., Stanley, H.E.: Statistical laws governing fluctuations in word use from word birth to word death. Sci. Rep. [**2**]{}, 313 (2012) Perc, M.: Evolution of the most common English words and phrases over the centuries. J. R. Soc. Interface [**9**]{}, 3323-3328 (2012) Hruschka, D.J., Christiansen, M.H., Blythe, R.A., Croft, W., Heggarty, P., Mufwene, S.S., Pierrehumbert, J.B., Poplack, S.: Building social cognitive models of language change. Trends Cogn. Sci. [**13**]{}, 464-469 (2009) Castellano, C., Fortunato, S., Loreto, V.: Statistical physics of social dynamics. Rev. Mod. Phys. [**81**]{}, 591-646 (2009) Kessler, D.A., Maruvka, Y.E., Ouren, J., Shnerb, N.M.: You name it—How memory and delay govern first name dynamics. PLoS ONE [**7**]{}(6), e38790 (2012) Zanette, D.H.: Dynamics of fashion: The case of given names. arXiv:1208.0576 \[physics.soc-ph\] (2012) Foote, M., Crampton, J.S., Beu, A.G., Cooper, R.A.: On the bidirectional relationship between geographic range and taxonomic duration. Paleobiology [**34**]{}, 421-433 (2008) Wilson, R.J., Thomas, C.D., Fox, R., Roy, D.B., Kunin, W.E.: Spatial patterns in species distributions reveal biodiversity change.
Nature [**432**]{}, 393-396 (2004) Meyer, M., Havlin, S., Bunde, A.: Clustering of independently diffusing individuals by birth and death processes. Phys. Rev. E [**54**]{}, 5567-5570 (1996) ![Frequency dynamics for example words in the (a) Linux and (b) hip-hop groups. The frequency of a word is computed as the number of occurrences of the word relative to the total number of words in a running window of half a year. []{data-label="fig1"}](Fig1.eps){width="0.99\columnwidth"}   ![Frequency change and dissemination change for the statistical model. (a) [*Example 1*]{}: The changes $\Delta \log f_{t_2,t_1}$ and $\Delta D^U_{t_2,t_1}$ are determined using Eqs. (\[eq.fnip\]) and (\[eq.Dnip\]) for $m^*=100$ words. Starting with $q=0.5$ and $\nu^*= \nu^*_{1}\equiv 0.015$ at time $t_1$ (corresponding to the origin in the diagram), two scenarios are considered at time $t_2$: 1) $\nu^*=\nu^*_{1}$ and $0<q<1$ (curve in top right and bottom left quadrants); 2) $q=0.5$ and $0<\nu^*<1$ (curve in top left and bottom right quadrants). (b) [*Example 2*]{}: Same as in panel (a) but now using Eqs. (\[eq.find\]) and (\[eq.Dmi\]), for $\delta(\nu-\nu^*)$ replaced by a log-normal distribution with $\sigma_{\ln \nu} = 0.8$ and tunable $\langle \ln \nu \rangle$ and for $\delta(m-m^*)$ replaced by a log-normal distribution with $\sigma_{\ln m}= 1.36$ and $\langle \ln m \rangle=4.9$. The first scenario is implemented using $\langle \ln \nu \rangle=-4.9$ and $0<q<1$, while the second is implemented using $q=0.5$ and $-10<\langle \ln \nu \rangle<0$. Note that these scenarios represent respectively positive and negative correlations between frequency and dissemination changes. []{data-label="fig2"}](Fig2.eps){width="0.99\columnwidth"} ![Frequency change versus dissemination change for the (a) Linux and (b) hip-hop groups. 
Both $\Delta D^U_{t_2, t_1}$ and $\Delta \log f_{t_2, t_1}$ are calculated over half-year windows separated by two years, and centered on $t_1=$ 1998-01-01 and $t_2=$ 2000-01-01. The scatter plots include all words with $N_w>5$ in both windows, whereas the continuous lines indicate the running medians and the dashed lines indicate the 5th and 95th running percentiles. Words with rising frequency appear above and words with falling frequency appear below $\Delta \log f_{t_2, t_1}=0$. The higher concentration of points in the second and fourth quadrants indicates that frequency increase (decrease) is for most words accompanied by dissemination decrease (increase), which corresponds to scenario 2 in Fig. \[fig2\]. []{data-label="fig3"}](Fig3.eps){width="0.99\columnwidth"} ![Pattern of frequency change as a function of time for the (a, c) Linux and (b, d) hip-hop groups. (a, b) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time $t_1$ for given $\Delta D^U_{t_2, t_1}$ between $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are half-year wide and centered at $t_1$, and $t_2=t_1+ 2$ years. (c, d) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time interval $\Delta t=t_2- t_1$ for given $\Delta D^U_{t_2, t_1}$ between $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are half-year wide and centered on $t_1=$ 1998-01-01, and $t_2=t_1+ \Delta t$ years. In all panels, we consider all non-overlapping windows and the emphasized symbols correspond to the window pair in Fig. \[fig3\]. The word selection is the same used in Fig. \[fig3\]. []{data-label="fig4"}](Fig4ab.eps){width="0.99\columnwidth"} ![Pattern of frequency change as a function of time for the (a, c) Linux and (b, d) hip-hop groups.
(a, b) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time $t_1$ for given $\Delta D^U_{t_2, t_1}$ between $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are half-year wide and centered at $t_1$, and $t_2=t_1+ 2$ years. (c, d) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time interval $\Delta t=t_2- t_1$ for given $\Delta D^U_{t_2, t_1}$ between $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are half-year wide and centered on $t_1=$ 1998-01-01, and $t_2=t_1+ \Delta t$ years. In all panels, we consider all non-overlapping windows and the emphasized symbols correspond to the window pair in Fig. \[fig3\]. The word selection is the same used in Fig. \[fig3\]. []{data-label="fig4"}](Fig4cd.eps){width="0.97\columnwidth"} ![Persistency of frequency change for the (a, c) Linux and (b, d) hip-hop groups. (a, b) Frequency change $\Delta \log f_{t_1+2\Delta t, t_1}$ (after $2\Delta t$ years) versus frequency change $\Delta \log f_{t_1+\Delta t, t_1}$ (after $\Delta t$ years) for $t_1=$ 1998-01-01 and $\Delta t= 2$ years; all three windows are half-year wide. The dashed and continuous lines correspond to the running medians for points (shown in the background) with $\Delta \log f_{t_1+\Delta t, t_1}$ in the quadrants 1Q, 3Q and 2Q, 4Q of Fig. \[fig3\], respectively. (c, d) Running medians as in (a, b) but now calculated using all points from all non-overlapping half-year windows for $t_1$ ranging from 1994-01-01 to 2004-01-01 for the Linux group and from 1995-07-01 to 2004-01-01 for the hip-hop group. The closed curves indicate the fraction of points along the corresponding directions from the origin. The word selection is the same used in Fig. \[fig3\] except that, in order to keep all eligible words of the first two windows, the condition $N_w>5$ is not imposed in the third window. 
[]{data-label="fig5"}](Fig5ab.eps "fig:"){width="0.99\columnwidth"} ![Persistency of frequency change for the (a, c) Linux and (b, d) hip-hop groups. (a, b) Frequency change $\Delta \log f_{t_1+2\Delta t, t_1}$ (after $2\Delta t$ years) versus frequency change $\Delta \log f_{t_1+\Delta t, t_1}$ (after $\Delta t$ years) for $t_1=$ 1998-01-01 and $\Delta t= 2$ years; all three windows are half-year wide. The dashed and continuous lines correspond to the running medians for points (shown in the background) with $\Delta \log f_{t_1+\Delta t, t_1}$ in the quadrants 1Q, 3Q and 2Q, 4Q of Fig. \[fig3\], respectively. (c, d) Running medians as in (a, b) but now calculated using all points from all non-overlapping half-year windows for $t_1$ ranging from 1994-01-01 to 2004-01-01 for the Linux group and from 1995-07-01 to 2004-01-01 for the hip-hop group. The closed curves indicate the fraction of points along the corresponding directions from the origin. The word selection is the same used in Fig. \[fig3\] except that, in order to keep all eligible words of the first two windows, the condition $N_w>5$ is not imposed in the third window. []{data-label="fig5"}](Fig5cd.eps "fig:"){width="1.00\columnwidth"} ![Frequency change and dissemination change in the Google Books dataset. (a) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time $t_1$ for $\Delta D^B_{t_2, t_1}$ equal to $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are at $t_1$ and $t_2=t_1+ 10$ years. (b) Medians of the frequency change $\Delta \log f_{t_2, t_1}$ as a function of the time interval $\Delta t=t_2- t_1$ for $\Delta D^B_{t_2, t_1}$ equal to $-0.5$ (solid squares) and $0.5$ (solid circles); the windows are in $t_1= 1820$ and $t_2=t_1+ \Delta t$ years. 
(c) Frequency change $\Delta \log f_{t_1+2\Delta t, t_1}$ (after $2\Delta t$ years) versus frequency change $\Delta \log f_{t_1+\Delta t, t_1}$ (after $\Delta t$ years) for the aggregate collection of points corresponding to $t_1= 1820, 1830, ..., 1980$ and $\Delta t= 10$ years. Points corresponding to $\Delta \log f_{t_1+\Delta t, t_1}$ in the quadrants 1Q, 3Q and 2Q, 4Q of the $\Delta \log f$ versus $\Delta D^B$ plot (not shown) are represented in red and black, respectively. Following this color code, the closed curves indicate the fraction of points along the corresponding directions from the origin and the dashed lines correspond to the running median for each quadrant. In all cases, the windows are one-year wide. []{data-label="fig6"}](Fig6.eps){width="0.99\columnwidth"}
--- abstract: | A new method of measurement of the velocities of solar electron antineutrinos is proposed. The method is based on the assumption that if a neutrino detector having the shape of a pipe and providing a proper angular resolution is directed onto the optical “image” of the sun, then it would detect solar neutrinos with velocities $V_{\widetilde{\nu}_{e}}=c$. Here $c$ is the velocity of light. It is expected that the smaller the value of $V_{\widetilde{\nu}_{e}}$, the larger the angular lag of the “image” of these neutrinos relative to the position of the optical “image” of the sun. Therefore, one can detect solar neutrinos with different energies by changing the angle between the axis of the detector pipe and the direction to the “image” of the sun. The method also gives a unique possibility to check hypotheses predicting the existence of solar neutrinos with $V_{\widetilde{\nu}_{e}}>c$. In this case the “image” of such solar neutrinos on the sky should pass the “image” of the sun. author: - | Elmir Dermendjiev\ “Mladost-2”, block 224, entr.1, apt.13, 1799-Sofia, Bulgaria title: A new method of measurement of the velocities of solar neutrinos --- PACS number: 03.30; Solar electron antineutrino; Velocities of solar electron antineutrinos; method of measurement of the velocities of solar electron antineutrinos. Introduction ============= The generation of solar energy and its nature is probably the most important astrophysical problem. At present the most widespread notion of the origin of solar energy is based on the assumption that it appears as a result of thermonuclear reactions inside the sun. Briefly, the process of transformation of hydrogen to helium is accompanied by the emission of electron antineutrinos $\widetilde{\nu}_{e}$ \[1\]. Some astrophysical solar models predict different yields for groups of antineutrinos with different energies $E_{\nu}$.
According to the “Standard Solar Model” (SSM), most antineutrinos ($\sim99.75\%$) should have an energy in the range $0 < E_{\nu} < 420\,keV$ (the so-called “PP1” group of neutrinos). The group of “PeP” antineutrinos has a fixed energy $E_{\nu}=1.44\,MeV$, but much lower intensity than the first group. The “HeP” group has an energy spectrum extending up to $18.6\,MeV$ and a yield of $\sim10^{-5}\%$, etc. In addition, the theory predicts the existence of two other groups of antineutrinos, “PP2” and “PP3”. They have mono-energetic and continuous energy spectra, respectively, and are important for the theory. Another source of antineutrinos is the so-called “carbon-nitrogen” (CN) cycle, which generates two more groups of antineutrinos with $E_{\nu}=1.2\,MeV$ and $1.7\,MeV$. Thus, to check the SSM one needs to perform a very complicated experiment that includes a spectrometry of the energies of solar antineutrinos and a measurement of the intensities of these groups of $\widetilde{\nu}_{e}$. Unfortunately, at present such an experiment is beyond our technical possibilities for many reasons. The most serious experimental difficulty in performing astrophysical experiments with solar $\widetilde{\nu}_{e}$ is the extremely low absorption cross section $\sigma_{a}(\widetilde{\nu}_{e})$ of electron antineutrinos by nuclei, which is estimated to be $\sim10^{-43}\,cm^{2}$ \[2\]. At present solar $\widetilde{\nu}_{e}$ are studied in a few laboratories \[1,3\] using giant neutrino detectors, which are briefly discussed below. Unfortunately, these facilities are not intended for detection of the relatively low-energy solar antineutrinos or their values of $V_{\widetilde{\nu}_{e}}$. Below, a method that might be suitable for measurements of the velocities $V_{\widetilde{\nu}_{e}}<c$ of solar electron antineutrinos is proposed. One can hope that the experimental information obtained by the proposed method might be useful for further development of the SSM.
At present, the proposed method of measurement of the velocities of solar antineutrinos is the only method that allows one to search for whether some of them have values of $V_{\widetilde{\nu}_{e}}>c$. The possible existence of such hypothetical solar antineutrinos is discussed in \[4\]. This hypothesis is inconsistent with the theory of relativity. However, by using the method proposed below, one has the unique possibility to find such super-fast particles with $V_{\widetilde{\nu}_{e}}>c$, if they exist, or to refute the hypothesis \[4\]. A method of measurement of the velocities of solar antineutrinos ================================================================= The proposed method of measurement of the velocities $V_{\widetilde{\nu}_{e}}$ of solar electron antineutrinos is based on the simple and clear idea that the position of their “images” on the sky relative to the optical “image” of the sun should depend on the value of $V_{\widetilde{\nu}_{e}}$. Assume that the observer has a “pipe”-type neutrino detector that follows the movement of the optical “image” of the sun along its sky trajectory. Also, assume that this detector has an angular resolution $\Delta\alpha$ comparable with the angular size of the sun, i.e. $\Delta\alpha\leq0.5^{\circ}$. Then, if $V_{\widetilde{\nu}_{e}}=c$, the neutrino and the sun “images” should coincide. In the case when $V_{\widetilde{\nu}_{e}}<c$, the “image” of a given group of neutrinos with a certain value of $V_{\widetilde{\nu}_{e}}$ is expected to have an angular lag relative to the optical “image” of the sun. It is clear that the smaller the value of $V_{\widetilde{\nu}_{e}}$, the larger the angular lag of the $\widetilde{\nu}_{e}$ having that value of $V_{\widetilde{\nu}_{e}}$. In the case when $V_{\widetilde{\nu}_{e}}>c$, one should expect that the “image” of such neutrinos should pass the “image” of the sun.
Thus, if the neutrino detector follows the “image” of the sun with a constant angle $\beta$ of lagging or passing, then one can determine the value of $V_{\widetilde{\nu}_{e}}$. How large are the expected values of $\beta$? The angular velocity $\omega$ of the optical “image” of the sun relative to the Earth is $\omega=0.00417\,deg.s^{-1}$. Two cases are discussed below: a\) $V_{\widetilde{\nu}_{e}}<c$. If $\beta$ does not exceed a few degrees, then the value of $V_{\widetilde{\nu}_{e}}$ can be estimated by using the following approximate relationship:$$V_{\widetilde{\nu}_{e}}\thickapprox\frac{L}{480+\beta/\omega}$$ Here $L=1.45\times10^{11}\,m$. If, for instance, $V_{\widetilde{\nu}_{e}}=10^{8}\,m.s^{-1}$ or $3\times10^{7}\,m.s^{-1}$, then the value of $\beta$ is approximately $4^{\circ}$ or $18^{\circ}$. b\) $V_{\widetilde{\nu}_{e}}>c$. In this case the “image” of such super-fast hypothetical neutrinos should pass the optical “image” of the sun by an angle $\beta^{\prime}$. Then a similar relationship can be used:$$V_{\widetilde{\nu}_{e}}\thickapprox\frac{L}{480-\beta^{\prime}/\omega}$$ It is interesting to note that if $V_{\widetilde{\nu}_{e}}\rightarrow\infty$, then the maximum value of $\beta^{\prime}$ is close to $2^{\circ}$, i.e. about 4 angular diameters of the sun. This means that if such super-fast solar neutrinos exist, then they could be found experimentally. Discussion =========== The study of the properties of solar neutrinos in modern laboratories \[1,3\] is an extremely difficult task. As briefly mentioned above, there are experimental difficulties originating from the very low value of the absorption cross section $\sigma_{a}(\widetilde{\nu}_{e})$ of solar antineutrinos by nuclei \[2\]. This leads to a very low detection rate even when huge solar neutrino detectors are used. Another difficult problem is the necessity of minimizing the detector background, which requires neutrino detectors to be situated underground.
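Plugging numbers into these relationships is straightforward. The sketch below (function and variable names are our own) inverts the approximate relationship above, using the 480 s light-travel time and the value of $L$ quoted in the text, and reproduces the quoted angles:

```python
L = 1.45e11        # Sun-Earth distance used in the text, m
OMEGA = 0.00417    # apparent angular velocity of the sun, deg/s
T_LIGHT = 480.0    # light travel time assumed in the text, s

def lag_angle(v):
    """Angular lag (deg) of the neutrino 'image' behind the optical one.

    Inverting v = L / (T_LIGHT + beta/OMEGA) gives
    beta = OMEGA * (L/v - T_LIGHT).  Negative values correspond to the
    passing angle beta' of hypothetical super-fast (v > c) neutrinos.
    """
    return OMEGA * (L / v - T_LIGHT)
```

For example, `lag_angle(1e8)` is about $4^{\circ}$ and `lag_angle(3e7)` about $18^{\circ}$, while in the limit $v\rightarrow\infty$ the passing angle approaches $2^{\circ}$, matching the estimates above.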
To judge whether the proposed method could be applied in solar neutrino experiments, a brief comparison between some of the existing methods of detection of solar neutrinos and the proposed method is presented below. Up to now there are only a few experimental studies of solar neutrinos, most of which are not considered in this paper. However, all of them were performed with giant neutrino detectors, like the solar neutrino detector designed by Davis \[1\]. This large facility has 615 tons of $C_{2}Cl_{4}$, used as the detector substance. The detection of solar antineutrinos is based on the reaction of absorption of $\widetilde{\nu}_{e}$ by a nucleus of $^{37}Cl$ \[5\]:$$^{37}Cl+\widetilde{\nu}_{e}\rightarrow^{37}Ar+e^{-}$$ This reaction has a threshold of $0.816\,MeV$. The detection of solar antineutrinos is based on the radiochemical reaction (3), and the detector cannot be used for energy measurements. The SNO \[4\] is another large facility, containing 1000 tons of $D_{2}O$, for the study of the properties of solar neutrinos. The detection of $\widetilde{\nu}_{e}$ is realized when $\widetilde{\nu}_{e}$ interacts with deuterium nuclei:$$d+\widetilde{\nu}_{e}\rightarrow p+p+e^{-}$$ The SNO facility is intended to be used mainly for detection of high-energy neutrinos. Therefore, it is not suitable for spectrometry of the energies or velocities of solar antineutrinos. On the other hand, further development of the SSM needs the intensities of the different groups of antineutrinos with fixed energies $E_{\widetilde{\nu}_{e}}$ to be known with ever-increasing accuracy. Since there are no neutrino detectors capable of measuring the energies of solar neutrinos, one can hope that the proposed method of measurement of the velocities of neutrinos could, to some extent, contribute to the solution of this very important problem of the generation of the solar energy. The counting rate $N_{\widetilde{\nu}_{e}}$ of a “pipe”-type neutrino detector is estimated below.
It is desirable that the value of $\Delta\alpha\sim(\Phi/L)$ of this detector be comparable with the angular size of the optical “image” of the sun, which is $\sim0.5^{\circ}$. Also, the value of $\Delta\alpha\sim0.5^{\circ}$ must be kept if one is to search for the hypothetical “super-fast” neutrinos \[4\]. Suppose that the length $L$ and the diameter $\Phi$ of the pipe are $11.5\,m$ and $0.1\,m$, respectively. It seems reasonable to choose a detection technique similar to that described by Reines et al. \[2\]. The pipe contains liquid scintillator with a small amount of Cd (or Gd). This pipe is inserted into an outer pipe of larger diameter with six radial sections along its length. All radial sections are filled with liquid scintillator. Photomultiplier tubes are mounted along the section sides of the outer pipe. The detection of solar electron antineutrinos is based on the following nuclear reaction:$$\widetilde{\nu}_{e}+p\rightarrow n+e^{+}$$ Two annihilation gamma-quanta mark the reaction (5), and six slow-neutron capture gamma-quanta are emitted due to the $Cd(n,\gamma)$ reaction \[2\]. The delayed coincidence technique used in \[2\] provides a quite low detection background. A similar technique was successfully used in fission \[6\] and sub-threshold fission experiments \[7\] with $U$, $Pu$ and $Np$ isotope targets having very high specific $\alpha$- and $\gamma$-activity. The counting rate $N_{\widetilde{\nu}_{e}}$ can be estimated by using the approximate relationship: $$N_{\widetilde{\nu}_{e}}\thickapprox n\sigma_{a}(\widetilde{\nu}_{e},p)\phi\epsilon_{\widetilde{\nu}_{e}}SLt$$ Here, $n$ is the number of hydrogen atoms in $1\,cm^{3}$ of liquid scintillator and $\sigma_{a}(\widetilde{\nu}_{e},p)=1.2\times10^{-43}\,cm^{2}$ \[2\]. The approximate value of the flux of solar electron antineutrinos at the surface of the Earth is taken to be $\phi\thickapprox5\times10^{10}\,cm^{-2}s^{-1}$.
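The magnitude of $N_{\widetilde{\nu}_{e}}$ can be checked with a back-of-the-envelope sketch. The hydrogen density $n$ is not quoted in the text, so the value below (typical for organic liquid scintillator) is an assumption of this sketch, and the efficiency is taken as 0.3 in line with the estimate discussed in the text; the result therefore fixes only the order of magnitude:

```python
from math import pi

sigma_a = 1.2e-43   # cm^2, absorption cross section quoted in the text
phi     = 5.0e10    # cm^-2 s^-1, antineutrino flux quoted in the text
eps     = 0.3       # detection efficiency, as estimated in the text
n_H     = 6.0e22    # cm^-3, ASSUMED hydrogen density of the scintillator
Phi_d, L_pipe = 10.0, 1150.0   # pipe diameter and length, cm

S = (pi / 4.0) * Phi_d ** 2                      # cross-sectional area, cm^2
rate = n_H * sigma_a * phi * eps * S * L_pipe    # events per second (t = 1 s)
events_per_year = rate * 3.156e7
```

With these inputs the rate comes out in the $10^{-5}$–$10^{-4}\,s^{-1}$ range, i.e. hundreds to a few thousand events per year, consistent in order of magnitude with the estimate below; the exact value scales linearly with the assumed $n$.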
The detection efficiency $\epsilon_{\widetilde{\nu}_{e}}$ is estimated to be $\sim0.3$ if delayed coincidences between the two annihilation gamma-quanta and more than three capture gamma-quanta are required. In Eq. (6), $S=(\pi/4)\Phi^{2}$, where $\Phi=0.1\,m$, $L=11.5\,m$ and $t=1\,s$. Using these numbers one gets an estimated value of $N_{\widetilde{\nu}_{e}}\sim10^{-4}\,s^{-1}$, which means that one could collect $\sim3\times10^{3}$ events per year. Further optimization of the “pipe”-type neutrino detector might strongly reduce the measurement time compared to the estimated value. A brief consideration of the different methods of detection of electron antineutrinos allows one to conclude that the proposed method could be used for measurements of the velocities of solar antineutrinos and thus provide new experimental data for further development of the SSM. Also, having a moderate size, the proposed “pipe”-type neutrino detector can ensure a reasonable measurement time. However, the most important advantage of the proposed method is the opportunity to search for the existence of neutrinos with $V_{\widetilde{\nu}_{e}}>c$. Based on the theory of relativity, one should not expect such neutrinos to exist in Nature; the experiment based on the proposed method of measuring the velocities of neutrinos would then be a strong confirmation of the theory of relativity. But if neutrinos with $V_{\widetilde{\nu}_{e}}>c$ exist, one should expect a deep change in our understanding of Nature. [9]{} Davis R. Bull. Amer. Phys. Soc. 1959, v.4, p.217 Davis R. Phys. Rev. Lett., 1964, v.12, p.303 Reines F., C.L. Cowan Jr., F.B. Harrison, A.D. McGuire, H.W. Kruse Phys. Rev., 1960, v.117, p.159 SNO collaboration, nucl-ex/0309004 Dermendjiev Elmir, nucl-th/0505040 Pontecorvo Bruno, Chalk River Report, PD-205, 1946 Wang Shi-di, Wang Yun-chang, E. Dermendjiev, Yu.V. Ryabov “Physics and Chemistry of Fission”, 1965, Vienna, v.1, p.287 Yu.V. Ryabov, Wang Yun-chang, E.
Dermendjiev, Chjang Pey-shu Yadernaya Fizika, 1967, v.5, p.925 Borzakov S.B., E. Dermendjiev, A.A. Goverdovsky, A. Kalinin, V. Konovalov, I. Ruskov, S.M. Soloviov, Yu.S. Zamiatnin Yadernaya Fizika, 1996, v.59, p.1175
--- abstract: 'We report the activity measured in rainwater samples collected in the Greater Sudbury area of eastern Canada on 3, 16, 20, and 26 April 2011. The samples were $\gamma$-ray counted in a germanium detector and the isotopes [${}^{131}$I]{} and [${}^{137}$Cs]{}, produced by the fission of [${}^{235}$U]{}, and [${}^{134}$Cs]{}, produced by neutron capture on [${}^{133}$Cs]{}, were observed at elevated levels compared to a reference sample of ice-water. These elevated activities are ascribed to the accident at the Fukushima Dai-ichi nuclear reactor complex in Japan that followed the 11 March earthquake and tsunami. The activity levels observed at no time presented health concerns.' --- Activities of $\gamma$-ray emitting isotopes in rainwater from Greater Sudbury, Canada following the Fukushima incident B.T. Cleveland, F.A. Duncan, I.T. Lawson, N.J.T. Smith, E. Vázquez-Jáuregui SNOLAB, 1039 Regional Road 24 Lively ON, P3Y 1N2, Canada Introduction ============ The nuclear accident in the Fukushima area in Japan released radioisotopes to the atmosphere which have been measured in several locations in Asia [@Bolsunovsky; @Fushimi; @Momoshima], North America [@Bowyer; @Leon; @Norman; @Sinclair; @MacMullin], and Europe [@Clemenza; @Manolopoulou; @Pittauerova], as the radioactivity spread around the Earth. We report here the measurement of several isotopes in water samples collected in eastern North America during April 2011, from 3 to 7 weeks after the Fukushima incident. Experimental Methods ==================== To investigate the dispersal of radioactivity from the Fukushima incident we collected samples of rainwater in aluminum trays in Greater Sudbury, Ontario on 3, 16, 20, and 26 April, the first rainy days that followed 11 March. A reference sample of ice water from Meatbird Lake in Lively, Ontario was also collected on 3 April.
Within 1–2 d of their collection the water samples were passed through Whatman Grade 1 filters (medium porosity, $>11~\mu$m) and then poured into 1 L polyethylene Marinelli beakers. Filtration was necessary to remove particulate material that was present in the samples because they were collected at ground level under windy conditions. The volume of all samples was very close to 1 L. The beakers were sealed, encapsulated in nearly air-tight bags, and transported to the SNOLAB underground laboratory, where they were $\gamma$-ray counted by a high-purity germanium detector. To minimize the background from ambient [${}^{222}$Rn]{} in the mine air [@hpge], counting of the samples began immediately after their arrival underground. The duration of counting was 1 d, except for the ice sample, which was counted for 2 d. Data on the samples and counting periods are given in Table \[sample\_data\].

  Sample   Date collected      Volume (mL)   Day counting began   Counting time (d)
  -------- ------------------- ------------- -------------------- -------------------
  Ice      3 April (93.46)     953           96.315               2.01
  Rain     3 April (93.96)     857           95.345               0.93
  Rain     16 April (106.48)   935           109.275              1.02
  Rain     20 April (110.57)   1015          123.314              0.97
  Rain     26 April (116.44)   1050          124.356              1.0

  : Data on samples. Dates of sample collection and start of counting are given in day of year 2011 in Eastern Standard Time (GMT-5 h). Dead time during counting was negligible. \[sample\_data\]

The dimensions of the Ge detector crystal are 63-mm length by 67-mm diameter and its efficiency for the 1333-keV $\gamma$-rays from a [${}^{60}$Co]{} source is 47% relative to a 3-inch by 3-inch NaI(Tl) detector. The FWHM resolution of the detector at 1333 keV is 1.9 keV. To reduce local background the detector is shielded by 2 inches of high-purity copper and 8 inches of lead.
The detector shield is enclosed in a sealed copper box through which pure nitrogen from liquid nitrogen boil-off is flowed at 2 L/min to purge [${}^{222}$Rn]{}. The efficiency of the detector for $\gamma$-rays has been measured with standard sources of known decay rate.

![image](meatbird){width="0.9\hsize"}

![image](april3){width="0.9\hsize"}

![image](april16){width="0.9\hsize"}

![image](april20){width="0.9\hsize"}

![image](april26){width="0.9\hsize"}

  Sample          [${}^{137}$Cs]{}      [${}^{134}$Cs]{}      [${}^{131}$I]{}       [${}^{238}$U]{} progeny   [${}^{232}$Th]{} progeny   [${}^{7}$Be]{}
  --------------- --------------------- --------------------- --------------------- ------------------------- -------------------------- ----------------
  Ice 3 April     $<0.4$                $0.4^{+0.8}_{-0.4}$   $22.8\pm3.7$          $26.1\pm3.1$              $2.4\pm1.7$                $80\pm18$
  Rain 3 April    $11.0\pm4.1$          $8.3\pm2.4$           $668\pm44$            $32.8\pm6.2$              $101\pm11$                 $1900\pm180$
  Rain 16 April   $22.7\pm4.9$          $16.8\pm2.9$          $64.0\pm8.9$          $9.9\pm4.8$               $9.4\pm2.9$                $835\pm95$
  Rain 20 April   $19.1\pm4.7$          $13.1\pm2.5$          $31\pm11$             $<4.8$                    $5.9\pm2.7$                $770\pm90$
  Rain 26 April   $0.9^{+1.7}_{-0.9}$   $0.7\pm0.7$           $2.4^{+4.6}_{-2.4}$   $3.3\pm2.6$               $<0.3$                     $2700\pm240$

  : Specific activities (mBq/kg) of the isotopes observed in the water samples. \[samples\]

This detector is usually used to measure the activity of samples of materials that are being considered for use in one of the SNOLAB experiments, all of which must be made from extremely low-background components. The detector sensitivity is 1 mBq/kg (0.1 ppb) for [${}^{226}$Ra]{}, 1.5 mBq/kg (0.3 ppb) for [${}^{228}$Th]{}, and 21 mBq/kg (0.7 ppm) for [${}^{40}$K]{}. Further information on the detector and its use is given in [@hpge]. The samples were collected close to the SNOLAB laboratory, which is located at $46^\circ 28.5'$ N latitude, $81^\circ 12.0'$ W longitude. The counting facility is 2092-m underground, where the cosmic-ray muon flux is $3.31 \times 10^{-10}$/(cm$^2$s) [@sno].
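The efficiency calibration above converts a background-subtracted photopeak count into a specific activity via the standard gamma-spectrometry relation $A = N/(\varepsilon\,b\,t\,m)$. The sketch below illustrates this; the 5% full-energy-peak efficiency is an assumed placeholder (the actual calibrated efficiency is energy dependent and not quoted here), while the 81.5% branching ratio of the 364.5 keV line of [${}^{131}$I]{} is a nuclear-data value.

```python
# Sketch of the standard conversion from a background-subtracted peak
# count to a specific activity.  The 5% efficiency used below is an
# assumed placeholder, NOT the calibrated efficiency of this detector.

def specific_activity_bq_per_kg(net_counts, live_time_s, efficiency,
                                branching_ratio, mass_kg):
    """A = N / (efficiency * branching ratio * live time * mass)."""
    return net_counts / (efficiency * branching_ratio * live_time_s * mass_kg)

# Illustrative numbers: 102 net counts in a 1.02 d count of a ~1 kg
# (1 L) sample, as for the 16 April 364.5 keV peak; 0.815 is the
# branching ratio of that line.
activity = specific_activity_bq_per_kg(
    net_counts=102, live_time_s=1.02 * 86400,
    efficiency=0.05, branching_ratio=0.815, mass_kg=1.0)
```

With these assumed inputs the result is of order tens of mBq/kg, the same scale as the entries of Table \[samples\].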
Results and Interpretation
==========================

The raw energy spectra from each sample are shown in Figure \[spectra\_samples\]. Peaks are evident from the emission of $\gamma$-rays by [${}^{137}$Cs]{} at 661.7 keV; by [${}^{134}$Cs]{} at 569.3, 604.7, and 795.9 keV; and by [${}^{131}$I]{} at 284.3, 364.5, 637.0, and 722.9 keV. $\gamma$-ray lines with intensity greater than background are also apparent at 477.6 keV from [${}^{7}$Be]{} (mainly produced by cosmic-ray spallation on [${}^{14}$N]{} and [${}^{16}$O]{}) and at several other energies from the decays of the progeny of [${}^{226}$Ra]{} and [${}^{228}$Th]{}, which were present in the water samples as impurities. The region of a peak in a typical spectrum is shown in Fig. \[example\_spectrum\]. The number of counts above background in each peak is determined by summing the events in a 2-FWHM window centered on the peak and subtracting half the summed events in windows of equal energy width just below and just above the peak. [${}^{137}$Cs]{} is an exception: a constant background of $1.72 \pm 0.13$ counts/d must also be subtracted from its peak. This latter activity is contamination internal to the Ge detector crystal housing. ![Peak at 364.5 keV produced by [${}^{131}$I]{} decay in the rainwater sample collected on 16 April and measured on 19 April. The counting time was 1.02 d and 102 events were recorded in the peak. Background in equal energy-width windows was 10 events (below peak) and 11 events (above peak).[]{data-label="example_spectrum"}](april16_0364){width="1.0\hsize"} The specific activities of the isotopes observed in the water samples are given in Table \[samples\]. Because of its short half-life, the [${}^{131}$I]{} activities have been corrected to the time of sample collection. For those isotopes that produced more than one peak we checked that the activities inferred from the individual peaks agreed within uncertainty, and we give their weighted average.
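The sideband background subtraction described above can be sketched in a few lines; the spectrum used in testing is synthetic, and the bin width and FWHM values are illustrative, not those of this detector.

```python
# Sketch of the net-peak-count estimate: sum the counts in a 2-FWHM
# window centred on the peak and subtract half the summed counts in
# equal-width windows just below and just above it.
import numpy as np

def net_peak_counts(bin_centers, counts, peak_e, fwhm):
    half = fwhm  # a 2-FWHM window extends +/- 1 FWHM about the peak
    in_peak = (bin_centers >= peak_e - half) & (bin_centers < peak_e + half)
    below = (bin_centers >= peak_e - 3 * half) & (bin_centers < peak_e - half)
    above = (bin_centers >= peak_e + half) & (bin_centers < peak_e + 3 * half)
    background = 0.5 * (counts[below].sum() + counts[above].sum())
    return counts[in_peak].sum() - background
```

On a flat spectrum the sideband estimate cancels the window contents exactly, so the net count of a background-free region is zero by construction.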
The observation of the short-lived isotope [${}^{131}$I]{} and the high concentrations of [${}^{137}$Cs]{} and [${}^{134}$Cs]{} indicate a recent release of typical reactor-produced isotopes into the atmosphere.

![Specific activity of [${}^{131}$I]{} vs time. The solid line is a fit of the specific activity $A$ as a function of time $t$ to the function $A(t) = A(0) \exp(-t/t_1)$ where $A(0)$ and $t_1$ are constants whose best fit values are $A(0) = 1183 \pm 99$ mBq/kg and $t_1 = 5.33 \pm 0.30$ d.[]{data-label="131_vs_time"}](131i_vs_time){width="1.0\hsize"}

Figure \[131\_vs\_time\] shows the decay of activity of [${}^{131}$I]{}. The half-life of the observed decay is 3.7 d, considerably less than the 8.0 d half-life of [${}^{131}$I]{}. We presume this is due to the transport of the radioactivity over our measuring location and washout of the isotope from the atmosphere.

![Ratio of specific activity of [${}^{134}$Cs]{} to [${}^{137}$Cs]{} vs time.[]{data-label="134to137_vs_time"}](134to137_vs_time){width="1.0\hsize"}

The ratio of activity of [${}^{134}$Cs]{} to [${}^{137}$Cs]{} is shown in Figure \[134to137\_vs\_time\]. The weighted average ratio for the four measurements is $0.72 \pm 0.13$, in agreement with the value of $\sim$0.7 reported in [@Leon] and the measurements given in [@Momoshima]. This ratio is approximately constant because both of these isotopes are products of nuclear fission, [${}^{137}$Cs]{} directly, and [${}^{134}$Cs]{} by fission production of [${}^{133}$Cs]{} followed by neutron capture. Some laboratories [@Leon; @Norman; @MacMullin] have detected [${}^{132}$Te]{} from the Fukushima incident, but we did not observe this isotope. Our supposition is that this is because of its short half-life of 3.2 d, the appreciable delay between the release and our first measurements, and the low volatility of Te.
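The washout fit of Fig. \[131\_vs\_time\] can be approximately reproduced from the [${}^{131}$I]{} activities in Table \[samples\]. The sketch below is not the original analysis code; note that $t$ is counted here from the first rain collection, so the fitted $A(0)$ refers to that date rather than to the zero point used in the figure, while the decay constant $t_1$ is independent of the choice of zero point.

```python
# Weighted fit of the 131-I rainwater activities (Table [samples]) to
# A(t) = A0 * exp(-t / t1).  A sketch, not the original analysis code;
# t is measured from the first rain collection (day 93.96 of 2011).
import numpy as np
from scipy.optimize import curve_fit

t_days = np.array([93.96, 106.48, 110.57, 116.44]) - 93.96
activity = np.array([668.0, 64.0, 31.0, 2.4])   # mBq/kg
sigma = np.array([44.0, 8.9, 11.0, 4.6])        # 1-sigma uncertainties

def model(t, a0, t1):
    return a0 * np.exp(-t / t1)

(a0, t1), _ = curve_fit(model, t_days, activity, p0=(700.0, 5.0), sigma=sigma)
half_life = t1 * np.log(2.0)   # apparent washout half-life
```

The fitted $t_1$ comes out close to the quoted $5.33 \pm 0.30$ d, and $t_1 \ln 2$ reproduces the apparent half-life of about 3.7 d discussed above.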
At no time during our measurements were the activities of the isotopes we detected from Fukushima of any radiological concern to the inhabitants of northern Ontario. The radioactivity levels were much less than the dose received from normal background radiation, and we were only able to observe these isotopes because of our extremely sensitive, well-shielded, low-background apparatus.

Summary
=======

Several nuclear reactor fission products were observed in rainwater samples collected in Greater Sudbury and $\gamma$-ray counted in a high-purity germanium detector at SNOLAB. The short-lived isotope [${}^{131}$I]{} and the longer-lived isotopes [${}^{134}$Cs]{} and [${}^{137}$Cs]{} were detected at concentrations much higher than in a background sample. The presence of all these isotopes is associated with their release to the atmosphere from the nuclear accident at the Fukushima Dai-ichi reactors in Japan. These data, along with measurements made in other places around the world, may aid our understanding of the release of radioactive fission products and their transport in the atmosphere.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work utilises infrastructure supported by the Natural Sciences and Engineering Research Council, the Ontario Ministry of Research and Innovation, the Northern Ontario Heritage Fund, and the Canada Foundation for Innovation. We thank the SNOLAB technical staff for developing the infrastructure and their aid in our scientific endeavors, and Vale S. A. for hosting SNOLAB.

[99]{} Bolsunovsky, A., Dementyev, D., 2011. Evidence of the radioactive fallout in the center of Asia (Russia) following the Fukushima Nuclear Accident. Journal of Environmental Radioactivity 102, 1062-1064. Bowyer, T.W., Biegalski, S.R., Cooper, M., Eslinger, P.W., Haas, D., Hayes, J.C., Miley, H.S., Strom, D.J., Woods, V., 2011. Elevated radioxenon detected remotely following the Fukushima nuclear accident.
Journal of Environmental Radioactivity 102, 681-687. Clemenza, M., Fiorini, E., Previtali, E., Sasa, E., 2011. Measurement of airborne [${}^{131}$I]{}, [${}^{134}$Cs]{}, and [${}^{137}$Cs]{} nuclides due to the Fukushima reactors accident in air particulate in Milan (Italy). arXiv:1106.4226. Fushimi, K., Nakayama, S., Sakama, M., Sakaguchi, Y., 2011. Measurement of airborne radioactivity from the Fukushima reactor accident in Tokushima, Japan. arXiv:1104.3611. Lawson, I., Cleveland, B., 2011. Low background counting at SNOLAB. AIP Conf. Proc. 1338, 68-77; doi:10.1063/1.3579561. Leon, J.D., Jaffe, D.A., Kaspar, J., Knecht, A., Miller, M.L., Robertson, R.G.H., Schubert, A.G., 2011. Arrival time and magnitude of airborne fission products from the Fukushima, Japan, reactor incident as measured in Seattle, WA, USA. Journal of Environmental Radioactivity 102, 1032-1038; arXiv:1103.4853. MacMullin, S., Giovanetti, G.K., Green, M.P., Henning, R., Holmes, R., Vorren, K., Wilkerson, J.F., 2011. Measurement of airborne fission products in Chapel Hill, N.C., USA from the Fukushima I reactor accident. arXiv:1111.4141. Manolopoulou, M., Vagena, E., Stoulos, S., Ioannidou, A., Papastefanou, C., 2011. Radioiodine and radiocesium in Thessaloniki, Northern Greece due to the Fukushima nuclear accident. Journal of Environmental Radioactivity 102, 796-797. Momoshima, N., Sugihara, S., Ichikawa, R., Yokoyama, H., 2011. Atmospheric radionuclides transported to Fukuoka, Japan remote from the Fukushima Daiichi nuclear power complex following the reactor accident. Journal of Environmental Radioactivity (in press); doi:10.1016/j.jenvrad.2011.09.001. Norman, E.B., Angell, C.T., Chodash, P.A., 2011. Observations of fallout from the Fukushima reactor accident in San Francisco Bay Area rainwater. arXiv:1103.5954. Pittauerová, D., Hettwig, B., Fischer, H.W., 2011. Fukushima fallout in Northwest German environmental media. Journal of Environmental Radioactivity 102, 877-880.
Sinclair, L.E., Seywerd, H.C.J., Fortin, R., Carson, J.M., Saull, P.R.B., Coyle, M.J., van Brabant, R.A., Buckle, J.L., Desjardins, S.M., Hall, R.M., 2011. Aerial measurement of radioxenon concentration off the west coast of Vancouver Island following the Fukushima reactor accident. Journal of Environmental Radioactivity 102, 1018-1023; arXiv:1106.4043. SNO Collaboration (B. Aharmim et al.), 2009. Measurement of the cosmic ray and neutrino-induced muon flux at the Sudbury Neutrino Observatory. Phys. Rev. D 80, 012001; arXiv:0902.2776.
--- abstract: 'We present six Chandra X-ray spectra and light curves obtained for the nova V1494Aql (1999 $\#$2) in outburst. The first three observations were taken with ACIS-I on days 134, 187, and 248 after outburst. The count rates were 1.00, 0.69, and 0.53 cps, respectively. We found no significant periodicity in the ACIS light curves. The X-ray spectra show continuum emission and lines originating from N and O. We found acceptable spectral fits using isothermal APEC models with significantly increased elemental abundances of O and N for all observations. On day 248 after outburst a bright soft component appeared in addition to the fading emission lines. The Chandra observations on days 300, 304, and 727 were carried out with the HRC/LETGS. The spectra consist of continuum emission plus strong emission lines of O and N, implying a high abundance of these elements. On day 304 a flare occurred, and periodic oscillations were detected in the light curves taken on days 300 and 304. This flare must have originated deep in the outflowing material since it was variable on short time scales. The spectra extracted immediately before and after the flare are remarkably similar, implying that the flare was an extremely isolated event. Our attempts to fit blackbody, Cloudy, or APEC models to the LETG spectra failed, owing to the difficulty in disentangling continuum and emission line components. The spectrum extracted during the flare shows a significant increase in the strengths of many of the lines and the appearance of several previously undetected lines. In addition, some of the lines seen before and after the flare are not present during the flare. On day 727 only the count rate from the zeroth order could be derived, and the source was too faint for the extraction of a light curve or spectrum.' author: - 'J.G. Rohrbach, J.-U. Ness, S.
Starrfield' bibliography: - 'aql.bib' title: 'Evolution of X-ray spectra and light curves of V1494Aquilae' ---

Introduction
============

When hydrogen-rich material is lost by a low-mass main-sequence star and accreted onto a white dwarf (WD) primary in a Cataclysmic Variable (CV), it settles onto the WD and eventually the bottom of the accreted layer becomes degenerate. When enough material has accumulated and the temperatures become high enough, a thermonuclear runaway is initiated and a Classical Nova (CN) outburst results. There is an initial short phase of X-ray emission which quickly fades as the ejecta expand and become optically thick. When the ejected shell expands and cools enough to become transparent to X-rays again, a soft, luminous X-ray source is typically observed, although each CN evolves differently in X-rays. This phase of evolution in X-rays is called the SSS phase because X-ray spectra at this time resemble those of the class of super-soft X-ray sources (SSS; @kahab). V1494Aql was discovered in the optical by [@pereira99] on 1.785 December, 1999 at $m_{v}\cong6$ [@disc] and reached maximum light in the optical two days later, on 3.4 December, 1999 at $m_{v}\cong4.0$. It subsequently declined by two magnitudes in $6.6\,\pm\,0.5$ days, thus classifying V1494Aql as a fast nova [@kissth00]. The distance to the nova was determined to be $1.6\pm0.2$ kpc by [@ii:1] and an orbital period of 0.13467 days has been suggested by [@orbit]. The hydrogen column density was estimated to be $N_{\rm H}\approx4\times10^{21}$cm$^{-2}$ by [@ii:1] from sodium lines in the optical. X-ray spectra were taken with [*Chandra*]{}, and [@krautter] reported that the early evolution showed only emission lines, but that by Aug. 6, 2000 the spectrum had evolved into an SSS spectrum. They also reported that an X-ray burst (flare) occurred and that oscillations were present in one of the grating observations taken during the SSS phase.
A detailed timing analysis was presented by [@drake:lc]. We re-extracted all [[*Chandra* ]{}]{}observations, and here we present the light curves and X-ray spectra. We carried out timing analyses, searching for periodic behavior in all observations. We present spectral modeling of the early observations that contain emission lines [@krautter] and provide a qualitative description of the SSS spectra. We also investigated spectral changes between spectra extracted before and after the flare event. In the next section we present the observations and explain the extraction techniques. We then focus on the timing analysis in §\[timing\] and the spectral analysis in §§\[acis\] and \[letg\]. We discuss spectral models in §\[models\] and summarize our results in §\[disc\].

Observations and Image Reduction {#reduction}
================================

  Start Date (UT)                Days After Outburst   Detector/Grating   ObsId$^a$    Exposure (ksec)   Count Rate$^b$ (photons/sec)   ‘Soft’ Count Rate$^c$ (photons/sec)
  ------------------------------ --------------------- ------------------ ------------ ----------------- ------------------------------ -------------------------------------
  2000, April 15, 01:01:27       134                   ACIS-I/none        959          5.6               1.0                            0.12
  2000, June 07, 02:48:14        187                   ACIS-I/none        89           5.2               0.69                           0.07
  2000, August 6, 22:02:05       248                   ACIS-I/none        1709         5.6               0.53                           0.32
  2000, September 28, 06:50:09   300                   HRC-S/LETG         2308         8.1               0.65                           -
  2000, October 1, 10:07:52      304                   HRC-S/LETG         72           18.2              0.84                           -
                                                                          pre-flare    8.4               0.71                           -
                                                                          flare        1.8               3.17                           -
                                                                          post-flare   8.0               0.74                           -
  2001, November 28, 10:37:38    727                   HRC-S/LETG         2681         25.8              0.003                          -

We present six observations of V1494Aql taken with [[*Chandra* ]{}]{}in 2000 and 2001. Table \[tab1\] gives the start date of the observations, the number of days since outburst, instrumental setup, observation identification number (ObsId), exposure time, net count rate, and count rate for ‘soft’ photons with an energy less than 0.6 keV.
The first three observations were taken with the S-array of the Advanced CCD Imaging Spectrometer (ACIS-S), which is an array of CCD chips providing moderate spectral resolution in the energy range 0.2-10 keV[^1]. After the SSS was detected with the ACIS observation taken on day 248 [@krautter], the next observation used the High Resolution Camera (HRC-S) in combination with the Low Energy Transmission Grating Spectrometer (LETGS), yielding higher spectral resolution. While the HRC detector has no energy resolution, the LETGS disperses the incoming light and projects a dispersed spectrum onto the HRC. LETG spectra are extracted in wavelength units (range 1-170 Å), but for consistency with the ACIS spectra we converted the LETG spectra to energy units. We carried out the reduction with the [[*Chandra* ]{}]{}-specific CIAO software suite, version 3.3. Since the CIAO standard data processing procedures (a.k.a. the ‘pipeline’) have changed since the time of the observations, we began our treatment of the images with the level 1 event files, which were taken from the [[*Chandra* ]{}]{}archives along with the accompanying calibration files. The exposures were reprocessed with the newest or most applicable calibration routines, mimicking the pipeline reduction, using standard routines from CIAO version 3.3.0.1 [@ciao]. These newly constructed level 2 event files were used to create our light curves and spectra. The light curves were extracted using tools developed by [@ness_lc] which determine point spread function (PSF) corrected source count rates. The PSF for each observation was constructed by following the CIAO threads and is specific to observation, detector location, and energy [@ciao]. The light curves extracted from the ACIS and the non-dispersed photons in the LETG observations (zeroth order) were extracted in 20 second time bins from a source extraction region of 20 pixels. This region encloses $99\%$ of the ACIS PSF and $98\%$ of the HRC PSF.
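The wavelength-to-energy conversion applied to the LETG spectra is the standard relation $E\,\mathrm{[eV]} = hc/\lambda \approx 12398.4/\lambda\,\mathrm{[\mbox{Å}]}$; a minimal helper:

```python
# Conversion between LETG dispersion wavelengths (Angstrom) and the
# photon energies (eV) used for plotting: E = h*c / lambda.
HC_EV_ANGSTROM = 12398.4  # h*c in eV * Angstrom

def angstrom_to_ev(wavelength_a):
    return HC_EV_ANGSTROM / wavelength_a

def ev_to_angstrom(energy_ev):
    return HC_EV_ANGSTROM / energy_ev
```

For example, the N[vii]{} Ly$\alpha$ wavelength of 24.78 Å maps to $\approx 500$ eV, consistent with the entries of Table \[tab2\].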
The spectra were extracted from the level 2 event files following the CIAO threads. The ACIS spectra were extracted using a 20 pixel radius circular source extraction region and the LETG spectra were obtained from the combined $\pm1$ spectral orders. In order to analyze the burst reported by [@drake:lc], we applied a time filter to the data set taken on day 304 using the same time limits for the start and end time of the flare. This separation was done with the level 1.5 event files instead of the level 2 files so that the correct good time intervals could be applied. After this separation we had a pre-flare observation with an exposure time of 8.4 ksec, a 1.8 ksec exposure of the flare, and a post-flare exposure of 8.0 ksec. Each of these three event files was processed by the same reconstructed pipeline as described above to produce the new level 2 event files and spectra. We do not expect pileup to be a problem, since the ACIS-S observations were designed to minimize this effect by placing the target approximately 7' from the aim-point and reducing the duration of each exposure to 0.8 sec. While PIMMS simulations show a small amount of pileup ($\sim20 \%$), this is an over-estimate since PIMMS does not model off-axis pointing.

Photometry {#timing}
==========

The light curves for the ACIS observations are given in Fig. \[lc1\], plotted with error bars. The ACIS data show the total count rate dropping as the nova evolves (see Table \[tab1\]). The count rate drops by 31% between the first two observations. On day 248 the SSS spectrum emerged (@krautter; see also Fig. \[acis\_energy\]), but the total count rate had declined by 16% compared to day 187. While V1494Aql was detected on day 727 [@ness_lc], only the count rate from the zeroth order could be determined, and the source was too faint for the extraction of a spectrum or reliable light curve. Thus, ObsId 2681 is not further considered.
The timing analysis was done using the methods of [@period], which construct a periodogram, normalized by the total variance, through Fourier analysis and are suitable for evenly or unevenly spaced data. Normalizing by the total variance of the data allows one to calculate a ‘False Alarm Probability’ [@scargle] to aid in the identification of false period detections. We examined periods ranging from twice the bin size to half the observation length. Periodograms were constructed for each observation, while several versions were analyzed for the observation containing the flare. We checked the data set from day 304, ObsId 72, for periodicity over the whole observation with the flare removed, over only the pre-flare segment, and over only the post-flare segment. As seen in Fig. \[acis\_period\], none of the ACIS light curves show strong evidence for periodicity. The strongest signal corresponds to a false alarm probability of 42% (day 187 at $\sim95$s). The LETGS data, on the other hand, show periods with false alarm probabilities well below 0.1%, indicating a real signal, in each of the observations. In Fig. \[letg\_period\] we show the periodograms of the HRC data with all peaks labeled whose signal strength corresponds to a false alarm probability less than 1%. The flare is double peaked with additional structure; for further information the reader is directed to [@drake:lc]. We detected signals close to the 2500s period reported by [@drake:lc] in all of the grating data. None of those data showed any significant evidence for periods shorter than $\sim1200$s (frequencies higher than $8.33\times10^{-4}$Hz), so this region is not plotted. For the data from day 300, the $\sim2500$s signal was the only one detected, although the data from day 304 also showed several other strong features. The pre-flare periodogram shows features at $\sim1740$s and $\sim3600$s while the post-flare periodogram has signals at $\sim$1200s and $\sim4300$s.
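The variance-normalized periodogram and false alarm probability described above can be sketched as follows. This is the textbook Scargle (1982) construction, not the code of [@period], and the number of independent frequencies entering the false alarm probability is a rough estimate.

```python
import numpy as np

def scargle_power(t, y, freqs):
    """Lomb-Scargle periodogram normalised by the sample variance
    (Scargle 1982); valid for evenly or unevenly spaced data."""
    y = y - y.mean()
    var = y.var(ddof=1)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # The phase offset tau makes the result invariant to time shifts.
        tau = np.arctan2(np.sin(2 * w * t).sum(), np.cos(2 * w * t).sum()) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / var
    return power

def false_alarm_probability(z_max, n_indep):
    """Chance that pure noise exceeds z_max among ~n_indep frequencies."""
    return 1.0 - (1.0 - np.exp(-z_max)) ** n_indep
```

For a light curve in 20 s bins, scanning periods from twice the bin size up to half the exposure, a 2500 s modulation appears as a peak with negligible false alarm probability.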
The 1200s signal could be spectral leakage from both the 3600s and 2500s signals, while the 1740s feature could be leakage from the 3600s feature. We also searched for longer period signals in the entire exposure taken on day 304 with the flared portion removed. No longer period signals were seen, but the 2500s and 3600s features were reproduced, while neither the 1200s nor 1740s features were present. We are unable to test for the presence of the orbital period of 3.23 hours (=11.6ks) suggested by [@orbit], because with our exposure times we cannot sample any periods longer than 9ks. The short duration of the flare, along with its photometric variability, implies that the source region of this flare must be associated with the hot WD. Variability on time scales of 200s (the width of one peak during the flare and 10 times our temporal bin size) corresponds to a light-crossing distance $c\,\Delta t \approx 6\times10^{10}$ m, yielding a flare source region of approximately 0.4 AU in size. While this size is far larger than either a white dwarf or the binary system, it is much smaller than the radius of the ejecta after 300 days of expansion.

ACIS spectra {#acis}
============

The first spectrum, taken on day 134 (top panel of Fig. \[acis\_energy\]), shows the presence of N[vii]{} and O[viii]{} emission lines at 500 and 650 eV. The O[viii]{} line is stronger than the N[vii]{} line. In addition, continuum emission, unresolved lines, or the combination of both can be identified up to about 2 keV. By day 187, the lines of N[vii]{} and O[viii]{} as well as the continuum emission have weakened, but the relative strengths of the N[vii]{} and O[viii]{} lines seem unchanged. We see no evidence for the appearance of any new features and no excess emission at energies below 600 eV. The decline in the emission line strengths agrees with the reduction in count rate from the photometric data. The spectrum from day 248 shows further weakening of the O[viii]{} line and the possible emergence of a blend of Ne[ix]{} lines at $\sim 900$eV.
In addition, a new bright feature appears at energies just below the N[vii]{} emission line at 500 eV. As seen in the photometry described in section \[timing\], the reduction in count rate is caused by the reduction of high-energy continuum emission. This reduction occurs in spite of the increase in count rate from below $\sim 600$ eV, probably caused by the appearance of the SSS [@krautter]. We tentatively interpret this feature as evidence that the density of the ejecta has declined, so that they have become optically thin to low-energy X-rays. We note, however, that the spectral resolution of the ACIS detector at low energies is limited by the quantum efficiency, and for studies of the soft component the LETGS is better suited.

  $\lambda$ (Å)   Energy (eV)   Possible Identification   Transition (lower level - upper level)
  --------------- ------------- ------------------------- -----------------------------------------
  33.74           367.5         C[vi]{}                   1s $^2$S$_{1/2}$ - 2p $^2$P$_{1/2,3/2}$
  29.53           419.9         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s2s $^3$S$_1$
  29.08           426.4         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s2p $^3$P$_{1,2}$
  28.79           430.7         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s2p $^1$P$_1$
  28.47           435.6         C[vi]{}                   1s $^2$S$_{1/2}$ - 3p $^2$P$_{1/2,3/2}$
  26.99           459.4         C[vi]{}                   1s $^2$S$_{1/2}$ - 4p $^2$P$_{1/2,3/2}$
  26.36           470.5         C[vi]{}                   1s $^2$S$_{1/2}$ - 5p $^2$P$_{1/2,3/2}$
  24.96           496.8         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s3p $^3$P$_1$
  24.90           498.0         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s3p $^1$P$_1$
  24.78           500.4         N[vii]{}                  1s $^2$S$_{1/2}$ - 2p $^2$P$_{1/2,3/2}$
  23.79           521.2         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s4p $^3$P$_1$
  23.77           521.7         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s4p $^1$P$_1$
  23.29           532.4         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s5p $^3$P$_1$
  23.28           532.7         N[vi]{}                   1s$^2$ $^1$S$_0$ - 1s5p $^1$P$_1$
  22.10           561.1         O[vii]{}                  1s$^2$ $^1$S$_0$ - 1s2s $^3$S$_1$
  21.80           568.5         O[vii]{}                  1s$^2$ $^1$S$_0$ - 1s2p $^3$P$_{1,2}$
  21.60           574.0         O[vii]{}                  1s$^2$ $^1$S$_0$ - 1s2p $^1$P$_1$
  18.97           653.6         O[viii]{}                 1s $^2$S$_{1/2}$ - 2p $^2$P$_{1/2,3/2}$
  16.01           774.7         O[viii]{}                 1s $^2$S$_{1/2}$ - 3p $^2$P$_{1/2,3/2}$

  : \[tab2\]Line Identification

LETG spectra {#letg}
============

Day 300 shows emission lines sitting on top of a continuum (see the top panel of Fig. \[letg\_energy\]). This arrangement makes disentangling the continuum from the lines, and thus line identification, difficult. As a result, we can only make qualitative arguments in this analysis. Table \[tab2\] gives a list of all the possible line identifications for the LETG data along with the associated wavelengths, energies, and atomic transitions (lower - upper states). Several unidentified lines exist between 400 and 500 eV, a characteristic that was also seen in the recent nova RS Oph as discussed by [@rsoph; @rsophshock]. We see lines from O[viii]{}, C[vi]{}, and N[vi]{}, as well as several unidentified lines. While the line at $\sim 500$ eV could be N[vii]{}, we interpret this line as N[vi]{} owing to the presence of the N[vi]{} He-like $\gamma$ (1s-4p) and $\delta$ (1s-5p) lines and the absence of the N[vii]{} $\beta$ (1s-3p) line at 640.5 eV. The strongest line in the spectrum taken on day 300, at 442 eV, is unidentified. There are also unidentified lines at $\sim$ 383, 393, 397, 404, 449, and 482 eV, as well as several other possible weak lines throughout the spectrum. The energy of the O[viii]{} line (656 eV) is well above the Wien tail of the continuum ($\sim$ 530eV), and the line can therefore not be photoexcited. At least this one line must therefore be purely collisional. Since the O[viii]{} line is much weaker than the lines at lower energies, it is reasonable to assume that the stronger lines are a mixture of radiative and collisional excitations. We constructed three sets of data from the observation made on day 304, as discussed in section \[reduction\], and their spectra are also presented in Fig. \[letg\_energy\].
The second panel shows the pre-flare spectrum, the third shows the flare spectrum, and the fourth shows the post-flare spectrum, each plotted with error bars and labels for the most likely line identifications. The spectrum during the pre-flare period again shows emission lines on top of a continuum. However, the N[vi]{} emission at 430 eV and 498 eV is stronger, while the O[vii]{} line at 571 eV and the C[vi]{} line at 367 eV are weaker than four days prior. The other lines, including all the unidentified lines, do not appear to change significantly in strength when compared to day 300. In order to isolate the spectrum of the flare, we subtracted the count rate spectra from before and after the flare from the data recorded during the flare event. The bottom panel of Fig. \[letg\_energy\] shows the difference spectrum between before and after the flare (in units of count rates per keV). This difference spectrum allows us to identify in which way the flare may have altered the emitting plasma, for example by heating or photoionization. Examining the plot, we see that the flare event had little lasting effect on the spectrum, yielding essentially the same spectrum as before the flare. This means that the difference between the flare and the pre-/post-flare count rate spectra yields the emission that originates from the plasma that produces the flare. This flare-only spectrum is shown in the third panel, and an emission line spectrum with little or no sign of a continuum can be seen. During the flare the count rate in the 449 eV unidentified line, which was the strongest feature in the day 300 spectra, doubles, and there are also increased count rates for the unidentified line at 393 eV, N[vi]{} at 430 eV, C[vi]{} at 459 eV, and N[vi]{} at 498 eV. Also, unidentified lines appear at 405 and 475 eV that were not seen in any of the other spectra.
The unidentified line at 442 eV is weak during the flare, while the C[vi]{} 435 eV line and the unidentified line at 482 eV are not seen in the flare at all. After the flare, most of the lines appear to return to near their initial pre-flare levels. There is a slight increase in the strength of the N[vi]{} 420 eV line as well as in the 383 and 393 eV unidentified lines, but no strong changes are seen in any of the other N[vi]{} lines. A significant portion of the total difference shown in the bottom panel of Fig. \[letg\_energy\] comes in the wings of the unidentified lines at 442 and 449 eV. There is a small increase in emission on either side of these lines after the flare, but there is no difference in the strength of the peaks. This may be due to some change in the continuum or an increase in the temperature. There is also a small amount of emission seen post-flare at 475 eV, which must be residual from the flare. Unfortunately, the flare-only spectrum is not well enough exposed to detect any changes in the continuum level, because the flare lasted only a short time [@drake:lc]. If the flare-only spectrum is primarily an emission line spectrum, then the flare could originate from the same regions that emit the emission lines that blend with the continuum from the white dwarf. This would imply that two physically distinct components are present. In view of the short time scales of the flare, the regions that produce the emission line component could be rather compact. Another possibility could be that holes in the ejecta temporarily allowed more ionizing continuum emission to reach the surrounding medium and increase the degree of ionization. This would lead to stronger emission lines, while some of the ionizing continuum emission could be radiated in a direction away from the line of sight. We emphasize, however, that the data are too limited for any strong conclusions, and these suggestions are thus highly speculative.
Spectral Models {#models}
===============

In this section we describe our attempts to find suitable spectral models. We started with the ACIS spectra and fitted isothermal APEC models [@smith01]. The best-fit parameters for the models shown in Fig. \[acis\_energy\] are given in Table \[tab3\] for each observation. In order to achieve satisfactory fits to the data, high abundances of N and O are required. While high abundances of N and O are not unusual in novae, the amount by which these elements have to be increased appears rather unrealistic (see, for example, the compilation of model predictions and observations of typical nova abundances listed by @JH98, table 5). The best fit models require an oxygen abundance that is 20-30 times solar and a nitrogen abundance of several hundred times solar. We stress that, in addition to the quoted statistical uncertainties, there are significant sources of systematic uncertainty. We estimate that the greatest source of uncertainty is the assumption of an isothermal plasma, while the temperature structure of the emitting plasma is likely more complex, comprising both high- and low-temperature components. Since the O and N lines are formed at temperatures significantly below the temperatures of the isothermal model (see Table \[tab3\]), these lines are formed rather inefficiently. The only way to reproduce these lines and the high-energy emission at the same time is to increase the O and N abundances. A two-temperature model with a cool and a hot component yields lower N and O abundances (because the N and O lines are formed at low temperatures, while these elements are fully ionized at higher temperatures); however, such a model has more parameters and yields no significant improvement in reproducing the data. Another source of uncertainty is the parameter $N_{\rm H}$. Lower values of $N_{\rm H}$ require less emission at soft energies, thus leading to a higher resultant temperature and higher N and O abundances.
The model shown in the bottom panel of Fig. \[acis\_energy\] (day 248) consists of an APEC and a blackbody component; however, we only show the APEC component. We found a good fit to the SSS component, yielding a combination of blackbody temperature and luminosity (see footnote in Table \[tab3\]) that is typically encountered when fitting blackbody curves to the SSS spectra of novae. The N abundance is not constrained, owing to the overlap of the blackbody component with the N[vii]{} line. Based on the blackbody parameters, we have tested an alternative model for the observations taken on days 134 and 187. We have included a blackbody component with the same parameters as those found for day 248. We kept the blackbody parameters fixed and iterated only the APEC model parameters and $N_{\rm H}$. We found the surprising result that better fits can be obtained, yielding higher values of $N_{\rm H}$ ($4.4\times10^{21}$cm$^{-2}$ and $5.5\times10^{21}$cm$^{-2}$, respectively) and a lower APEC temperature. The abundances of N and O are also lower with the 2-component model. This result raises the possibility that an SSS component could have been present all the time and was only hidden behind a higher column of neutral hydrogen. However, this exercise also demonstrates the sensitivity of models for such soft spectra to the assumed amount of interstellar absorption. This leads to a large systematic uncertainty that is difficult to assess.
  ---------------------------------------- ------------------ ------------------ --------------------
  Model Parameter                           134                187                248$^a$
  k$T$ (eV)                                 $630\,\pm\,40$     $750\,\pm\,50$     $607\,\pm\,34$
  $\log(VEM)$ (cm$^{-3}$)                   $55.7\,\pm\,0.1$   $55.3\,\pm\,0.1$   $55.23\,\pm\,0.02$
  $N_{\rm H}$ ($\times10^{21}$cm$^{-2}$)    $3.4\,\pm\,0.3$    $2.4\,\pm\,0.3$    $3.0\,\pm\,0.1$
  O abundance ($\times $solar)              $29\,\pm\,6$       $35\,\pm\,8$       $17\,\pm\,3$
  N abundance ($\times $solar)              $840\,\pm\,150$    $788\,\pm\,180$    $<112$
  $\chi^2_{\rm red}$                        2.8                1.5                1.97
  ---------------------------------------- ------------------ ------------------ --------------------

  : \[tab3\]Model Best Fit Parameters

$^a$Plus blackbody with $T_{\rm bb}=31\,\pm\,2$ and $\log(L_{\rm bol})=37.7\,\pm\,0.4$

The LETG spectra are also extremely difficult to model. We first attempted blackbody fits, but we found no satisfactory fits. The problem is that no spectral range can be identified that is free from line emission. We then tested a number of Cloudy models with a wide array of parameters, but none gave satisfactory results. The combination of optically thick plus optically thin plasma emission makes the LETG spectra of V1494 Aql particularly challenging. A promising approach is to use the PHOENIX atmosphere code, but models for spectra like the ones presented here do not yet exist. Optimization of the code to fit these data is in progress (vanRossum & Hauschildt, priv. comm.).

Discussion and Conclusions {#disc}
==========================

The evolution of V1494 Aql proceeded in two distinct phases. The first phase was characterized by hard X-ray emission, dominated by emission lines. Our APEC fits to the ACIS observations yielded satisfactory results, and the spectra imply that the ejected gas is an optically thin plasma in collisional equilibrium. It is possible that the source of emission is shocks within the ejecta, as has been proposed by [@obrien94].
Unfortunately, we cannot test the predicted decline in count rate, because the third ACIS observation was contaminated by the rising SSS, leaving us only with the two observations on days 134 and 187. Linear interpolation between these observations suggests a decline rate of 2.1 counts per second per year. For a strong shock, the post-shock temperature $T_s$ is given by $$T_s = \frac{3}{16} \frac{\bar m v_s^2}{\rm k}$$ where k is Boltzmann’s constant and $\bar m = 10^{-24}$g is the mean particle mass, including electrons [see, e.g., @bode06]. If the observed emission is induced by a shock, then the temperatures derived from the isothermal APEC models (Table \[tab3\]) correspond to a shock velocity of $700-800$ km s$^{-1}$. In a multitemperature plasma, the hottest component determines the shock velocity (see, e.g., @bode06). If our spectra allowed us to resolve more temperature components, the shock velocity derived from the hottest component would be slightly higher (see, e.g., @rsophshock), but not by much, since the average temperature of the isothermal models is dominated by the hottest component. Since these observations were taken more than three months after outburst, significant deceleration will have taken place, which explains why these velocities are lower than the early expansion velocities of $-1300$ km s$^{-1}$ found by [@moro99]. However, with only two velocity values, we cannot determine the power-law index for different scenarios. The second phase is the SSS phase, which started some time before day 248 after outburst and overlapped in time with the first phase. Fig. \[letg\_energy\] shows that some residual emission from the first phase can still be recognized at high energies on days 300 and 304. This is a similar situation to that encountered for RS Oph, where the shock emission was still detectable at high energies while the SSS spectrum dominated at low energies [@ness_2].
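The shock velocities quoted above follow from inverting the post-shock temperature relation, $v_s = \sqrt{16\,{\rm k}T_s/(3\bar m)}$. The short Python sketch below is purely illustrative (the function name and structure are ours, not part of the original analysis); it uses $\bar m = 10^{-24}$ g as in the text and the APEC temperatures of Table \[tab3\]:

```python
import math

def shock_velocity_kms(kT_eV, mbar_g=1e-24):
    """Invert T_s = (3/16) * mbar * v_s**2 / k for the shock velocity v_s.

    kT_eV  : post-shock temperature expressed as kT in eV
    mbar_g : mean particle mass including electrons (1e-24 g, as in the text)
    """
    kT_erg = kT_eV * 1.602e-12               # 1 eV = 1.602e-12 erg
    v_cm_s = math.sqrt(16.0 * kT_erg / (3.0 * mbar_g))
    return v_cm_s / 1e5                      # cm/s -> km/s

# APEC temperatures of Table [tab3] (days 134, 187, 248)
for kT in (630, 750, 607):
    print(f"kT = {kT} eV  ->  v_s ~ {shock_velocity_kms(kT):.0f} km/s")
```

The three fitted temperatures translate into shock velocities of roughly 720-800 km s$^{-1}$, consistent with the range quoted above.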
While the ACIS-S spectrum taken on day 248 shows a typical SSS spectrum, the details revealed by the [[*Chandra* ]{}]{}grating spectra are remarkably different from typical SSS spectra. For comparison, the SSS Cal 83 was observed with the same instrument, and [@lanz04] presented spectra that show continuum emission with absorption lines that can be fitted with atmosphere models, but little line emission can be seen. While V1494 Aql was the first CN to have been observed with high spectral resolution in X-rays, later grating observations of novae during their SSS phase revealed spectra more similar to that of Cal 83, e.g., V4743 Sgr [@ness_4743] or RS Oph [@ness_2]. Those spectra can be fit with stellar atmospheres [@lanz04; @petz05; @rsoph]. In contrast, the second-most prominent SSS, Cal 87, is also dominated by emission lines [e.g., @greiner04] and may be more similar to V1494 Aql. Since Cal 87 is an eclipsing binary, the viewing geometry may be an explanation for the different X-ray spectra. However, no spectral analysis of the grating spectra of Cal 87 has been presented, likely owing to the same complications that we encountered. Our photometric analysis revealed that the first phase of the evolution shows no periodic oscillations, while the SSS phase is modulated by short-period oscillations. The absence of such oscillations in the early observations supports the notion that they originate from the WD, which supports the interpretation by [@drake:lc] that these are non-radial g$^+$ pulsations. JGR and SS received partial support from NSF and NASA grants to ASU. JUN gratefully acknowledges support provided by NASA through [[*Chandra* ]{}]{} Postdoctoral Fellowship grant PF5-60039 awarded by the [[*Chandra* ]{}]{} X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060.

[^1]: http://cxc.harvard.edu/cdo/about\_chandra/\#ACIS
--- abstract: 'We present results from an SED analysis of two lensed high-$z$ objects, the $z=6.56$ galaxy HCM 6A behind the cluster Abell 370 discovered by Hu et al. (2002) and the triple arc at $z \sim 7$ behind Abell 2218 found by Kneib et al. (2004). For HCM 6A we find indications for the presence of dust in this galaxy, and we estimate the properties of its stellar populations (SFR, age, etc.) and the intrinsic [Ly$\alpha$]{} emission. From the “best fit” reddening ($E(B-V) \sim 0.25$), its estimated luminosity is $L \sim (1-4) \times 10^{11} \lsun$, in the range of luminous infrared galaxies. For the arc behind Abell 2218 we find a most likely redshift of $z \sim$ 6.0–7.2, taking into account both our photometric determination and lensing considerations. SED fits generally indicate a low extinction but do not strongly constrain the SF history. Best fits have typical ages of $\sim$ 3 to 400 Myr. The apparent 4000 Å break observed recently by Egami et al. (2004) from a combination of IRAC/Spitzer and HST observations can also be well reproduced with templates of young populations ($\sim$ 15 Myr or even younger) and does not necessarily imply old ages. Finally, we briefly examine the detectability of dusty lensed high-z galaxies with Herschel and ALMA.' date: '?? and in revised form ??' title: 'Stellar populations and [Ly$\alpha$]{} emission from lensed $z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}6$ galaxies' ---

Introduction
============

Little is known about the stellar properties, extinction, and the expected intrinsic [Ly$\alpha$]{} emission of distant, high redshift galaxies. Indeed, although it has in the recent past become possible through various techniques to detect sizeable numbers of galaxies at $z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}5$ (see e.g. the reviews of Taniguchi et al. 2003 and Spinrad 2003), the information available on these objects remains generally scant.
For example, in many cases the galaxies are just detected in two photometric bands, and [Ly$\alpha$]{} line emission, when present, serves to determine the spectroscopic redshift (e.g. Bremer et al. 2004, Dickinson et al. 2004, Bunker et al. 2004). The photometry is then basically used to estimate the star formation rate (SFR) assuming standard conversion factors between the UV restframe light and the SFR, and nothing is known about the extinction and the properties of the stellar population (such as age, detailed star formation history, etc.). At higher redshift ($z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}6$) even less information is generally available. Many objects are found by [Ly$\alpha$]{} emission, but remain weak or sometimes even undetected in the continuum (e.g. Rhoads & Malhotra 2001, Kodaira et al. 2003, Cuby et al. 2003, Ajiki et al. 2003, Taniguchi et al. 2004). In these cases the [Ly$\alpha$]{} luminosity can be determined and used to estimate a SFR, again using standard conversion factors. The [Ly$\alpha$]{} equivalent width is also estimated, providing some possible clues to the nature of these sources. However, this has led to puzzling results, e.g. for the sources from the LALA survey (Malhotra & Rhoads 2001, Rhoads et al. 2003), leaving largely open the question of the nature of these objects, their stellar populations, extinction, etc. Strong gravitational lensing is extremely “helpful” for a large number of problems discussed at this conference, including the present one. In particular, strong lensing has made it possible to detect several of the highest redshift galaxies known today (e.g. Ellis et al. 2001, Hu et al. 2002, Kneib et al. 2004, Pelló et al. 2004a, and the review of Pelló et al. 2003). Also, thanks to the lensing magnification, it has been possible to obtain photometric observations of reasonable quality in several bands for some of these objects. For example, it has even very recently been possible to image a $z \sim 7$ galaxy with the Spitzer observatory at 3.6 and 4.5 $\mu$m (Egami et al. 2004)!
As we’ll show below (Sects. 2 and 3), this allows us to perform a quantitative SED analysis to constrain properties of the stellar populations, such as age and star formation (hereafter SF) history (burst or constant SF?), their extinction, intrinsic [Ly$\alpha$]{} emission, etc. A detailed account of this work will be published elsewhere (Schaerer & Pelló 2004). As such, gravitational lensing provides a unique opportunity to learn more about some selected high-$z$ galaxies. If generalised and applied to larger samples in the near future, systematic studies of the properties of lensed high-$z$ galaxies could provide unique insights and complementary information to other deep/ultra-deep surveys targeting blank fields. Also, extensions to wavelengths beyond the optical and near-IR with existing facilities (e.g. in the radio, mm, and possibly sub-mm) and future observatories should be of great interest, as briefly outlined for Herschel and ALMA in Sect. 4.

Stellar populations and dust in a lensed $z=6.56$ starburst galaxy
==================================================================

The lensed $z=6.56$ galaxy HCM 6A was found by Hu et al. (2002) from a narrow-band survey in the field of the lensing cluster Abell 370. Its redshift is established from the broad-band SED, including a strong spectral break, and from the observed asymmetry of the detected emission line identified as [Ly$\alpha$]{}. We have recently analysed the SED of this object by means of quantitative SED fitting techniques using a modified version of the [*Hyperz*]{} code of Bolzonella et al. (2000) [^1]. The observed $VRIZJHK^\prime$ data are taken from Hu et al. (2002). The gravitational magnification of the source is $\mu=4.5$ according to Hu et al. The main free parameters of the SED modeling are the spectral template, extinction, and the reddening law.
Empirical and theoretical templates, including in particular starbursts and QSOs (SB+QSO templates), and predictions from synthesis models of Bruzual & Charlot (BC+CWW group) and from Schaerer (2003, hereafter S03) are used. Overall, the SED of HCM 6A (see Fig. 1) is “reddish”, showing an increase of the flux from Z to H and even to K[^2]. From this simple fact it is already clear qualitatively that one is driven towards stellar populations with a) “advanced” age and little extinction, or b) constant or young star formation plus extinction. However, for HCM 6A case a) can be excluded, as no [Ly$\alpha$]{} emission would be expected in this case. Quantitatively, the best solutions obtained for the three “spectral template groups” are shown in the left panel of Fig. 1. The solutions shown correspond to bursts of ages $\sim$ 50–130 Myr and little or no extinction. However, as just mentioned, solutions lacking young (${\raisebox{-0.5ex}{$\,\stackrel{<}{\scriptstyle\sim}\,$}}$ 10 Myr) massive stars can be excluded since [Ly$\alpha$]{} emission is observed. The best fit empirical SB+QSO template shown corresponds to the spectrum of a metal-poor starburst galaxy with an extinction of $A_V \sim 1$. On the basis of the present observations a narrow line (type II) AGN cannot be ruled out. To reconcile the observed SED with [Ly$\alpha$]{}, a young population or constant SF is required. In any of these cases, fitting the “reddish” SED requires a non-negligible amount of reddening. Although all best fit models require reddening, this result is at present indicative and needs to be confirmed. Quantitatively (e.g. for constant star formation, solar metallicity models, and the Calzetti law), $A_V$ is typically $\sim$ 0.5–1.8 mag at the 68 % confidence level. A somewhat smaller extinction can also be obtained if the steeper SMC extinction law of Prévot et al. (1984) is adopted. Zero extinction cannot be ruled out at the $\sim 2 \sigma$ level.
Better photometric accuracy, especially in the JHK bands, is needed to reduce the present uncertainties and hence confirm the indication for dust. From the best fit constant SF models we deduce an extinction-corrected star formation rate of the order of SFR(UV) $\sim$ 11–41 $M_\odot$ yr$^{-1}$ for a Salpeter IMF from 1 to 100 $M_\odot$, or a factor 2.55 higher for the often adopted lower mass cut-off of 0.1 $M_\odot$. For continuous SF over timescales $t_{\rm SF}$ longer than $\sim$ 10 Myr, the total (bolometric) luminosity output is typically $\sim 10^{10}$ $L_\odot$ per unit SFR (in $M_\odot$ yr$^{-1}$) for a Salpeter IMF from 1–100 $M_\odot$, quite independently of metallicity. The total luminosity associated with the observed SF is therefore $L \sim (1-4) \times 10^{11} \lsun$, in the range of luminous infrared galaxies (LIRG). For $t_{\rm SF} \sim$ 10 Myr the estimated stellar mass is $M_\star \approx t_{\rm SF} \times SFR \sim (1-4) \times 10^8$ $M_\odot$. Other properties such as the “[Ly$\alpha$]{} transmission” can also be estimated from this approach. A relatively high [Ly$\alpha$]{} transmission of $\sim$ 20–50 %, but possibly up to $\sim$ 90 %, is estimated from our best fit models (see Schaerer & Pelló 2004). It is interesting to examine the SEDs predicted by the various models at longer wavelengths, including the rest-frame optical domain, which is potentially observable with the sensitive IRAC camera onboard the Spitzer Observatory and other future missions. In the right panel of Fig. 1 we plot again the 3 best fits. We see that these solutions have fluxes comparable to or above the detection limit of IRAC/Spitzer [^3]. On the other hand, the strongly reddened constant SF or young burst solutions do not exhibit a Balmer break and are hence expected to show fluxes just below the IRAC sensitivity at 3.6 $\mu$m and significantly lower at longer wavelengths. As [Ly$\alpha$]{} emission is expected only for the reddened SEDs, the latter solutions are predicted to apply to HCM 6A.
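The order-of-magnitude estimates above (stellar mass from $M_\star \approx t_{\rm SF} \times SFR$, and the bolometric luminosity from the $\sim 10^{10}\,L_\odot$ per unit SFR scaling) can be checked in a few lines of Python; the variable names are ours, and the inputs are the values quoted in the text:

```python
# Quick check of the HCM 6A estimates quoted above (inputs from the text):
# SFR(UV) ~ 11-41 Msun/yr, t_SF ~ 10 Myr, and ~1e10 Lsun per unit SFR.
t_sf_yr = 10e6       # continuous star formation over t_SF ~ 10 Myr
l_per_sfr = 1e10     # bolometric Lsun per (Msun/yr), Salpeter IMF 1-100 Msun

sfr_range = (11.0, 41.0)                             # Msun/yr
mass_range = [sfr * t_sf_yr for sfr in sfr_range]    # M* = t_SF * SFR, in Msun
lum_range = [sfr * l_per_sfr for sfr in sfr_range]   # L = SFR * 1e10, in Lsun

print(f"M* ~ {mass_range[0]:.1e} - {mass_range[1]:.1e} Msun")  # ~ (1-4)e8 Msun
print(f"L  ~ {lum_range[0]:.1e} - {lum_range[1]:.1e} Lsun")    # ~ (1-4)e11 Lsun
```

Both ranges reproduce the $(1-4)\times10^8\,M_\odot$ and $(1-4)\times10^{11}\,L_\odot$ (LIRG-range) values quoted above.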
If possible despite the presence of other nearby sources, IRAC/Spitzer observations of HCM 6A down to the detection limit or observations with other future satellites could allow to verify our prediction and therefore provide an independent (though indirect) confirmation of the presence of dust in this high-z galaxy. A lensed galaxy at $z \sim$ 6–7 behind Abell 2218 ================================================= This interesting triply imaged object, a possible $z \sim 7$ galaxy, has recently been discovered by Kneib et al. (2004, hereafter KESR) from deep $Z$ band observations with ACS/HST. In the meantime it has also been observed with Spitzer (see Richard , these proceedings; Egami  2004). The currently available observations include (undetected), , , $J$, , , and 3.6 and 4.5  with IRAC/Spitzer. The photometry from these authors has been adopted here to analyse the properties of this object in a similar way as for HCM 6A. In practice, small differences are found in the published photometry; we therefore adopt three different SEDs (SED1-3) to describe this object (see Schaerer & Pelló 2004 for details). No emission line has so far been detected for Abell 2218 KESR. Its spectroscopic redshift remains therefore presently unknown but the well-constrained mass model for the cluster strongly suggests a redshift $z \sim$ 6.5–7 for this source. The magnification factors of both images a and b is $\mu=25 \pm 3$, according to KESR. As a spectroscopic redshift has not been obtained (yet) for this galaxy we here examine its photometric redshift estimate. In Fig. \[fig\_7\] (left) we show the photometric redshift probability distributions  for the three SEDs (SED1-3) of Abell 2218 KESR using three spectral template groups and adopting a minimum photometric error of 0.15 mag. For each redshift,  quantifies the quality of the best fit model obtained varying all other parameters (i.e. extinction, , spectral template amoung template group). 
Given the excellent HST (WFPC2, ACS and NICMOS) photometry, the probability distribution is quite well defined: the photometric redshift ranges typically between $z_{\rm phot} \sim$ 5.5 and 7.3. Outside of the plotted redshift range the probability is essentially zero. To summarise (but cf. Schaerer & Pelló 2004), given the absence of a spectroscopic redshift, a fair number of good fits to the observations of Abell 2218 KESR are found when considering all the free parameters. Three of them are illustrated in Fig. \[fig\_7\] (right). The main conclusions from these “best fits” are:

- Generally the determined extinction is negligible or zero, quite independently of the adopted extinction law. For a few empirical templates we find good fits requiring an additional $A_V \sim$ 0.2–0.6 mag, depending on the adopted extinction law.

- Although burst models generally fit somewhat better than those with constant star formation among the theoretical templates, the data do not strongly constrain the star formation history.

- Typical ages between $\sim$ 15 and 400 Myr are obtained. A reasonable 1-$\sigma$ upper bound on the age of $\sim$ 650 Myr can be obtained assuming constant star formation. Young solutions ($\sim$ 15 Myr and even younger) are obtained with burst models or some empirical templates. The relatively modest Balmer break observed between the HST and Spitzer broad-band photometry does not necessarily imply old ages.

- Given degeneracies of the restframe UV spectra between age and metallicity (cf. above), no clear indication of the galaxian metallicity can be derived, in contrast to the claim of KESR. Good fits to the available data can even be found with solar metallicity starburst templates.

- Depending on the star formation history and age, one may or may not expect intrinsic [Ly$\alpha$]{} emission, i.e. an important [H [ii]{}]{} region around the object. The apparent absence of observed [Ly$\alpha$]{} emission therefore does not provide much insight.
The theoretical templates can also be used to estimate the stellar mass involved in the starburst, or the star formation rate when constant star formation is assumed. For this aim we assume a typical redshift of $z=6.6$ and the magnification $\mu=25$ determined by KESR. For constant SF we obtain $SFR \sim (0.9-1.1)$ $M_\odot$ yr$^{-1}$ (for a Salpeter IMF from 1 to 100 $M_\odot$). For the best fit ages of $\sim$ 400–570 Myr the total mass of stars formed would then correspond to $\sim (3.6-6.3) \times 10^8$ $M_\odot$. The mass estimated from best fit burst models (of ages $\sim$ 6–20 Myr) is slightly smaller, $M_\star \sim (0.3 - 1) \times 10^8$ $M_\odot$. If we assume a Salpeter IMF with a lower mass cut-off of 0.1 $M_\odot$, the mass and SFR estimates would be higher by a factor 2.55, in good agreement with the values derived by KESR and Egami et al. In all the above cases the total luminosity (unlensed) is typically $L_{\rm bol} \sim 2 \times 10^{10}$ $L_\odot$.

$z \protect{\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}6$ starbursts: with Herschel and ALMA, and now $\ldots$
===========================================================================================================================

Let us now assume that starburst galaxies with dust exist at $z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}6$ and briefly examine their observability with facilities such as Herschel and ALMA. To do so we must assume a typical galaxy spectrum including the dust emission. For simplicity we here adopt the SED model by Melchior et al. (2001), based on PEGASE.2 stellar modeling and on the Désert et al. (1990) dust model, and including also synchrotron emission. Their predicted SED for a galaxy with an SFR and/or total luminosity quite similar to that estimated above for HCM 6A is shown in Fig. 3, placed at redshifts $z=$ 0.1, 0.5, 1, 2, 3, 5, 10, 20, and 30. The thresholds of the JWST (here NGST), PACS and SPIRE onboard Herschel, and ALMA are also presented. Figure taken from Melchior et al.
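The factor 2.55 between the two IMF normalizations, and the stellar masses for constant SF, can be verified directly. The sketch below is our own illustrative code (not from the original analysis); it simply uses the analytic mass integral of a Salpeter IMF, $dN/dm \propto m^{-2.35}$:

```python
def salpeter_mass_integral(m_low, m_up, alpha=2.35):
    """Mass contained in a Salpeter IMF dN/dm ~ m**-alpha (arbitrary normalization)."""
    p = 2.0 - alpha                       # exponent of the mass integrand m * m**-alpha
    return (m_up**p - m_low**p) / p

# ratio of total mass for lower cut-offs of 0.1 Msun vs 1 Msun (upper cut-off 100 Msun)
factor = salpeter_mass_integral(0.1, 100.0) / salpeter_mass_integral(1.0, 100.0)
print(f"IMF mass correction factor: {factor:.2f}")   # ~2.55, as quoted above

# stellar masses for constant SF, using the SFR and age ranges quoted above
masses = [sfr * age_yr for sfr, age_yr in ((0.9, 400e6), (1.1, 570e6))]
print([f"{m:.2e}" for m in masses])                  # ~ (3.6-6.3)e8 Msun
```

The integral ratio reproduces the factor 2.55, and the $SFR \times$ age products recover the $(3.6-6.3)\times10^8$ solar-mass range.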
(2001), with kind permission. Figure 3 shows the exquisite sensitivity of ALMA in the various bands, allowing in principle an easy detection of such objects up to redshift $\sim$ 10 or even higher! On the other hand, with the sensitivity of PACS and SPIRE on Herschel, blank field observations of such an object are limited to smaller redshifts ($z {\raisebox{-0.5ex}{$\,\stackrel{<}{\scriptstyle\sim}\,$}}$ 1–4). However, already with a source magnification of $\mu \sim$ 3–10 or more, the “template galaxy” shown in Fig. 3 becomes observable with SPIRE at $\sim$ 200–670 $\mu$m. In fact, such magnifications (and even higher ones) are not exceptional in the central parts of massive lensing clusters. E.g. in our near-IR search conducted in two ISAAC fields ($\sim$ 2.5$\times$2.5 arcmin$^2$) of two lensing clusters, a fair number ($\sim$ 10–20) of $z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}$ 6–7 galaxy candidates with $\mu {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}5$ are found (Pelló et al. 2004b and these proceedings, Richard et al. 2004, in preparation). More than half of them actually have magnifications $\mu {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}10$. Such simple estimates already show quite clearly the potential of strong gravitational lensing to extend the horizon of SPIRE/Herschel observations beyond redshift $z {\raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$}}5$! Obviously a more rigorous feasibility study must also address the following issues: How frequently is dust present in high-z galaxies, and up to what redshift? We now have some indications for dust in one lensed $z=6.56$ galaxy (see Section 2) and of course in high-z quasars. But how general/frequent is this? How typical is the SED adopted above? The long wavelength emission due to dust depends on various parameters such as metallicity, the dust/gas ratio, geometry, the ISM pressure, etc.
Furthermore, spatial resolution and source confusion are key issues which must be addressed and which should vary quite strongly between blank fields and cluster environments. Last, but not least, the field of view of the various instruments is a determining factor for the efficiency with which high-z candidates can be found and studied. Several of these issues have already been partly addressed earlier (cf. the 2000 Herschel conference proceedings of Pilbratt et al. 2001, also Blain et al. 2002). It is evident that various ground-based and space-borne facilities and instruments will be used together to provide an optimal coverage in wavelength, spatial resolution and field size, and to obtain imaging as well as spectroscopy. Near-IR wide field imagers and near-IR multi-object spectrographs on 8-10m class telescopes and later on ELTs will undoubtedly “team up” with the JWST, Herschel and ALMA to explore the first galaxies in the Universe and their evolution from the Dark Ages to Cosmic Reionisation. The wonderful power offered by gravitational lensing will continue to provide deeper or “enhanced” views of prime interest for the exploration of the early Universe. Ajiki, M., et al., 2003, , 126, 2091 , 2000, , 313, 559 , Physics Reports, 369, 111 , 2000, , 363, 476 , 2004, , in press \[astro-ph/0409488\] Bremer, M.N, et al., 2004, , 347, L7 Bunker, A., et al., 2004, , 355, 374 , 2000, , 533, 682 , 2003, , 405, L19 , 1999, , 237, 215 Dickinson, M., et al. 2004, , 600, L99 , 2004, , submitted \[astro-ph/0411117\] 2001, , 560, L119 , 2002, , 123, 1247 , 1999, in “The Birth of Galaxies”, B. Guiderdoni, et al. (eds), Editions Frontieres, \[astro-ph/9902141\] , 2002, , 568, L75; Erratum: , 576, L99 , 2004, , 607, 697 (KESR) , 2003, PASJ, 55, L17 , 2001, in “The Promise of the Herschel Space Observatory”, ESA SP-460, p.
467 \[astro-ph/0102086\] , Eds., 2001, “The promise of the Herschel Space Observatory”, ESA-SP 460 ., 2003, “Gravitational Lensing: a unique tool for Cosmology”, D. Valls-Gabaud, J.P. Kneib, Eds., ASP Conf. Series, in press \[astro-ph/0305229\] ., 2004a, , 416, L35 ., 2004b, IAU Symposium No. 225, “The Impact of Gravitational Lensing on Cosmology”, Y. Mellier and G. Meylan, Eds., in press \[astro-ph/0410132\] 1984, , 132, 398 Rhoads, J.E., Malhotra, S., 2001, , 563, L5 Rhoads, J.E., et al., 2003, , 125, 1006 , 2003, , 397, 527 , 2004, in “Starbursts: from 30 Doradus to Lyman break galaxies”, Eds. de Grijs, González Delgado,. ApSS, in press , 2004, A&A, submitted , 2004, MNRAS, submitted \[astro-ph/0403585\] [Steidel, C. C., et al.]{}, 2003, 592, 728 Taniguchi, Y., et al., 2004, PASJ, submitted \[astro-ph/0407542\] , 2004, ApJ, 615, L17 [^1]: To convert the observed/adjusted quantities to absolute values we adopt the following cosmological parameters: $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. [^2]: The significance of a change of the SED slope between JH and HK seems weak, and difficult to understand. [^3]: See [http://ssc.spitzer.caltech.edu/irac/sens.html]{}
--- author: - | J. Blaha[^1], N. Geffroy, and Y. Karyotakis\ Laboratoire d’Annecy-le-Vieux de Physique des Particules, Université de Savoie, CNRS/IN2P3,\ 9 Chemin de Bellevue 74980 Annecy-le-Vieux, France\ E-mail: title: 'Impact of dead zones on the response of a hadron calorimeter with projective and non-projective geometry' ---

Introduction
============

The design of the future particle physics detectors for the International Linear Collider (ILC) [@ilc] is optimized for the use of the Particle Flow Algorithm (PFA) [@pfa]. With this strategy, a jet energy resolution of $30\%/\sqrt{E}$ can be reached, which allows the reconstruction of the invariant masses of W’s, Z’s, and tops with resolutions close to the natural widths of these particles. On the other hand, new challenges are placed on the construction of the detector subsystems, particularly on the calorimeters, which must have an imaging capability allowing the assignment of energy cluster deposits to charged or neutral particles with high accuracy. In order to fulfill this requirement, several detector technologies for the active part of the calorimeters are currently under development. In parallel, the work on the engineering design of whole calorimeters and their assembly methods is well advanced. The aim of this study is to find an optimal mechanical design of the hadronic calorimeter (HCAL) for the SiD detector [@sid] which takes into account engineering as well as physics constraints. Therefore, the impact of the various HCAL mechanical designs on the calorimeter response has been evaluated. Since the calorimeter is composed of independent modules which, for mechanical reasons, create discontinuities in the detection, the study focuses on the hadronic shower behavior close to the boundaries between these modules, for different calorimeter geometries and for different sizes of the dead areas along these boundaries.
Calorimeter geometry and simulation tools
=========================================

Projective and non-projective geometry
--------------------------------------

The SiD hadronic calorimeter is a sampling calorimeter, which is located inside the magnet coil and surrounds the electromagnetic calorimeter. The barrel part is divided into twelve azimuthal modules, each one consisting of 40 layers with a passive part composed of 1.89 cm thick stainless steel absorber and an active part equipped with RPC chambers (the SiD baseline detector choice for the HCAL) or alternative detectors, such as Micromegas, GEM or Scintillator [@sid]. Thus the total calorimeter depth is 4.5 $\lambda$ and the overall dimensions are: $l=6036$ mm, $R_{int}=1419$ mm, and $R_{ext}=2583$ mm, where $l, R_{int}, R_{ext}$ denote the calorimeter length and its internal and external radii, respectively. Two calorimeter geometries have been proposed so far. The first one is a projective geometry consisting of twelve identical trapezoidal modules whose edges point to the beam axis, see Fig. \[geometry\] left. The second geometry was designed in order to avoid cracks in the calorimeter between the modules. It is therefore an off-pointing or non-projective geometry, where the module boundaries are not projective with respect to the beam axis. It consists of six trapezoidal and six rectangular modules, which are arranged in an alternating pattern as shown in Fig. \[geometry\] right. The shape of the trapezoidal modules of the non-projective geometry is such that the internal and external surfaces of both proposed geometries are identical, a so-called dodecagon. A detailed description of the geometry of the SiD HCAL can be found in Ref. [@geometry]. ![A quarter of the projective (left) and non-projective (right) calorimeter geometry, both displayed in black color. The simulation configuration of two modules and their position with respect to the interaction point is shown in red color.
Blue and green arrows represent cones with a vertex angle of 2$^{\circ}$ or 15$^{\circ}$, respectively. The impinging particles are randomly generated within these vertex angles.[]{data-label="geometry"}](./figures/projective.eps "fig:"){width="0.49\columnwidth"} ![A quarter of the projective (left) and non-projective (right) calorimeter geometry, both displayed in black color. The simulation configuration of two modules and their position with respect to the interaction point is shown in red color. Blue and green arrows represent cones with a vertex angle of 2$^{\circ}$ or 15$^{\circ}$, respectively. The impinging particles are randomly generated within these vertex angles.[]{data-label="geometry"}](./figures/nonprojective.eps "fig:"){width="0.49\columnwidth"}\ Simulation set-up ----------------- ### Geometry configurations Since a description of the barrel calorimeter (including 12 modules) with all necessary mechanical details (the different shapes of the modules and the supporting stringers) had not been available for a detailed simulation study, a simplified geometry using only two adjacent rectangular modules has been implemented. These modules are placed at two different angles with respect to the interaction point, as illustrated in Fig. \[geometry\], which correspond to the boundary positions of the projective and non-projective module configurations, respectively. Because the study is focused on the boundary effects with single impinging particles, the simplified geometry with one module transition can be considered a good approximation of two modules with trapezoidal shape as well. Moreover, due to the axial symmetry of the calorimeter, the results obtained with the simplified geometry can be extended to the whole calorimeter having twelve modules. The overall dimensions of each rectangular module are $2000\times2000\times1076$ mm$^3$, where the calorimeter depth of 1076 mm corresponds to 4.5 $\lambda$.
Each module consists of 40 absorbers made from 1.89 cm thick stainless steel plates interleaved with 8 mm gaps for Micromegas chambers, which were chosen as the calorimeter active medium for this study. The active part of the chamber is 6.5 mm thick and includes: 1.2 mm of PCB material, 2.3 mm of material for the chips and other passive components, and 3 mm of gas (Ar/Isobutane 95/5). A detailed description of the geometry of the existing Micromegas prototype, as implemented in the simulation, can be found in Ref. [@micromegas]. Modules in the SiD calorimeter are held together by supporting stringers made of 2 cm thick stainless steel plates, which contribute to the dead areas in the calorimeter. In order to evaluate how they affect the calorimeter response, configurations with different stringer thicknesses were considered for the projective and non-projective geometry, respectively. The first configuration is ideal, with two rectangular modules interconnected without supporting plates and hence without any cracks between them. The second and third configurations have 1 and 2 cm thick supporting plates, respectively. The last configuration is similar to the third one, but the material of the electromagnetic calorimeter, equivalent to about 1 $\lambda$, was added in front of the hadronic calorimeter. In this configuration the electromagnetic calorimeter is used only as a passive detector component, in order to estimate the fraction of the hadronic shower deposited within its material. ### Generated data samples Two sets of Monte Carlo data were generated by the GEANT4-based simulator SLIC [@slic] with the `QGSP_BERT` physics list for the four configurations described above and for the projective and non-projective geometry. For each set, 50 GeV negative pions and muons were generated randomly within a vertex cone angle of about 2$^{\circ}$ and 15$^{\circ}$, directed from the interaction point toward the module boundary, see Fig. \[geometry\].
Therefore the impact area around the boundary was restricted to a disc with a radius of 2.5 cm for the smaller cone angle and about 19.1 cm for the larger cone angle. Precise values of the cone angles and impact areas for the projective and non-projective geometry are displayed in Fig. \[appxGeometry\] (Appx. A). The small impact area was studied because most of the hadronic showers then develop close to the boundary between the two modules, and hence its influence can be directly quantified and compared. On the other hand, the large cone angle simulates all possible directions of incoming particles and thus approximates a calorimeter covering the whole polar angle. The generated data, 10,000 events for the smaller angle and 20,000 events for the larger angle for each calorimeter configuration, were subsequently reconstructed and analyzed by a standalone program using the org.lcsim framework [@slic]. ### Digitization The Micromegas detector has a very fine lateral segmentation (about 1$\times$1 cm$^2$), which is read out by digital electronics embedded directly on the detector. A calorimeter with digital readout is based on the principle that the number of counted hits is proportional to the energy deposited in the calorimeter. In the given case, one hit is counted in a 1$\times$1 cm$^2$ cell only when the energy measured in the cell is higher than a threshold. The readout threshold used within this study is 0.5 of the MIP MPV. In order to be as close as possible to the real detector conditions, a full digitization of the signals is performed in the simulation. This includes mainly the conversion of the energy deposited in the 3 mm gas gap from GeV to charge in pC, and the electronics digitization. The most important parameters, such as chamber geometry, mesh transparency, gas amplification, electronics noise, and chamber efficiency, were derived from dedicated laboratory and test beam measurements.
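The thresholding principle of the digital readout can be sketched as follows. This is a minimal illustration only, not the actual digitization driver; the MIP most probable charge used here is a placeholder value, not a measured parameter of the chambers.

```python
# Illustrative sketch of digital readout: one hit per cell above threshold.
MIP_MPV_PC = 4.0                   # assumed MIP most probable charge in pC (placeholder)
THRESHOLD_PC = 0.5 * MIP_MPV_PC    # readout threshold of 0.5 MIP MPV, as in the study

def count_hits(cell_charges_pc):
    """Count one hit per 1x1 cm^2 cell whose collected charge exceeds the threshold."""
    return sum(1 for q in cell_charges_pc if q > THRESHOLD_PC)

# Toy example: charges (in pC) collected in a few cells crossed by a shower.
print(count_hits([0.5, 1.0, 2.5, 6.0, 3.9, 0.1]))  # only the cells above 2.0 pC count
```

In a digital calorimeter the energy estimate is then built from the hit count alone, which is why the distributions shown below are histograms of the total number of hits rather than of deposited energy.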
![Distributions of the total number of hits for 50 GeV single pions generated within a 2$^\circ$ vertex cone angle for different simulation configurations: ideal geometry ([*NoFeP*]{}), geometries with various thicknesses of the supporting plates ([*1cmFeP*]{} and [*2cmFeP*]{}), and geometry including the electromagnetic calorimeter ([*2cmFeP\_WE*]{}). The distributions on the left are for the projective geometry, on the right for the non-projective geometry, respectively.[]{data-label="distribution1"}](./figures/pro_nbHits_40layers_withBoundaryCells_2DegCone.eps "fig:"){width="0.49\columnwidth"} ![Distributions of the total number of hits for 50 GeV single pions generated within a 2$^\circ$ vertex cone angle for different simulation configurations: ideal geometry ([*NoFeP*]{}), geometries with various thicknesses of the supporting plates ([*1cmFeP*]{} and [*2cmFeP*]{}), and geometry including the electromagnetic calorimeter ([*2cmFeP\_WE*]{}). The distributions on the left are for the projective geometry, on the right for the non-projective geometry, respectively.[]{data-label="distribution1"}](./figures/nonPro_nbHits_40layers_withBoundaryCells_2DegCone.eps "fig:"){width="0.49\columnwidth"}\ Comparison of the projective and non-projective geometry ======================================================== Calorimeter response -------------------- Figure \[distribution1\] shows distributions of the total number of hits counted in the calorimeter for 50 GeV single pions generated within a 2$^\circ$ vertex cone angle, for different simulation configurations and for the projective and non-projective geometry. In the case of the projective geometry, a smaller number of hits, and thus less visible energy, is seen as the thickness of the supporting plate increases, see Fig. \[distribution1\] left. This is because the trajectory of the primary pions is close to the boundary and therefore most hadronic showers develop near the boundary.
Thus a significant part of the shower is absorbed in the inactive material of the supporting plate and is not measured by the calorimeter. Moreover, in the configuration with the electromagnetic calorimeter the smallest number of hits is counted, due to events in which showers already start inside the electromagnetic calorimeter. That is why its distribution has a broad left-hand tail. Of course, this part of the shower energy can be retrieved if the electromagnetic calorimeter is active. On the other hand, in the case of the non-projective geometry, the number of hits collected in the calorimeter with and without the supporting plate is almost identical, see Fig. \[distribution1\] right. This is due to the fact that most primary pions cross the boundary between the modules before starting a shower, and also because the showers develop in the direction of the primary particle. Therefore a very small fraction of the showers, if any, is lost in the supporting plate. ![Same as Fig. \[distribution1\], but for 50 GeV single pions generated within a 15$^\circ$ vertex cone angle. The distributions on the left are for the projective geometry, on the right for the non-projective geometry, respectively.[]{data-label="distribution2"}](./figures/pro_nbHits_40layers_withBoundaryCells_15DegCone.eps "fig:"){width="0.49\columnwidth"} ![Same as Fig. \[distribution1\], but for 50 GeV single pions generated within a 15$^\circ$ vertex cone angle. The distributions on the left are for the projective geometry, on the right for the non-projective geometry, respectively.[]{data-label="distribution2"}](./figures/nonPro_nbHits_40layers_withBoundaryCells_15DegCone.eps "fig:"){width="0.49\columnwidth"}\ A different situation arises in the case of the large vertex cone angle (15$^\circ$), for which both the projective and non-projective geometry show similar behavior, see Fig. \[distribution2\].
Though the direction of the primary pions is different for the two geometries, the total number of counted hits is almost the same. This is because the volume of the supporting plate is small with respect to the total volume in which the energy of the hadronic showers can be deposited. Thus the number of hits lost due to the crack between the modules is small in comparison with the average total number of registered hits. Impact of the detector dead zone along the module boundary ---------------------------------------------------------- Due to mechanical constraints, the thickness of the stringers which support the calorimeter modules is limited to 2 cm of stainless steel. Thus the thickness of the stringer defines the minimum size of the dead area between modules. Another contribution to the dead zone is given by the thickness of the frame around the active medium, which can be, depending on the technology of the active layer, up to 2 cm. Therefore the total width of the dead zone is between 2 and 6 cm. Fig. \[meanVsPlate\] shows the calorimeter response to single pions, measured as the mean value of the total number of hits registered in the calorimeter over all events, versus the thickness of the supporting plate for the projective and non-projective geometry. The figure also compares results for a fully active layer and a layer with a 1 cm dead zone along the edge next to the supporting plate. ![Mean number of counted hits as a function of the absorbing plate thickness for configuration with and without readout cells along the boundary, and for the projective and non-projective geometry.
Results for pions generated within 2$^\circ$ and 15$^\circ$ cone vertex angle are shown in the left and right figure, respectively.[]{data-label="meanVsPlate"}](./figures/nbHitsMeanVsPlate_40layers_2DegCone.eps "fig:"){width="0.49\columnwidth"} ![Mean number of counted hits as a function of the absorbing plate thickness for configuration with and without readout cells along the boundary, and for the projective and non-projective geometry. Results for pions generated within 2$^\circ$ and 15$^\circ$ cone vertex angle are shown in the left and right figure, respectively.[]{data-label="meanVsPlate"}](./figures/nbHitsMeanVsPlate_40layers_15DegCone.eps "fig:"){width="0.49\columnwidth"}\ ![Mean number of counted hits as a function of the dead zone size for the configuration with 2 cm supporting plate and for the projective and non-projective geometry. Results for pions generated within 2$^\circ$ and 15$^\circ$ cone vertex angle are shown in the left and right figure, respectively.[]{data-label="meanVsDeadZone"}](./figures/nbHitsMeanVsDeadZone_40layers_2DegCone.eps "fig:"){width="0.49\columnwidth"} ![Mean number of counted hits as a function of the dead zone size for the configuration with 2 cm supporting plate and for the projective and non-projective geometry. Results for pions generated within 2$^\circ$ and 15$^\circ$ cone vertex angle are shown in the left and right figure, respectively.[]{data-label="meanVsDeadZone"}](./figures/nbHitsMeanVsDeadZone_40layers_15DegCone.eps "fig:"){width="0.49\columnwidth"}\ Generally, for the reasons explained in the previous section, the calorimeter response decreases with increasing supporting plate thickness. For the small vertex cone angle (see Fig. \[meanVsPlate\] left), the non-projective geometry, where showers are farther from the boundary, shows only a small decrease in the number of hits if an additional dead zone of 2 cm in the active material (1 cm in each module) is considered.
In contrast, the projective geometry shows a significant decrease in response (about 10%), because most showers take place close to the boundary. For the large vertex cone angle (see Fig. \[meanVsPlate\] right), the same behavior has been found for the projective as well as for the non-projective geometry, with a response variation of -2.9%/cm of stringer thickness. The response falls off by about 2% for the configuration with a 2 cm supporting plate in the case of a 2 cm dead zone in the active layer (1 cm in each module). In order to quantify how the size of the dead area in the active layer affects the calorimeter response, the size of the dead zone has been varied from 0 up to 4 cm for each calorimeter module. Thus the total size, including both the supporting plate and the dead areas in the active medium, has been varied between 2 and 10 cm. As expected, a large difference has been found between the projective and non-projective geometry for the small vertex cone angle, as shown in Fig. \[meanVsDeadZone\] left. In the case of the large vertex cone angle (see Fig. \[meanVsDeadZone\] right), both configurations behave similarly, with a response variation of -4.4%/cm of dead area. ![Mean number of counted hits versus distance from the 2 cm thick supporting plate between modules for configuration without (left) and with (right) readout cells along the boundary, and for projective (purple) and non-projective (green) geometry, respectively. The thickness of the supporting plate is not included in the distance from boundary (x-axis).[]{data-label="aroundCrack"}](./figures/sid2mod2cmFePnoB_ProVsNonPro.eps "fig:"){width="0.47\columnwidth"} ![Mean number of counted hits versus distance from the 2 cm thick supporting plate between modules for configuration without (left) and with (right) readout cells along the boundary, and for projective (purple) and non-projective (green) geometry, respectively.
The thickness of the supporting plate is not included in the distance from the boundary (x-axis).[]{data-label="aroundCrack"}](./figures/sid2mod2cmFePwithB_ProVsNonPro.eps "fig:"){width="0.47\columnwidth"}\ Response versus distance from a crack ------------------------------------- The shape of the calorimeter response around a crack for the projective and non-projective geometry without readout cells along the boundary (1 cm of dead space along the boundary in each module) is shown in Fig. \[aroundCrack\] left. The smallest response is observed for the projective geometry, for which the number of hits decreases down to 40% of the maximal value measured far from the crack. As expected, the minimum of the response is seen close to the crack. In the case of the non-projective geometry, the calorimeter response drops by 20%. Moreover, due to the off-pointing geometry, the minimal value of the response is located about 10 cm away from the crack. A similar behavior is expected in the whole non-projective HCAL, where the decrease in response due to a crack will appear mainly in the trapezoidal modules, with a smaller part seen in the rectangular modules. Finally, if the dead space in the active medium is not considered, the calorimeter response is about 5% higher than in the previous case (see Fig. \[aroundCrack\] right). Comparison for muons -------------------- Another important question in comparing the projective and non-projective geometry is the fraction of muons that will not be identified in the calorimeter due to cracks. This is an important issue for the PFA, where the total jet energy is computed from individual particle energy contributions in a jet. For the performance of the PFA it will be very useful if muons crossing the calorimeter can be easily identified and matched to those measured in the muon chambers.
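As a rough back-of-the-envelope model of why cracks matter for muon identification, one can count muon hits layer by layer, discarding the layers in which the track lies inside the dead crack region. The 4-hit reconstruction criterion and the full-efficiency assumption below are illustrative simplifications, not the actual simulation:

```python
N_LAYERS = 40  # number of HCAL layers; an ideal muon leaves one hit per layer

def muon_hits(frac_layers_in_crack, layer_efficiency=1.0):
    """Toy model: hits are lost in proportion to the track length inside the crack."""
    live_layers = N_LAYERS * (1.0 - frac_layers_in_crack)
    return int(round(live_layers * layer_efficiency))

def is_reconstructed(hits, min_hits=4):
    # Illustrative criterion: a muon track needs at least `min_hits` hits.
    return hits >= min_hits

print(muon_hits(0.0))                    # muon missing the crack: all 40 layers fire
print(muon_hits(1.0))                    # muon fully inside a projective crack: no hits
print(is_reconstructed(muon_hits(0.5)))  # even with half the layers dead, still found
```

The qualitative point of this sketch is that only a track staying inside the dead region for most of its length falls below the reconstruction threshold, which is geometrically possible in a projective crack but not in an off-pointing one.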
![Distributions of the total number of hits (left) and mean number of hits versus distance from the 2 cm thick supporting plate between modules (right). The distributions are for 50 GeV single muons generated within a 15$^\circ$ vertex cone angle and for configuration without readout cells along the boundary, and for projective (black or purple) and non-projective (red or green) geometry, respectively.[]{data-label="muons"}](./figures/ProAndNonPro_nbHits_40layers_noBoundary2.eps "fig:"){width="0.49\columnwidth"} ![Distributions of the total number of hits (left) and mean number of hits versus distance from the 2 cm thick supporting plate between modules (right). The distributions are for 50 GeV single muons generated within a 15$^\circ$ vertex cone angle and for configuration without readout cells along the boundary, and for projective (black or purple) and non-projective (red or green) geometry, respectively.[]{data-label="muons"}](./figures/proVsNonProMuons50GeV.eps "fig:"){width="0.47\columnwidth"}\ Therefore a test with 50 GeV negative single muons (20,000 events) generated within a 15$^\circ$ vertex cone angle has been performed; results for the SiD baseline configuration, having a 2 cm thick supporting plate and 1 cm of dead zone on each module side, are shown in Fig. \[muons\]. For an ideal muon event, which has passed through the calorimeter without crossing the module boundary, one hit per layer is measured (this results in 40 hits in the SiD HCAL). In reality, the number of hits can be slightly smaller due to the detection inefficiency of the active layer, or a bit higher due to interactions along the muon track. If a muon goes through the crack, the number of registered hits decreases proportionally to the distance the muon travels in the crack. Fig. \[muons\] left shows that for the projective geometry a significant number of muons are not registered at all.
On the other hand, for every event in the non-projective geometry, a clear muon track with more than about 25 hits is recorded. In other words, if one considers that at least 4 hits are necessary for muon reconstruction in a calorimeter, then about 5% of muon events are lost in the case of the projective geometry and none for the non-projective geometry. It needs to be noted that the fraction of muons which are not detected due to the crack will be slightly smaller in a real detector, due to the presence of the solenoid magnetic field. From the shape of the calorimeter response around the crack (Fig. \[muons\] right) it is seen that for the projective geometry the response is affected in an area of several cm around the crack, where the response drops sharply to zero registered hits. In the case of the non-projective geometry, the affected area is larger, around 20 cm, but the number of registered hits decreases by about 20%, which still allows one to identify a muon passing close to the module boundary. Summary and conclusions ======================= The projective and non-projective HCAL geometries, which are proposed for the SiD detector, have been investigated in order to determine the most suitable one. The comparison of various design configurations allowed us to evaluate the impact of the supporting stringers and dead areas along the module boundary, which has been studied with single pions and muons. The impact of the boundary between modules is clearly seen in an area of about 20 cm around a crack. This corresponds to a cone angle of 8$^\circ$, which means that for 12 modules the affected area is about 26% of the whole polar angle. Thus, it can be concluded that the decrease of the calorimeter response close to the boundary (for small angles of impinging particles) is significantly smaller for the non-projective geometry. If the impinging particles are distributed over large polar angles, the global response is similar for both geometries.
In this case a response variation of -2.9%/cm of stringer thickness and -4.4%/cm of dead area has been found. This confirms that for the baseline SiD design (2 cm thick stringers and 1 cm of dead area in each module) the average response will decrease by about 7 to 8% around a crack. The advantage of the non-projective geometry is that no muons will be lost in a crack, contrary to the about 5% of muon events which will not be identified in the projective geometry. On the other hand, the drawback of the non-projective geometry is the mechanical design requiring two different module shapes, in comparison with the projective geometry where all the modules are identical. All these aspects need to be carefully weighed in order to reach a final decision. Finally, it needs to be pointed out that, because this study has been performed for a simplified geometry description with single pions and muons, it provides a very important, but only first, picture of how the cracks may impact the calorimeter performance. Also the absence of the magnetic field, and of information from the other detector subsystems, needs to be taken into account for the reconstruction performance in a real detector. Therefore, it is advisable to perform a complementary study with the whole SiD detector and a real physics analysis of events with jets. We would like to thank our colleague A. Espargilière for implementing the Micromegas digitization driver that has been used within this study. We would also like to express our thanks to the SLIC and lcsim developers, whose software has been used and tested in this study and who have always promptly answered all our technical questions concerning this simulation and analysis framework. [99]{} , Int. Linear Collider Reference Design Report M. A.
Thomson, [*Particle flow calorimetry and the PandoraPFA algorithm*]{}, NIM [**A611**]{} (2009) SiD detector, <http://silicondetector.org/display/SiD/home> Geffroy et al., [*Proposal of a new HCAL geometry avoiding cracks in the calorimeter*]{}, LAPP-TECH-2008-02, Sep. 2008 C. Adloff [*et al.*]{}, [*Micromegas chambers for hadronic calorimetry at a future linear collider*]{}, 2009 [*JINST*]{} 4 P11023 SLIC and org.lcsim, <http://www.lcsim.org/software/slic/> and <http://lcsim.org/> Detailed description of the geometry as it is used in simulation ================================================================ ![Description of the projective (top) and non-projective (bottom) geometry as used in the simulation. In blue and red colors are shown cross-sections of the cones defining the direction of the particles coming from the interaction point (IP) and impinging on the front face of the calorimeter. For each cone the cone angle and impact area are shown.[]{data-label="appxGeometry"}](./figures/Projective.eps "fig:"){width="0.96\columnwidth"}\ ![Description of the projective (top) and non-projective (bottom) geometry as used in the simulation. In blue and red colors are shown cross-sections of the cones defining the direction of the particles coming from the interaction point (IP) and impinging on the front face of the calorimeter. For each cone the cone angle and impact area are shown.[]{data-label="appxGeometry"}](./figures/Non_projective.eps "fig:"){width="0.96\columnwidth"}\ [^1]: Corresponding author.
--- abstract: | A continuous map $f$ on a compact metric space $X$ induces in a natural way the map $\f$ on the hyperspace $\mathcal K(X)$ of all closed non-empty subsets of $X$. We study the question of transmission of chaos between $f$ and $\f$. We deal with generic, generic $\varepsilon$-, dense and dense $\varepsilon$-chaos for interval maps. We prove that all four types of chaos transmit from $f$ to $\f$, while the converse transmission from $\f$ to $f$ is true for generic, generic $\varepsilon$- and dense $\varepsilon$-chaos. Moreover, the transmission of dense $\varepsilon$- and generic $\varepsilon$-chaos from $\f$ to $f$ is true for maps on general compact metric spaces.\ address: 'Mathematical Institute, Silesian University, 746 01 Opava, Czech Republic' author: - Michaela Mlíchová - Marta Štefánková title: On generic and dense chaos for maps induced on hyperspaces --- Introduction and preliminary results ==================================== Let $(X,d)$ be a compact metric space endowed with the metric $d$. A dynamical system $(X,f)$, where $f: X \to X$ is continuous, induces in a natural way the system $(\mathcal K(X), \f)$ on the hyperspace $\mathcal K(X)$ consisting of all closed non-empty subsets of $X$. A natural question arises: what are the connections between the [*individual*]{} dynamics given by $(X,f)$ and the [*collective*]{} dynamics given by $(\mathcal K(X), \f)$? Hyperspaces ----------- Let us recall some definitions and properties concerning hyperspaces. We will give here only those needed in the following text. For the proofs and further study of the topology of hyperspaces see, e.g., [@EM], [@IN] and [@Mac].
The map $\f : \mathcal K(X) \to \mathcal K(X)$ induced by a continuous map $f :X \to X$ on the space $\mathcal K(X) = \{ K\subset X;\ K \neq\emptyset {\rm \ is\ compact} \}$ is defined by $\f(K) = f(K) = \{ f(x);\ x\in K \}$, $K\in\mathcal K(X)$; note that $\mathcal K(X)$ is invariant for $\f$, and if $f$ is continuous then $\f$ is also continuous. Let $x\in X$ be a point, $A\subset X$ a non-empty set, and $\varepsilon > 0$. Define the [*distance from the point $x$ to the set $A$*]{} by ${\rm dist}(x, A) := {\rm dist}(\{ x\}, A) = \inf\{d(x, y);\ y\in A \}$, and the [*$\varepsilon$-neighborhood of the set $A$*]{} by $N(A, \varepsilon) := \{ x\in X;\ {\rm dist}(x, A)<\varepsilon\}$. The [*Hausdorff distance*]{} on $\mathcal K(X)$ is defined as follows. For any $A, B \in \mathcal K(X)$, $$d_H(A, B) := \inf \{ \varepsilon \ge 0;\ A\subset N(B, \varepsilon) {\rm \ and\ } B\subset N(A,\varepsilon) \}.$$ Let us note that $(\mathcal K(X), d_H)$ is a compact metric space. Denote by $\mathbb N$ the set of positive integers. Let $S_1,\dots , S_n \subset X$, $n\in \mathbb N$, be a finite collection of non-empty sets. Define a subset $\langle S_1, \dots , S_n \rangle$ of $\mathcal K(X)$ by $$\langle S_1, \dots , S_n \rangle := \{ K\in\mathcal K(X);\ K\subset\bigcup_{i=1}^n S_i,\ K\cap S_i \neq\emptyset {\rm \ for\ } i=1, \dots , n \}.$$ The family $$\mathcal B = \{\langle U_1, \dots , U_n \rangle ;\ U_1,\dots , U_n {\rm \ are\ open\ in\ } X {\rm\ and\ } n\in\mathbb N \}$$ of subsets of $\mathcal K(X)$ is a basis for the so called [*Vietoris topology*]{} on $\mathcal K(X)$. Note that the topology induced by the Hausdorff metric and the Vietoris topology for $\mathcal K(X)$ coincide. The first systematic investigation of dynamical properties of induced maps was done by Bauer and Sigmund in their paper “Topological dynamics of transformations induced on the space of probability measures” from 1975 (see [@BauSig]). 
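Since finite subsets of $X$ are compact, the definitions above can be illustrated directly on finite sets. The following sketch computes $\f(K)$ and the Hausdorff distance $d_H$ for finite subsets of $[0,1]$; the tent map is used as a sample $f$, an arbitrary illustrative choice not tied to the paper:

```python
def dist(x, A):
    """dist(x, A) = inf{ |x - y| : y in A }, for a finite set A."""
    return min(abs(x - y) for y in A)

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of the real line."""
    return max(max(dist(a, B) for a in A), max(dist(b, A) for b in B))

def induced(f, K):
    """Induced map on the hyperspace: f_bar(K) = { f(x) : x in K }."""
    return {f(x) for x in K}

tent = lambda x: 1 - abs(2 * x - 1)  # sample continuous map on [0, 1]

A, B = {0.0, 0.25}, {0.0, 0.5}
print(hausdorff(A, B))   # 0.25
print(induced(tent, A))  # {0.0, 0.5}
```

For finite sets the infimum in the definition of $d_H$ is attained, so this computation matches the definition exactly; for general compact sets one would need to discretize first.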
The authors studied which topological properties of the individual system $(X,f)$ (like, e.g., transitivity, mixing properties, specification properties, distality) are “inherited” by the collective system $(\mathcal K(X), \f)$. Note that, as the title suggests, the authors of this paper simultaneously considered systems induced on the space of probability measures. There are also many papers concerning connections between the chaotic behavior of individual and collective systems. Properties like Devaney chaos, Li-Yorke chaos, distributional chaos, $\omega$-chaos or topological chaos have been considered (see, e.g., [@GKLOP], where also an extensive list of references is given). The main aim of the present paper is to study connections between two other kinds of chaos, namely generic and dense chaos, of $f$ and $\f$. Generic and dense chaos {#chaos} ----------------------- Let $f\in C(X)$, the class of continuous maps $X\to X$, and let $\varepsilon > 0$. In the following, we will use notation taken from [@S1]: $$C_1(f) := \{ (x, y) \in X^2;\ \liminf_{n\to\infty} d(f^n(x), f^n(y)) = 0 \},$$ $$C_2(f) := \{ (x, y) \in X^2;\ \limsup_{n\to\infty} d(f^n(x), f^n(y)) > 0 \},$$ $$C_2(f, \varepsilon) := \{ (x, y) \in X^2;\ \limsup_{n\to\infty} d(f^n(x), f^n(y)) > \varepsilon \},$$ $$C(f) := C_1(f) \cap C_2(f),$$ $$C(f, \varepsilon) := C_1(f) \cap C_2(f, \varepsilon).$$ Note that $C(f)$ (resp. $C(f, \varepsilon)$) is the set of [*Li-Yorke pairs*]{} (resp. [*$\varepsilon$-Li-Yorke pairs*]{}). In this notation, the well-known definition of chaos in the sense of Li and Yorke reads as follows: A map $f\in C(X)$ is [*LY-chaotic*]{} if there exists an uncountable set $S$ such that $C(f) \supset S\times S \setminus \{ (x, x);\ x\in X \}$. In the 80s, A. Lasota proposed a new concept of Li-Yorke chaos, the so-called generic chaos, see [@Pio1]. Inspired by this idea, Ľ. Snoha [@S1] introduced three other variants of this notion.
Recall that a set is of [*first category*]{} if it is a union of a countable family of nowhere dense sets; a set that is not of first category is a [*second category*]{} set. A set is [*residual*]{} if its complement is a first category set or, equivalently, if it contains a dense $G_{\delta}$ subset. \[def\] A map $f\in C(X)$ is called 1. [*generically chaotic*]{}, if the set $C(f)$ is residual in $X^2$, 2. [*generically $\varepsilon$-chaotic*]{}, if the set $C(f,\varepsilon)$ is residual in $X^2$, 3. [*densely chaotic*]{}, if the set $C(f)$ is dense in $X^2$, 4. [*densely $\varepsilon$-chaotic*]{}, if the set $C(f,\varepsilon)$ is dense in $X^2$. Properties of generic chaos for interval maps have been studied in depth in [@S1]. Snoha showed there that an interval map is generically chaotic if and only if it is generically $\varepsilon$-chaotic for some $\varepsilon>0$, if and only if it is densely $\varepsilon$-chaotic for some $\varepsilon>0$. Moreover, he characterized such maps in terms of the behavior of subintervals of $I = [0,1]$. Densely chaotic interval maps have been characterized in [@S2]. It has been proved there, among other things, that in the class of piecewise monotone maps with a finite number of pieces, dense chaos and generic chaos coincide. Properties of generically and densely chaotic interval maps are summarized in Theorems \[th1\] and \[th2\]. Note that the minimum of the topological entropies of both generically and densely chaotic interval maps is $(1/2)\log 2$, see [@S1] and [@Rue]. Properties of generically $\varepsilon$-chaotic maps on metric spaces have been studied in [@Mur]. Murinová showed there that many of the properties of generically $\varepsilon$-chaotic interval maps from [@S1] can be extended to a large class of metric spaces, e.g., that generic $\varepsilon$-chaos is equivalent to dense $\varepsilon$-chaos; on the other hand, there is a convex continuum in the plane on which generic chaos and generic $\varepsilon$-chaos are not equivalent.
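Snoha's characterization in terms of subintervals can be illustrated numerically. The sketch below iterates a grid of sample points from a short interval under the full tent map, a standard example of a generically chaotic interval map; the diameter of the image quickly approaches 1, in the spirit of the lower bounds on ${\rm diam\,} f^n(J)$ appearing in Theorem \[th1\]. This is a numerical illustration under the stated assumptions, not a proof:

```python
def tent(x):
    """Full tent map T(x) = 1 - |2x - 1| on [0, 1]."""
    return 1 - abs(2 * x - 1)

def image_diam(f, a, b, n, grid=2001):
    """Approximate diam f^n([a, b]) by iterating a grid of sample points."""
    pts = [a + (b - a) * i / (grid - 1) for i in range(grid)]
    for _ in range(n):
        pts = [f(x) for x in pts]
    return max(pts) - min(pts)

# A short interval expands under iteration until its image nearly covers [0, 1].
for n in (0, 4, 8, 12):
    print(n, image_diam(tent, 0.30, 0.31, n))
```

Grid sampling only approximates the image of the interval from inside, and floating-point iteration of the tent map is reliable only for moderately many steps, which is why the iteration counts here are kept small.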
Let us recall here characterizations of generically (resp. densely) chaotic interval maps proved by Snoha. Note that Theorem \[th1\] is rewritten from [@S1], since we use almost all properties occurring there (the only properties we do not need are (d) and (e) but they are interesting in themselves). The properties in Theorem \[th2\] are chosen from [@S2], Theorem 1.2, and the subsequent text in the way most convenient for our purposes. Let $I$ be the compact unit interval $[0, 1]$. By an interval we mean a nondegenerate (not necessarily compact) interval lying in $I$. If $J$ is an interval then diam$J$ denotes its length. The [*distance of two sets*]{} $A, B \subset I$ is defined by dist$(A,B) := \inf \{ |x-y|;\ x\in A,\ y\in B \}$, and recall that the distance from a point $a$ to a set $B$ is dist$(a,B) =\ $dist$(\{ a\},B)$. A compact interval $J$ is an [*invariant transitive interval of*]{} $f$ if $f(J)\subset J$ and the restriction of $f$ to $J$ is topologically transitive. For any set $A$, int$(A)$ (resp. cl$(A)$) denotes the interior (resp. closure) of $A$. Denote Orb$(f, A) := \bigcup_{n=0}^{\infty} f^n (A)$. \[th1\] [([[@S1], Theorem 1.2)]{}]{} Let $f \in C(I)$. The following conditions are equivalent: 1. $f$ is generically chaotic, 2. for some $\varepsilon>0$, $f$ is generically $\varepsilon$-chaotic, 3. for some $\varepsilon>0$, $f$ is densely $\varepsilon$-chaotic, 4. $C_1(f)$ is dense in $I^2$ and $C_2(f)$ is a second category set in any interval $J^2 \subset I^2$, 5. $C_1(f)$ is dense in $I^2$ and for some $\varepsilon >0$, $C_2(f, \varepsilon)$ is dense in $I^2$, 6. the following two conditions are fulfilled simultaneously: 1. for every two intervals $J_1, J_2$, $\liminf \limits_{n \to \infty} {\rm dist} (f^n(J_1), f^n(J_2))=0$, 2. there exists an $a>0$ such that for every interval $J$,\ $\limsup \limits _{n \to \infty} {\rm diam\,} f^n (J)>a $, 7. the following two conditions are fulfilled simultaneously: 1. 
there exists a fixed point $x_0$ of $f$ such that for every interval $J$, $\lim \limits_{n \to \infty} {\rm dist} (f^n(J), x_0)=0$, 2. there exists a $b>0$ such that for every interval $J$,\ $\liminf \limits _{n \to \infty} {\rm diam\,} f^n (J)>b$, 8. the following two conditions are fulfilled simultaneously: 1. $f$ has a unique invariant transitive interval or two invariant transitive intervals having one point in common, 2. for every interval $J$ there is an invariant transitive interval $T$ of $f$ such that ${\rm Orb} (f, J) \cap {\rm int} (T) \neq \emptyset$. Moreover, the equivalences ${\rm (b)} \Leftrightarrow {\rm (c)} \Leftrightarrow {\rm (e)} \Leftrightarrow {\rm (f)} $ hold with the same $\varepsilon$ and with $a = \varepsilon$ in [(f-2)]{}. \[th2\] [@S2] A function $f \in C(I)$ is densely chaotic if and only if the following three conditions are fulfilled simultaneously: 1. there is a fixed point $x_0$ of $f$ such that for every interval $J$, $$\lim\limits_{n \to \infty} {\rm dist} (f^n(J), x_0)=0,$$ 2. for every interval $J$, $\liminf_{n\to \infty} {\rm diam\,} f^n (J)>0$, 3. every one-sided punctured neighbourhood of the point $x_0$ contains points $x, y$ with $(x, y) \in C(f)$ and moreover, if $x_0 \in {\rm int} (I)$ then every neighbourhood of $x_0$ contains points $x<x_0<y$ with $(x, y)\in C(f)$. In Section 2 we show that, for interval maps, if $\f$ is generically chaotic then $f$ is generically chaotic as well. In Section 3 we prove that, for maps on general compact metric spaces, dense $\varepsilon$-chaoticity of $\f$ implies that $f$ has this property, too. Section 4 concerns the opposite implications; we show that, for interval maps, dense (resp., dense $\varepsilon$-) chaos of $f$ implies dense (resp., dense $\varepsilon$-) chaos of $\f$. In Section 5 we give some results concerning the question of transmission of dense chaos from $\f$ to $f$.
We also provide a scheme in which the obtained results, together with their corollaries, are presented in a clearly arranged form. Generically chaotic $\f$ implies generically chaotic $f$ ======================================================== In this section we will prove the following \[main1\] Let $f \in C(I)$ be such that the induced map $\f$ is generically chaotic. Then the function $f$ is also generically chaotic. To prove Theorem \[main1\] we need several lemmas. The next lemma, which follows easily from uniform continuity of $\f$, is a version of Lemmas 4.1 and 4.2 from [@S1]. \[new1\] Let $f \in C(I)$, $g=f^k$ for some positive integer $k$, and let $\f$ and $\g$ be induced by $f$ and $g$. Moreover, let $\mathcal A, \mathcal B \subset \mathcal{K}(I)$ be non-empty sets. Then 1. $\liminf\limits_{ n \to \infty}{\rm dist} (\f^n(\mathcal A), \f^n(\mathcal B))=0$ iff $\liminf\limits_{ n \to \infty}{\rm dist} (\g^n(\mathcal A), \g^n(\mathcal B))=0$, 2. $\lim\limits_{ n \to \infty}{\rm dist} (\f^n(\mathcal A), \f^n(\mathcal B))=0$ iff $\lim\limits_{ n \to \infty}{\rm dist} (\g^n(\mathcal A), \g^n(\mathcal B))=0$, 3. $\liminf \limits _{n \to \infty} {\rm diam\,} \f^n (\mathcal A)=0$ iff $\liminf \limits _{n \to \infty} {\rm diam\,} \g^n (\mathcal A)=0$, 4. $\limsup \limits _{n \to \infty} {\rm diam\,} \f^n (\mathcal A)=0$ iff $\limsup \limits _{n \to \infty} {\rm diam\,} \g^n (\mathcal A)=0$, 5. $C_1(\f)=C_1(\g)$ and $C_2(\f)=C_2(\g)$, 6. $\f$ is generically (resp. densely) chaotic if and only if $\g$ is generically (resp. densely) chaotic. \[lm3\][([@S1], Lemma 4.3)]{} Let $f \in C(I)$. Then the following three conditions are equivalent: 1. $C_1(f)$ is residual in $I \times I$, 2. $C_1(f)$ is dense in $I \times I$, 3. for every two intervals $J_1, J_2$, $\liminf \limits_{n \to \infty} {\rm dist} (f^n(J_1), f^n(J_2))=0$ (i.e., condition (f-1) from Theorem \[th1\]). \[new3\] Let $f \in C(I)$ and let $\f$ be induced by $f$.
If $C_1(\f)$ is residual in $\mathcal K(I) \times \mathcal K(I)$, then $C_1(f)$ is residual in $I \times I$. Let $J_1$ and $J_2$ be arbitrary intervals. Since $C_1(\f)$ is residual, there exist non-empty closed sets $U_1 \subset J_1$, $U_2 \subset J_2$ such that $$\liminf\limits_{n \to \infty} d_H (\f ^n (U_1), \f ^n(U_2))=0.$$ Obviously, $\liminf\limits_{n\to \infty}{\rm dist}(f ^n (U_1), f ^n(U_2))=0$ and thus $f$ satisfies [(iii)]{} from Lemma \[lm3\].\ The following property is easy. \[top\] In arbitrary topological space, if $B \subset A$ is dense in $A$, then $\mathcal K(B)$ is dense in $\mathcal K(A)$. The first part of the proof of the following lemma might seem to be identical to the proof of Lemma 4.8 in [@S1]. But, in fact, our lemma has different assumptions, in the proof we sometimes work with $\f$ (not with the original $f$) and, moreover, in the conclusion of our proof we need some sets constructed in the first part. Hence we give here the full version of this proof. \[lm8\] Let $f \in C(I)$ and let $\f$ be induced by $f$. Let $x_0 \in I$ be a fixed point of $f$ and let $\f$ be generically chaotic. Then there exists a $\delta>0$ such that no interval containing $x_0$ and with diameter less than $\delta$ is $f$-invariant. Since the closure of an invariant interval is an invariant interval with the same diameter, it suffices to prove the claim of our lemma for compact intervals. Assume on the contrary that for every $\delta >0$ there is a compact invariant interval $J(\delta)$ containing $x_0$ and with diameter less than $\delta$. Then infinitely many of the intervals $J(1/n)$, $n =1, 2, \ldots$, have the right endpoints greater than $x_0$ or infinitely many of them have the left endpoints less than $x_0$. Without loss of generality we may suppose the first possibility. Further, observe that the intersection of two compact invariant intervals is a compact invariant interval. 
Now it is not difficult to see that there exists a sequence of invariant intervals $J_n=[x_0-a_n, x_0+b_n]$, $n=1, 2, \ldots$, where $\lim_{n\to \infty}a_n=0$, $\lim_{n\to \infty}b_n=0$, and for every $n$, $0<b_{n+1}<b_n, 0 \le a_{n+1} \le a_n$, and $a_{n+1}=a_n$ if and only if $a_n=0$. Consider two cases. [**Case 1.**]{} For every $n$, $a_n>0$. Then for every $n$ we have $J_{n+1} \subset {\rm int} J_n$. Let $m$ be a positive integer. By Lemmas \[new3\] and \[lm3\], the set $C_1(f)$ is residual in $I \times J_{m+1}$. Thus there exists a set $B_m \subset I$ such that $B_m$ is residual in $I$ and for every $x \in B_m$ there is $y \in J_{m+1}$ with $\liminf_{n \to \infty} |f^n(x)-f^n(y)|=0$. Since $J_{m+1} \subset {\rm int} J_m$ and the interval $J_m$ is invariant, we can see that for every $x \in B_m$ there exists a positive integer $m(x)$ such that ${\rm Orb}(f, f^{m(x)}(x))\subset J_m$. Now we consider the set $B=\bigcap_{m=1}^\infty B_m$. It is residual in $I$ and it is easy to see that for every $x \in B$, $\lim_{n \to \infty} f^n(x)=x_0$. [**Case 2.**]{} For some $n$, $a_n=0$. Without loss of generality we may assume that $a_1=0$ and consequently $a_n=0$ for all $n$. Now we cannot use the inclusions $J_{n+1} \subset {\rm int} J_n$ from Case 1. But it suffices to take into account that $f(J_1) \subset J_1$, and analogously as in Case 1 we can show that the orbits of points from $J_1$ generically converge to $x_0$, i.e., that there exists a set $B \subset J_1$, $B$ residual in $J_1$, such that $\lim_{n \to \infty} f^n(x)=x_0$ for all $x\in B$. We can see that in either case there are an interval $A \subset I$ and a set $B \subset A$, $B$ residual in $A$, such that for any $x \in B$, $\lim_{n \to \infty}f^n(x)=x_0$. So $B$ contains an intersection of countably many open dense sets $G_n$, and hence $\mathcal K(B) \supset \mathcal K\left(\bigcap_{n=1}^\infty G_n\right)$.
By the definition of $\mathcal K(\cdot)$ we have $$\begin{aligned} \mathcal K\left(\bigcap_{n=1}^\infty G_n\right) &=& \{P \in \mathcal K(A): P \subset G_n\ {\rm for\ any}\ n\}\\ &=& \bigcap_{n=1}^\infty \{P \in \mathcal K(A): P \subset G_n\}= \bigcap_{n=1}^\infty \langle G_n\rangle.\end{aligned}$$ Since the sets $\langle G_n\rangle$ are open and, by Lemma \[top\], they are dense in $\mathcal K(A)$, the set $\mathcal K(B)$ is residual in $\mathcal K(A)$. Obviously, $\mathcal K(B) \times \mathcal K(B)$ is residual in $\mathcal K(A) \times \mathcal K(A)$, and thus $C_2(\f)$ is of first category in $\mathcal K(A) \times \mathcal K(A)$. This contradicts our assumption that $\f$ is generically chaotic (i.e., $C_2(\f)$ is residual). The following three lemmas from [@S1] will be used in the proof of the subsequent Lemma \[lm16\]. \[lm7\][([@S1], Lemma 4.7)]{} Let $f\in C(I)$ and $J$ be a compact interval with $\limsup_{n\to \infty} {\rm diam\,} f^n(J)>0$. Then ${\rm Orb}(f, J)$ contains a periodic point of $f$. Moreover, if condition (f-1) from Theorem \[th1\] is fulfilled then ${\rm Orb}(f, J)$ contains a periodic point of $f$ with period $1$ or $2$. \[lm9\][([@S1], Lemma 4.9)]{} Let $f\in C(I)$, let $x_0$ be a fixed point of $f$ and let condition (f-1) from Theorem \[th1\] be fulfilled, i.e., for every two intervals $J_1, J_2$, $\liminf \limits_{n \to \infty} {\rm dist} (f^n(J_1), f^n(J_2))=0$. Let there exist arbitrarily small $f$-invariant intervals arbitrarily close to the point $x_0$. Then there exist arbitrarily small $f$-invariant intervals containing the point $x_0$. \[lm12\][([@S1], Lemma 4.12)]{} Let $f \in C(I)$ and $J$ be an interval. Then the following two conditions are equivalent: 1. there are $x, y \in J$ with $\liminf\limits_{n \to \infty} |f^n(x)-f^n(y)|>0$, 2. there are $x, y \in J$ with $\limsup\limits_{n \to \infty} |f^n(x)-f^n(y)|>0$. Further, the following two conditions are equivalent: 1. $\liminf\limits_{n \to \infty} {\rm diam\,} f^n(J)>0$, 2.
$\limsup\limits_{n \to \infty} {\rm diam\,} f^n(J)>0$, and either of the conditions [(i)]{}, [(ii)]{} implies either of the conditions [(iii)]{}, [(iv)]{}. Moreover, if $J$ is a compact interval then all the conditions [(i)]{} – [(iv)]{} are equivalent. The following lemma together with its proof is based on [@S1], Lemma 4.16, the part concerning implication (vi) & (f-1) $\Rightarrow$ (i). \[lm16\] Let $f \in C(I)$ and let $\f$ be induced by $f$. Assume that $\f$ is generically chaotic. Then there exists a real number $a>0$ such that for any interval $J \subset I$, $\limsup_{n \to \infty} {\rm diam\, } f^n(J)>a$, i.e., $f$ satisfies the condition (f-2) from Theorem \[th1\]. By [@S1], Lemma 4.1 (the original version of our Lemma \[new1\], for $f$), it suffices to prove the claim of our lemma for $g=f^2$, i.e., that there exists $a>0$ such that for every interval $J \subset I$, $\limsup_{n \to \infty} {\rm diam\,} g^n(J)>a$. Moreover, we can consider only the intervals containing fixed points of $g$. Indeed, since $\f$ is generically chaotic, i.e., $C_2(\f)$ is residual in $\mathcal K(I) \times \mathcal K(I)$, we have $\limsup_{n\to \infty} {\rm diam\,} f^n(J)>0$ for any interval $J$, and thus also $$\limsup_{n\to \infty} {\rm diam\,} f^n({\rm cl}(J)) > 0.$$ By Lemma \[lm7\], ${\rm Orb}(f, {\rm cl}(J))$ contains a periodic point of $f$ with period $1$ or $2$, hence ${\rm Orb}(g, {\rm cl}(J))$ contains a fixed point of $g$, and consequently $g^s({\rm cl}(J))$ contains a fixed point of $g$ for some $s$; it suffices to take into account that $$\limsup_{n \to \infty} {\rm diam\,} g^n(J)=\limsup_{n \to \infty} {\rm diam\,} g^n({\rm cl}(J))=\limsup_{n \to \infty} {\rm diam\,} g^{sn}({\rm cl}(J)).$$ Let $\mathcal J$ be the collection of all intervals containing a fixed point of $g$. For any fixed point $p$ of $g$, let $\mathcal J(p)$ be the collection of all intervals $J \in \mathcal J$ containing $p$.
To prove that $g$ satisfies (f-2) we show that $$\inf \{ \limsup_{n \to \infty} {\rm diam\,} g^n(J): J \in \mathcal J\}>0.$$ To obtain a contradiction, assume that for every $i \in \mathbb N$ there exist a fixed point $x_i$ of $g$ and an interval $K_i \in \mathcal J(x_i)$ such that $$\lim_{i\to \infty} \limsup_{n \to \infty} {\rm diam\,} g^n(K_i)=0.$$ Without loss of generality, suppose that the sequence of fixed points $x_i$ converges to a fixed point $p$ of $g$. For any $i\in \mathbb N$, denote $$\limsup_{n \to \infty} {\rm diam\,} g^n(K_i)=\varepsilon_i.$$ Obviously, for every $i \in \mathbb N$ there is a $k_i \in \mathbb N$ such that for any $k \ge k_i$, we have ${\rm diam\,} g^k(K_i) \le 2\varepsilon_i$. Let $J_i=\bigcap_{k=1}^\infty g^k(K_i)$. Since $x_i \in K_i$ and $g(x_i)=x_i$ we have $x_i \in J_i$, and by the definition of $k_i$ it is obvious that ${\rm diam\,} J_i \leq 4\varepsilon_i$. Moreover, the set $J_i$ is an invariant interval. Since the function $\f$ is generically chaotic, by Lemma \[new1\](v), the set $C_2(\g)$ is residual. Further, by Lemma \[lm12\], $g^k(K_i)$ is not a singleton for every $i$ and $k$. So we have shown that arbitrarily close to the fixed point $p$ of $g$ there are arbitrarily small $g$-invariant intervals $J_i$. Moreover, by Lemma \[new3\], Lemma \[lm3\](iii) and by [@S1], Lemma 4.1(i) (i.e., the original version of our Lemma \[new1\](i), for $f$), for every two intervals $J, J^\prime$, $$\liminf \limits_{n \to \infty} {\rm dist} (g^n(J), g^n(J^\prime))=0.$$ Now it follows from Lemma \[lm9\] that there exist arbitrarily small $g$-invariant intervals containing the point $p$.
Since, by Lemma \[new1\](vi), $\g$ is generically chaotic, we have a contradiction with Lemma \[lm8\].\ [**Proof of Theorem \[main1\].**]{} It follows from Theorem \[th1\], Lemmas \[lm3\] and \[new3\], and Lemma \[lm16\].\ Densely $\varepsilon$-chaotic $\f$ implies densely $\varepsilon$-chaotic $f$ ============================================================================ Let $(X,d)$ be a compact metric space and $f\in C(X)$. Recall that a pair $(x,y)\in X \times X$ is called [*distal*]{} if $\liminf_{n\to\infty} d (f^n(x), f^n(y)) > 0$, and [*asymptotic*]{} if $\lim_{n\to\infty} d (f^n(x), f^n(y)) = 0$; it is $\delta$-[*asymptotic*]{}, with $\delta >0$, if $\limsup_{n\to\infty} d (f^n(x), f^n(y)) <\delta$. For completeness, we also recall the statement of the well-known\ [**Baire Category Theorem.** ]{}[*If a non-empty complete metric space is the union of a sequence of its closed subsets, then at least one set in the sequence must have non-empty interior.* ]{} \[distal\] Let $f\in C(X)$. If the induced map $\f$ is densely chaotic then the set of distal pairs of $f$ is a first category subset of $X\times X$. Denote by $D\subset X\times X$ the set of distal pairs for $f$. For every $\delta > 0$ and every $n\in \mathbb N$ let $$D_{\delta, n} := \{ (x,y)\in X\times X;\ d(f^j(x), f^j (y)) \geq \delta,\ {\rm for\ any\ } j\geq n \}.$$ Obviously, every $D_{\delta, n}$ is a closed set and $D =\bigcup_{\delta > 0} \bigcup_{n\in\mathbb N} D_{\delta, n} = \bigcup_{k\in\mathbb N} \bigcup_{n\in\mathbb N} D_{1/k, n}$. So, it suffices to show that every $D_{\delta, n}$ is nowhere dense in $X\times X$. Assume on the contrary that, for some $\delta_0 > 0$ and $n_0\in\mathbb N$, ${\rm int}(D_{\delta_0, n_0})\neq \emptyset$. Then there are non-empty open sets $U, V \subseteq X$ such that $U \times V \subseteq D_{\delta_0, n_0}$.
It follows that for every $j\geq n_0$ and every pair $(x,y) \in f^j(U)\times f^j(V)$, $d (x,y) \geq \delta_0$, whence, for all non-empty compact sets $M\subseteq U$, $N\subseteq V$, $d_H(\f^j(M), \f^j(N)) \geq \delta_0$ – a contradiction.\ \[main2\] Let $f \in C(X)$ be such that the induced map $\f$ is densely $\varepsilon$-chaotic. Then $f$ is densely $\varepsilon$-chaotic. Assume on the contrary that $f$ is not densely $\varepsilon$-chaotic. Then there are open sets $\emptyset\ne U_0, V_0\subseteq X$ such that $U_0\times V_0$ contains no $\varepsilon$-Li-Yorke pair. Let $D$ be the set of distal pairs of $f$, and let for every $n\in\mathbb N$, $$A_{n}:=\{ (x,y)\in U_0\times V_0;\ d(f^j(x),f^j(y))\le\varepsilon, \ {\rm for\ any} \ j\ge n\}.$$ Obviously, every $A_{n}$ is a closed set, $A_{1}\subseteq A_{2}\subseteq\dots$, and the set of $\varepsilon$-asymptotic pairs of $f$ contained in $U_0\times V_0$ is a subset of $\bigcup_{n\in\mathbb N} A_{n}$. Since $U_0\times V_0\subseteq D \cup \bigcup_{n\in\mathbb N} A_{n}$, and $D$ is (by Lemma \[distal\]) a first category set, the Baire Category Theorem implies that there is an $n_0\in\mathbb N$ such that $A_{n_0}$ has non-empty interior. Consequently, there are non-empty open sets $U_1\subseteq U_0, V_1\subseteq V_0$ such that $U_1\times V_1\subseteq A_{n_0}$. Since $\f$ is densely $\varepsilon$-chaotic there are compact sets $M\subset U_1, N\subset V_1$ such that $\limsup_{j\to\infty} d_H(\f^j(M), \f^j(N))>\varepsilon$. On the other hand, $$d(f^j(x),f^j(y))\le \varepsilon, \ {\rm for\ every}\ (x,y)\in M\times N \ {\rm and\ every} \ j\ge n_0,$$ whence $\limsup_{j\to\infty}d_H(\f^j(M),\f^j(N))\le\varepsilon$ – a contradiction. Using Theorem \[main2\] and [@Mur] we obtain the following \[cor3\] Let $f \in C(X)$. If the induced map $\f$ is generically $\varepsilon$-chaotic, then $f$ is generically $\varepsilon$-chaotic.
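The arguments of this section compare the iterated images $\f^j(M)$ and $\f^j(N)$ in the Hausdorff metric $d_H$. As a concrete illustration (ours, not from the paper; finite sets serve as a convenient stand-in for arbitrary compact sets, on which the induced map can be evaluated exactly), the following sketch computes $d_H$ and the action of the induced map:

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite non-empty subsets of R."""
    def sup_dist(P, Q):
        # largest distance from a point of P to the set Q
        return max(min(abs(p - q) for q in Q) for p in P)
    return max(sup_dist(A, B), sup_dist(B, A))

def induced(f, A):
    """Induced map on compact sets: A -> f(A); exact for finite sets."""
    return {f(x) for x in A}

f = lambda x: 4 * x * (1 - x)   # any continuous self-map of [0, 1] works here
M, N = {0.0, 0.5, 1.0}, {0.0, 1.0}
print(hausdorff(M, N))                           # 0.5: the point 0.5 is far from N
print(hausdorff(induced(f, M), induced(f, N)))   # 1.0: f(M) = {0, 1}, f(N) = {0}
```

Note how a single iterate can increase the Hausdorff distance between the two sets even though each point of $M$ started close to some point of $N$; it is exactly this set-level separation that the proofs above bound from below or above.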
Densely ($\varepsilon$-)chaotic $f$ implies densely ($\varepsilon$-)chaotic $\f$ ================================================================================ In this section we show, first, that densely chaotic $f$ implies densely chaotic $\f$ (Theorem \[main3a\]) and, second, that the analogous result is true for dense $\varepsilon$-chaos (Theorem \[main3b\]). \[main3a\] Let $f \in C(I)$ be densely chaotic and let $\f$ be induced by $f$. Then $\f$ is densely chaotic. Let $\langle U_1, U_2, \ldots , U_n\rangle$ and $\langle V_1, V_2, \ldots , V_m\rangle$ be arbitrary open sets in $\mathcal K(I)$ with $m, n \in \mathbb N$. To prove that $\f$ is densely chaotic it suffices to show that $\langle U_1, U_2, \ldots , U_n\rangle \times \langle V_1, V_2, \ldots , V_m\rangle$ contains a Li-Yorke pair $(U, V)$ for $\f$ (i.e., that $(U, V) \in C(\f)$). Since we will frequently work with the systems $\{U_i\, ; i=1, \ldots , n\}$ and $\{V_j\, ; j=1, \ldots , m\}$ in the following proof, we will use, to shorten the notation, the symbols $i$ and $j$ [*only*]{} in this sense, i.e., $i=1, \ldots, n$ and $j=1, \ldots, m$. Let $\delta_1:=\min\limits_i \{\liminf\limits_{k\to\infty} {\rm diam\,} f^k(U_i)\}$ and $\delta_2:=\min\limits_j\{\liminf\limits_{k\to\infty} {\rm diam\,} f^k(V_j)\}$. By Theorem \[th2\], $\delta_1,\delta_2 > 0$.
Put $$\delta := \min \{\delta_1,\delta_2 \}.$$ By Theorem \[th2\] (a), (b) there are a fixed point $x_0 \in I$ of $f$ and a $k \in \mathbb N$ such that for every $i, j$ we have $$\label{eq11} {\rm dist} (f^k(U_i), x_0)< \delta/4 \quad {\rm and\quad } {\rm dist} (f^k(V_j), x_0)< \delta/4 ,$$ and also $$\label{eq111} {\rm diam\,} f^k(U_i) > \delta/2 \quad {\rm and \quad } {\rm diam\,} f^k(V_j) > \delta/2.$$ Denote $${\mathscr S_1}:=\{U_i: x_0-\delta/4 \in f^k(U_i)\} \cup \{V_j: x_0 - \delta/4 \in f^k(V_j)\},$$ $${\mathscr S_2}:=\{U_i: U_i \notin {\mathscr S_1}\} \cup \{V_j: V_j \notin {\mathscr S_1}\}$$ and put $$\label{eq1} S_1 := \bigcap_{A \in {\mathscr S_1}}f^k(A),\quad S_2 := \bigcap_{A \in {\mathscr S_2}}f^k(A).$$ Note that, for any set $A \in {\mathscr S_2}$, $x_0+\delta/4 \in f^k(A)$. Since each set $A\in {\mathscr S_1}$ is an open interval, by (\[eq11\]) and (\[eq111\]) we get that $f^k(A)$ is a non-degenerate interval containing $x_0 - \delta/4$ together with its right neighbourhood and hence $S_1$ is also a non-degenerate interval. Analogously, $S_2$ is a non-degenerate interval. There are three possibilities.\ [**Case 1.**]{} There exist $i_1, i_2 \in \{1, \ldots , n\}$ and $j_1, j_2\in \{1, \ldots , m\}$ such that $U_{i_1}, V_{j_1}\in {\mathscr S_1}$ and $U_{i_2}, V_{j_2}\in {\mathscr S_2}$. Then $S_1$, $S_2$ (see (\[eq1\])) are non-empty and, since $f$ is densely chaotic, there is a Li-Yorke pair $(s, p) \in S_1 \times S_1$. Let $r \in S_2$ be arbitrary. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ in the following way $$u_i= \left\{ \begin{array}{l} f^{-k}(s) \quad {\rm if}\ U_i \in {\mathscr S_1}, \\ f^{-k}(r) \quad {\rm if}\ U_i \in {\mathscr S_2}, \end{array} \right.$$ $$v_j= \left\{ \begin{array}{l} f^{-k}(p) \quad {\rm if}\ V_j \in {\mathscr S_1}, \\ f^{-k}(r) \quad {\rm if}\ V_j \in {\mathscr S_2}. \end{array} \right.$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$.
Consequently, $U \in \langle U_1, U_2, \ldots , U_n \rangle$, $V \in \langle V_1, V_2, \ldots , V_m \rangle$, and moreover $\f^k(U)=\{s, r\}$ and $\f^k(V) =\{p, r\}$. Obviously, $(U, V)$ is a Li-Yorke pair for $\f$.\ [**Case 2.**]{} Let ${\mathscr S_1}, {\mathscr S_2}$ be non-empty and assume that one of them contains either all $U_i$, $i=1, \ldots ,n$, or all $V_j$, $j=1, \ldots ,m$. Without loss of generality, assume that $U_i \in {\mathscr S_1}$ for any $i$ (hence ${\mathscr S_2}$ contains only the sets $V_j$ for some $j$; note that ${\mathscr S_1}$ can also contain some intervals $V_j$). Similarly as in Case 1, there exists a Li-Yorke pair $(s, r) \in S_1 \times S_2$. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ in the following way $$u_i= f^{-k}(s)$$ and $$v_j= \left\{ \begin{array}{l} f^{-k}(s) \quad {\rm if}\ V_j \in {\mathscr S_1}, \\ f^{-k}(r) \quad {\rm if}\ V_j \in {\mathscr S_2}. \end{array} \right.$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. Then $\f^k(U)=\{s\}$ and $\{r\} \subseteq \f^k(V) \subseteq \{s, r\}$ and hence $(U, V)$ is a Li-Yorke pair for $\f$.\ [**Case 3.**]{} Either ${\mathscr S_1}$ or ${\mathscr S_2}$ is empty. Without loss of generality assume that ${\mathscr S_2} = \emptyset$. Since $f$ is densely chaotic, there is a Li-Yorke pair $(s, p) \in S_1 \times S_1$. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ by $$u_i= f^{-k}(s) \qquad {\rm and}\qquad v_j=f^{-k}(p).$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. Then $\f^k(U)=\{s\}$ and $\f^k(V) = \{p\}$ and so, $(U, V)$ is a Li-Yorke pair for $\f$.\ The following two results on transitive maps will be used in the proof of Theorem \[main3b\]. \[bc\] [([@BC], p. 156, Proposition 42)]{} Let $f: I \rightarrow I$ be transitive. Then exactly one of the following alternatives holds: 1. for every positive integer $s$, $f^s$ is transitive, 2. 
there exist non-degenerate closed intervals $J, K$ with $J \cup K = I$ and $J \cap K = \{y\}$, where $y$ is a fixed point of $f$, such that $f(J)=K$ and $f(K)=J$. \[bc2\][([@BC], p. 157, Proposition 44)]{} Let $f\in C(I)$. Then $f^2$ is transitive if and only if, for every open subinterval $J$ and every closed subinterval $H$ which does not contain an endpoint of $I$, there is a positive integer $N$ such that $H \subseteq f^n(J)$ for every $n>N$. \[main3b\] Let $f \in C(I)$ be densely $\varepsilon$-chaotic and let $\f$ be induced by $f$. Then $\f$ is densely $\varepsilon$-chaotic. By Theorem \[th1\], the map $f$ is generically $\varepsilon$-chaotic. Let $\langle U_1, U_2, \ldots, \\ U_n \rangle$ and $\langle V_1, V_2, \ldots , V_m \rangle$ be open sets in $\mathcal K(I)$, where $n, m \in \mathbb N$ and $U_i, V_j$ are open intervals in $I$ for $1 \le i \le n, 1 \le j \le m$. To prove that $\f$ is densely $\varepsilon$-chaotic it suffices to find an $\varepsilon$-Li-Yorke pair $(U, V) \in \langle U_1, U_2, \ldots , U_n\rangle \times \langle V_1, V_2, \ldots , V_m\rangle$ (i.e., $(U, V) \in C(\f, \varepsilon)$). Similarly as in the proof of Theorem \[main3a\], we will use the symbols $i$ and $j$ [*only*]{} for this purpose, i.e., $i=1, \ldots, n$ and $j=1, \ldots, m$. By Theorem \[th1\] (h-2) and (g-2), there are a positive integer $l$ and open intervals $U'_i, V'_j$, for any $i, j$, such that $U'_i \subset f^l(U_i) \cap\ {\rm int}(T_{U_i})$ (resp. $V'_j \subset f^l(V_j) \cap\ {\rm int}(T_{V_j})$ ), where $T_{U_i}$ (resp. $T_{V_j}$) is an invariant transitive interval. By Theorem \[th1\] (h-1), $f$ has a unique invariant transitive interval or two transitive intervals having one point in common.\ [**(a)**]{} Consider first the existence of two invariant transitive intervals $T_1, T_2$ with one common point $x_0$. Obviously, $x_0$ is a fixed point. Assume that $T_1$ lies on the left of $T_2$. 
Denote $${\mathscr S_1}:= \{U'_i: U'_i \subset T_1\} \cup \{V'_j: V'_j \subset T_1\},$$ $${\mathscr S_2}:= \{U'_i: U'_i \subset T_2\} \cup \{V'_j: V'_j \subset T_2\}.$$ By Theorem \[th1\] (g-2), there is a positive $b$ such that $\liminf\limits_{n \to \infty} {\rm diam\,}f^n(J)>b$ for every interval $J$, and by Theorem \[th1\] (g-1), there is a non-negative integer $k$ such that, for every $i, j$, we have $$\label{eq22} {\rm dist} (f^k(U'_i), x_0)< b/4 \quad {\rm and\quad } {\rm dist} (f^k(V'_j), x_0)< b/4 ,$$ and moreover $$\label{eq222} {\rm diam\,} f^k(U'_i) > b/2 \quad {\rm and \quad } {\rm diam\,} f^k(V'_j) > b/2.$$ Put $$\label{eq2} S_1: = \bigcap_{A \in {\mathscr S_1}}f^k(A),\quad S_2 := \bigcap_{A \in {\mathscr S_2}}f^k(A).$$ Since a set $A\in {\mathscr S_1}$ is an open interval, by (\[eq22\]) and (\[eq222\]), $f^k(A)$ is a non-degenerate interval containing $x_0 - b/4$ together with its right neighbourhood, and hence $S_1$ is also a non-degenerate interval. Analogously, $S_2$ is a non-degenerate interval. Similarly as in the proof of Theorem \[main3a\] there are three possibilities.\ [**Case 1.**]{} There exist $i_1, i_2 \in \{1, \ldots , n\}$ and $j_1, j_2\in \{1, \ldots , m\}$ such that $U'_{i_1}, V'_{j_1}\in {\mathscr S_1}$ and $U'_{i_2}, V'_{j_2}\in {\mathscr S_2}$. Then $S_1,$ $S_2$ (see (\[eq2\])) are non-empty and, since $f$ is densely $\varepsilon$-chaotic, there is an $\varepsilon$-Li-Yorke pair $(s, p) \in S_1 \times S_1$. Let $r \in S_2$ be arbitrary. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ in the following way $$u_i= \left\{ \begin{array}{l} f^{-(k+l)}(s) \quad {\rm if}\ U'_i \in {\mathscr S_1}, \\ f^{-(k+l)}(r) \quad {\rm if}\ U'_i \in {\mathscr S_2}, \end{array} \right.$$ $$v_j= \left\{ \begin{array}{l} f^{-(k+l)}(p) \quad {\rm if}\ V'_j \in {\mathscr S_1}, \\ f^{-(k+l)}(r) \quad {\rm if}\ V'_j \in {\mathscr S_2}. \end{array} \right.$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. 
Consequently, $U \in \langle U_1, U_2, \ldots , U_n \rangle$, $V \in \langle V_1, V_2, \ldots , V_m \rangle$ and moreover, $\f^{k+l}(U)=\{s, r\}$ and $\f^{k+l}(V) =\{p, r\}$. Since $r \in T_2$ and $(s, p) \in T_1 \times T_1$ is an $\varepsilon$-Li-Yorke pair, we have that $(U, V)$ is an $\varepsilon$-Li-Yorke pair for $\f$.\ [**Case 2.**]{} Let ${\mathscr S_1}, {\mathscr S_2}$ be non-empty and assume that one of them contains either all $U'_i$, $i=1, \ldots ,n$, or all $V'_j$, $j=1, \ldots ,m$. Without loss of generality we may assume that $U'_i \in {\mathscr S_1}$ for any $i$ (hence ${\mathscr S_2}$ contains only the sets $V'_j$ for some $j$; note that ${\mathscr S_1}$ can also contain some intervals $V'_j$). Similarly as in Case 1, there exists an $\varepsilon$-Li-Yorke pair $(s, r) \in S_1 \times S_2$. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ in the following way $$u_i= f^{-(k+l)}(s),$$ and $$v_j= \left\{ \begin{array}{l} f^{-(k+l)}(s) \quad {\rm if}\ V'_j \in {\mathscr S_1}, \\ f^{-(k+l)}(r) \quad {\rm if}\ V'_j \in {\mathscr S_2}. \end{array} \right.$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. Then $\f^{k+l}(U)=\{s\}$ and $\{r\} \subseteq \f^{k+l}(V) \subseteq \{s, r\}$, and hence $(U, V)$ is an $\varepsilon$-Li-Yorke pair for $\f$.\ [**Case 3.**]{} Either ${\mathscr S_1}$ or ${\mathscr S_2}$ is empty. Without loss of generality we assume that ${\mathscr S_2} = \emptyset$. Since $f$ is densely $\varepsilon$-chaotic there is an $\varepsilon$-Li-Yorke pair $(s, p) \in S_1 \times S_1$. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ by $$u_i= f^{-(k+l)}(s) \qquad {\rm and}\qquad v_j=f^{-(k+l)}(p).$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. Then $\f^{k+l}(U)=\{s\}$ and $\f^{k+l}(V) = \{p\}$, and so $(U, V)$ is an $\varepsilon$-Li-Yorke pair for $\f$.\ [**(b)**]{} Now we consider that $f$ has a unique invariant transitive interval $T$. 
By Lemma \[bc\], either $f^s|_T$ is transitive for every positive integer $s$, or there exist non-degenerate closed intervals $J, K$ with $J \cup K = T$ and $J \cap K = \{x_0\}$, where $x_0$ is a fixed point of $f|_T$, such that $f|_T(J)=K$ and $f|_T(K)=J$. Suppose that the second possibility holds and put $g:=f^2$. Note that since $f$ is densely $\varepsilon$-chaotic (and hence also generically chaotic), $g$ is also densely $\varepsilon$-chaotic (see [@S1], Lemma 4.2). By Lemma \[new1\](i) and since $$\limsup\limits_{n \to \infty} {\rm dist} (\g^n(A), \g^n(B))>\varepsilon \Rightarrow \limsup\limits_{n \to \infty} {\rm dist} (\f^n(A), \f^n(B))>\varepsilon,$$ it suffices to prove that $\g$ is densely $\varepsilon$-chaotic. Since $J, K$ are two $g$-invariant transitive intervals with one common point $x_0$, we may proceed analogously as in part (a) of this proof. Hence $\g$ is densely $\varepsilon$-chaotic. Finally, let $f^s|_T$ be transitive for every positive integer $s$ (in particular, $f^2|_T$ is transitive), and let $U'$ (respectively $V'$) denote the closure of the convex hull of $\bigcup_i U'_i$ (respectively $\bigcup_j V'_j$). From the construction of $U'_i$ and $V'_j$ we may assume that $U'$ and $V'$ do not contain an endpoint of $T$. Then, by Lemma \[bc2\], there exists a positive integer $k$ such that $f^k(U'_i) \supseteq U'$ and $f^k(V'_j) \supseteq V'$, for every $i, j$. Since $f$ is densely $\varepsilon$-chaotic, there is an $\varepsilon$-Li-Yorke pair $(s, p) \in U' \times V'$. For any $i, j$, define $u_i \in U_i$ and $v_j \in V_j$ in the following way $$u_i=f^{-(k+l)}(s) \quad {\rm and}\quad v_j=f^{-(k+l)}(p).$$ Let $U=\{u_1, u_2, \ldots , u_n\}$ and $V=\{v_1, v_2, \ldots , v_m\}$. Then $\f^{k+l}(U)=\{s\}$ and $\f^{k+l}(V) =\{p\}$. Hence $(U, V)$ is an $\varepsilon$-Li-Yorke pair for $\f$. The proof of the theorem is complete.
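The case analysis above always exhibits the required pair in the form $U=\{u_1,\ldots,u_n\}$ with $u_i\in U_i$. Under the standard Vietoris-basis convention (assumed here), $\langle U_1,\ldots,U_n\rangle$ consists of the compact sets that lie in $\bigcup_i U_i$ and meet every $U_i$, so a finite set chosen this way automatically qualifies. A sketch of this membership test, for finite sets and open intervals (function name ours):

```python
def in_vietoris_basic(K, opens):
    """K: finite set of points; opens: list of (a, b) open intervals.
    Tests membership of K in the Vietoris basic open set <U_1, ..., U_n>:
    K must lie in the union of the U_i and meet every U_i."""
    inside = lambda x: any(a < x < b for a, b in opens)
    meets_each = all(any(a < x < b for x in K) for a, b in opens)
    return all(inside(x) for x in K) and meets_each

U1, U2 = (0.0, 0.3), (0.5, 0.8)
print(in_vietoris_basic({0.1, 0.6}, [U1, U2]))        # True: hits both, stays inside
print(in_vietoris_basic({0.1}, [U1, U2]))             # False: misses U2
print(in_vietoris_basic({0.1, 0.4, 0.6}, [U1, U2]))   # False: 0.4 outside the union
```

Picking one point $u_i$ from each $U_i$ is thus the cheapest way to land in a prescribed basic open set of $\mathcal K(I)$, which is exactly what the proofs exploit.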
From Theorem \[main3b\], [@Mur], [@S1] and Theorem \[main1\] (see also the scheme in Figure \[scheme\]) we obtain the following \[cor4\] If $f \in C(I)$ is generically $\varepsilon$-chaotic (resp., generically chaotic), then the induced map $\f$ is generically $\varepsilon$-chaotic (resp., generically chaotic). Moreover, if $\f$ is generically chaotic, then $\f$ is generically $\varepsilon$-chaotic. [**Remark.**]{} Note that the equivalence of generic and generic $\varepsilon$-chaos for both $f$ and $\f$ in the interval case is no longer true for maps on general compact metric spaces: Murinová in [@Mur] gave an example of a map $f$ on the plane which is generically chaotic but not generically $\varepsilon$-chaotic. It is easily seen that the same is true for the induced map $\f$. Now we will present an example of a densely chaotic function $f : I \to I$ such that neither $f$ nor $\f$ is densely $\varepsilon$-chaotic for any $\varepsilon >0$. This example is taken from [@S1] but, for convenience, we give it here together with its graph. ([@S1], Example 3.6) \[exmp\] For $n = 0, 1, 2, \ldots$, let $$a_n:=1-\frac{1}{3^n}, \quad b_n:= 1-\frac{1}{4\cdot 3^{n-1}},\quad c_n:=1-\frac{1}{2 \cdot 3^{n}},$$ and let $I_n$ be the closed interval $[a_n, 1]$. Define a sequence $\{f_n\}_{n=0}^\infty$ of piecewise linear functions $f_n \in C(I)$ such that for $i=0, 1, \ldots , n$, $f_n$ is linear on each of the intervals $[a_i, b_i]$, $[b_i, c_i]$, $[c_i, a_{i+1}]$, $$f_n(a_i)=a_i, \quad f_n(b_i)=1, \quad f_n(c_i)=a_i$$ and $f_n|_{I_{n+1}}$ is the identity function. Let $f \in C(I)$ be the uniform limit of $f_n$ as $n \to \infty$. A sketch of the graph of $f$ is provided in Figure \[graph\]. The function $f$ is densely chaotic, but not densely $\varepsilon$-chaotic for any $\varepsilon>0$ (for more details, see [@S1]). Thus, by Theorems \[main3a\] and \[main1\], $\f$ is densely chaotic, but not densely $\varepsilon$-chaotic for any $\varepsilon>0$.
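Since the map $f$ of Example \[exmp\] is given by explicit piecewise-linear data, it is easy to evaluate numerically. The sketch below is an illustration only (helper names `a`, `b`, `c`, `lerp` are ours): it codes the limit function directly rather than the approximants $f_n$, locates the block $[a_n, a_{n+1}]$ containing $x$, interpolates linearly on the corresponding subinterval, and sets $f(1)=1$, consistent with the uniform limit. The formulas for $b_n$ and $c_n$ are rewritten in the algebraically equivalent forms $1-\tfrac34\cdot 3^{-n}$ and $1-\tfrac12\cdot 3^{-n}$ to keep the floating-point values exact at small $n$.

```python
def a(n): return 1 - 3.0 ** (-n)          # a_n = 1 - 1/3^n
def b(n): return 1 - 0.75 * 3.0 ** (-n)   # b_n = 1 - 1/(4*3^(n-1)) = 1 - (3/4)/3^n
def c(n): return 1 - 0.5 * 3.0 ** (-n)    # c_n = 1 - 1/(2*3^n)

def lerp(x, x0, x1, y0, y1):
    # linear interpolation through (x0, y0) and (x1, y1)
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def f(x):
    """The map of Example [exmp], coded directly as the limit function."""
    if x >= 1.0:
        return 1.0
    n = 0
    while not (a(n) <= x < a(n + 1)):       # find the block [a_n, a_{n+1}] with x
        n += 1
    if x <= b(n):                           # rises linearly from a_n up to 1
        return lerp(x, a(n), b(n), a(n), 1.0)
    if x <= c(n):                           # falls back from 1 down to a_n
        return lerp(x, b(n), c(n), 1.0, a(n))
    return lerp(x, c(n), a(n + 1), a(n), a(n + 1))  # climbs from a_n to a_{n+1}

for x in (0.0, 0.25, 0.5, 2/3, 0.75, 1.0):
    print(x, f(x))
```

Evaluating at the breakpoints reproduces the defining data: $f(b_0)=f(1/4)=1$, $f(c_0)=f(1/2)=a_0=0$, and every $a_n$ is (up to rounding) a fixed point, in line with the oscillating teeth visible in Figure \[graph\].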
[Figure \[graph\]: a sketch of the graph of the map $f$ from Example \[exmp\].]

Related results and an open problem =================================== The results of this section are motivated by the question of transmission of dense chaos from $\f$ to $f$. We assume that $(X,d)$ is a compact metric space and $f\in C(X)$. We do not know whether dense chaoticity of $\f$ implies the same property for $f$, even for interval maps. Instead, we are able to prove the following weaker result. Its proof is based on techniques similar to those used in the proof of Theorem \[main2\].\ Let us recall that by $C(f) = C_1(f)\cap C_2(f) \subseteq X \times X$ (resp. $C(\f) = C_1(\f)\cap C_2(\f) \subseteq \mathcal K(X) \times \mathcal K(X)$) we denote the set of Li-Yorke pairs for $f$ (resp. $\f$); and similarly for $C(f,\varepsilon)$, resp. $C(\f, \varepsilon)$, see Section \[chaos\]. \[husty\] Let $f\in C(X)$. Suppose that, for any non-empty open set $G \subseteq \mathcal K(X) \times \mathcal K(X)$, the set $G \cap C(\f)$ is of second category. Then $f$ also has this property (i.e., for any open $\emptyset \neq G \subseteq X \times X$, $G \cap C(f)$ is a second category set). Assume on the contrary that $f$ does not have the required property.
Then there are a first category set $E$ and open sets $\emptyset \neq U_0, V_0 \subseteq X$ such that every pair of points in $(U_0 \times V_0) \setminus E$ is either distal or asymptotic. Denote by $C_{U\times V}(\f, \varepsilon) := C(\f, \varepsilon) \cap (\mathcal K(U) \times \mathcal K(V))$. Obviously, $C_{U\times V}(\f, \varepsilon_1) \supset C_{U\times V}(\f, \varepsilon_2)$ whenever $\varepsilon_1 < \varepsilon_2$. Since $\bigcup_{k\in\mathbb N} C_{U_0 \times V_0}(\f, \frac 1k)$ is a second category set, there are an $\varepsilon_0 > 0$ and non-empty open sets $U_1 \subseteq U_0$ and $V_1 \subseteq V_0$ such that $C(\f, \varepsilon_0)$ is dense in $\mathcal K(U_1) \times \mathcal K(V_1)$. Denote by $A$ the set of asymptotic pairs of $f$ contained in $U_1 \times V_1$, and, for every $\delta > 0$ and $n\in\mathbb N$, let $$A_{\delta, n} = \{ (x,y) \in U_1\times V_1;\ d(f^j(x), f^j(y)) \leq \delta ,\ {\rm for\ any\ } j\geq n \}.$$ Obviously, for every $\delta > 0$ and $n\in\mathbb N$, $A_{\delta, n}$ is a closed set, $A\subseteq \bigcup_{n\in\mathbb N} A_{\delta, n}$, and $A_{\delta, 1} \subseteq A_{\delta, 2} \subseteq \dots$. Since $E$ is a first category set, Lemma \[distal\] and the Baire category theorem imply that, for any $\delta > 0$, there is an $n_0 \in \mathbb N$ such that $A_{\delta, n_0}$ has non-empty interior. Consequently, there are non-empty open sets $U_2 \subseteq U_1$, $V_2 \subseteq V_1$, and $n_0\in\mathbb N$ such that $U_2 \times V_2 \subseteq A_{{\varepsilon_0}/{2}, n_0}$. Since $C(\f, \varepsilon_0)$ is dense in $\mathcal K(U_2) \times \mathcal K(V_2)$, there are compact sets $M \subset U_2$, $N\subset V_2$ such that $\limsup_{j\to\infty} d_H(\f^j(M), \f^j(N)) \geq\varepsilon_0$. On the other hand, $$d(f^j(x), f^j(y)) \leq \frac{\varepsilon_0}{2}, {\rm \ for\ every\ } (x,y) \in M\times N {\rm \ and\ every\ } j\geq n_0,$$ whence $d_H(\f^j(M), \f^j(N)) \leq \varepsilon_0 /2$ for every $j \geq n_0$ – a contradiction.
Obviously, the property from Theorem \[husty\] is stronger than dense chaoticity; note that the map from Example \[exmp\] is densely chaotic but does not have this property. On the other hand, any generically chaotic map has this property, so we get the following \[cor5\] Let $f\in C(X)$. If the induced map $\f$ is generically chaotic, then $f$ is densely chaotic. Let us now survey the obtained results in a scheme. In this scheme the arrows denote implications, and the dashed arrows indicate corollaries that follow by the transitivity of implications. Since some of the properties hold not only for interval maps but also for maps on general compact metric spaces, this fact, together with the numbers of the corresponding theorems and corollaries, is indicated in brackets next to the corresponding arrows. [Figure \[scheme\]: diagram of implications between the eight properties ($f$, resp. $\f$, densely chaotic, densely $\varepsilon$-chaotic, generically chaotic, generically $\varepsilon$-chaotic), each arrow labelled with the corresponding theorem or corollary and the setting ($I$ or $X$); non-implications from [@Mur] and Example \[exmp\] are marked with crossed arrows, and the implication from dense chaoticity of $\f$ to dense chaoticity of $f$ is marked with a question mark.] [**Remark.**]{} Note that for piecewise monotone interval maps $f$ (with a finite number of pieces of monotonicity) all four variants of generic or dense chaos we deal with in the present paper are equivalent (see [@S2]). It can easily be shown that an analogous result is true for $\f$. Indeed, if $\f$ is densely chaotic, then obviously $f$ fulfills property (b) from Theorem \[th2\] and (f-1) from Theorem \[th1\]. From [@S2] it follows that these two properties imply (a) from Theorem \[th2\]. Using Theorem 1.4 from [@S2] we obtain that $f$ is generically chaotic, and hence also $\f$ is generically chaotic (see Corollary \[cor4\]).
Consequently, for piecewise monotone interval maps, generic, generic $\varepsilon$-, dense and dense $\varepsilon$-chaos for both $f$ and $\f$ are all equivalent.\ [**Open problem.**]{} The question whether dense chaos is transmitted from $\f$ to $f$ for maps on general compact metric spaces as well as for (non piecewise monotone) interval maps (i.e., the dashed arrow indicated with the question mark in the scheme) remains open.\ Let us conclude with the proof of the fact that dense chaoticity of $\f$ cannot occur in spaces with isolated points. Let $f\in C(X)$. If the induced map $\f$ is densely chaotic, then the set of asymptotic pairs of $f$ has empty interior. Let $A$ be the set of asymptotic pairs of $f$ and, for $k, n \in \mathbb N$, let $A_{k, n} = \{ (x,y) \in X\times X;\ d(f^j(x), f^j(y)) \leq 1/k ,\ {\rm for\ any\ } j\geq n \}$. Assume, on the contrary, that $int(A)\neq \emptyset$. Then there are open sets $\emptyset\neq U, V \subseteq X$ such that $U\times V \subseteq A$ and, by the hypothesis, there are compact sets $M\subseteq U$, $N\subseteq V$ and an $\varepsilon > 0$ such that $(M, N) \in C(\f, \varepsilon)$. Let $k>1/\varepsilon$. Since $\bigcup_{n\in\mathbb N} A_{k,n} \supseteq A$ and, for every $n\in \mathbb N$, $A_{k,n} \subseteq A_{k, n+1}$, there is an $n_0 \in \mathbb N$ such that $A_{k, n_0} \supseteq M\times N$. Hence $d(f^j(x), f^j(y)) \leq 1/k < \varepsilon$, for every $x\in M$, $y\in N$, and any $j\geq n_0$. Consequently, $\limsup_{j\to\infty} d_H(\f^j(M), \f^j(N)) < \varepsilon$ – a contradiction.\ Let $f\in C(X)$. If the induced map $\f$ is densely chaotic, then $X$ has no isolated points. At the very end, we give an example of a system with an interesting behavior: while every pair in the system itself is asymptotic, the induced system contains LY-pairs. \[exmp2\] Let $\Sigma_2=\{0, 1\}^\mathbb N$ be the space of all sequences of two symbols $0$ and $1$ equipped with the metric $d(x, y)=\max\{1/i;\ x_i \neq y_i\}$, for any distinct $x=\{x_i\}_{i\in\mathbb N}$ and $y=\{y_i\}_{i\in \mathbb N}$.
Denote by $A$ the set of all sequences containing at most one symbol $0$ (i.e., the sequence $1^\infty$ of ones and all sequences of the form $1^r01^\infty$, where $r$ is a nonnegative integer). Let $\sigma : A \rightarrow A$ be the standard shift map. Obviously, $\sigma(A)=A$ and for every $x \in A$ there is $n \in \mathbb N$ such that $\sigma^n(x)= 1^\infty$. Hence any pair $(x, y) \in A \times A$ is asymptotic and the set of LY-pairs $C(\sigma)=\emptyset$. On the other hand, for the induced system, $C(\overline{\sigma})\neq\emptyset$. Indeed, if $M:=\{1^\infty\}$ and $$N:=\{1^{n_i}01^\infty ; i=0, 1, 2, \ldots, n_0=0\ {\rm and}\ n_{i+1}=n_i+i+2 \}$$ then $(M, N) \in \mathcal K(A)\times \mathcal K(A)$ is an LY-pair for $\overline{\sigma}$. Note that any LY-pair on a shift space is always an $\varepsilon$-LY-pair, for some $\varepsilon>0$ (in our space $(\Sigma_2, d)$, $\varepsilon = 1$).\ [**Acknowledgments.**]{} The authors would like to thank Prof. J. Smítal for fruitful discussions and valuable comments. [00]{} W. Bauer and K. Sigmund, [*Topological dynamics of transformations induced on the space of probability measures*]{}, Monatsh. Math. 79 (1975), 81–92. L. Block and W. A. Coppel, [*Dynamics in One Dimension*]{}, Lecture Notes in Math. 1513, Springer, Berlin, 1992. J. L. García Guirao, D. Kwietniak, M. Lampart, P. Oprocha and A. Peris, [*Chaos on hyperspaces*]{}, Nonlinear Anal. 71 (2009), 1–8. E. Michael, [*Topologies on spaces of subsets*]{}, Trans. Amer. Math. Soc. 71 (1951), 151–182. A. Illanes and S. B. Nadler Jr., [*Hyperspaces*]{}, Monographs and Textbooks in Pure and Applied Mathematics, vol. 216, Marcel Dekker, New York, 1999. S. Macías, [*Topics on Continua*]{}, Chapman & Hall/CRC, Boca Raton, FL, 2005. E. Murinová, [*Generic chaos in metric space*]{}, Acta Univ. M. Belii Ser. Math. 8 (2000), 43–50. J. Piórek, [*On the generic chaos in dynamical systems*]{}, Acta Math. Univ. Iagell. 25 (1985), 293–298. S.
Ruette, [*Dense chaos for continuous interval maps*]{}, Nonlinearity 18 (2005), 1691–1698. Ľ. Snoha, [*Generic chaos*]{}, Comment. Math. Univ. Carolinae 31 (1990), 793–810. Ľ. Snoha, [*Dense chaos*]{}, Comment. Math. Univ. Carolinae 33 (1992), 747–752.
--- abstract: 'Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn the systematicity of monotonicity inference in natural language, namely, the regularity of performing arbitrary inferences with generalization on composition. We consider four aspects of monotonicity inferences and test whether the models can systematically interpret such inferences on different training/test splits. A series of experiments show that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models significantly decreases when the structures are slightly changed in the test set while retaining all vocabularies and constituents already appearing in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.' author: - | \ $^1$ $^2$ $^3$ $^4$\ bibliography: - 'acl2020.bib' title: | Do Neural Models Learn Systematicity\ of Monotonicity Inference in Natural Language? --- Introduction {#sec:intro} ============ Natural language inference (NLI), a task whereby a system judges whether a given set of premises $P$ semantically entails a hypothesis $H$ [@series/synthesis/2013Dagan; @Bowman2015], is a fundamental task for natural language understanding. As with other NLP tasks, recent studies have shown a remarkable impact of deep neural networks in NLI [@DBLP:journals/corr/WilliamsNB17; @wang2018glue; @BERT2018new]. However, it remains unclear to what extent DNN-based models are capable of learning the compositional generalization underlying NLI from given labeled training instances.
Systematicity of inference (or *inferential systematicity*)  [@Fodor1988-FODCAC; @Aydede1997] in natural language has been intensively studied in the field of formal semantics. From among the various aspects of inferential systematicity, in the context of NLI, we focus on *monotonicity* [@10.2307/25001141; @moss2014] and its *productivity*. Consider the following premise–hypothesis pairs (1)–(3), which have the target label *entailment*: \[ex:1\] \[ex:1a\] ***Some*** \[*puppies* [**$\textcolor{{red!80!black}}{\uparrow}$]{}\] *ran*. \[ex:1b\] ***Some** dogs ran*. \[ex:2\] \[ex:2a\] ***No*** \[*cats* [**$\textcolor{{blue!80!black}}{\downarrow}$]{}\] *ran*. \[ex:2b\] ***No** small cats ran*. \[ex:3\] \[ex:3a\] ***Some*** \[*puppies which chased **no*** \[*cats* [**$\textcolor{{blue!80!black}}{\downarrow}$]{}\]\] *ran*. \[ex:3b\] ***Some** dogs which chased **no** small cats ran*. As in (\[ex:1\]), for example, quantifiers such as *some* are **upward monotone** (shown as \[...[**$\textcolor{{red!80!black}}{\uparrow}$]{}\]), and replacing a phrase in an upward-entailing context with a more general phrase (replacing *puppies* in $P$ with *dogs* as in $H$) yields a sentence inferable from the original sentence. In contrast, as in (\[ex:2\]), quantifiers such as *no* are **downward monotone** (shown as \[...[**$\textcolor{{blue!80!black}}{\downarrow}$]{}\]), and replacing a phrase in a downward-entailing context with a more specific phrase (replacing *cats* in $P$ with *small cats* as in $H$) yields a sentence inferable from the original sentence. Such primitive inference patterns combine recursively as in (\[ex:3\]). Combining patterns in this manner produces a potentially infinite number of inferential patterns. Therefore, NLI models must be capable of systematically interpreting such primitive patterns and reasoning over unseen combinations of patterns.
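The monotonicity reasoning described above can be illustrated with a minimal sketch (the toy lexicon and all function names here are our own, not part of the paper's dataset): replacing a phrase with a more general one yields entailment in an upward-entailing context, while only a more specific replacement does so in a downward-entailing context.

```python
# Minimal sketch of monotonicity-based entailment for one-quantifier sentences.
# Hypothetical lexicon: hypernymy pairs (specific -> general).
HYPERNYM = {"puppies": "dogs", "small cats": "cats"}

# Monotonicity direction of each quantifier's first argument.
MONOTONICITY = {"some": "up", "no": "down"}

def entails(quantifier: str, premise_noun: str, hypothesis_noun: str) -> bool:
    """Does 'Q premise_noun ran' entail 'Q hypothesis_noun ran'?"""
    more_general = HYPERNYM.get(premise_noun) == hypothesis_noun
    more_specific = HYPERNYM.get(hypothesis_noun) == premise_noun
    if MONOTONICITY[quantifier] == "up":    # upward: generalizing preserves truth
        return more_general
    else:                                   # downward: specializing preserves truth
        return more_specific

# "Some puppies ran" => "Some dogs ran" (upward, more general): entailment
assert entails("some", "puppies", "dogs")
# "No cats ran" => "No small cats ran" (downward, more specific): entailment
assert entails("no", "cats", "small cats")
# "No puppies ran" does not entail "No dogs ran"
assert not entails("no", "puppies", "dogs")
```

The same table-driven check underlies examples (1)–(3): only the polarity of the replaced position and the direction of the replacement matter.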
To this aim, we introduce a new evaluation protocol where we (i) synthesize training instances from sampled sentences and (ii) systematically control which patterns are shown to the models in the training phase and which are left unseen. The rationale behind this protocol is two-fold. First, patterns of monotonicity inference are highly systematic, so we can create training data with arbitrary combinations of patterns, as in examples (\[ex:1\])–(\[ex:3\]). Second, evaluating the performance of models trained with well-known NLI datasets such as MultiNLI [@DBLP:journals/corr/WilliamsNB17] might severely underestimate the ability of the models, because such datasets tend to contain only a limited number of training instances that exhibit the inferential patterns of interest. Furthermore, using such datasets would prevent us from identifying which combinations of patterns the models can infer from which patterns in the training data. This paper makes two primary contributions. First, we introduce an evaluation protocol[^1] using the systematic control of the training/test split under various combinations of semantic properties to evaluate whether models learn inferential systematicity in natural language. Method ====== Basic idea ---------- Figure \[fig:goodpic\] illustrates the basic idea of our evaluation protocol on monotonicity inference. We first generate a set of premise sentences $G_d^{\textbf{Q}}$ by a context-free grammar $G$ with depth $d$ (i.e., the maximum number of applications of recursive rules), given a set of quantifiers $\textbf{Q}$.
Then, by applying $G_d^{\textbf{Q}}$ to elements of a set of functions for predicate replacements (or *replacement functions* for short) $\textbf{R}$, we obtain a set $\textbf{D}_d^{\textbf{Q},\textbf{R}}$ of premise–hypothesis pairs defined as $$\begin{aligned} \textbf{D}_d^{\textbf{Q},\textbf{R}} =\;& \{(P, H) \mid \, P \in G_d^{\textbf{Q}},\;\exists r \in \textbf{R}\;\;(r(P) = H)\}.\end{aligned}$$ For example, the premise *Some puppies ran* is generated from the quantifier *some* in **Q** and the production rule $\textit{S}\rightarrow \textit{Q},\textit{N},\textit{IV}$, and thus it is an element of $G_1^{\textbf{Q}}$. By applying this premise to a replacement function that replaces a word in the premise with its hypernym (e.g., $\textit{puppy} \sqsubseteq \textit{dog}$), we obtain the premise–hypothesis pair $\textit{\textbf{Some} \underline{puppies} ran}\Rightarrow \textit{\textbf{Some} \underline{dogs} ran}$ in Fig. \[fig:goodpic\]. We can control which patterns are shown to the models during training and which are left unseen by systematically splitting $\textbf{D}_d^{\textbf{Q},\textbf{R}}$ into training and test sets. For instance, we consider how to test the systematic capacity of models on unseen combinations of quantifiers and predicate replacements. To expose models to primitive patterns regarding **Q** and **R**, we fix an arbitrary element $q$ from $\textbf{Q}$ and feed various predicate replacements into the models from the training set of inferences $\textbf{D}_d^{\{q\}, \textbf{R}}$ generated from combinations of the fixed quantifier and all predicate replacements. Also, we select an arbitrary element $r$ from $\textbf{R}$ and feed various quantifiers into the models from the training set of inferences $\textbf{D}_d^{\textbf{Q}, \{r\}}$ generated from combinations of all quantifiers and the fixed predicate replacement. We then test the models on the set of inferences generated from unseen combinations of quantifiers and predicate replacements.
That is, we test them on the set of inferences $\textbf{D}_d^{\overline{\{q\}},\overline{\{r\}}}$ generated from the complements $\overline{\{q\}},\overline{\{r\}}$ of $\{q\}, \{r\}$. Similarly, by training models on inferences up to a given embedding depth and testing them on deeper inferences, we can evaluate whether models generalize to one deeper depth. By testing models with an arbitrary training/test split of $\textbf{D}_d^{\textbf{Q},\textbf{R}}$ based on semantic properties of monotonicity inference, we can evaluate whether models interpret such inferences systematically. Evaluation protocol ------------------- Let **R** be a set of replacement functions $\{r_1, \ldots, r_m\}$, and $d$ be the embedding depth, with $1 \leq d \leq s$. Example (4) is an element of ${\ensuremath{\textbf{D}_{1}}}^{\textbf{Q},\textbf{R}}$, containing the quantifier *some* in the subject position and the predicate replacement $\textit{dogs}\sqsubseteq\textit{animals}$ in its upward-entailing context without embedding. #### I. Systematicity of predicate replacements The following describes how we test the extent to which models generalize to unseen combinations of quantifiers and predicate replacements. Here, we expose models to all primitive patterns of predicate replacements like (4) and (5) and all primitive patterns of quantifiers like (6) and (7), and then test whether they correctly interpret unseen combinations of quantifiers and predicate replacements like (8) and (9). In what follows, we consider a set of inferences ${\ensuremath{\textbf{D}_{1}}}^{\textbf{Q},\textbf{R}}$ whose depth is 1. We move from harder to easier tasks by gradually changing the training/test split according to combinations of quantifiers and predicate replacements. First, we expose models to primitive patterns of **Q** and **R** with the minimum training set.
Thus, we define the initial training set $\textbf{S}_1$ and test set $\textbf{T}_1$ as follows: $$\begin{aligned} (\textbf{S}_1, \textbf{T}_1) =\;& (\textbf{D}_1^{\{q\},\textbf{R}} \cup \textbf{D}_1^{\textbf{Q},\{r\}},\ \textbf{D}_1^{\overline{\{q\}},\overline{\{r\}}})\end{aligned}$$ where $q$ is arbitrarily selected from **Q**, and $r$ is arbitrarily selected from **R**. Next, we gradually add the set of inferences generated from combinations of an upward–downward quantifier pair and all predicate replacements to the training set. In the examples above, we add (8) and (9) to the training set to simplify the task. We assume a set $\textbf{Q}'$ of pairs of upward/downward quantifiers, namely, $\{({\ensuremath{q^{\uparrow}}},{\ensuremath{q^{\downarrow}}}) \mid ({\ensuremath{q^{\uparrow}}},{\ensuremath{q^{\downarrow}}})\in \textbf{Q}^{\uparrow} \times \textbf{Q}^{\downarrow},\ {\ensuremath{q^{\uparrow}}},{\ensuremath{q^{\downarrow}}}\neq q\}$. We consider a set ${\ensuremath{\mathsf{perm}}}(\textbf{Q}')$ consisting of permutations of $\textbf{Q}'$. For each $p \in {\ensuremath{\mathsf{perm}}}(\textbf{Q}')$, we gradually add the set of inferences generated from $p(i)$ to the training set $\textbf{S}_i$ with $1 < i \leq n-1$. Then, we provide a test set $\textbf{T}_i$ generated from the complement $\overline{\textbf{Q}_i}$ of $\textbf{Q}_i = \{x\mid\exists y\, (x,y) \in \textbf{Q}_i' \; \text{or} \; \exists y\, (y,x) \in \textbf{Q}_i' \}$ and $\overline{\{r\}}$, where $\textbf{Q}_i' = \{p(1), \ldots, p(i)\}$. This protocol is summarized as $$\begin{aligned} \textbf{S}_{i+1} =\;& \textbf{S}_i \cup {\ensuremath{\textbf{D}_{1}}}^{ \{{\ensuremath{q^{\uparrow}}}_i,{\ensuremath{q^{\downarrow}}}_i\},\textbf{R}}, \\ \textbf{T}_i =\;&{\ensuremath{\textbf{D}_{1}}}^{\overline{\textbf{Q}_i},\overline{\{r\}}} \quad \text{with} \; 1 <i \leq n-1\end{aligned}$$ where $({\ensuremath{q^{\uparrow}}}_i,{\ensuremath{q^{\downarrow}}}_i)=p(i)$.
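The incremental split just defined can be sketched as follows. This is a simplified sketch, not the paper's implementation: inferences are represented abstractly as (quantifier, replacement) combinations, all names are our own, and we also keep the fixed quantifier $q$ out of the test pool, since all of its combinations appear in training.

```python
from itertools import product

UP = ["some", "at least three", "more than three", "a few"]      # Q-up
DOWN = ["no", "at most three", "less than three", "few"]         # Q-down

def protocol_splits(q, r, quantifiers, replacements, pairing):
    """Sketch of the Protocol I split.

    q, r    : the fixed quantifier and replacement function
    pairing : one permutation p of Q', a list of (q_up, q_down) pairs
    Returns [(S_1, T_1), (S_2, T_2), ...], each split given as sets of
    (quantifier, replacement) combinations.
    """
    D = lambda qs, rs: {(qq, rr) for qq, rr in product(qs, rs)}
    S = D([q], replacements) | D(quantifiers, [r])               # S_1
    rest_q = [x for x in quantifiers if x != q]
    rest_r = [x for x in replacements if x != r]
    splits = [(set(S), D(rest_q, rest_r))]                       # (S_1, T_1)
    seen = {q}
    for q_up, q_down in pairing:                                 # add p(i) step by step
        S |= D([q_up, q_down], replacements)
        seen |= {q_up, q_down}
        unseen = [x for x in quantifiers if x not in seen]
        splits.append((set(S), D(unseen, rest_r)))               # (S_{i+1}, T_{i+1})
    return splits
```

At every step the training and test pools stay disjoint, and each added upward/downward pair shrinks the set of unseen quantifier combinations.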
To evaluate the extent to which the generalization ability of models is robust to different syntactic structures, we use an additional test set $\textbf{T}'_i = {\ensuremath{\textbf{D}_{1}}}^{\overline{\textbf{Q}_i},\overline{\{r\}}}$ generated using three additional production rules. The first is the case where one adverb is added at the beginning of the sentence, as in example (\[ex:adv\]). The second is the case where a three-word prepositional phrase is added at the beginning of the sentence, as in example (\[ex:prep\]). \[ex:prep\] *Near the shore, **several** ran* *Near the shore, **several** ran* The third is the case where the replacement is performed in the object position, as in example (\[ex:obj\]). \[ex:obj\] *Some tiger touched **several*** *Some tiger touched **several*** We train and test models $|{\ensuremath{\mathsf{perm}}}(\textbf{Q}')|$ times, then take the average accuracy as the final evaluation result. #### II. Systematicity of embedding quantifiers To properly interpret embedding monotonicity, models should detect both (i) the monotonicity direction of each quantifier and (ii) the type of predicate replacement in the embedded argument. The following describes how we test whether models generalize to unseen combinations of embedding quantifiers. We expose models to all primitive combination patterns of quantifiers and predicate replacements like (4)–(9) with a set of non-embedding monotonicity inferences ${\ensuremath{\textbf{D}_{1}}}^{\textbf{Q},\textbf{R}}$ and some embedding patterns like (\[ex:12\]), where ***Q*$_1$** and ***Q*$_2$** are chosen from a selected set of upward or downward quantifiers such as *some* or *no*. We then test the models on an inference with an unseen quantifier *several* in (\[ex:16\]) to evaluate whether models can systematically interpret embedding quantifiers.
\[ex:12\] ***Q$_1$** animals that chased **Q$_2$** ran* ***Q$_1$** animals that chased **Q$_2$** ran* \[ex:16\] ***Several** animals that chased **several** ran* ***Several** animals that chased **several** ran* We move from harder to easier tasks of learning embedding quantifiers by gradually changing the training/test split of a set of inferences ${\ensuremath{\textbf{D}_{2}}}^{\textbf{Q},\textbf{R}}$ whose depth is 2, i.e., inferences involving one embedded clause. We assume a set $\textbf{Q}'$ of pairs of upward and downward quantifiers, $\textbf{Q}' \equiv \{({\ensuremath{q^{\uparrow}}},{\ensuremath{q^{\downarrow}}})\mid({\ensuremath{q^{\uparrow}}},{\ensuremath{q^{\downarrow}}})\in \mathbf{Q}^{\uparrow}\times \mathbf{Q}^{\downarrow}\}$, and consider a set ${\ensuremath{\mathsf{perm}}}(\textbf{Q}')$ consisting of permutations of $\textbf{Q}'$. For each $p \in {\ensuremath{\mathsf{perm}}}(\textbf{Q}')$, we gradually add a set of inferences ${\ensuremath{\textbf{D}_{2}}}$ generated from $p(i)$ to the training set $\textbf{S}_i$ with $1\leq i \leq n-1$. We test models trained with $\textbf{S}_i$ on a test set $\textbf{T}_i$ generated from the complement $\overline{\textbf{Q}_i}$ of $\textbf{Q}_i = \{x\mid\exists y\, (x,y) \in \textbf{Q}_i' \; \text{or} \; \exists y\, (y,x) \in \textbf{Q}_i' \}$, where $\textbf{Q}_i' = \{p(1), \ldots, p(i)\}$, summarized as $$\begin{aligned} \textbf{S}_0=\;& \textbf{D}_1^{\textbf{Q},\textbf{R}},\\ \textbf{S}_i=\;& \textbf{S}_{i-1} \cup {\ensuremath{\textbf{D}_{2}}}^{ \{{\ensuremath{q^{\uparrow}}}_i,{\ensuremath{q^{\downarrow}}}_i\},\textbf{R}},\\ \textbf{T}_i=\;&{\ensuremath{\textbf{D}_{2}}}^{ \overline{\textbf{Q}_i},\textbf{R}} \quad \text{with} \; 1 \leq i \leq n-1\end{aligned}$$ where $({\ensuremath{q^{\uparrow}}}_i,{\ensuremath{q^{\downarrow}}}_i)=p(i)$. We train and test models $|{\ensuremath{\mathsf{perm}}}(\textbf{Q}')|$ times, then take the average accuracy as the final evaluation result. #### III.
Productivity Productivity is a concept related to systematicity that refers to the capacity to grasp an indefinite number of natural language sentences or thoughts. The following describes how we test whether models generalize to unseen deeper depths in embedding monotonicity (see also the right side of Figure \[fig:goodpic\]). For example, we expose models to all primitive non-embedding/single-embedding patterns like (\[ex:17\]) and (\[ex:18\]) and then test them with deeper embedding patterns like (\[ex:19\]). \[ex:17\] ***Some** ran* ***Some** ran* \[ex:18\] ***Some** animals which chased **some** ran* ***Some** animals which chased **some** ran* \[ex:19\] ***Some** animals which chased **some** cats which followed **some** ran* ***Some** animals which chased **some** cats which followed **some** ran* To evaluate models on the set of inferences involving embedded clauses with depths exceeding those in the training set, we train models with $\bigcup_{d \in \{1,\ldots, i+1\}}{\ensuremath{\textbf{D}_{d}}}$, where we refer to ${\ensuremath{\textbf{D}_{d}}}^{\textbf{Q},\textbf{R}}$ as ${\ensuremath{\textbf{D}_{d}}}$ for short, and test the models on $\bigcup_{d \in \{i+2,\ldots, s\}}{\ensuremath{\textbf{D}_{d}}}$ with $1 \leq i \leq s-2$. #### IV. Localism According to the principle of compositionality, the meaning of a complex expression derives from the meanings of its constituents and how they are combined. One important concern is how local the composition operations should be [@Pagin2010-PAGCID]. We therefore test whether models trained with inferences involving embedded monotonicity locally perform inferences composed of smaller constituents. Specifically, we train models with examples like (\[ex:19\]) and then test the models with examples like (\[ex:17\]) and (\[ex:18\]). We train models with [$\textbf{D}_{d}$]{} and test the models on $\bigcup_{k \in \{1,\ldots, d\}}{\ensuremath{\textbf{D}_{k}}}$ with $3 \leq d \leq s$.
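The depth-based splits for productivity (Protocol III) and localism (Protocol IV) can be sketched as follows (a minimal sketch with our own function names; `D` stands for the depth-indexed family $\textbf{D}_d$):

```python
def productivity_split(D, i):
    """Train on depths 1..i+1, test on depths i+2..s (Protocol III).

    D: dict mapping depth d -> list of (premise, hypothesis, label) examples.
    """
    s = max(D)
    assert 1 <= i <= s - 2
    train = [ex for d in range(1, i + 2) for ex in D[d]]
    test = [ex for d in range(i + 2, s + 1) for ex in D[d]]
    return train, test

def localism_split(D, d):
    """Train on depth d only, test on all depths 1..d (Protocol IV)."""
    assert 3 <= d <= max(D)
    train = list(D[d])
    test = [ex for k in range(1, d + 1) for ex in D[k]]
    return train, test
```

The two protocols probe opposite directions of generalization over depth: productivity asks whether models extrapolate to deeper structures, localism whether they still handle the shallower constituents those structures are built from.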
Experimental Setting ==================== Data creation {#sec:data} ------------- To prepare the datasets shown in Table \[tab:examples\], we first generate premise sentences involving quantifiers from a set of context-free grammar (CFG) rules and lexical entries, shown in Table \[tab:lexicon\] in the Appendix. We select 10 words from among nouns, intransitive verbs, and transitive verbs as lexical entries. We use a set of four downward quantifiers $\textbf{Q}^{\downarrow}=${*no, at most three, less than three, few*} and a set of four upward quantifiers $\textbf{Q}^{\uparrow}=${*some, at least three, more than three, a few*}, which have the same monotonicity directions in the first and second arguments. We thus consider $n\!=\!|\textbf{Q}^{\uparrow}|\!=\!|\textbf{Q}^{\downarrow}|\!=\!4$. The ratio of each monotonicity direction (upward/downward) of generated sentences is set to $1:1$. We then generate hypothesis sentences by applying replacement functions to premise sentences according to the polarities of constituents. The set of replacement functions $\textbf{R}$ is composed of the seven types of lexical replacements and phrasal additions in Table \[tab:replace\]. We remove unnatural premise–hypothesis pairs in which the same words or phrases appear more than once. For embedding monotonicity, we consider inferences involving four types of replacement functions in the first argument of the quantifier in Table \[tab:replace\]: hyponyms, adjectives, prepositions, and relative clauses. We generate sentences up to depth $d=5$. In this paper, we consider three types of embedded clauses: peripheral-embedding clauses and two kinds of center-embedding clauses, shown in Table \[tab:lexicon\] in the Appendix. The number of generated sentences increases exponentially with the depth of embedded clauses. Thus, we limit the number of inference examples to 320,000, split into 300,000 examples for the training set and 20,000 examples for the test set.
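The replacement-by-polarity step just described hinges on one fact: each downward-monotone quantifier on the path from the sentence root to the replaced phrase flips the polarity of that position. A minimal sketch (the outermost-first list encoding of nesting and all names are our own simplification, not the paper's implementation):

```python
# Downward-monotone quantifiers (first argument), as listed above;
# the remaining four quantifiers are upward-monotone.
DOWN = {"no", "at most three", "less than three", "few"}

def argument_polarity(outer_quantifiers):
    """Polarity of the position governed by the innermost quantifier.

    `outer_quantifiers` lists the quantifiers on the path from the root
    of the sentence to the replaced phrase, outermost first.  Each
    downward-monotone quantifier on the path flips the polarity.
    """
    flips = sum(1 for q in outer_quantifiers if q in DOWN)
    return "down" if flips % 2 == 1 else "up"

def gold_label(polarity, replacement):
    """Entailment iff the replacement respects the monotonicity direction:
    a more general phrase in an upward position, or a more specific one
    in a downward position."""
    entailing = {("up", "more_general"), ("down", "more_specific")}
    return "entailment" if (polarity, replacement) in entailing else "non-entailment"

# "Some dogs that chased no cats ran": 'cats' sits under some > no.
assert argument_polarity(["some", "no"]) == "down"
# Replacing 'cats' with the more specific 'small cats' preserves truth:
assert gold_label("down", "more_specific") == "entailment"
```

This polarity composition is also what the automatic gold-label assignment relies on, with the theorem prover used only as an independent check.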
We guarantee that all combinations of quantifiers are included in the set of inference examples for each depth. Gold labels for generated premise–hypothesis pairs are automatically determined according to the polarity of the argument position (upward/downward) and the type of predicate replacement (with more general/specific phrases). The ratio of each gold label (entailment/non-entailment) in the training and test sets is set to $1:1$. To double-check the gold labels, we translate each premise–hypothesis pair into a logical formula (see the Appendix for more details). We prove the entailment relation using the theorem prover Vampire[^2], checking whether a proof is found in time for each entailment pair. Models ------ We consider three DNN-based NLI models. The first architecture employs long short-term memory (LSTM) networks [@Hochreiter:1997:LSM:1246443.1246450]. We set the number of layers to three, with no attention. Each premise and hypothesis is processed as a sequence of words using a recurrent neural network with LSTM cells, and the final hidden state of each serves as its representation. The second architecture employs multiplicative tree-structured LSTM (TreeLSTM) networks [@tran-cheng-2018-multiplicative]. Each premise and hypothesis is processed as a tree structure by bottom-up combinations of constituent nodes using the same shared compositional function, input word information, and between-word relational information. In LSTM and TreeLSTM, the dimension of the hidden units is 200, and we initialize the word embeddings with 300-dimensional GloVe vectors [@pennington-etal-2014-glove]. Both models are optimized with Adam [@Adam], and no dropout is applied. The third architecture is a Bidirectional Encoder Representations from Transformers (BERT) model [@BERT2018new]. We use the base-uncased model pre-trained on Wikipedia and BookCorpus from the pytorch-pretrained-bert library[^3], fine-tuned for the NLI task using our dataset.
In fine-tuning BERT, no dropout is applied. We train all models over 25 epochs or until convergence, and select the best-performing model based on its performance on the validation set. We perform five runs per model and report the average and standard deviation of their scores. Experiments and Discussion ========================== #### I. Systematicity of predicate replacements {#sec:lex} Figure \[fig:lex1\_4\] shows the performance on unseen combinations of quantifiers and predicate replacements. In the minimal training set $\textbf{S}_1$, the accuracy of LSTM and TreeLSTM was almost the same as chance, but that of BERT was around 75%, suggesting that only BERT generalized to unseen combinations of quantifiers and predicate replacements. When we train BERT with the training set $\textbf{S}_2$, which contains inference examples generated from combinations of one pair of upward/downward quantifiers and all predicate replacements, the accuracy was 100%, indicating that BERT generalizes from a single pair of quantifiers to the remaining ones. The accuracy of LSTM and TreeLSTM increased with the training set size, but did not reach 100%. This indicates that LSTM and TreeLSTM also generalize to inferences involving similar quantifiers to some extent, but their generalization ability is imperfect. When testing models with inferences where adverbs or prepositional phrases are added to the beginning of the sentence, the accuracy of all models significantly decreased. This decrease becomes larger as the syntactic structures of the sentences in the test set become increasingly different from those in the training set. These results indicate that the models tend to estimate the entailment label from the beginning of a premise–hypothesis sentence pair, and that inferential systematicity to draw inferences involving quantifiers and predicate replacements is not completely generalized at the level of arbitrary constituents. #### II.
Systematicity of embedding quantifiers {#sec:embq} Figure \[fig:emb3\_6\] shows the performance of all models on unseen combinations of embedding quantifiers. Even when adding the training set of inferences involving one embedded clause and two quantifiers step-by-step, no model showed improved performance. The accuracy of BERT slightly exceeded chance, but the accuracy of LSTM and TreeLSTM was nearly the same as or lower than chance. These results suggest that all the models fail to generalize to unseen combinations of embedding quantifiers even when they involve similar upward/downward quantifiers. #### III. Productivity {#sec:depth} Table \[tab:emb1\_2\] shows the performance on unseen depths of embedded clauses. [The accuracy on [$\textbf{D}_{1}$]{} and [$\textbf{D}_{2}$]{} was nearly 100%, indicating that all models almost completely generalize to inferences containing previously seen depths.]{} When ${\ensuremath{\textbf{D}_{1}}} {+}{\ensuremath{\textbf{D}_{2}}}$ were used as the training set, the accuracy of all models on [$\textbf{D}_{3}$]{} exceeded chance. Similarly, when ${\ensuremath{\textbf{D}_{1}}} {+}{\ensuremath{\textbf{D}_{2}}} {+}{\ensuremath{\textbf{D}_{3}}}$ were used as the training set, the accuracy of all models on [$\textbf{D}_{4}$]{} exceeded chance. However, standard deviations of BERT and LSTM were around 10, suggesting that these models did not consistently generalize to inferences containing embedded clauses one level deeper than the training set. While the distribution of monotonicity directions (upward/downward) in the training and test sets was uniform, the accuracy of LSTM and BERT tended to be lower for downward inferences than for upward inferences. This also indicates that these models fail to properly compute monotonicity directions of constituents from syntactic structures. The standard deviation of TreeLSTM was smaller, indicating that it generalized more consistently to inference patterns containing embedded clauses one level deeper than the training set.
However, the performance of all models trained with ${\ensuremath{\textbf{D}_{1}}} {+}{\ensuremath{\textbf{D}_{2}}}$ on [$\textbf{D}_{4}$]{} and [$\textbf{D}_{5}$]{} significantly decreased. Also, performance decreased for all models trained with ${\ensuremath{\textbf{D}_{1}}} {+}{\ensuremath{\textbf{D}_{2}}} {+}{\ensuremath{\textbf{D}_{3}}}$ on [$\textbf{D}_{5}$]{}. Specifically, there was significantly decreased performance of all models, including TreeLSTM, on inferences containing embedded clauses two or more levels deeper than those in the training set. These results indicate that all models fail to develop productivity on inferences involving embedding monotonicity. #### IV. Localism {#sec:decomp} Table \[tab:emb7\_8\] shows the performance of all models on localism of embedding monotonicity. When the models were trained with [$\textbf{D}_{3}$]{}, [$\textbf{D}_{4}$]{} or [$\textbf{D}_{5}$]{}, all performed at around chance on the test set of non-embedding inferences [$\textbf{D}_{1}$]{} and the test set of inferences involving one embedded clause [$\textbf{D}_{2}$]{}. These results indicate that even if models are trained with a set of inferences containing complex syntactic structures, the models fail to locally interpret their constituents. #### Prior studies [@yanaka2019; @Richardson2019] have shown that given BERT initially trained with MultiNLI, further training with synthesized instances of logical inference improves performance on the same types of logical inference while maintaining the initial performance on MultiNLI. To investigate whether the results of our study are transferable to current work on MultiNLI, we trained models with our synthesized dataset mixed with MultiNLI, and checked (i) whether our synthesized dataset degrades the original performance of models on MultiNLI[^4] and (ii) whether MultiNLI degrades the ability to generalize to unseen depths of embedded clauses. 
Table \[tab:mnliemb1\_2\] shows that training BERT on our synthetic data ${\ensuremath{\textbf{D}_{1}}}{+}{\ensuremath{\textbf{D}_{2}}}$ and MultiNLI increases the accuracy on our test sets ${\ensuremath{\textbf{D}_{1}}}$ (46.9 to 100.0), ${\ensuremath{\textbf{D}_{2}}}$ (46.2 to 100.0), and ${\ensuremath{\textbf{D}_{3}}}$ (46.8 to 67.8) while preserving accuracy on MultiNLI (84.6 to 84.4). This indicates that training BERT with our synthetic data does not degrade performance on commonly used corpora like MultiNLI while improving the performance on monotonicity, which suggests that our data-synthesis approach can be combined with naturalistic datasets. For TreeLSTM and LSTM, however, adding our synthetic dataset decreases accuracy on MultiNLI. One possible reason for this is that a pre-training based model like BERT can mitigate catastrophic forgetting across various types of datasets. Regarding the ability to generalize to unseen depths of embedded clauses, the accuracy of all models on our synthetic test set containing embedded clauses one level deeper than the training set exceeds chance, but the improvement becomes smaller with the addition of MultiNLI. In particular, with the addition of MultiNLI, the models tend to make wrong predictions in cases where a hypothesis contains a phrase not occurring in the premise even though the premise entails the hypothesis. Such inference patterns are contrary to the heuristics in MultiNLI [@mccoy2019]. This indicates that there may be some trade-offs in terms of performance between inference patterns in the training set and those in the test set. Related Work ============ The question of whether neural networks are capable of processing compositionality has been widely discussed [@Fodor1988-FODCAC; @962dc7dfb35547148019f194381d2cc6]. Recent empirical studies illustrate the importance and difficulty of evaluating the capability of neural models.
Generation tasks using artificial datasets have been proposed for testing whether models compositionally interpret training data from the underlying grammar of the data [@Lake2017GeneralizationWS; @hupkes2018; @saxton2018analysing; @loula-etal-2018-rearranging; @hupkes2019; @Bernardy2018CanRN]. However, these conclusions are controversial, and it remains unclear whether the failure of models on these tasks stems from their inability to deal with compositionality. Previous studies using logical inference tasks have also reported both positive and negative results. Assessment results on propositional logic [@Evans2018CanNN], first-order logic [@mul2019], and natural logic [@Bowman2015] show that neural networks can generalize to unseen words and lengths. In contrast, @Geiger2019 obtained negative results by testing models under fair conditions of natural logic. Our study suggests that these conflicting results come from an absence of perspective on combinations of semantic properties. Regarding assessment of the behavior of modern language models, @linzen-etal-2016-assessing, @tran-etal-2018-importance, and @goldberg2019 investigated their syntactic capabilities by testing such models on subject–verb agreement tasks. Many studies of NLI tasks [@liu-etal-2019-inoculation; @glockner-shwartz-goldberg:2018:Short; @poliak-EtAl:2018:S18-2; @DBLP:conf/lrec/Tsuchiya18; @mccoy2019; @rozen-etal-2019-diversify; @ross-pavlick-2019-well] have provided evaluation methodologies and found that current NLI models often fail on particular inference types, or that they learn undesired heuristics from the training set. Monotonicity covers various systematic inferential patterns, and thus is an adequate semantic phenomenon for assessing inferential systematicity in natural language. 
Another benefit of focusing on monotonicity is that it provides hard problem settings against heuristics [@mccoy2019], which fail to perform downward-entailing inferences where the hypothesis is longer than the premise. Conclusion {#sec:conc} ========== A series of experiments showed that the capability of three models to capture systematicity of predicate replacements was limited to cases where the positions of the constituents were similar between the training and test sets. For embedding monotonicity, no models consistently drew inferences involving embedded clauses whose depths were two levels deeper than those in the training set. This suggests that models fail to capture inferential systematicity of monotonicity and its productivity. We hope that our work will be useful in future research for realizing more advanced models that are capable of appropriately performing arbitrary inferences. Acknowledgement {#acknowledgement .unnumbered} =============== We thank the three anonymous reviewers for their helpful comments and suggestions. We are also grateful to Benjamin Heinzerling and Sosuke Kobayashi for helpful discussions. This work was partially supported by JSPS KAKENHI Grant Numbers JP20K19868 and JP18H03284, Japan. 
[lll]{}\ \ $S$ & $\rightarrow$& $NP \ \, IV_1$\ $\mathit{NP}$ & $\rightarrow$ & $Q \ \, N$  $\mid$  $Q \ \, N \ \, \overline{S}$\ $\overline{S}$ & $\rightarrow$ & $\mathit{WhNP}\ \, TV\ \, NP \mid \mathit{WhNP}\ \, \mathit{NP}\ \, TV \mid \mathit{NP}\ \, TV$\ \ $Q$&$\rightarrow$&{*no, at most three, less than three, few, some, at least three, more than three, a few*}\ $N$&$\rightarrow$&{*dog, rabbit, lion, cat, bear, tiger, elephant, fox, monkey, wolf*}\ $IV_1$&$\rightarrow$&{*ran, walked, came, waltzed, swam, rushed, danced, dawdled, escaped, left*}\ $IV_2$&$\rightarrow$&{*laughed, groaned, roared, screamed, cried*}\ $TV$&$\rightarrow$&{*kissed, kicked, hit, cleaned, touched, loved, accepted, hurt, licked, followed*}\ $WhNP$&$\rightarrow$&{*that, which*}\ $N_{hypn}$&$\rightarrow$&{*animal, creature, mammal, beast*}\ $Adj$&$\rightarrow$&{*small, large, crazy, polite, wild*}\ $PP$&$\rightarrow$&{*in the area, on the ground, at the park, near the shore, around the island*}\ $RelC$&$\rightarrow$&{*which ate dinner, that liked flowers, which hated the sun, that stayed up late*}\ $Adv$&$\rightarrow$&{*slowly, quickly, seriously, suddenly, lazily*}\ \ $N$ & to & $N_{hypn} \mid Adj\ N \mid N\ PP \mid N\ RelC$\ $IV_1$ & to & $IV_1\ Adv \mid IV_1\ PP \mid IV_1 \ \text{or} \ IV_2 \mid IV_1 \ \text{and} \ IV_2$\ Appendix ======== Lexical entries and replacement examples ---------------------------------------- Table \[tab:lexicon\] shows a context-free grammar and a set of predicate replacements used to generate inference examples. Regarding the context-free grammar, we consider premise–hypothesis pairs containing the quantifier $Q$ in the subject position, and the predicate replacement is performed in both the first and second arguments of the quantifier. 
When generating premise–hypothesis pairs involving embedding monotonicity, we consider inferences involving four types of predicate replacements (hyponyms $N_{hypn}$, adjectives $Adj$, prepositions $PP$, and relative clauses $RelC$) in the first argument of the quantifier. To generate natural sentences consistently, we use the past tense for verbs; for lexical entries and predicate replacements, we select those that do not violate selectional restriction. To check the gold labels for the generated premise–hypothesis pairs, we translate each sentence to a first-order logic (FOL) formula and test if the entailment relation holds by theorem proving. The FOL formulas are compositionally derived by combining lambda terms assigned to each lexical item in accordance with meaning composition rules specified in the CFG rules in the standard way [@BlackburnBos05]. Since our purpose is to check the polarity of monotonicity marking, vague quantifiers such as *few* are represented according to their polarity. For example, we map the quantifier *few* onto the lambda-term $\lambda P \lambda Q \lnot \exists x (\textbf{few}(x) \land P(x) \wedge Q(x))$. Results on embedding monotonicity --------------------------------- Table \[tab:emball\] shows all results on embedding monotonicity. 
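The theorem-proving double check described above can be illustrated by assembling a problem in TPTP syntax, the input format consumed by provers such as Vampire. The helper function and predicate names below are assumptions for illustration, not the actual translation pipeline:

```python
def tptp_problem(premise, hypothesis, axioms=()):
    """Assemble a TPTP problem; a prover such as Vampire reports a
    refutation of the negated conjecture iff the entailment holds."""
    lines = [f"fof(ax{i}, axiom, {a})." for i, a in enumerate(axioms)]
    lines.append(f"fof(premise, axiom, {premise}).")
    lines.append(f"fof(hypothesis, conjecture, {hypothesis}).")
    return "\n".join(lines)

# "No animals ran" entails "No dogs ran", given that every dog is an animal.
problem = tptp_problem(
    premise="~ ? [X] : (animal(X) & ran(X))",
    hypothesis="~ ? [X] : (dog(X) & ran(X))",
    axioms=["! [X] : (dog(X) => animal(X))"],
)
print(problem)
```

The resulting string can then be written to a file and passed to the prover, which checks whether a proof is found within the time limit.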
| **Train** | **Test** | **BERT** | **LSTM** | **TreeLSTM** |
|---|---|---|---|---|
| [$\textbf{D}_{1}$]{} | [$\textbf{D}_{1}$]{} | 100.0$\pm$0.0 | 91.1$\pm$5.4 | 100.0$\pm$0.0 |
| | [$\textbf{D}_{2}$]{} | 44.1$\pm$6.4 | 34.1$\pm$3.8 | 48.1$\pm$1.2 |
| | [$\textbf{D}_{3}$]{} | 47.6$\pm$3.2 | 45.1$\pm$5.1 | 48.5$\pm$1.8 |
| | [$\textbf{D}_{4}$]{} | 49.6$\pm$1.0 | 44.4$\pm$6.5 | 50.1$\pm$2.1 |
| | [$\textbf{D}_{5}$]{} | 49.9$\pm$1.1 | 44.1$\pm$5.3 | 50.3$\pm$1.1 |
| ${\ensuremath{\textbf{D}_{1}}}\cup{\ensuremath{\textbf{D}_{2}}}$ | [$\textbf{D}_{1}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.1 |
| | [$\textbf{D}_{2}$]{} | 100.0$\pm$0.0 | 99.8$\pm$0.2 | 99.5$\pm$0.1 |
| | [$\textbf{D}_{3}$]{} | 75.2$\pm$10.0 | 75.4$\pm$10.8 | 86.4$\pm$4.1 |
| | [$\textbf{D}_{4}$]{} | 55.0$\pm$3.7 | 57.7$\pm$8.7 | 58.6$\pm$7.8 |
| | [$\textbf{D}_{5}$]{} | 49.9$\pm$4.4 | 45.8$\pm$4.0 | 48.4$\pm$3.7 |
| ${\ensuremath{\textbf{D}_{1}}}\cup{\ensuremath{\textbf{D}_{2}}}\cup{\ensuremath{\textbf{D}_{3}}}$ | [$\textbf{D}_{1}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.0 |
| | [$\textbf{D}_{2}$]{} | 100.0$\pm$0.0 | 95.1$\pm$7.8 | 99.6$\pm$0.0 |
| | [$\textbf{D}_{3}$]{} | 100.0$\pm$0.0 | 85.2$\pm$8.9 | 97.7$\pm$1.1 |
| | [$\textbf{D}_{4}$]{} | 77.9$\pm$10.8 | 59.7$\pm$10.8 | 68$\pm$5.6 |
| | [$\textbf{D}_{5}$]{} | 53.5$\pm$19.6 | 55.1$\pm$8.2 | 49.6$\pm$4.3 |
| ${\ensuremath{\textbf{D}_{1}}}\cup{\ensuremath{\textbf{D}_{2}}}\cup{\ensuremath{\textbf{D}_{3}}}\cup{\ensuremath{\textbf{D}_{4}}}$ | [$\textbf{D}_{1}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.1 |
| | [$\textbf{D}_{2}$]{} | 100.0$\pm$0.0 | 99.4$\pm$1.1 | 99.7$\pm$0.2 |
| | [$\textbf{D}_{3}$]{} | 100.0$\pm$0.0 | 91.5$\pm$4.0 | 98.9$\pm$1.1 |
| | [$\textbf{D}_{4}$]{} | 100.0$\pm$0.0 | 74.1$\pm$4.2 | 94.0$\pm$2.3 |
| | [$\textbf{D}_{5}$]{} | 89.1$\pm$5.4 | 64.2$\pm$4.7 | 69.5$\pm$4.1 |
| ${\ensuremath{\textbf{D}_{1}}}\cup{\ensuremath{\textbf{D}_{2}}}\cup{\ensuremath{\textbf{D}_{3}}}\cup{\ensuremath{\textbf{D}_{4}}}\cup{\ensuremath{\textbf{D}_{5}}}$ | [$\textbf{D}_{1}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.1 |
| | [$\textbf{D}_{2}$]{} | 100.0$\pm$0.0 | 95.8$\pm$7.3 | 99.8$\pm$0.1 |
| | [$\textbf{D}_{3}$]{} | 100.0$\pm$0.0 | 90.5$\pm$13.1 | 99.1$\pm$0.2 |
| | [$\textbf{D}_{4}$]{} | 100.0$\pm$0.0 | 90.2$\pm$6.0 | 94.8$\pm$0.1 |
| | [$\textbf{D}_{5}$]{} | 100.0$\pm$0.0 | 93.6$\pm$3.1 | 83.2$\pm$12.1 |
| [$\textbf{D}_{2}$]{} | [$\textbf{D}_{1}$]{} | 36.4$\pm$14.4 | 25.3$\pm$9.3 | 44.9$\pm$4.1 |
| | [$\textbf{D}_{2}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.2 |
| | [$\textbf{D}_{3}$]{} | 47.6$\pm$10.3 | 43.9$\pm$17.5 | 51.8$\pm$1.1 |
| | [$\textbf{D}_{4}$]{} | 61.7$\pm$7.8 | 57.9$\pm$14.7 | 51.7$\pm$0.6 |
| | [$\textbf{D}_{5}$]{} | 42.6$\pm$5.1 | 47.2$\pm$2.9 | 50.9$\pm$0.4 |
| [$\textbf{D}_{3}$]{} | [$\textbf{D}_{1}$]{} | 49.6$\pm$0.5 | 48.8$\pm$13.2 | 49.8$\pm$4.1 |
| | [$\textbf{D}_{2}$]{} | 49.8$\pm$0.6 | 47.3$\pm$12.1 | 51.8$\pm$1.1 |
| | [$\textbf{D}_{3}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.2 |
| | [$\textbf{D}_{4}$]{} | 49.7$\pm$1.0 | 42.0$\pm$0.6 | 51.3$\pm$0.7 |
| | [$\textbf{D}_{5}$]{} | 50.0$\pm$0.4 | 38.4$\pm$9.6 | 49.8$\pm$0.3 |
| [$\textbf{D}_{4}$]{} | [$\textbf{D}_{1}$]{} | 50.3$\pm$1.0 | 46.8$\pm$6.5 | 49.0$\pm$0.4 |
| | [$\textbf{D}_{2}$]{} | 49.6$\pm$0.8 | 45.4$\pm$1.8 | 49.7$\pm$0.3 |
| | [$\textbf{D}_{3}$]{} | 50.2$\pm$0.7 | 45.1$\pm$0.6 | 50.5$\pm$0.7 |
| | [$\textbf{D}_{4}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.1 |
| | [$\textbf{D}_{5}$]{} | 49.7$\pm$0.5 | 45.1$\pm$0.9 | 50.5$\pm$1.1 |
| [$\textbf{D}_{5}$]{} | [$\textbf{D}_{1}$]{} | 49.9$\pm$0.7 | 43.7$\pm$4.4 | 49.1$\pm$1.1 |
| | [$\textbf{D}_{2}$]{} | 49.1$\pm$0.3 | 43.4$\pm$3.9 | 51.4$\pm$0.6 |
| | [$\textbf{D}_{3}$]{} | 50.6$\pm$0.2 | 44.3$\pm$2.7 | 50.5$\pm$0.3 |
| | [$\textbf{D}_{4}$]{} | 50.9$\pm$0.8 | 44.4$\pm$3.4 | 50.3$\pm$0.4 |
| | [$\textbf{D}_{5}$]{} | 100.0$\pm$0.0 | 100.0$\pm$0.0 | 100.0$\pm$0.1 |

[^1]: The evaluation code will be publicly available at https://github.com/verypluming/systematicity.

[^2]: https://github.com/vprover/vampire

[^3]: https://github.com/huggingface/pytorch-pretrained-bert

[^4]: Following the previous work [@Richardson2019], we used the MultiNLI mismatched development set for MNLI-test.
--- abstract: 'Computer simulations have become a popular tool for assessing complex skills such as problem-solving skills. Log files of computer-based items record the entire human-computer interaction process for each respondent. The response processes are very diverse, noisy, and of nonstandard formats. Few generic methods have been developed for exploiting the information contained in process data. In this article, we propose a method to extract latent variables from process data. The method utilizes a sequence-to-sequence autoencoder to compress response processes into standard numerical vectors. It does not require prior knowledge of the specific items and human-computer interaction patterns. The proposed method is applied to both simulated and real process data to demonstrate that the resulting latent variables extract useful information from the response processes.' author: - 'Xueying Tang, Zhi Wang, Jingchen Liu, and Zhiliang Ying' bibliography: - 'seq2seq.bib' title: An Exploratory Analysis of the Latent Structure of Process Data via Action Sequence Autoencoders --- Introduction {#sec:intro} ============ Problem solving is one of the key skills for people in today's rapidly changing world [@OECD2017problem]. Computer-based items have recently become popular for assessing problem-solving skills. In such items, problem-solving scenarios can be conveniently simulated through human-computer interfaces and the problem-solving processes can be easily recorded for analysis.
![Main page of the sample item.[]{data-label="fig:item_main"}](figures/example2_1.png){width="\textwidth"} ![Webpage after clicking “Find Jobs” in Figure \[fig:item\_main\].[]{data-label="fig:item_page1"}](figures/example2_2.png){width="\textwidth"} ![Detailed information page of the first job listing in Figure \[fig:item\_page1\].[]{data-label="fig:item_page2"}](figures/example2_3.png){width="\textwidth"} In 2012, several computer-based items were designed and deployed in the Programme for International Assessment of Adult Competencies (PIAAC) to measure adults’ competency in problem solving in technology-rich environments (PSTRE). Screenshots[^1] of the interface of a released PSTRE item are shown in Figures \[fig:item\_main\]–\[fig:item\_page2\]. The opening page of the item, displayed in Figure \[fig:item\_main\], consists of two panels. The left panel contains item instructions and navigation buttons, while the right panel is the main medium for interaction. In this example, the right panel is a web browser showing a job-search website. The task is to find all job listings that meet the criteria described in the instructions. The dropdown menus and radio buttons can be used to narrow down the search range. Once the “Find Jobs” button is clicked, jobs that meet the selected criteria will be listed on the web page as shown in Figure \[fig:item\_page1\]. Participants can read the detailed information about a listing by clicking “More about this position”. Figure \[fig:item\_page2\] is the detailed information page of the first listing in Figure \[fig:item\_page1\]. If a listing is considered to meet all the requirements, it can be saved by clicking the “SAVE this listing” button. When a participant works on a problem, the entire response process is recorded in the log files in addition to the final response outcome (correct/incorrect).
For example, if a participant selected “Photography” and “7 days” in the two dropdown menus, clicked the “Part-time” radio button, then clicked “Find Jobs”, read the detailed information of the first listing and saved it, then a sequence of actions, “Start, Dropdown1\_Photography, Dropdown2\_7, Part-time, Find\_Jobs, Click\_W1, Save, Next”, is recorded in the log files[^2]. The entire action sequence constitutes a single observation of process data. It tracks all the major actions the participants took when they interacted with the browsing environment. The process responses contain substantially more comprehensive information about respondents than the traditional item responses, which are often dichotomous (correct/incorrect) or polytomous (partial credit). On the other hand, to what extent this information is useful for educational and cognitive assessments and how to systematically make full use of such information are largely unknown. One of the difficulties in analyzing process data is to cope with its nonstandard format. Each process is a sequence of categorical variables (mouse clicks and keystrokes) and its length varies across observations. As a result, existing models for traditional item responses such as item response theory (IRT) models are not directly applicable to process data. Although some models have been extended to incorporate item response time [@entink2009multivariate; @wang2018using; @zhan2018cognitive], similar extensions for response processes are difficult. Another challenge for analyzing process data comes from the wide diversity of human behaviors. Signals from behavioral patterns in response processes are often attenuated by a large number of noisy actions. The 1-step or 2-step lagged correlations of the response processes are often close to zero, indicating that models only capturing short-term dependence are often inadequate.
The rich variety of computer-based items also adds to the difficulty in developing general methods for process data. The computer interfaces involved in the PSTRE items in PIAAC 2012 include web browsers, mail clients, and spreadsheets. The required tasks in these items also vary greatly. In some recent developments of process data analysis, such as @greiff2016understanding and @kroehne2018conceptualize, process data are first summarized into several variables according to domain knowledge, and their relationships with other variables of interest are then investigated by conventional statistical methods. The design of the summary variables is usually item-specific and requires a thorough understanding of respondents’ cognitive processes during human-computer interaction. Thus these approaches are too “expensive” to apply to even a moderate number of diverse items such as the PSTRE items in PIAAC 2012. @he2016analyzing adopted the concept of n-grams from natural language processing to explore the association between action sequence patterns and traditional item responses. The sequence patterns extracted from their procedure depend on the coding of log files and are often of limited capacity since only consecutive actions are considered. In this paper, we propose a generic method to extract features from process data. The extracted features play a similar role to the latent variables in item response theory [@lord1980applications; @lord1968statistical]. The proposed method does not rely on prior knowledge of the items and coding of the log files. Therefore, it is applicable to a wide range of process data with little item-specific processing effort. In the case study, we applied the proposed method to 14 PSTRE items in PIAAC 2012. These items vary widely in many aspects including the content of the problem-solving task and their overall difficulty levels. The main component of the proposed feature extraction method is an autoencoder [@Goodfellow-et-al-2016 Chapter 14].
It is a class of artificial neural networks that try to reproduce the input in their output. Autoencoders are often used for dimension reduction [@hinton2006reducing] and data denoising [@vincent2008extracting] in pattern recognition, computer vision, and many other machine learning applications [@deng2010binary; @lu2013speech; @li2015hierarchical; @yousefi2017autoencoder]. They first map the input to a low-dimensional vector, from which they then try to reconstruct the input. Once a good autoencoder is found for a dataset, the low-dimensional vector contains comprehensive information about the original data and thus can be used as features summarizing the response processes. With the proposed method, we extract features from each of the PSTRE items in PIAAC 2012 and explore the extracted feature space of process data. We show that the extracted features from response processes contain more information than the traditional item responses. We find that the prediction of many variables, including literacy and numeracy scores and a variety of background variables, can be considerably improved once the process features are incorporated. Neural networks have recently been used for analyzing educational data. @NIPS2015_5654 and @wang2017deep applied recurrent neural networks to knowledge tracing and showed that their deep knowledge tracing models can predict students’ performance on the next exercise from their exercise trajectories more accurately than other traditional methods. @bosch2017unsupervised discussed several neural network architectures that can be used for analyzing interaction log data. They extracted features for detecting student boredom through modeling the relations of student behaviors in two time intervals. The log file data used there were aggregated into a more regular form. @ding2019effective also studied the problem of extracting features from students’ learning processes using autoencoders.
The learning processes considered there have a fixed number of steps and the data in each step were preprocessed into raw features of fixed dimension. The rest of the paper is organized as follows. In Section \[sec:autoencoder\], we introduce the action sequence autoencoder and the feature extraction procedure for process data. The proposed procedure is applied to simulated processes in Section \[sec:simulation\] to demonstrate how extracted features reveal the latent structure in response processes. Section \[sec:example\] presents a case study of process data from PSTRE items of PIAAC to show that response processes contain more information than traditional responses. Some concluding remarks are made in Section \[sec:discussion\]. Feature Extraction by Action Sequence Autoencoder {#sec:autoencoder} ================================================= We adopt the following setting throughout this paper. Let $\mathcal{A}=\{a_1, \ldots, a_N\}$ denote the set of possible actions for an item, where $N$ is the total number of distinct actions and each element in $\mathcal A$ is a unique action. A response process can be represented as a sequence of actions, $\bm s = (s_1, \ldots, s_{T})$, where $s_t \in \mathcal{A}$ for $t = 1, \ldots, T$ and $T$ denotes the length of the process, i.e., the total number of actions that a respondent took to solve the problem. An action sequence $\bm s$ can be equivalently represented as a $T \times N$ binary matrix $\mathbf{S} = (S_{tj})$ whose $t$-th row gives the dummy variable representation of the action at time step $t$. More specifically, $S_{tj}$ being one indicates the $t$-th action of the sequence is action $a_j$. Each row contains exactly one element equal to one; all other elements are zero. In the rest of this article, $\mathbf{S}$ is used interchangeably with $\bm{s}$ for referring to an action sequence. The length of a response process is likely to vary widely across respondents.
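As a concrete illustration, the dummy-variable matrix $\mathbf{S}$ defined above can be computed directly from an action sequence. The action set and the sequence below are toy examples modeled on the job-search item, not actual PIAAC data:

```python
import numpy as np

def to_matrix(seq, actions):
    """Dummy-coded T x N representation S of an action sequence:
    S[t, j] = 1 iff the t-th action of the sequence is actions[j]."""
    index = {a: j for j, a in enumerate(actions)}
    S = np.zeros((len(seq), len(actions)), dtype=int)
    for t, action in enumerate(seq):
        S[t, index[action]] = 1
    return S

# Toy action set and one short response process from the job-search item.
actions = ["Start", "Dropdown1_Photography", "Find_Jobs",
           "Click_W1", "Save", "Next"]
s = ["Start", "Dropdown1_Photography", "Find_Jobs",
     "Click_W1", "Save", "Next"]
S = to_matrix(s, actions)
print(S.shape)        # (6, 6) -- a T x N matrix
print(S.sum(axis=1))  # each row contains exactly one 1
```

Sequences of different lengths simply yield matrices with different numbers of rows, which is exactly the varying-dimension problem the autoencoder addresses.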
As a result, the matrix representation of response processes from different respondents will have different numbers of rows. For a set of $n$ processes, $\bm s_1, \ldots, \bm s_n$ (equivalently, $\mathbf{S}_1, \ldots, \mathbf{S}_n$), the length of $\bm s_i$ (the number of rows in $\mathbf{S}_i$) is denoted by $T_i$, for $i = 1, \ldots, n$. The main motivation for developing a feature extraction method for process data is to compress the nonstandard data of varying dimensions into vectors of a common, fixed dimension to facilitate subsequent standard statistical analysis. Autoencoder ----------- The main component of our feature extraction method is an autoencoder [@Goodfellow-et-al-2016 Chapter 14]. It is a type of artificial neural network whose output tries to reproduce the input. A trivial solution to this task is to link the input and the output through an identity function, but it provides little insight about the data. Autoencoders employ special structures in the mapping from the input to the output so that nontrivial reconstructions are formed to unveil the underlying low-dimensional structure. As illustrated in Figure \[fig:autoencoder\], an autoencoder consists of two components, an encoder $\phi$ and a decoder $\psi$. The encoder $\phi$ transforms a complex and high-dimensional input $\bm s$ into a low-dimensional vector $\bm \theta$. Then the decoder $\psi$ reconstructs the input from $\bm \theta$. Since the low-dimensional vector is in a standard and simpler format and contains adequate information to restore the original data, autoencoders are often used for dimension reduction and feature extraction.
![Structure of an autoencoder.[]{data-label="fig:autoencoder"}](figures/autoencoder_picture.pdf){width="10cm"} The encoder and the decoder are often specified as a family of functions, $\phi_{\bm \eta}$ and $\psi_{\bm \xi}$, respectively, where $\bm \eta$ and $\bm \xi$ are parameters to be estimated by minimizing the discrepancy between the inputs and the outputs of the autoencoder. To be more specific, letting $\hat{\bm s}_i = \psi_{\bm \xi}(\phi_{\bm \eta}(\bm s_i))$ denote the output for input $\bm s_i$, $i = 1, \ldots, n$, the parameters $\bm \eta$ and $\bm \xi$ are estimated by minimizing $$\label{eq:obj} F(\bm \eta, \bm \xi) = \sum_{i=1}^n L(\bm s_i, \hat{\bm s}_i),$$ where $L$ is a loss function measuring the difference between the reconstructed data $\hat{\bm s}_i$ and the original data $\bm s_i$. Once estimates $\hat{\bm \eta}$ and $\hat{\bm \xi}$ are obtained, the latent representation or the features of an action sequence $\bm s$ can be computed by $\bm \theta = \phi_{\hat {\bm \eta}}(\bm s)$. To draw an analogy with IRT and other latent variable models, one may consider $\bm \theta$, the output of the encoder $\phi$, to be an estimator of the latent variables based on the responses and the decoder $\psi$ to be the item response function that specifies the response distribution corresponding to a latent vector. For the IRT model, the estimator and the item response function are often coherent in the sense that the estimator is determined by the item response function. For the autoencoder, both $\phi$ and $\psi$ are parameterized and estimated based on the data. There is no coherence guarantee between them. This is one of the theoretical drawbacks of autoencoders. Nonetheless, we hope that the parametric families for $\phi$ and $\psi$ are flexible enough such that they can be consistently estimated with large samples and thus approximate coherence is automatically achieved.
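As a minimal sketch of this estimation problem, the following fits a linear autoencoder to fixed-length toy vectors by gradient descent on the objective $F(\bm \eta, \bm \xi)$. The action sequence autoencoder replaces these linear maps with recurrent networks, so this illustrates the principle only; the data and dimensions are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n inputs of dimension p lying near a k-dimensional subspace.
n, p, k = 200, 10, 2
Z = rng.normal(size=(n, k))
X = Z @ rng.normal(size=(k, p)) + 0.01 * rng.normal(size=(n, p))

# Linear encoder phi_eta(x) = x A and linear decoder psi_xi(theta) = theta B.
A = 0.1 * rng.normal(size=(p, k))  # encoder parameters (eta)
B = 0.1 * rng.normal(size=(k, p))  # decoder parameters (xi)

def loss(A, B):
    """Average squared reconstruction error, i.e. F(eta, xi) / n."""
    R = X @ A @ B - X
    return (R ** 2).sum() / n

initial = loss(A, B)
lr = 0.01
for _ in range(500):  # plain gradient descent on F
    R = X @ A @ B - X
    gA = 2 * X.T @ R @ B.T / n   # gradient w.r.t. encoder parameters
    gB = 2 * (X @ A).T @ R / n   # gradient w.r.t. decoder parameters
    A -= lr * gA
    B -= lr * gB

print(initial, loss(A, B))  # reconstruction error drops during training
theta = X @ A               # extracted features: one k-vector per input
```

After training, each row of `theta` is the low-dimensional feature vector $\bm \theta = \phi_{\hat{\bm \eta}}(\bm s)$ for the corresponding input.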
Based on the above discussion, a crucial step in the application of autoencoders is to specify an encoder and a decoder that are suitable for the data to be compressed. In the remainder of this section, we will describe an autoencoder that performs well for response processes. Recurrent Neural Network {#sec:rnn} ------------------------ ![Structure of RNNs.[]{data-label="fig:rnn"}](figures/rnn_picture.pdf){width="10cm"} To facilitate the presentation, we first provide a brief introduction to the recurrent neural networks (RNNs), a pivotal component of the encoder and the decoder of the action sequence autoencoder. RNNs form a class of artificial neural networks that deal with sequences. Unlike traditional artificial neural networks such as multi-layer feed-forward networks [@patterson2017deep Chapter 2] that treat an input as a simple vector, RNNs have a special structure to utilize the sequential information in the data. As depicted in Figure \[fig:rnn\], the basic structure of RNNs has three components: inputs, hidden states, and outputs, each of which is a multivariate time series. The inputs $\bm x_1, \ldots, \bm x_T$ are $K$-dimensional vectors. The hidden states $\bm m_1, \ldots, \bm m_T$ are also $K$-dimensional and can be viewed as the memory that helps process the input information sequentially. The hidden state evolves as the input evolves. Each $\bm m_t$ summarizes what has happened up to time $t$ by integrating the current information $\bm x_t$ with the previous memory $\bm m_{t-1}$, that is, $\bm m_t$ is a function of $\bm x_t$ and $\bm m_{t-1}$ $$\label{eq:rnn_hidden} \bm m_t = f(\bm x_t, \bm m_{t-1}),$$ for $t= 1, \ldots, T$. The initial hidden state $\bm m_0$ is often set to be the zero-vector. 
To extract from memory the information that is useful for subsequent tasks, a $K$-dimensional output vector $\bm y_t$ is produced as a function of the hidden state $\bm m_t$ at each time step $t$, $$\label{eq:rnn_output} \bm y_t = g(\bm m_t).$$ Both $f$ and $g$ are often specified as parametric families of functions with parameters to be estimated from data. To summarize, an RNN makes use of the current input $\bm x_t$ and a summary of previous information $\bm m_{t-1}$ to produce an updated summary $\bm m_t$, which in turn produces an output $\bm y_t$ at each time step $t$. An RNN is not a probabilistic model. It does not specify the probability distribution of the input $\bm x_t$ or the output $\bm y_t$ given the hidden state $\bm m_t$. It is essentially a deterministic nonlinear function that takes a sequence of vectors and outputs another sequence of vectors. Each output vector summarizes the useful information in the input vectors up to the current time step. We will write the function induced by an RNN as $\mathcal{R}(\cdot ; \bm \gamma)$, where $\bm \gamma$ collects the parameters in $f$ and $g$. Letting $\mathbf{X} = (\bm x_1, \ldots, \bm x_T)^\top$ and $\mathbf{Y} = (\bm y_1, \ldots, \bm y_T)^\top$ respectively denote the inputs and the outputs of the RNN, we have $\mathbf{Y} = \mathcal{R}(\mathbf{X}; \bm \gamma)$. We use a subscript $t$ of $\mathcal{R}$ to denote the output vector at time step $t$, that is, $\bm y_t = \mathcal{R}_t(\mathbf{X}; \bm \gamma)$. RNNs can process sequences of different lengths. Note that the functions $f$ and $g$ in \[eq:rnn\_hidden\] and \[eq:rnn\_output\] are the same across all time steps. Therefore, the total number of parameters of an RNN does not depend on the number of time steps. Various choices of $f$ and $g$ have been proposed to compute the hidden states and the outputs. The two most widely used ones are the long short-term memory (LSTM) unit [@hochreiter1997long] and the gated recurrent unit (GRU) [@cho2014learning].
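The recurrence in \[eq:rnn\_hidden\] and \[eq:rnn\_output\] can be sketched as follows. A simple $\tanh$ cell stands in for $f$ here purely for illustration, and all weights are randomly initialized rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # latent dimension

# Parameters of f and g; shared across time steps, so the parameter
# count does not grow with the sequence length T.
W_x = rng.normal(scale=0.1, size=(K, K))
W_m = rng.normal(scale=0.1, size=(K, K))
W_y = rng.normal(scale=0.1, size=(K, K))

def rnn(X):
    """Return outputs Y = R(X): y_t summarizes x_1, ..., x_t."""
    m = np.zeros(K)               # m_0 = 0
    Y = []
    for x in X:                   # m_t = f(x_t, m_{t-1})
        m = np.tanh(W_x @ x + W_m @ m)
        Y.append(W_y @ m)         # y_t = g(m_t)
    return np.array(Y)

X = rng.normal(size=(6, K))       # a length-6 input sequence
Y = rnn(X)
```

Note that `rnn` accepts sequences of any length and that $\bm y_t$ depends only on $\bm x_1, \ldots, \bm x_t$. In practice the plain $\tanh$ cell above is replaced by an LSTM unit or a GRU, which share this interface but compute $\bm m_t$ differently.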
They are designed to mitigate the vanishing or exploding gradient problem of a basic RNN [@bengio1994learning]. We will also use the two designs in the RNN component of our action sequence autoencoder. The detailed expressions of the LSTM unit and GRU are given in the appendix. Action Sequence Autoencoder --------------------------- ![Structure of action sequence autoencoders.[]{data-label="fig:act_seq_autoencoder"}](figures/action_seq_autoencoder_picture.pdf){width="14cm"} The action sequence autoencoder used for extracting features from process data takes a sequence of actions as the input process and outputs a reconstructed sequence. The diagram in Figure \[fig:act\_seq\_autoencoder\] illustrates the structure of the action sequence autoencoder. In what follows, we elaborate the encoding and the decoding mechanism. #### Encoder. The encoder of the action sequence autoencoder takes a sequence of actions and outputs a $K$-dimensional vector as a compressed summary of the input action sequence. Working with action sequences directly is often challenging because of the categorical nature of the actions. To overcome the obstacle, we associate each action $a_i$ in the action pool $\mathcal{A}$ with a $K$-dimensional latent vector $\bm e_i$ that will be estimated based on the data. These latent vectors describe the attributes of actions and will be used to summarize the information contained in the sequence. The method of mapping categorical variables to continuous latent attributes is often called the embedding method. It is widely used in machine learning applications such as neural machine translation and knowledge graph completion [@bengio2003neural; @NIPS2013_5021; @kraft2016embedding]. 
The first operation of our encoder is to transform the input sequence $\bm s=(s_1, \ldots, s_T)$ into a corresponding sequence of latent vectors $(\bm e_{i_1}, \ldots, \bm e_{i_T})$ where $i_t$ is the index of the action in $\mathcal{A}$ at time step $t$, that is, $s_t = a_{i_t}$ for $t = 1, \ldots, T$. With the binary matrix representation $\mathbf{S}$ of action sequence $\bm s$, the embedding step of the encoder is simply a matrix multiplication $\mathbf{X} = \mathbf{SE}$ where $\mathbf{E} = (\bm e_1, \ldots, \bm e_N)^\top$ is an $N \times K$ matrix whose $i$-th row is the latent vector for action $a_i$ and the rows of $\mathbf{X} = (\bm e_{i_1}, \ldots, \bm e_{i_T})^\top$ form the latent vector sequence corresponding to the original action sequence $\bm s$. Given the latent vector sequence, the encoder uses an RNN to summarize the information. Since our goal is to compress the entire response process into a single $K$-dimensional vector, only the last output vector of the RNN is kept to serve as a summary of information. Therefore, the output of the encoder, i.e., the latent representation of the input sequence, is $\bm \theta = \mathcal{R}_T(\mathbf{X}; \bm \gamma_{\text{E}})$. To summarize, the encoder of our action sequence autoencoder is $$\label{eq:encoder} \phi_{\bm \eta}(\mathbf{S}) = \mathcal{R}_T(\mathbf{SE}; \bm \gamma_{\text{E}}),$$ where $\bm \eta$ represents all the parameters including the embedding matrix $\mathbf{E}$ and the parameter vector $\bm \gamma_{\text{E}}$ of the encoder RNN. The encoding procedure consists of the following three steps. 1. An observed action sequence is transformed into a sequence of latent vectors by the embedding method: $\mathbf{X} = \mathbf{S}\mathbf{E}$. 2. 
The latent vector sequence is processed by the encoder RNN to obtain another sequence of vectors $\mathcal{R}(\mathbf{X}; \bm \gamma_{\text{E}}) = (\bm \theta_1, \ldots, \bm \theta_T)^\top$ where $\bm \theta_t = \mathcal{R}_t(\mathbf{X}; \bm \gamma_{\text{E}})$ for $t=1,\ldots, T$. 3. The last output of the RNN is kept as the latent representation, namely, $\bm \theta = \bm \theta_T$. Each of the three steps corresponds to an arrow in the encoder part of Figure \[fig:act\_seq\_autoencoder\].

#### Decoder.

The decoder of the action sequence autoencoder reconstructs an action sequence $\bm s$, or equivalently, its binary matrix representation $\mathbf{S}$, from $\bm \theta$. First, a different RNN is used to expand the latent representation $\bm \theta$ into a sequence of vectors, each of which contains the information of the action at the corresponding time step. As $\bm \theta$ is the only information available for the reconstruction, the input of the decoder RNN is the same $\bm \theta$ at each of the $T$ time steps. Written in matrix form, the input of the decoder RNN is $\mathbf{1}_T \bm \theta^\top$, where $\bm{1}_T$ is the $T$-dimensional vector of ones. After the decoder RNN’s processing, we obtain a sequence of $K$-dimensional vectors $\mathbf{Y} = (\bm y_1, \ldots, \bm y_T)^\top = \mathcal{R}(\bm{1}_T{\bm\theta}^\top; \bm \gamma_\text{D})$. Each $\bm y_t$ contains the information for the action taken at time step $t$. Recall that each row of $\mathbf{S}$ is the dummy variable representation of the action taken at the corresponding time step. Each row essentially specifies a degenerate categorical distribution on $\mathcal{A}$, with the action that is actually taken having probability one and all the other actions having probability zero. With this observation, the task of restoring the action at step $t$ becomes constructing the probability distribution of the action taken at step $t$ from $\bm y_t$.
The multinomial logit model (MLM) can be used in the decoder to achieve this. To be more specific, the probability of taking action $a_j$ at time $t$ is $$\label{eq:mlm} \hat{S}_{tj} = \left\{ \begin{array}{ll} \frac{\exp(b_j + \bm y_t^\top \bm \beta_j)}{ 1 + \sum_{k=1}^{N-1} \exp(b_k + \bm y_t^\top \bm \beta_k)} & \text{if} ~j= 1, \ldots, N-1;\\ \frac{1}{ 1 + \sum_{k=1}^{N-1} \exp(b_k + \bm y_t^\top \bm \beta_k)} & \text{if}~j=N, \end{array}\right.$$ where $b_j$ and $\bm \beta_j$ are parameters to be estimated from the data. Note that the parameters in \[eq:mlm\] do not depend on $t$. That is, the decoder uses the same MLM to compute the probability distribution of $s_t$ from $\bm y_t$ for $t=1, \ldots, T$. As a result, the reconstructed sequence is $\hat{\mathbf{S}} = (\hat{S}_{tj})$ and the decoder can be written as $$\label{eq:decoder} \psi_{\bm \xi}(\bm \theta) = \text{MLM}(\mathcal{R}(\bm{1}_T{\bm\theta}^\top; \bm \gamma_\text{D})),$$ where the parameter vector $\bm \xi$ consists of the parameter vector $\bm \gamma_{\text{D}}$ of the decoder RNN and $b_j, \bm \beta_j$, $j = 1, \ldots, N-1$. If we had an ideal autoencoder that reconstructs the input perfectly, the probability distribution specified by $(\hat{S}_{t1}, \ldots, \hat{S}_{tN})$ would concentrate all its probability mass on the action that is actually taken. In practice, it is very unlikely to construct such a perfect autoencoder. Usually, every action in the action set $\mathcal{A}$ is assigned a positive probability in the reconstructed probability distribution. For a given set of response processes, we choose the parameters of the encoder and the decoder so that the reconstructed probability distribution concentrates as much probability mass on the actual action as possible. To summarize, as depicted in the decoder part of Figure \[fig:act\_seq\_autoencoder\], the decoding procedure of the action sequence autoencoder consists of the following three steps. 1.
The latent representation $\bm \theta$ is replicated $T$ times to form the $T \times K$ matrix $\mathbf{1}_T \bm \theta^\top$. 2. The decoder RNN takes $\mathbf{1}_T \bm \theta^\top$ and outputs a sequence of vectors $(\bm y_1, \ldots, \bm y_T)$, each of which contains the information of the action at the corresponding step. 3. The probability distribution of $s_t$ is computed according to the MLM from $\bm y_t$ at each time step $t$.

#### Loss function.

In order to extract good features for a given set of response processes, we need to construct an action sequence autoencoder that reconstructs the response processes as well as possible. The discrepancy between an action sequence $\mathbf{S}$ and its reconstructed version $\hat{\mathbf{S}}$ can be measured by the following loss function $$\label{eq:cross_entropy} L(\mathbf{S}, \hat{\mathbf{S}}) = -\frac{1}{T}\sum_{t=1}^T \sum_{j=1}^N S_{tj} \log(\hat{S}_{tj}).$$ Note that, for a given $t$, only one of $S_{t1}, \ldots, S_{tN}$ is non-zero. The loss function is smaller if the distribution specified by $(\hat{S}_{t1}, \ldots, \hat{S}_{tN})$ is more concentrated on the action that is actually taken at step $t$. The best action sequence autoencoder for describing a given set of response processes is the one that minimizes the total reconstruction loss defined in \[eq:obj\]. Notice that \[eq:cross\_entropy\] takes the same form as the negative log-likelihood of categorical distributions. By using this loss function, we implicitly define a probabilistic model for the response processes. That is, given the latent representation $\bm \theta$, $s_t$ follows a categorical distribution on $\mathcal{A}$ with probability vector $(\hat{S}_{t1}, \ldots, \hat{S}_{tN})$. The decoder of the action sequence autoencoder specifies the functional form of the probability vector in terms of $\bm \theta$ and $\bm \xi$.
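The MLM step and the cross-entropy loss above can be sketched as follows. The dimensions are toy values and the parameters are randomly initialized rather than estimated; action $N$ serves as the baseline category with logit zero, matching \[eq:mlm\]:

```python
import numpy as np

def mlm_probs(Y, b, B):
    """Multinomial logit: row t gives the reconstructed distribution
    (S_hat[t, .]) over the N actions, with action N as the baseline."""
    logits = Y @ B.T + b                                  # (T, N-1)
    logits = np.column_stack([logits, np.zeros(len(Y))])  # baseline logit 0
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def reconstruction_loss(S, S_hat):
    """L(S, S_hat) = -(1/T) * sum_t sum_j S_tj * log(S_hat_tj)."""
    T = S.shape[0]
    return -(S * np.log(S_hat)).sum() / T

rng = np.random.default_rng(1)
T, K, N = 5, 3, 4
Y = rng.normal(size=(T, K))                # stand-in for decoder RNN outputs
b = rng.normal(size=N - 1)
B = rng.normal(size=(N - 1, K))
S = np.eye(N)[rng.integers(0, N, size=T)]  # a one-hot action sequence
S_hat = mlm_probs(Y, b, B)
loss = reconstruction_loss(S, S_hat)
```

Each row of `S_hat` is a proper probability distribution, and the loss shrinks as the mass on the actually taken actions grows.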
Procedure --------- Based on the above discussion, we extract $K$ features from $n$ response processes $\mathbf{S}_1, \ldots, \mathbf{S}_n$ through the following procedure. \[proc:feature\]  1. Find a minimizer, $(\hat{\bm \eta}, \hat{\bm \xi})$, of the objective function $F(\bm \eta, \bm \xi) = \sum_{i=1}^n L(\mathbf{S}_i, \hat{\mathbf{S}}_i)$ by stochastic gradient descent through the following steps. 1. Initialize the parameters $\bm \eta$ and $\bm \xi$. 2. Randomly generate $i$ from $\{1, \ldots, n\}$ and update $\bm \eta$ and $\bm \xi$ with $\bm \eta - \alpha \frac{\partial{L(\mathbf{S}_i, \hat{\mathbf{S}}_i})}{\partial \bm\eta}$ and $\bm \xi - \alpha \frac{\partial{L(\mathbf{S}_i, \hat{\mathbf{S}}_i})}{\partial \bm\xi}$, respectively, where $\alpha$ is a predetermined small positive number.\[step2\] 3. Repeat step (b) until convergence. 2. Calculate $\tilde{\bm\theta}_i = \phi_{\hat{\bm\eta}}(\mathbf{S}_i)$, for $i=1, \ldots, n$. Each column of $\tilde{\bm\Theta} = (\tilde{\bm \theta}_1, \ldots, \tilde{\bm\theta}_n)^\top$ is a raw feature of the response processes. 3. Perform principal component analysis (PCA) on $\tilde{\bm\Theta}$. The principal components are the $K$ principal features of the response processes. In Step 1, the optimization problem is solved by stochastic gradient descent (SGD) [@robbins1951stochastic]. In Step 1b, a fixed step size $\alpha$ is used for updating the parameters. Data-dependent step sizes such as those proposed in @duchi2011adaptive, @zeiler2012adadelta, and @kingma2014adam can be easily adapted for the optimization problem. Neural networks are often over-parametrized. To prevent overfitting, validation based early stopping [@Prechelt2012] is often used when estimating parameters of complicated neural networks such as our action sequence autoencoder. With this technique, the optimization algorithm, in our case, SGD, is not run until convergence. 
A parameter value obtained before convergence that performs well on the validation set is used as an estimate of the minimizer. To perform early stopping, the dataset is split into a training set and a validation set. A chosen optimization algorithm is run only on the training set for a number of epochs. An epoch consists of $n_\text{T}$ iterations, where $n_\text{T}$ is the size of the training set. At the end of each epoch, the objective function is evaluated on the validation set. The parameter value that produces the lowest validation loss is used as the estimate of the minimizer. We adopt this technique when constructing the action sequence autoencoder. The feature extraction procedure with validation-based early stopping is summarized in Procedure \[proc:feature\_valid\]. \[proc:feature\_valid\]  1. Find a minimizer, $(\hat{\bm \eta}, \hat{\bm \xi})$, of the objective function $F(\bm \eta, \bm \xi) = \sum_{i=1}^n L(\mathbf{S}_i, \hat{\mathbf{S}}_i)$ by stochastic gradient descent with validation-based early stopping through the following steps. 1. Randomly split $\{1, \ldots, n\}$ into a training index set $\Omega_\text{T}$ of size $n_\text T$ and a validation index set $\Omega_\text{V}$ of size $n_\text V$. 2. Initialize the parameters $\bm \eta$ and $\bm \xi$ and calculate $F_{\text{V}1} = \sum_{i \in \Omega_{\text V}} L(\mathbf{S}_i, \hat{\mathbf{S}}_i)$. 3. Randomly permute the indices in $\Omega_\text{T}$ and denote the result as $(i_1, \ldots, i_{n_{\text T}})$. 4. For $k = 1, \ldots, n_{\text T}$, update $\bm \eta$ and $\bm \xi$ with $\bm \eta - \alpha \frac{\partial{L(\mathbf{S}_{i_k}, \hat{\mathbf{S}}_{i_k})}}{\partial \bm\eta}$ and $\bm \xi - \alpha \frac{\partial{L(\mathbf{S}_{i_k}, \hat{\mathbf{S}}_{i_k})}}{\partial \bm\xi}$, respectively. 5. Calculate $F_{\text{V}2} = \sum_{i \in \Omega_{\text V}} L(\mathbf{S}_i, \hat{\mathbf{S}}_i)$.
If $F_{\text{V}2}$ is smaller than $F_{\text{V}1}$, let $\hat{\bm \eta} = \bm \eta$ and $\hat{\bm \xi} = \bm \xi$ and update $F_{\text{V}1}$ with $F_{\text{V}2}$. 6. Repeat steps (c), (d), and (e) sufficiently many times. 2. Calculate $\tilde{\bm\theta}_i = \phi_{\hat{\bm\eta}}(\mathbf{S}_i)$, for $i=1, \ldots, n$. Each column of $\tilde{\bm\Theta} = (\tilde{\bm \theta}_1, \ldots, \tilde{\bm\theta}_n)^\top$ is a raw feature of the response processes. 3. Perform principal component analysis (PCA) on $\tilde{\bm\Theta}$. The principal components are the $K$ principal features of the response processes. The proposed feature extraction procedure requires the number of features to be extracted, $K$, as an input. In general, if $K$ is too small, the action sequence autoencoder does not have enough flexibility to capture the structure of the response processes. On the other hand, if $K$ is too large, the extracted features contain too much redundant information, causing overfitting and instability in downstream analyses. We adopt the $k$-fold cross-validation procedure [@stone1974cross] to choose a suitable $K$ in the analyses presented in Sections \[sec:simulation\] and \[sec:example\]. We perform principal component analysis on the raw features in the last step of the proposed feature extraction procedure to aid the interpretation of the features. As we will show in the case study, the first several principal features usually have clear interpretations even though the meaning of the actions is not taken into account in the feature extraction procedure. Since the extracted features have a standard format, they can be easily incorporated in (generalized) linear models and many other well-developed statistical procedures. As we will show in the sequel, the extracted features contain a substantial amount of information about the action sequences.
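Steps 2 and 3 of the procedures, computing the raw features and rotating them into principal features, can be sketched with an SVD-based PCA. The raw-feature matrix below is a toy stand-in for $\tilde{\bm\Theta}$, and the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 200, 5
Theta_raw = rng.normal(size=(n, K)) @ rng.normal(size=(K, K))  # toy raw features

def principal_features(Theta):
    """Rotate the n x K raw-feature matrix onto its principal components."""
    centered = Theta - Theta.mean(axis=0)         # center each raw feature
    U, d, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt.T                         # principal feature scores

P = principal_features(Theta_raw)
```

The columns of the resulting score matrix are uncorrelated and ordered by variance; these principal features are the extracted features used in all subsequent analyses.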
They can be used as surrogates of the action sequences to study how response processes are related to the respondents’ latent traits and other quantities of interest.

Simulations {#sec:simulation}
===========

Experiment Settings
-------------------

In this section, we apply the proposed feature extraction method to simulated response processes of an item with 26 possible actions. Each action in the item is denoted by an upper-case English letter. In other words, we define $\mathcal{A} = \{\text{A}, \text{B}, \ldots, \text{Z}\}$. All the sequences used in the study start with A and end with Z, meaning that A and Z represent the start and the end of an item, respectively. In our simulation study, action sequences are generated from Markov chains. That is, given the first $t$ actions in a response process, $s_1, \ldots, s_t$, the distribution from which $s_{t+1}$ is generated depends only on $s_t$. A Markov chain is determined by its probability transition matrix $\mathbf{P} = (p_{ij})_{1\leq i, j \leq N}$, where $p_{ij} = P(s_{t+1} = a_j \, | \, s_{t} = a_i)$. Because of the special meaning of actions A and Z, there should be no transitions from other actions to A or from Z to other actions. As a result, the probability transition matrices used in our simulation study have the constraints that $p_{i1} = 0$ for $i = 1, \ldots, N$, and $p_{NN} = 1$. To construct a probability transition matrix, we only need to specify the elements of its upper-right $(N-1)\times (N-1)$ submatrix. Given a transition matrix $\mathbf{P}$, we start a sequence with A and generate all subsequent actions according to $\mathbf{P}$ until Z appears. Two simulation scenarios are devised in our experiments to impose latent class structures in the generated response processes. In Scenario I, two latent groups are formed by generating action sequences from two different Markov chains. Let $\mathbf{P}_1$ and $\mathbf{P}_2$ denote the probability transition matrices of the two chains.
A set of $n$ sequences is obtained by generating $n/2$ sequences according to $\mathbf{P}_1$ and the remaining $n/2$ sequences according to $\mathbf{P}_2$. Both $\mathbf{P}_1$ and $\mathbf{P}_2$ are randomly generated and then fixed to generate all sets of response processes. To generate $\mathbf{P}_1 = (p_{ij}^{(1)})_{1\leq i,j \leq N}$, we first construct an $(N-1) \times (N-1)$ matrix $\mathbf{U}$ whose elements are independent samples from a uniform distribution on the interval $[-10, 10]$. Then the upper-right $(N-1)\times (N-1)$ submatrix of $\mathbf{P}_1$ is computed from $\mathbf{U}$ by $${p}_{i,j+1}^{(1)} = \frac{\exp(u_{ij})} { \sum_{l=1}^{N-1}\exp(u_{il})},~ i,j = 1, \ldots, N-1.$$ The transition matrix $\mathbf{P}_2$ is obtained similarly. In Scenario II, half of the $n$ action sequences in a set are generated from $\mathbf{P}_1$ as in Scenario I. The other half is obtained by reversing the actions between A and Z in each of the generated sequences. For example, if (A, B, C, Z) is a generated sequence, then the corresponding reversed sequence is (A, C, B, Z). The latent group structure formed in this scenario is more subtle than that in Scenario I, as a sequence and its reversed version cannot be distinguished by the marginal counts of actions in $\mathcal{A}$. We consider three choices of $n$: 500, 1000, and 2000. One hundred sets of action sequences are generated for each simulation scenario and each choice of $n$. Procedure \[proc:feature\_valid\] is applied to each dataset. Both LSTM and GRU are considered for the recurrent unit in the autoencoder. For each choice of the recurrent unit, the number of features to be extracted is chosen from $\{10, 20, 30, 40, 50\}$ by five-fold cross-validation. We investigate the ability of the extracted features to preserve the information in action sequences by examining their performance in reconstructing variables derived from the sequences.
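The generating scheme above (a random softmax transition matrix with absorbing Z, sequence sampling from A until Z, and the Scenario II reversal) can be sketched as follows; function names are illustrative, and a length cap is added purely as a safeguard for the sampling loop:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 26  # actions A, ..., Z; A (index 0) starts and Z (index N-1) ends a sequence

def random_transition_matrix():
    """P with p_{i1} = 0 for all i and p_{NN} = 1; the free upper-right
    (N-1) x (N-1) block is a row-wise softmax of uniform draws U."""
    U = rng.uniform(-10, 10, size=(N - 1, N - 1))
    P = np.zeros((N, N))
    e = np.exp(U)
    P[: N - 1, 1:] = e / e.sum(axis=1, keepdims=True)
    P[N - 1, N - 1] = 1.0  # Z is absorbing
    return P

def generate_sequence(P, max_len=100_000):
    """Start at A and sample from P until Z appears."""
    seq = [0]
    while seq[-1] != N - 1 and len(seq) < max_len:
        seq.append(int(rng.choice(N, p=P[seq[-1]])))
    return seq

def reverse_sequence(seq):
    """Scenario II: reverse the actions strictly between A and Z."""
    return [seq[0]] + seq[-2:0:-1] + [seq[-1]]

P1 = random_transition_matrix()
seq = generate_sequence(P1)
rev = reverse_sequence(seq)
```

A reversed sequence visits exactly the same actions the same number of times as the original, which is why the two Scenario II groups cannot be separated by marginal action counts.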
The variables to be reconstructed are indicators of the appearance of an action or an action pair in a sequence. Rare actions and action pairs that appear fewer than $0.05n$ times in a dataset are not taken into consideration. We model the relationship between the indicators and the extracted features through logistic regression. For each dataset, the $n$ sequences are split into training and test sets in the ratio of 4:1. A logistic regression model is estimated for each indicator on the training set, and its prediction performance is evaluated on the test set by the proportion of correct predictions, i.e., the prediction accuracy. The average prediction accuracy over all the considered indicators is recorded for each dataset and each choice of the recurrent unit. To study how well the extracted features unveil the latent group structures in response processes, we build a logistic regression model to classify the action sequences according to the extracted features. The training and test sets are split as before and the prediction accuracy on the test set is recorded for evaluation.

Results
-------

| Scenario | $n$  | Reconstruction (LSTM) | Reconstruction (GRU) | Classification (LSTM) | Classification (GRU) |
|----------|------|-----------------------|----------------------|-----------------------|----------------------|
| I        | 500  | 0.88 (0.005)          | 0.87 (0.006)         | 0.99 (0.010)          | 1.00 (0.007)         |
| I        | 1000 | 0.90 (0.003)          | 0.90 (0.004)         | 0.99 (0.005)          | 0.99 (0.006)         |
| I        | 2000 | 0.91 (0.002)          | 0.91 (0.003)         | 0.99 (0.005)          | 0.99 (0.005)         |
| II       | 500  | 0.88 (0.006)          | 0.88 (0.006)         | 0.86 (0.033)          | 0.87 (0.031)         |
| II       | 1000 | 0.90 (0.004)          | 0.91 (0.005)         | 0.86 (0.021)          | 0.86 (0.021)         |
| II       | 2000 | 0.91 (0.002)          | 0.92 (0.003)         | 0.87 (0.027)          | 0.87 (0.016)         |

: Mean (standard deviation) of prediction accuracy in the simulation study.[]{data-label="table:sim_ex1"}

Table \[table:sim\_ex1\] reports the results of our simulation study. A few observations can be made from Table \[table:sim\_ex1\].
First, the accuracy for reconstructing the appearance of actions and action pairs is high in both simulation scenarios, indicating that the extracted features preserve a significant amount of information in the original action sequences. The reconstruction accuracy is slightly improved as $n$ increases. Including more action sequences can provide more information for estimating the autoencoder in Step 1 of Procedure \[proc:feature\_valid\] thus producing better features. A larger sample size can also lead to a better fit of the logistic models that relate features to derived variables. Both effects contribute to the improvement of action and action pair reconstruction. Second, in both simulation scenarios, the extracted features can distinguish the two latent groups well. In Scenario I, the two groups can be separated almost perfectly. Since the group difference in Scenario II is more subtle, the accuracy in classifying the two groups is lower than that in Scenario I, but still more than 85% of the sequences can be classified correctly. To further look at how the extracted features unveil the latent structure of action sequences, we plot two principal features for one of the datasets with 2000 sequences for each scenario in Figure \[fig:sim\_latent\]. The left panel presents the first two principal features for Scenario I. The group structure is clearly shown and the two groups can be roughly separated by a horizontal line at 0. The right panel of Figure \[fig:sim\_latent\] displays the plot of the first and fourth principal features for Scenario II. Again the two groups can be clearly separated. ![Left: scatterplot of the first two principal features for one dataset of 2000 sequences generated under scenario I. 
Right: scatterplot of principal features 1 and 4 for one dataset of 2000 sequences generated under scenario II.[]{data-label="fig:sim_latent"}](figures/plot_ex1_II.pdf "fig:"){width="45.00000%"} Last, the extracted features for the two choices of the recurrent unit in the action sequence autoencoder are comparable in terms of both reconstruction and group structure identification. A GRU has a simpler structure and fewer parameters than an LSTM unit with the same latent dimension. In this sense, the GRU is more efficient for our action sequence modeling.

Case Study {#sec:example}
==========

Data
----

The process data used in this study contain 11,464 respondents’ response processes for the PSTRE items in PIAAC 2012. There are 14 PSTRE items in total. In our data, 7,620 respondents answered 7 items and 3,645 respondents answered all 14 items. For each of the 14 items, there are around 7,500 respondents. For each respondent-item pair, both the response process (action sequence) and the final response outcome were recorded. The original final outcomes of some items are polytomous. We simplify them into binary outcomes, with fully correct responses labelled as 1 and all others as 0. The 14 PSTRE items in PIAAC 2012 vary in content, task complexity, and difficulty. Some basic descriptive statistics of the items are summarized in Table \[table:items\], where $n$ denotes the number of respondents, $N$ is the number of possible actions, $\bar T$ stands for the average sequence length, and Correct % is the percentage of correct responses. There are three types of interaction environments: email client, spreadsheet, and web browser.
Some items such as U01a and U01b have a single environment, while others such as U02 and U23 involve multiple environments. U06a is the simplest item in terms of the number of possible actions and the average response length, but only about one fourth of the participants answered it correctly. Items U02 and U04a are the most difficult items: only around 10% of the respondents correctly completed the given tasks. The tasks in these two items are relatively complicated: there are a few hundred possible actions, and more than 40 actions are needed to finish the task. Given the wide item variety, manually extracting important features of process data based on experts’ understanding of the items is time-consuming, while the proposed automatic method can be easily applied to all these items.

| ID   | Description                                      | $n$  | $N$ | $\bar T$ | Correct % |
|------|--------------------------------------------------|------|-----|----------|-----------|
| U01a | Party Invitations - Can/Cannot Come              | 7620 | 207 | 24.8     | 54.5      |
| U01b | Party Invitations - Accommodations               | 7670 | 249 | 52.9     | 49.3      |
| U02  | Meeting Rooms                                    | 7537 | 328 | 54.1     | 12.8      |
| U03a | CD Tally                                         | 7613 | 280 | 13.7     | 37.9      |
| U04a | Class Attendance                                 | 7617 | 986 | 44.3     | 11.9      |
| U06a | Sprained Ankle - Site Evaluation Table           | 7622 | 47  | 10.8     | 26.4      |
| U06b | Sprained Ankle - Reliable/Trustworthy Site       | 7612 | 98  | 16.0     | 52.3      |
| U07  | Digital Photography Book Purchase                | 7549 | 125 | 18.6     | 46.0      |
| U11b | Locate E-mail - File 3 E-mails                   | 7528 | 236 | 30.9     | 20.1      |
| U16  | Reply All                                        | 7531 | 257 | 96.9     | 57.0      |
| U19a | Club Membership - Member ID                      | 7556 | 373 | 26.9     | 69.4      |
| U19b | Club Membership - Eligibility for Club President | 7558 | 458 | 21.3     | 46.3      |
| U21  | Tickets                                          | 7606 | 252 | 23.4     | 38.2      |
| U23  | Lamp Return                                      | 7540 | 303 | 28.6     | 34.3      |

: Descriptive statistics of PIAAC PSTRE items.[]{data-label="table:items"}

Note: $n$ = number of respondents; $N$ = number of possible actions; $\bar T$ = average sequence length; Correct % = percentage of correct responses.

Features
--------

We extract features from the response processes for each of the 14 items using the proposed procedure.
The number of features is chosen from $\{10, 20, \ldots, 100\}$ by five-fold cross-validation. An Adam [@kingma2014adam] step size is used for optimizing the objective function in Step 1 of Procedure \[proc:feature\_valid\]. The algorithm is run for 100 epochs with validation-based early stopping, where 10% of the processes are randomly sampled to form the validation set for each item. Although the proposed method does not utilize the meaning of the actions for feature extraction, many of the principal features, especially the first several, have clear interpretations. Table \[table:feature\] gives a partial list of feature interpretations.

| Interface type | Interpretation                                                                                                           |
|----------------|--------------------------------------------------------------------------------------------------------------------------|
| Email client   | viewing emails and folders, moving emails, creating new folders, typing emails                                            |
| Spreadsheet    | using sort, using search, clicking drop-down menu                                                                         |
| Web browser    | clicking relevant links, clicking irrelevant links                                                                        |
| General        | sequence length, using actions related to the task, switching working environments, selecting answers, answer submission  |

: A partial list of feature interpretations by interface type.[]{data-label="table:feature"}

The first or the second principal feature of each item is usually related to respondents’ attentiveness. An inattentive respondent tends to move to the next item without meaningful interactions with the computer environment. In contrast, an attentive respondent typically tries to understand and complete the task by exploring the environment. Attentiveness can thus be reflected in the length of the response process. We call the principal feature that has the largest absolute correlation with the logarithm of the process length the attentiveness feature. In our case, the attentiveness feature is the second principal feature for item U06a and the first for all other items. For all the items, the absolute correlation between the attentiveness feature and the logarithm of the sequence length is higher than 0.85.
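Identifying the attentiveness feature amounts to scanning the principal features for the one most correlated, in absolute value, with log process length. A toy sketch with synthetic features (here feature 2 is made length-related by construction, so the names and numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n, K = 500, 8
lengths = rng.integers(2, 200, size=n)      # toy process lengths
features = rng.normal(size=(n, K))          # toy principal features
# Tie one feature strongly to log length, mimicking what is observed for real items.
features[:, 2] = -1.5 * np.log(lengths) + rng.normal(scale=0.1, size=n)

log_len = np.log(lengths)
cors = np.array([np.corrcoef(features[:, j], log_len)[0, 1] for j in range(K)])
att_index = int(np.argmax(np.abs(cors)))    # index of the attentiveness feature
```

Note that the correlation may be negative, since a principal component is only determined up to sign.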
To make a higher attentiveness feature correspond to a more attentive respondent, we modify the attentiveness features by multiplying each of them by the sign of their correlation with the logarithm of process length. For a given pair of items, we select respondents who responded to both items and calculate the correlation between the two modified attentiveness features. These correlations are all positive and range from 0.30 to 0.70, implying that the respondents who are inattentive in one item tend to be inattentive in another item. The feature space of the respondents with correct responses is usually very different from that of the respondents with incorrect responses. As an illustration, in Figure \[fig:feature\_U01b\], we plot the first two principal features of U01b for the two groups of respondents separately. It is obvious that the two clouds of points are of very distinct shapes. The non-oval shape of the clouds suggests that the feature space is highly non-linear. A multivariate normal distribution is not a suitable choice to describe the joint feature space. The scales of the two plots in Figure \[fig:feature\_U01b\] are also different. The variation of the features of correct respondents is much smaller than that of incorrect respondents. The main reason for this phenomenon is that there are more ways to solve the problem incorrectly than correctly. Item U01b requires the respondents to create a new folder and to move some emails to the new folder. Among the incorrect respondents, some moved emails but didn’t create a new folder while some created a new folder but didn’t move the emails correctly. There are also some respondents who didn’t respond seriously—they took fewer than five actions before moving to the next item. As shown in the right panel of Figure \[fig:feature\_U01b\], respondents with similar behaviors are located close to each other in the feature space. 
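The selection and sign adjustment of the attentiveness feature described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data; the array names and the simulated features are assumptions, not part of the original analysis.

```python
import numpy as np

def attentiveness_feature(features, seq_lengths):
    """Pick the principal feature most correlated (in absolute value) with
    log sequence length, sign-flipped so larger values mean longer
    (more attentive) response processes."""
    log_len = np.log(seq_lengths)
    # correlation of each principal feature with log sequence length
    corrs = np.array([np.corrcoef(features[:, k], log_len)[0, 1]
                      for k in range(features.shape[1])])
    k_star = int(np.argmax(np.abs(corrs)))
    # multiply by the sign of the correlation, as described in the text
    return np.sign(corrs[k_star]) * features[:, k_star], k_star

# synthetic illustration: feature 0 tracks log length (negatively), feature 1 is noise
rng = np.random.default_rng(0)
lengths = rng.integers(5, 200, size=500)
features = np.column_stack([
    -np.log(lengths) + 0.1 * rng.normal(size=500),
    rng.normal(size=500),
])
att, k = attentiveness_feature(features, lengths)
```

After the sign adjustment, `att` correlates positively with the logarithm of the sequence length, matching the convention used in the paper.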
![Scatterplots of the first two principal features of U01b stratified by response outcome.[]{data-label="fig:feature_U01b"}](figures/plot_U01b_features_all_new.pdf){width="\textwidth"}

Reconstruction of Derived Variables
-----------------------------------

We demonstrate in this subsection that the features extracted from the proposed procedure retain a substantial amount of information about the response processes. To be more specific, we show that various variables directly derived from the processes can be reconstructed from the extracted features. We define a derived variable as a binary variable indicating whether an action or a combination of actions appears in the process. For example, whether the first dropdown menu is set to “Photography” is a derived variable of the item described in the introduction. The binary response outcome is also a derived variable since it is entirely determined by the response process. In our data, 93 derived variables, including 14 item response outcomes, are considered. Similar to the simulation study, we examine how well the derived variables can be reconstructed through a prediction procedure. We use logistic regression to model the relation between a derived variable and the principal features of the corresponding item. For each derived variable, 80% of the respondents are randomly sampled to form the training set and the remaining 20% form the test set. We fit the model on the training set and predict the derived variable for each respondent in the test set. Specifically, the derived variable is predicted as 1 if the fitted probability is greater than 0.5 and 0 otherwise. Prediction accuracy on the entire test set is calculated for evaluation. As shown in Table \[table:derived\_var\], for all the derived variables, the prediction accuracy is higher than 0.80. For 75 out of 93 variables, the accuracy is higher than 0.90. Thirty-five variables are predicted nearly perfectly (prediction accuracy greater than $0.975$).
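The reconstruction check just described can be sketched with a hand-rolled logistic regression. The data below are synthetic stand-ins (the real analysis used the extracted principal features); the split ratio and the 0.5 threshold follow the text.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression; returns the weights."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)
    return w

def predict(X, w):
    X1 = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)

# synthetic features and a derived variable driven by the first feature
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

# 80/20 train-test split, as in the reconstruction experiment
idx = rng.permutation(1000)
train, test = idx[:800], idx[800:]
w = fit_logistic(X[train], y[train])
accuracy = (predict(X[test], w) == y[test]).mean()
```

The out-of-sample accuracy plays the same role as the per-variable accuracies summarized in Table \[table:derived\_var\].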
These results demonstrate that the extracted features carry a significant amount of the information in the action sequences. We show in the remaining subsections that the extracted features are useful for assessing respondents’ competency and behaviors.

  Accuracy   $(0.80, 0.85]$   $(0.85, 0.90]$   $(0.90, 0.95]$   $(0.95, 0.975]$   $(0.975, 1.00]$
  ---------- ---------------- ---------------- ---------------- ----------------- -----------------
  Counts     5                13               28               12                35

  : Distribution of the out-of-sample prediction accuracy for 93 derived variables.[]{data-label="table:derived_var"}

Variable Prediction Based on a Single Item
------------------------------------------

Item responses (both outcome and process) reflect respondents’ latent traits, which affect their overall performance in a test. Therefore, each item response should have some predictive power for the responses of other items and for the overall competency. Process data contain more detailed information about respondents’ behaviors than a single binary outcome, so we expect prediction based on the response process to be more accurate than prediction based solely on the final outcome. In this subsection, we assess the information in the response processes of a single item via the prediction of the binary response outcomes of other items as well as the numeracy and literacy scores. Given the final outcome and the response process of an item, say item $j$, we model their relation with the predicted variable by a generalized linear model $$\label{eq:pred_model} g(\mu) = \bm \eta_j^\top \bm \beta,$$ where $\mu$ is the expectation of the predicted variable, $g$ is the link function, $\bm \eta_j$ is a vector of covariates related to item $j$, which will be specified later, and $\bm \beta$ is the coefficient vector. If the predicted variable is the binary outcome of item $j'$, $g(\mu) = \log\left(\mu/(1-\mu)\right)$ is the logit link and $\mu$ is the probability of answering the item correctly.
If the predicted variable is the literacy or numeracy score, $g$ is the identity link and the model becomes linear regression. Let $z_j$ denote the binary outcome and let $\bm\theta_j$ denote the features extracted from the response process of item $j$. We consider two choices of $\bm\eta_j$ for a given predicted variable, $\bm \eta_j = (1, z_j)^\top$ and $\bm \eta_j = (1, z_j, \bm\theta_j^\top, z_j \bm\theta_j^\top)^\top$. The first choice only uses the binary outcome for prediction. The second uses both the outcome and the response process. We call the models with these two choices of covariates the baseline model and the process model, respectively. It turns out that the information in the baseline model is very limited, especially when the correct rate of item $j$ is close to 0 or 1.

For a given predicted variable, two thirds of the available respondents are randomly sampled to form the training set. The remaining one third is evenly split to form the validation and the test sets. Both the baseline model and the process model are fit on the training set. We add $L_2$ penalties on the coefficient vector $\bm \beta$ in the process model to avoid overfitting. The penalty parameter is chosen by examining the prediction performance of the resulting model on the validation set. Specifically, a process model is fitted for each candidate value of the penalty parameter, and the one that produces the best prediction performance on the validation set is chosen as the final process model for comparison with the baseline model. The evaluation criterion is prediction accuracy for outcome prediction and out-of-sample $R^2$ ($\text{OSR}^2$) for score prediction. $\text{OSR}^2$ is defined as the square of the Pearson correlation between the predicted and true values. A higher $\text{OSR}^2$ indicates better prediction performance.

### Outcome Prediction Results

Figure \[fig:outcome\_pred\] presents the results of outcome prediction.
The plot in the left panel gives the improvement in out-of-sample prediction accuracy of the process model over the baseline model for all item pairs. The entry in the $i$-th row and the $j$-th column gives the result for predicting item $j$ by item $i$. For many item pairs, adding the features extracted from process data improves the prediction. To examine the improvements further, for the task of predicting the outcome of item $j'$ by item $j$, we calculate the prediction accuracy separately for the respondents who answered item $j$ correctly and for those who answered incorrectly. The improvements for these two groups are plotted in the middle and the right panels of Figure \[fig:outcome\_pred\], respectively. The improvement is larger for the incorrect group, both in the number of item pairs showing improvement and in the magnitude of the improvement. As mentioned previously, incorrect response processes are more diverse than correct ones and thus provide more information about the respondents. Misunderstanding the item requirements and lack of basic computer skills often lead to an incorrect response. Carelessness and inattentiveness are also possible causes of an incorrect answer. These differences can be reflected in the extracted features, as illustrated in Figure \[fig:feature\_U01b\]. Therefore, including these features in the model helps the prediction more for the incorrect group than for the correct group.

![Difference of the cross-item outcome prediction accuracy between the process model and the baseline model.[]{data-label="fig:outcome_pred"}](figures/matplots_seq2seq_one2one_outcome_pred_all_new.pdf){width="\textwidth"}

### Numeracy and Literacy Prediction Results

Numeracy and literacy score prediction results are displayed in Figure \[fig:score\_pred\]. In the left panel, we plot the $\text{OSR}^2$ of the process model against that of the baseline model.
For both literacy and numeracy, regardless of the item used for prediction, the process model produces a higher $\text{OSR}^2$ than the baseline model. Although the PSTRE items are not designed for measuring these two competencies, the response processes are helpful for predicting the scores. To examine the results further, for each item-score pair, we again group the respondents according to their item response outcome and calculate the $\text{OSR}^2$ of the process model for the two groups separately. The $\text{OSR}^2$ for the incorrect group is plotted against that for the correct group in the right panel of Figure \[fig:score\_pred\]. Similar to the outcome prediction, the prediction performance for the incorrect group is usually much better than that for the correct group, since action sequences corresponding to incorrect answers are often more diverse and informative than those corresponding to correct answers.

![Left: OSR${}^2$ of the baseline and process model on the test set. Right: OSR${}^2$ of the process model for the correct and incorrect groups.[]{data-label="fig:score_pred"}](figures/scatterplot_seq2seq_one_score_pred_comb_rsq_all_new.pdf){width="\textwidth"}

Prediction Based on Multiple Items
----------------------------------

In this subsection, we examine how the improvement in prediction performance brought by process data aggregates as more items are incorporated in the prediction. The variables of interest are age, gender, and the literacy and numeracy scores. We only consider the 3,645 respondents who responded to all 14 PSTRE items in this experiment. The respondents are randomly split into training, validation, and test sets of sizes 2645, 500, and 500, respectively. The split is fixed for estimating and evaluating all models in this experiment. We still consider model \[eq:pred\_model\] for prediction. The logit link, i.e., logistic regression, is used for gender prediction and linear regression for the other variables.
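As a concrete sketch of this prediction setup, the covariate vectors combining item outcomes with principal features, together with the $\text{OSR}^2$ criterion defined earlier, might be assembled as follows. All names, shapes, and random data are illustrative assumptions.

```python
import numpy as np

def baseline_covariates(z):
    """Baseline design matrix: intercept plus the binary outcomes.
    z is an (n, m) array of outcomes for the m available items."""
    return np.column_stack([np.ones(len(z)), z])

def process_covariates(z, thetas):
    """Process design matrix: intercept, outcomes, and the first 20
    principal features of each item (thetas: list of (n, 20) arrays)."""
    return np.column_stack([np.ones(len(z)), z] + thetas)

def osr2(y_true, y_pred):
    """Out-of-sample R^2: the squared Pearson correlation between
    predicted and true values."""
    return np.corrcoef(y_true, y_pred)[0, 1] ** 2

# illustrative shapes: 100 respondents, 3 available items
rng = np.random.default_rng(2)
z = rng.integers(0, 2, size=(100, 3))
thetas = [rng.normal(size=(100, 20)) for _ in range(3)]
```

With three items, the baseline design matrix has $1 + 3$ columns and the process design matrix has $1 + 3 + 3 \times 20$ columns, matching the growth in dimension that motivates the $L_2$ penalty.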
In this experiment, the covariate vector $\bm \eta$ incorporates information from multiple items. Given a predicted variable and a set of available items, a baseline model and a process model are considered for each variable. For the baseline model, $\bm \eta$ consists of only the final outcomes, while for the process model it also includes the first 20 principal features of each available item. Let $S_m = \{j_1, \ldots, j_m\}$ denote the set of the indices of available items. The predictor for the baseline model is $\bm \eta = (1, z_{j_1}, \ldots, z_{j_m})^\top$ and that of the process model is $\bm \eta = (1, z_{j_1}, \ldots, z_{j_m}, \bm \theta_{j_1}, \ldots, \bm \theta_{j_m})^\top$, where $\bm \theta_{j} \in \mathbb{R}^{20}$ contains the first 20 principal features of item $j$. We start from an empty item set and add one item to the set at a time. That is, for a given predicted variable, a sequence of 14 baseline models and 14 process models is fitted. The order in which items are added to the model is determined by forward Akaike information criterion (AIC) selection for the 14 outcomes on the training set. Specifically, for a given $m$, $S_m$ contains the items whose outcomes are the first $m$ variables selected by the forward AIC selection among all 14 outcomes. We use prediction accuracy as the evaluation criterion for gender prediction and $\text{OSR}^2$ for the other variables.

### Numeracy and Literacy Prediction Results

In Figure \[fig:score\_pred\_all\], the $\text{OSR}^2$ for predicting literacy and numeracy scores is plotted against the number of available items. For both the process model and the baseline model, the prediction of the numeracy and literacy scores improves as responses from more items become available. Regardless of the number of available items, the process model outperforms the baseline model in both literacy and numeracy score predictions, although the difference becomes smaller as the number of available items increases.
Notice that the $\text{OSR}^2$ of the process model based on only two items roughly equals the $\text{OSR}^2$ of the baseline model based on four items. These results imply that properly incorporating process data in the analysis exploits the information in the items more efficiently, and that the incorporation is especially beneficial when only a small number of items is available. The PSTRE item responses have some power for predicting literacy and numeracy. This is not surprising, as literacy and numeracy are related to the understanding of the PSTRE item descriptions and materials. In our case study, the PSTRE items are more related to literacy than to numeracy: the $\text{OSR}^2$ of a literacy score model is usually higher than that of the corresponding numeracy score model. The number of items needed in the process model to achieve an $\text{OSR}^2$ similar to that obtained by the baseline model with all 14 items is five for literacy and eight for numeracy.

![OSR${}^{2}$ of the baseline and process model with various numbers of items.[]{data-label="fig:score_pred_all"}](figures/plot_seq2seq_score_pred_l2_all_new.pdf){width="\textwidth"}

### Background Variable Prediction Results

![Prediction results for age and gender.[]{data-label="fig:bg_pred_all"}](figures/plot_seq2seq_bg_pred_l2_all.pdf){width="\textwidth"}

Figure \[fig:bg\_pred\_all\] presents the results for predicting age and gender. Adding more items to the baseline model barely improves the $\text{OSR}^2$ for predicting age, while in the process model the quantity increases as more items are included and is about twice that of the baseline model when all 14 items are included. These results show that respondents of different ages behave differently in solving PSTRE items and that response processes reveal these differences significantly better than final outcomes.
A closer examination of the action sequences shows that younger respondents are more likely to use drag-and-drop actions to move emails, while older respondents tend to move emails by using the email menu (left panel of Figure \[fig:age\_evidence\]). Also, older respondents are less likely to use “Search” in the spreadsheet environment (right panel of Figure \[fig:age\_evidence\]).

![Left: Proportion of respondents moving emails by menu in different age groups. Right: Proportion of respondents using “Search” in different age groups.[]{data-label="fig:age_evidence"}](figures/plot_age_menu.pdf "fig:"){width="45.00000%"} ![Left: Proportion of respondents moving emails by menu in different age groups. Right: Proportion of respondents using “Search” in different age groups.[]{data-label="fig:age_evidence"}](figures/plot_age_search.pdf "fig:"){width="45.00000%"}

As for gender, the highest prediction accuracy of the baseline models is 0.55, which is only 0.02 higher than the proportion of female respondents in the test set. The prediction accuracy of the process model is almost always higher than that of the corresponding baseline model and can be as high as 0.63. These observations imply that female and male respondents have similar performance on PSTRE items in terms of final outcomes, but that there are some differences in their response processes. In our data, male respondents are more likely to use sorting tools in the spreadsheet environment, as shown in Table \[table:gender\_sort\]. The p-value for the $\chi^2$ test of independence between gender and whether “Sort” is used is less than $10^{-6}$ for each of the three items with a spreadsheet environment.
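The $\chi^2$ test of independence can be reproduced for the U03a counts of Table \[table:gender\_sort\] with a standard-library sketch. The statistic here is the plain Pearson $\chi^2$ without continuity correction, so the exact value may differ slightly from other software.

```python
import math

def chi2_independence(table):
    """Pearson chi-square test of independence for a 2x2 table.
    Returns (statistic, p_value), with the p-value from the
    chi-square distribution with 1 degree of freedom."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    # survival function of chi-square with df = 1: P(X > s) = erfc(sqrt(s/2))
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# gender x "Sort" usage counts for item U03a
stat, p = chi2_independence([[418, 1238], [359, 1630]])
```

For these counts the statistic is roughly 28, giving a p-value on the order of $10^{-7}$, consistent with the "less than $10^{-6}$" statement in the text.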
  -------- ------ ------- ------ ------- ------ -------
             U03a          U19a          U19b
             Yes    No     Yes    No     Yes    No
  Male       418    1238   365    1291   661    995
  Female     359    1630   311    1678   564    1425
  -------- ------ ------- ------ ------- ------ -------

  : Contingency tables of gender and whether “Sort” is used in U03a, U19a, and U19b.[]{data-label="table:gender_sort"}

Concluding Remarks {#sec:discussion}
==================

In this article, we presented a method to extract latent features from response processes. The key step of the method is to build an action sequence autoencoder for a set of response processes. We showed through a case study of the process data of the PSTRE items in PIAAC 2012 that the extracted features improve the prediction of response outcomes and of literacy and numeracy scores.

It is possible to build neural networks that predict a response variable directly from an action sequence. Such networks are often a combination of an RNN and a feed-forward neural network. With that approach, however, one needs to fit a separate model for each response variable, and each of these models involves RNNs. Fitting models with RNN components is generally computationally expensive because of the recurrent structure. With the feature extraction method, we only need to fit a single model involving RNNs (the action sequence autoencoder), and then fit a (generalized) linear model or a feed-forward neural network for each variable of interest. The prediction performance of the two approaches is often comparable. The approach without feature extraction may perform worse than the approach with feature extraction due to overfitting.

Computer log files of interactive items often include time stamps of actions. The time elapsed between two consecutive actions may also provide extra information about respondents and can be useful in educational and cognitive assessments. The current action sequence autoencoder does not make use of this information.
Further study on incorporating time information in the analysis of process data is a potential future direction.

Acknowledgment
==============

The authors would like to thank Educational Testing Service and Qiwei He for providing the data, and Hok Kan Ling for cleaning it.

Structures of the LSTM Unit and the GRU
=======================================

LSTM Unit
---------

Using the notation in Section \[sec:rnn\], the LSTM unit computes the hidden states and outputs in time step $t$ as follows $$\begin{gathered} \label{eq:lstm} \bm z_t = \sigma (\bm q_1 + \bm W_1 \bm x_t + \bm U_1 \bm m_{t-1}),\\ \bm r_t = \sigma (\bm q_2 + \bm W_2 \bm x_t + \bm U_2 \bm m_{t-1}), \\ \tilde{\bm c}_t = \tanh (\bm q_3 + \bm W_3 \bm x_t + \bm U_3 \bm m_{t-1}), \\ \bm c_t = \bm z_t \star \bm c_{t-1} + \bm r_t \star \tilde{\bm c}_t, \\ \bm v_t = \sigma (\bm q_4 + \bm W_4 \bm x_t + \bm U_4 \bm m_{t-1}), \\ \bm m_t = \bm v_t \star \tanh (\bm c_t),\\ \bm y_t = \bm m_t,\end{gathered}$$ where $\star$ denotes element-wise multiplication, $\bm z_t$, $\bm r_t$, $\bm v_t$, and $\bm c_t$ are called the forget gate, input gate, output gate, and cell state of an LSTM unit, respectively, and $\bm q_i, \bm W_i, \bm U_i, i = 1,2,3,4$, are parameters. Both $\sigma (x) = 1/\{1+ \exp(-x)\}$ and $\tanh (x) = \{\exp(x) - \exp(-x) \}/\{\exp(x) + \exp(-x)\}$ are element-wise activation functions.
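The LSTM recursions of this appendix can be transcribed directly into NumPy. The sketch below follows the standard LSTM formulation (the output gate depends on $\bm m_{t-1}$ alone); the dimensions and random parameters are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, m_prev, c_prev, p):
    """One LSTM time step: returns the new hidden state m and cell state c."""
    z = sigmoid(p["q1"] + p["W1"] @ x + p["U1"] @ m_prev)       # forget gate
    r = sigmoid(p["q2"] + p["W2"] @ x + p["U2"] @ m_prev)       # input gate
    c_tilde = np.tanh(p["q3"] + p["W3"] @ x + p["U3"] @ m_prev) # candidate cell
    c = z * c_prev + r * c_tilde                                # cell state
    v = sigmoid(p["q4"] + p["W4"] @ x + p["U4"] @ m_prev)       # output gate
    m = v * np.tanh(c)                                          # hidden state
    return m, c

# illustrative dimensions: input size 4, hidden size 3
rng = np.random.default_rng(3)
p = {f"q{i}": rng.normal(size=3) for i in range(1, 5)}
p.update({f"W{i}": rng.normal(size=(3, 4)) for i in range(1, 5)})
p.update({f"U{i}": rng.normal(size=(3, 3)) for i in range(1, 5)})
m, c = lstm_step(rng.normal(size=4), np.zeros(3), np.zeros(3), p)
```

Since $\bm m_t = \bm v_t \star \tanh(\bm c_t)$ with $\bm v_t \in (0,1)$, every entry of the hidden state lies strictly inside $(-1, 1)$.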
GRU
---

Using the notation in Section \[sec:rnn\], the GRU computes the hidden states and outputs in time step $t$ as follows $$\begin{gathered} \label{eq:gru} \bm z_t = \sigma (\bm q_1 + \bm W_1 \bm x_t + \bm U_1 \bm m_{t-1}),\\ \bm r_t = \sigma (\bm q_2 + \bm W_2 \bm x_t + \bm U_2 \bm m_{t-1}), \\ \tilde{\bm m}_t = \tanh (\bm q_3 + \bm W_3 \bm x_t + \bm U_3 (\bm r_t \star \bm m_{t-1})), \\ \bm m_t = (1 - \bm z_t) \star \bm m_{t-1} + \bm z_t \star \tilde{\bm m}_t, \\ \bm y_t = \bm m_t,\end{gathered}$$ where $\star$ denotes element-wise multiplication, $\bm z_t$ and $\bm r_t$ are called the update gate and reset gate of a GRU, respectively, and $\bm q_i, \bm W_i, \bm U_i, i = 1,2,3$, are parameters. Both $\sigma (x) = 1/\{1+ \exp(-x)\}$ and $\tanh (x) = \{\exp(x) - \exp(-x) \}/\{\exp(x) + \exp(-x)\}$ are element-wise activation functions.

[^1]: Retrieved from <https://piaac-logdata.tba-hosting.de/>

[^2]: This example item is not used in real practice. The coding of the actions and the action sequence described above were created for illustration purposes.
--- abstract: 'Let $P$ and $Q$ be finite partially ordered sets on $[d] = \{1, \ldots, d\}$, and $\Oc(P) \subset \RR^{d}$ and $\Oc(Q) \subset \RR^{d}$ their order polytopes. The twinned order polytope of $P$ and $Q$ is the convex polytope $\Delta(P,-Q) \subset \RR^{d}$ which is the convex hull of $\Oc(P) \cup (- \Oc(Q))$. It follows that the origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$ if and only if $P$ and $Q$ possess a common linear extension. It will be proved that, when the origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$, the toric ideal of $\Delta(P,-Q)$ possesses a quadratic Gröbner basis with respect to a reverse lexicographic order for which the variable corresponding to the origin is smallest. Thus in particular if $P$ and $Q$ possess a common linear extension, then the twinned order polytope $\Delta(P,-Q)$ is a normal Gorenstein Fano polytope.' address: - 'Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan' - 'Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan' author: - Takayuki Hibi - Kazunori Matsuda title: Quadratic Gröbner bases of twinned order polytopes --- Introduction {#introduction .unnumbered} ============ In [@HMOS], from a viewpoint of Gröbner bases, the centrally symmetric configuration ([@CSC]) of the order polytope ([@Stanley]) of a finite partially ordered set is studied. In the present paper, a far-reaching generalization of [@HMOS] will be discussed. Let $P = \{ p_{1}, \ldots, p_{d} \}$ and $Q = \{ q_{1}, \ldots, q_{d} \}$ be finite partially ordered sets (posets, for short) with $|P| = |Q| = d$. A subset $I$ of $P$ is called a [*poset ideal*]{} of $P$ if $p_{i} \in I$ and $p_{j} \in P$ together with $p_{j} \leq p_{i}$ guarantee $p_{j} \in I$. 
Thus in particular the empty set $\emptyset$ as well as $P$ itself is a poset ideal of $P$. Write $\Jc(P)$ for the set of poset ideals of $P$ and $\Jc(Q)$ for that of $Q$. A [*linear extension*]{} of $P$ is a permutation $\sigma = i_{1}i_{2}\cdots i_{d}$ of $[d] = \{ 1, \ldots, d \}$ for which $a < b$ if $p_{i_{a}} < p_{i_{b}}$. Let $\eb_{1}, \ldots, \eb_{d}$ stand for the canonical unit coordinate vectors of $\RR^{d}$. Then, for each subset $I \subset P$ and for each subset $J$ of $Q$, we define $\rho(I) = \sum_{p_{i}\in I} \eb_{i}$ and $\rho(J) = \sum_{q_{j}\in J} \eb_{j}$. In particular $\rho(\emptyset)$ is the origin ${\bf 0}$ of $\RR^{d}$. Define $\Omega(P, - Q) \subset \ZZ^{d}$ as $$\Omega(P, - Q) = \{ \, \rho(I) \, : \, \emptyset \neq I \in \Jc(P) \, \} \cup \{ \, - \rho(J) \, : \, \emptyset \neq J \in \Jc(Q) \, \} \cup \{ {\bf 0} \}$$ and write $\Delta(P,-Q) \subset \RR^{d}$ for the convex polytope which is the convex hull of $\Omega(P, - Q)$. We call $\Delta(P,-Q)$ the [*twinned order polytope*]{} of $P$ and $Q$. In other words, the twinned order polytope $\Delta(P,-Q)$ of $P$ and $Q$ is the convex polytope which is the convex hull of $\Oc(P) \cup ( - \Oc(Q))$, where $\Oc(P) \subset \RR^{d}$ is the order polytope of $P$ and $- \Oc(Q) = \{ - \beta \, ; \, \beta \in \Oc(Q) \}$. One has $\dim \Delta(P,-Q) = d$. Since $\rho(P) = \rho(Q) = \eb_{1} + \cdots + \eb_{d}$, it follows that the origin ${\bf 0}$ of $\RR^{d}$ cannot be a vertex of $\Delta(P,-Q)$. In fact, the set of vertices of $\Delta(P,-Q)$ is $\Omega(P, - Q) \setminus \{\bf 0\}$.

This paper is organized as follows. In Section $1$, we prove the basic fact that the origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$ if and only if $P$ and $Q$ possess a common linear extension (Lemma \[Sapporo\]).
We then show, in Section $2$, that, when the origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$, the toric ideal of $\Delta(P,-Q)$ possesses a quadratic Gröbner basis with respect to a reverse lexicographic order for which the variable corresponding to the origin is smallest (Theorem \[Boston\]). Thus in particular if $P$ and $Q$ possess a common linear extension, then the twinned order polytope $\Delta(P,-Q)$ is a normal Gorenstein Fano polytope (Corollary \[Berkeley\]). Finally, we conclude this paper with a collection of examples in Section $3$. We refer the reader to [@dojoEN] for fundamental materials on Gröbner bases and toric ideals.

Linear extensions
=================

Let $P$ and $Q$ be finite posets with $|P| = |Q| = d$. In general, the origin ${\bf 0}$ of $\RR^{d}$ may not belong to the interior of the twinned order polytope $\Delta(P,-Q)$ of $P$ and $Q$. It is then natural to ask when the origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$.

\[Sapporo\] Let $P = \{ p_{1}, \ldots, p_{d} \}$ and $Q = \{ q_{1}, \ldots, q_{d} \}$ be finite posets. Then the following conditions are equivalent[:]{}

1. The origin of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$[;]{}

2. $P$ and $Q$ possess a common linear extension.

((i) $\Rightarrow$ (ii)) Suppose that the origin ${\bf 0}$ of $\RR^{d}$ belongs to the interior of $\Delta(P,-Q)$. Since $\Omega(P, - Q) \setminus \{\bf 0\}$ is the set of vertices of $\Delta(P,-Q)$, the existence of an equality $$\begin{aligned} \label{interior} {\bf 0} = \sum_{\emptyset \neq I \in \Jc(P)} a_{I} \cdot \rho(I) \ + \sum_{\emptyset \neq J \in \Jc(Q)} b_{J} \cdot (- \rho(J)),\end{aligned}$$ where each of $a_{I}$ and $b_{J}$ is a positive real number, is guaranteed.
Let $$\sum_{\emptyset \neq I \in \Jc(P)} a_{I} \cdot \rho(I) = \sum_{i=1}^{d} a^{*}_{i} \eb_{i}, \, \, \, \, \, \sum_{\emptyset \neq J \in \Jc(Q)} b_{J} \cdot \rho(J) = \sum_{i=1}^{d} b^{*}_{i} \eb_{i},$$ where each of $a^{*}_{i}$ and $b^{*}_{i}$ is a positive real number. Since each $I$ is a poset ideal of $P$ and each $J$ is a poset ideal of $Q$, it follows that $a^{*}_{i} > a^{*}_{j}$ if $p_{i} < p_{j}$. Let $\sigma = i_{1} i_{2} \cdots i_{d}$ be a permutation of $[d]$ for which $a < b$ if $a^{*}_{i_{a}} > a^{*}_{i_{b}}$. Then $\sigma$ is a linear extension of $P$. Furthermore, by using (\[interior\]), one has $a^{*}_{i} = b^{*}_{i}$ for $1 \leq i \leq d$. It then turns out that $\sigma$ is also a linear extension of $Q$, as required.

((ii) $\Rightarrow$ (i)) Let $\sigma = i_{1}i_{2}\cdots i_{d}$ be a linear extension of each of $P$ and $Q$. Then $I_{k} = \{p_{i_{1}}, \ldots, p_{i_{k}}\}$ is a poset ideal of $P$ and $J_{k} = \{q_{i_{1}}, \ldots, q_{i_{k}}\}$ is a poset ideal of $Q$ for $1 \leq k \leq d$. Hence $$\begin{aligned} \label{point} \pm\eb_{i_{1}}, \, \pm(\eb_{i_{1}} + \eb_{i_{2}}), \ldots, \, \pm(\eb_{i_{1}} + \cdots + \eb_{i_{d}})\end{aligned}$$ belong to $\Omega(P, - Q)$. Let $\Gamma \subset \RR^{d}$ denote the convex polytope which is the convex hull of (\[point\]). Since $\dim \Gamma = d$ and since the origin of $\RR^{d}$ belongs to the interior of $\Gamma$, it follows that the origin of $\RR^{d}$ belongs to the interior of $\Delta(P, - Q)$, as desired.

Quadratic Gröbner bases
=======================

Let, as before, $P = \{ p_{1}, \ldots, p_{d} \}$ and $Q = \{ q_{1}, \ldots, q_{d} \}$ be finite partially ordered sets. Let $K[{\bf t}, {\bf t}^{-1}, s] = K[t_{1}, \ldots, t_{d}, t_{1}^{-1}, \ldots, t_{d}^{-1}, s]$ denote the Laurent polynomial ring in $2d + 1$ variables over a field $K$. If $\alpha = (\alpha_{1}, \ldots, \alpha_{d}) \in \ZZ^{d}$, then ${\bf t}^{\alpha}s$ is the Laurent monomial $t_{1}^{\alpha_{1}} \cdots t_{d}^{\alpha_{d}}s$.
In particular ${\bf t}^{\bf 0}s = s$. The [*toric ring*]{} of $\Omega(P, - Q)$ is the subring $K[\Omega(P, - Q)]$ of $K[{\bf t}, {\bf t}^{-1}, s]$ which is generated by those Laurent monomials ${\bf t}^{\alpha}s$ with $\alpha \in \Omega(P, - Q)$. Let $$K[{\bf x}, {\bf y}, z] = K[\{x_{I}\}_{\emptyset \neq I \in \Jc(P)} \cup \{y_{J}\}_{\emptyset \neq J \in \Jc(Q)} \cup \{ z \}]$$ denote the polynomial ring in $|\Omega(P, - Q)|$ variables over $K$ and define the surjective ring homomorphism $\pi : K[{\bf x}, {\bf y}, z] \to K[\Omega(P, - Q)]$ by setting - $\pi(x_{I}) = {\bf t}^{\rho(I)}s$, where $\emptyset \neq I \in \Jc(P)$; - $\pi(y_{J}) = {\bf t}^{- \rho(J)}s$, where $\emptyset \neq J \in \Jc(Q)$; - $\pi(z) = s$. The [*toric ideal*]{} $I_{\Omega(P, - Q)}$ of $\Omega(P, - Q)$ is the kernel of $\pi$. Let $<$ denote a reverse lexicographic order on $K[{\bf x}, {\bf y}, z]$ satisfying - $z < x_{I}$ and $z < y_{J}$; - $x_{I'} < x_{I}$ if $I' \subset I$; - $y_{J'} < y_{J}$ if $J' \subset J$, and $\Gc$ the set of the following binomials: 1. $x_{I}x_{I'} - x_{I\cap I'}x_{I \cup I'}$; 2. $y_{J}y_{J'} - y_{J\cap J'}y_{J \cup J'}$; 3. $x_{I}y_{J} - x_{I \setminus \{p_{i}\}}y_{J \setminus \{q_{i}\}}$, where - $x_{\emptyset} = y_{\emptyset} = z$; - $I$ and $I'$ belong to $\Jc(P)$ and $J$ and $J'$ belong to $\Jc(Q)$; - $p_{i}$ is a maximal element of $I$ and $q_{i}$ is a maximal element of $J$. \[Boston\] Work with the same situation as above. Suppose that $P$ and $Q$ possess a common linear extension. Then $\Gc$ is a Gröbner basis of $I_{\Omega(P, - Q)}$ with respect to $<$. It is clear that $\Gc \subset I_{\Omega(P, - Q)}$. In general, if $f = u - v$ is a binomial, then $u$ is called the [*first*]{} monomial of $f$ and $v$ is called the [*second*]{} monomial of $f$. The initial monomial of each of the binomials (i) – (iii) with respect to $<$ is its first monomial. Let ${\rm in}_{<}(\Gc)$ denote the set of initial monomials of binomials belonging to $\Gc$. 
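To make the construction above concrete, the following sketch enumerates the poset ideals of a small poset and verifies that a type (iii) binomial lies in the kernel of $\pi$ by comparing exponent data of the Laurent monomials. The example posets (two-element chains) are chosen purely for illustration.

```python
from itertools import combinations

def poset_ideals(d, relations):
    """All poset ideals (down-sets) of a poset on {1,...,d}; relations is a
    set of pairs (i, j) meaning p_i < p_j."""
    ideals = []
    for r in range(d + 1):
        for subset in combinations(range(1, d + 1), r):
            s = set(subset)
            # a down-set may not contain p_j without also containing p_i < p_j
            if all(not (j in s and i not in s) for (i, j) in relations):
                ideals.append(frozenset(s))
    return ideals

def rho(ideal, d):
    """Indicator vector of an ideal, as in the definition of rho."""
    return tuple(1 if i in ideal else 0 for i in range(1, d + 1))

def pi_exponent(x_ideal, y_ideal, d):
    """Exponent data of pi(x_I * y_J): (t-exponents, degree in s)."""
    t = tuple(a - b for a, b in zip(rho(x_ideal, d), rho(y_ideal, d)))
    return t, 2  # each of the two variables contributes one factor of s

# P = Q = chain p1 < p2; the type (iii) binomial with I = J = {1, 2}, i = 2
# is x_{12} y_{12} - x_{1} y_{1}; both monomials map to s^2 under pi
d = 2
lhs = pi_exponent(frozenset({1, 2}), frozenset({1, 2}), d)
rhs = pi_exponent(frozenset({1}), frozenset({1}), d)
```

Equality of `lhs` and `rhs` confirms that the binomial belongs to the toric ideal $I_{\Omega(P, - Q)}$.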
It follows from [@OHrootsystem (0.1)] that, in order to show that $\Gc$ is a Gröbner basis of $I_{\Omega(P, - Q)}$ with respect to $<$, what we must prove is the following: ($\clubsuit$) If $u$ and $v$ are monomials belonging to $K[{\bf x}, {\bf y}, z]$ with $u \neq v$ such that $u \not\in \langle {\rm in}_{<}(\Gc) \rangle$ and $v \not\in \langle {\rm in}_{<}(\Gc) \rangle$, then $\pi(u) \neq \pi(v)$. Let $u$ and $v$ be monomials belonging to $K[{\bf x}, {\bf y}, z]$ with $u \neq v$. Write $$u = z^{\alpha} x_{I_{1}}^{\xi_{1}} \cdots x_{I_{a}}^{\xi_{a}} y_{J_{1}}^{\nu_{1}} \cdots y_{J_{b}}^{\nu_{b}}, \, \, \, \, \, \, \, \, \, \, v = z^{\alpha'} x_{I'_{1}}^{\xi'_{1}} \cdots x_{I'_{a'}}^{\xi'_{a'}} y_{J'_{1}}^{\nu'_{1}} \cdots y_{J'_{b'}}^{\nu'_{b'}},$$ where - $\alpha \geq 0$, $\alpha' \geq 0$; - $I_{1}, \ldots, I_{a}, I'_{1}, \ldots, I'_{a'} \in \Jc(P) \setminus \{ \emptyset \}$; - $J_{1}, \ldots, J_{b}, J'_{1}, \ldots, J'_{b'} \in \Jc(Q) \setminus \{ \emptyset \}$; - $\xi_{1}, \ldots, \xi_{a}, \nu_{1}, \ldots, \nu_{b}, \xi'_{1}, \ldots, \xi'_{a'}, \nu'_{1}, \ldots, \nu'_{b'} > 0$, and where $u$ and $v$ are relatively prime with $u \not\in \langle {\rm in}_{<}(\Gc) \rangle$ and $v \not\in \langle {\rm in}_{<}(\Gc) \rangle$. Especially either $\alpha = 0$ or $\alpha' = 0$. Let, say, $\alpha' = 0$. 
Thus $$u = z^{\alpha} x_{I_{1}}^{\xi_{1}} \cdots x_{I_{a}}^{\xi_{a}} y_{J_{1}}^{\nu_{1}} \cdots y_{J_{b}}^{\nu_{b}}, \, \, \, \, \, \, \, \, \, \, v = x_{I'_{1}}^{\xi'_{1}} \cdots x_{I'_{a'}}^{\xi'_{a'}} y_{J'_{1}}^{\nu'_{1}} \cdots y_{J'_{b'}}^{\nu'_{b'}}.$$ By using (i) and (ii), it follows that - $I_{1} \subset I_{2} \subset \cdots \subset I_{a}, \, I_{1} \neq I_{2} \neq \cdots \neq I_{a}$; - $J_{1} \subset J_{2} \subset \cdots \subset J_{b}, \, J_{1} \neq J_{2} \neq \cdots \neq J_{b}$; - $I'_{1} \subset I'_{2} \subset \cdots \subset I'_{a'}, \, I'_{1} \neq I'_{2} \neq \cdots \neq I'_{a'}$; - $J'_{1} \subset J'_{2} \subset \cdots \subset J'_{b'}, \, J'_{1} \neq J'_{2} \neq \cdots \neq J'_{b'}$. Furthermore, by virtue of [@Hibi1987], it suffices to discuss $u$ and $v$ with $(a, a') \neq (0, 0)$ and $(b, b') \neq (0,0)$. Let $A_{i}$ denote the power of $t_{i}$ appearing in $\pi(x_{I_{1}}^{\xi_{1}} \cdots x_{I_{a}}^{\xi_{a}})$ and $A'_{i}$ the power of $t_{i}$ appearing in $\pi(x_{I'_{1}}^{\xi'_{1}} \cdots x_{I'_{a'}}^{\xi'_{a'}})$. Similarly let $B_{i}$ denote the power of $t_{i}^{-1}$ appearing in $\pi(y_{J_{1}}^{\nu_{1}} \cdots y_{J_{b}}^{\nu_{b}})$ and $B'_{i}$ the power of $t_{i}^{-1}$ appearing in $\pi(y_{J'_{1}}^{\nu'_{1}} \cdots y_{J'_{b'}}^{\nu'_{b'}})$. Since $P$ and $Q$ possess a common linear extension, after relabeling the elements of $P$ and $Q$, we assume that if $p_{r} < p_{s}$ in $P$, then $r < s$, and if $q_{r'} < q_{s'}$ in $Q$, then $r' < s'$. Let $1 \leq j_{*} \leq d$ denote the biggest integer for which one has $A_{j_{*}} \neq A'_{j_{*}}$. Since $I_{a} \neq I'_{a'}$, the existence of $j_{*}$ is guaranteed. Let $j_{*} = d$ and, say, $A_{d} > A'_{d}$. Then $p_{d} \in I_{a}$. Since $p_{d}$ is a maximal element of $P$ and $q_{d}$ is that of $Q$, by using (iii), it follows that $q_{d}$ cannot belong to $J_{b}$. Hence $\pi(u) \neq \pi(v)$, as desired. Let $j_{*} < d$ and $A_{j_{*}} > A'_{j_{*}}$. 
Let $1 \leq e \leq a$ denote the integer with $p_{j_{*}} \in I_{e}$ and $p_{j_{*}} \not\in I_{e - 1}$. We claim that $p_{j_{*}}$ is a maximal element of $I_{e}$. To see why this is true, let $p_{j_{*}} < p_{h}$ in $I_{e}$. Then $j_{*} < h$. Since both $p_{j_{*}}$ and $p_{h}$ belong to each of $I_{e}, I_{e+1}, \ldots, I_{a}$, it follows that $A_{j_{*}} = A_{h}$. Now, since $p_{j_{*}} < p_{h}$, one has $A'_{j_{*}} \geq A'_{h}$. Hence $A_{h} = A_{j_{*}} > A'_{j_{*}} \geq A'_{h}$. However, the definition of $j_{*}$ says that $A_{h} = A'_{h}$, a contradiction. Hence $p_{j_{*}}$ is a maximal element of $I_{e}$. Now, suppose that $\pi(u) = \pi(v)$. Then $B_{d} = B'_{d}, \ldots, B_{j_{*}+1} = B'_{j_{*}+1}$ and $B_{j_{*}} > B'_{j_{*}}$. Then the above argument guarantees the existence of $J_{e'}$ for which $q_{j_{*}}$ is a maximal element of $J_{e'}$. The fact that $p_{j_{*}}$ is a maximal element of $I_{e}$ and $q_{j_{*}}$ is that of $J_{e'}$ contradicts (iii). As a result, one has $\pi(u) \neq \pi(v)$, as desired. Theorem \[Boston\] is a far-reaching generalization of [@HMOS Theorem 2.2]. We refer the reader to [@HMOS] and [@harmony] for basic materials on normal Gorenstein Fano polytopes. As in [@HMOS Corollary 2.3] and [@harmony Corollary 1.3], it follows that \[Berkeley\] If $P$ and $Q$ possess a common linear extension, then the twinned order polytope $\Delta(P,-Q)$ is a normal Gorenstein Fano polytope. Examples ======== We conclude this paper with a collection of examples. It is natural to ask whether, in general, the toric ideal $I_{\Omega(P, - Q)}$ possesses a quadratic Gröbner basis with respect to a reverse lexicographic order as in Theorem \[Boston\]. *In general, a toric ideal $I_{\Omega(P, - Q)}$ may not possess a quadratic Gröbner basis with respect to a reverse lexicographic order $<$ introduced as above.
Let $P = \{p_{1}, \ldots, p_{5}\}$ and $Q = \{q_{1}, \ldots, q_{5}\}$ be the finite posets whose covering relations are $p_{1} < p_{3}$, $p_{2} < p_{3}$, $p_{2} < p_{4}$, $p_{3} < p_{5}$, $p_{4} < p_{5}$ in $P$ and $q_{4} < q_{3}$, $q_{3} < q_{2}$, $q_{2} < q_{1}$, $q_{4} < q_{5}$ in $Q$.* Since $p_1 < p_3$ and $q_3 < q_1$, it follows that no linear extension of $P$ is a linear extension of $Q$. Then a routine computation guarantees that, for any reverse lexicographic order as in Theorem \[Boston\], the binomial $$\begin{aligned} \label{hello} x_{\{2\}} x_{\{1, 2, 3, 4\}} y_{\{1, 2, 3, 4, 5\}} - x_{\{2, 4\}} y_{\{4, 5\}}z\end{aligned}$$ belongs to the reduced Gröbner basis of $I_{\Omega(P, - Q)}$ with respect to $<$. However, the toric ideal $I_{\Omega(P, - Q)}$ is generated by quadratic binomials. The $S$-polynomial of the binomials $$x_{\{2, 4\}} x_{\{1, 2 , 3\}} - x_{\{2\}} x_{\{1, 2, 3, 4\}}, \, \, \, x_{\{1, 2, 3\}} y_{\{1, 2, 3, 4, 5\}} - y_{\{4, 5\}} z$$ belonging to a system of generators of $I_{\Omega(P, - Q)}$ coincides with the binomial (\[hello\]). Let $P$ and $Q$ be arbitrary finite posets with $|P| = |Q| = d$. Then the toric ideal $I_{\Omega(P, - Q)}$ is generated by quadratic binomials. Let $\delta(\Delta(P,-Q))$ denote the $\delta$-vector ([@HibiRedBook p. 79]) of $\Delta(P,-Q)$. It then follows that, if $P$ and $Q$ possess a common linear extension, then $\delta(\Delta(P,-Q))$ is symmetric and unimodal. [*Let $P = \{ p_{1}, \ldots, p_{d} \}$ be a chain and $Q = \{ q_{1}, \ldots, q_{d} \}$ an antichain.
Then the $\delta$-vector of $\Delta(P,-Q)$ is $$\begin{aligned} d = 2 &:& (1, 3, 1), \\ d = 3 &:& (1, 7, 7, 1), \\ d = 4 &:& (1, 15, 33, 15, 1), \\ d = 5 &:& (1, 31, 131, 131, 31, 1), \\ d = 6 &:& (1, 63, 473, 883, 473, 63, 1).\end{aligned}$$ It seems likely that $\delta(\Delta(P,-Q))$ coincides with the Pascal-like triangle [@Barry pp. 11–12] with $r = 1$.* ]{} [99]{} P. Barry, General Eulerian polynomials as moments using exponential Riordan arrays, [*J. Integer Seq.*]{} [**16**]{} (2013), Article 13.9.6. T. Hibi, Distributive lattices, affine semigroup rings and algebras with straightening laws, [*in*]{} “Commutative Algebra and Combinatorics” (M. Nagata and H. Matsumura, Eds.), Advanced Studies in Pure Math., Volume 11, North–Holland, Amsterdam, 1987, pp. 93 – 109. T. Hibi, “Algebraic Combinatorics on Convex Polytopes,” Carslaw Publications, Glebe NSW, Australia, 1992. T. Hibi, Ed., “Gröbner Bases: Statistics and Software Systems,” Springer, 2013. T. Hibi, K. Matsuda, H. Ohsugi and K. Shibata, Centrally symmetric configurations of order polytopes, arXiv:1409.4386. H. Ohsugi and T. Hibi, Quadratic initial ideals of root systems, [*Proc. Amer. Math. Soc.*]{} [**130**]{} (2002), 1913–1922. H. Ohsugi and T. Hibi, Centrally symmetric configurations of integer matrices, [*Nagoya Math. J.*]{} [**216**]{} (2014), 153–170. H. Ohsugi and T. Hibi, Reverse lexicographic squarefree initial ideals and Gorenstein Fano polytopes, arXiv:1410.4786. R. P. Stanley, Two poset polytopes, [*Disc. Comput. Geom.*]{} [**1**]{} (1986), 9–23.
--- author: - | Xiang-Yang Li$^\mathcal{z}$ and Taeho Jung$^\mathcal{y}$\ Department of Computer Science, Illinois Institute of Technology, Chicago, IL\ $^\mathcal{z}$xli@cs.iit.edu, $^\mathcal{y}$tjung@hawk.iit.edu title: | Search Me If You Can:\ Privacy-preserving Location Query Service ---
--- abstract: 'I shall present nonperturbative calculations of the electron’s magnetic moment using the light-cone representation.' author: - 'G. McCartor' title: 'A Nonperturbative Calculation of the Electron’s Magnetic Moment' --- SMUHEP/03-12 INTRODUCTION ============ For some time we have been studying the problem of performing nonperturbative calculations in quantum field theory [@bhm1; @bhm2; @bhm3; @bhm4; @bhm5; @Paston:1997hs; @Paston:2000fq]. The most serious problem is to find a method of regularizing the calculation in such a way as to preserve the sacred symmetries (Lorentz and gauge invariance), or at least preserve them well enough to allow an effective renormalization to be performed. Since the method of regularization must also allow for efficient calculations to be performed, the problem presents a challenge. We have been interested in the possibility of including Pauli–Villars fields in the calculation — that is, including them in the lagrangian right from the beginning. This preserves Lorentz invariance; in some cases it also effectively preserves gauge invariance; in those cases where it breaks gauge invariance we shall have to add counter terms. These procedures will, in some cases, give a finite theory which preserves the symmetries. But since we do not know how to find an exact solution, we must also manage to produce an approximate solution. All the ways that we know of to do that involve truncating the Fock space. That truncation will certainly break all of the symmetries. We argue, though, that in the last step (the truncation) it is more a question of accuracy than symmetry. With the regulators in place we presume that there is an exact solution which preserves the symmetries, and if our approximate solution is close to the exact one, even if the small difference is in such a direction as to maximally violate the symmetries, it is still a small difference. Of course, the negative metric fields will violate unitarity.
We shall have more to say about that below. For reasons also discussed below, we have chosen to formulate the method in the light-cone representation. To further test the methods on a realistic problem to which we know the answer, we have chosen to do a nonperturbative calculation of the electron’s magnetic moment. In some ways the problem is not ideal: the physical electron is a very perturbative object and is in some ways ill suited to our methods; we therefore do not expect to do better than, or even as well as, perturbation theory. But that is not our objective; we simply want to verify that an approximate nonperturbative solution for the electron’s magnetic moment is an approximation to QED. For that study, the problem has the advantages that we know the answer, we know that the answer is given by QED and we know the details of how the answer is given by QED. If the method gives a useful answer to this problem, even if it is not as good as perturbation theory, we may hope that the method will give a useful answer to problems that perturbation theory cannot attempt, such as hadron bound states. We find that there are three problems which must be solved in order to produce a useful calculation of the electron’s magnetic moment: the problem of maintaining gauge invariance; the problem of uncancelled divergences; and the problem of new singularities. We believe that we have found effective solutions to these problems, at least for the present calculations. I shall present these solutions in later sections but first I want to present a calculation which does not work. The justification for presenting an unsuccessful calculation is that it is the application of the standard light-cone methods to the problem of the magnetic moment. There are important points to be found in the failure of that calculation and that failure shows that the later successes are not entirely trivial.
TROUBLE IN LIGHT-CONE GAUGE =========================== We use the standard $P^-$ of light-cone quantization in light-cone gauge, except for the modifications due to the inclusion of the Pauli–Villars fields; it has been written down without Pauli–Villars several times [@tbp]. The problem was considered in a perturbative context by Langnau and Burkardt [@lb]. We should remark, however, that with the inclusion of any number of Pauli–Villars fermi fields, the four point interactions which would take a state of one electron and one photon to another state of one electron and one photon are missing from $P^-$; that such terms are not included below is not an omission; the calculation is complete in our chosen subspace. We truncate the Fock space to the one electron sector plus the one electron, one photon sector. We then solve the eigenvalue problem: $$P^- \ket{s} = M^2 \ket{s}$$ Once we have the wave function we calculate the magnetic moment using the method of [@Brodsky:1980zm]. We have carried through the proposed manipulations but they do not lead to a successful calculation. The problem is that our estimate of the anomalous magnetic moment has a very strong dependence on the Pauli-Villars mass scale. If we use units of ${\alpha \over 2 \pi}$, so that the correct value is near 1, then we find that, even with a value for the photon mass as large as 0.5 electron masses, when the Pauli-Villars mass scale changes from 3 times the electron mass to 7 times the electron mass the anomalous moment changes from 1.2 to -1.2. If we use a smaller value for the photon mass, which we would surely have to do to get useful results, the dependence is even stronger. Since we cannot hope to estimate the optimum value for the Pauli–Villars mass scale even to within this range, the present calculation is clearly useless. Furthermore, the problem is clearly the loss of gauge invariance; gauge invariance should prevent such strong behavior.
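Schematically, truncating the Fock space reduces the eigenvalue problem $P^- \ket{s} = M^2 \ket{s}$ to the diagonalization of a finite matrix: one row and column for the bare-electron state plus a grid of electron–photon states. The sketch below is purely illustrative; the grid, masses and vertex are invented stand-ins, not the light-cone $P^-$ of this paper.

```python
import numpy as np

# Illustrative only: the truncated problem P^-|s> = M^2|s> as a finite
# arrowhead matrix -- one bare-electron state coupled to a grid of
# (electron, photon) states.  All matrix elements are invented stand-ins.
n = 50                                  # grid points in the two-particle sector
m0_sq = 0.9                             # toy bare electron mass squared
g = 0.3                                 # toy coupling
x = (np.arange(n) + 0.5) / n            # longitudinal momentum fractions

H = np.zeros((n + 1, n + 1))
H[0, 0] = m0_sq                                       # one-electron sector
H[1:, 1:] = np.diag((m0_sq + 0.5) / (x * (1 - x)))    # free two-particle masses
H[0, 1:] = H[1:, 0] = g / np.sqrt(n * x)              # emission/absorption vertex

M_sq = np.linalg.eigvalsh(H)[0]              # lowest eigenvalue plays the role of M^2
wave_function = np.linalg.eigh(H)[1][:, 0]   # truncated wave function
```

The lowest eigenvalue plays the role of $M^2$, and the corresponding eigenvector is the truncated wave function from which observables such as the magnetic moment would be computed.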
We note that if we keep only the physical field and set $M = m_0 =m$, the function which appears in our eigenvalue equation is just the (unregulated) one-loop fermion self-energy: $${e^2 \over 16 \pi^2} \int dx dz {1 \over x} {{1 + x^2 \over (1 - x)^2} z + m^2 (1-x)^2 \over m^2 x (1-x) - m^2 (1-x) - \mu^2 x - z} \label{lca}$$ Therefore, a very useful point of comparison is the paper of Brodsky, Roskies and Suaya [@brs]. They evaluated all the graphs needed to calculate the electron’s magnetic moment in perturbation theory through order $\alpha^2$. Included in their calculations is the one-loop electron self-energy. They did not use light-cone quantization but wrote down time-ordered perturbation theory in the equal-time representation then boosted to the infinite momentum frame. They worked in Feynman gauge but the electron self-energy should be gauge invariant. They get $${e^2 \over 16 \pi^2} \int dx dz {1\over x}{z + m^2(1 - 4 x - x^2) \over m^2 x (1-x) - m^2 (1-x) - \mu^2 x - z}$$ While the integrands do not have to be identical, the results of the integrations should be equal. Whether we regulate with Pauli-Villars electrons, Pauli-Villars photons or a combination of both, the integrals are not equal. If we tried to use a momentum cutoff the results would be even worse. It is clear that the problem is that gauge invariance has been lost in solving the constraint equation $$\partial_-^2 A^- + \partial_-\partial_i A^i = -e \Psi_+^\dagger \Psi_+$$ We do not yet know exactly what has gone wrong with solving the equation. It is possible that the wrong boundary conditions have been used or that the equation is wrong as it stands: it must be the equation satisfied by the regulated fields and something like Schwinger terms might need to be included. 
We hope to report further on the details of a resolution to the problem of the loss of gauge invariance in what may be thought of as standard light-cone techniques, but we will not consider the problem further in the present paper. Instead, we will turn our attention to ways that we know of to fix the problem. These ways involve the use of other gauges or other methods of regulation. UNCANCELLED DIVERGENCES ======================= The problem of uncancelled divergences was not discovered by us but has been known for a long time. It occurs any time we truncate the Fock space. It is perhaps easiest to understand by comparing with perturbation theory, for example, for the electron’s magnetic moment considered in this paper. If we truncate the space to include only the subspace of one electron and the subspace of one electron and one photon, calculate the wave function nonperturbatively and use that wave function to calculate the magnetic moment, we get a result of the form: $$\kappa = {g^2 \, [\mbox{finite quantity}] \over 1 + g^2 [\mbox{finite quantity}] + g^2 [\mbox{finite quantity}] \log \mu_2}$$ where $\mu_2$ is the Pauli–Villars mass scale. If we let $\mu_2$ go to infinity without allowing the coupling constant to vary, we will get zero. That would not happen in perturbation theory: since the numerator is already order $g^2$ we would use only the 1 from the denominator and might get a nonzero result. In order $g^4$ we would use the divergent term in the denominator but there would be new terms in the numerator which would contain cancelling divergences. This is the problem of uncancelled divergences. While we can find ways of allowing the bare mass and the coupling constant to depend on the Pauli–Villars masses that will allow a finite result to be obtained, the results we obtain do not look anything like the results from perturbation theory and generally do not make much sense physically. The solution is to keep the Pauli–Villars masses finite.
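The structure of the formula above can be illustrated numerically with a toy model (all coefficients invented for illustration): the nonperturbative ratio is driven to zero by the $\log \mu_2$ in the denominator, while the strict order-$g^2$ expansion never sees that log.

```python
import math

# toy stand-in for kappa = g^2 A / (1 + g^2 B + g^2 C log mu_2);
# A, B, C are invented coefficients, not taken from the theory
A, B, C, g2 = 1.0, 0.5, 0.3, 0.1

def kappa_nonperturbative(mu2):
    return g2 * A / (1.0 + g2 * B + g2 * C * math.log(mu2))

def kappa_order_g2(mu2):
    # expanding the denominator to order g^2: the log never appears
    return g2 * A

for mu2 in (1e2, 1e6, 1e12):
    print(mu2, kappa_nonperturbative(mu2), kappa_order_g2(mu2))
```

At fixed coupling the first column of output decreases steadily with $\mu_2$, which is the uncancelled divergence; the perturbative value is independent of $\mu_2$.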
We think of it this way: If the limit of infinite Pauli–Villars masses would give a useful answer in the case where we do not truncate the Fock space (so, we have no uncancelled divergences), then there must be some finite value of the Pauli-Villars masses that would also give a useful answer. The question is whether we can use a sufficiently large value. To answer that question we must consider that there are two types of error associated with the values of the Pauli–Villars masses. The first type of error results from having these masses too small; then our wave function will contain too much of the negative normed states, unitarity will be badly violated and in the worst case we might get negative probabilities. That type of error goes like $$E_1 \sim {M_1/M_2}$$ where $M_1$ is the physical mass scale and $M_2$ is the Pauli–Villars mass scale. The other type of error results from having the Pauli–Villars masses too large; in that case our wave function will project significantly onto the parts of the representation space excluded by the truncation. That error can be roughly estimated as $$E_2 \sim {\langle \Phi_{+}^\prime|\Phi_{+}^\prime\rangle \over \langle \Phi_{+}|\Phi_{+}\rangle}$$ where $|\Phi_{+}^\prime\rangle$ is the projection of the wave function onto the excluded sectors. In practice, this quantity can be estimated by doing a perturbative calculation using the projection onto the first excluded Fock sector as the perturbation. If both types of error are small, we can do a useful calculation; otherwise not. The main reason for thinking that we might be able to do a useful calculation in spite of the problem of uncancelled divergences is the lesson from earlier studies mentioned above: the rapid drop off of the projection of the wave function onto higher Fock sectors. Just where this rapid drop off occurs depends on the theory, the coupling constant and the values of the Pauli–Villars masses.
At weak coupling and relatively light Pauli–Villars masses only the lowest Fock sectors are significantly populated. At stronger coupling or heavier Pauli–Villars masses, more Fock sectors will be populated; but eventually the projection onto higher sectors will fall rapidly. The rapid drop off of the projection of the wave function onto sufficiently high Fock sectors is the most important reason why we do our calculations in the light-cone representation. For any practical calculations on realistic theories we have to truncate the space and we must have a framework in which that procedure can lead to a useful calculation. The rapid drop off in the projection of the wave function will not happen in the equal-time representation mostly due to the complexity of the vacuum in that representation. These features can be explicitly demonstrated by setting the Pauli–Villars masses equal to the physical masses. In that case the theory becomes exactly solvable [@bhm4]. The spectrum is the free spectrum and the theory is not useful for describing real physical processes due to the strong presence of the negative normed states in physical wave functions, but it still illustrates the points we have been trying to make. In that theory the physical vacuum is the bare light-cone vacuum while it is a very complicated state in the equal-time representation. Physical wave functions project onto a finite number of Fock sectors in the light-cone representation but onto an infinite number of sectors in the equal-time representation. While the operators that create the physical eigenstates from the vacuum are more complicated in the equal-time representation than in the light-cone representation, the major source of the enormous complication of the equal-time wave functions is the equal-time vacuum. 
As the Pauli–Villars masses become larger than the physical masses, the light-cone wave functions project onto more of the representation space, the more so as the coupling constant is larger and the Pauli–Villars masses are larger, but the wave functions remain much simpler than in the equal-time representation and, to the extent we can do the calculations, there is always a point of rapid drop off of the projection onto higher Fock sectors. Due to the rapid drop off of the projection of the wave function onto the higher Fock sectors we believe that, given a value of the Pauli–Villars mass scale sufficiently large to assure that the first type of error discussed above — the one due to taking the Pauli–Villars masses too small — is small, we could find a sufficiently large part of the representation space such that the second type of error will also be small. Therefore, we do not believe that the need to keep the value of the Pauli–Villars masses finite imposes any fundamental limitation on the accuracy which could, in principle, be achieved. Of course, even if that is true it is not guaranteed that the required part of the representation space would be small enough to allow us to perform accurate calculations. Furthermore, for the method to be useful in practice, there must not only be a value of the Pauli–Villars mass for which both types of errors are small but there must be a wide range of such values since the optimum value for the Pauli–Villars mass can only be rather crudely estimated. A principal objective of the present work is to test these ideas on a physically realistic problem to which we know the answer. NEW SINGULARITIES ================= To perform the nonperturbative calculations we must face the problem of new singularities. We have to do integrals with denominators of the form $(- M^2 x (1-x) + m^2 x + \mu^2 (1-x) + z)$, where $M$ is the physical electron mass, $m$ is the bare electron mass and $\mu$ is the photon mass.
When the bare mass is less than the physical mass, as is the case in QED, there can be a zero in this denominator. In perturbation theory the expansion is about $M = m$ and the denominator cannot vanish as long as the photon is given a small nonzero mass (or a large nonzero mass if it is a Pauli–Villars photon). The standard techniques in perturbation theory thus avoid this singularity. We find that when the zero is a simple pole, the principal value prescription is correct. But in the wave function normalization the denominator is squared so there is a double pole and we must give it a meaning. We believe that the correct prescription is $$\begin{aligned} &&\int dy \; dz \; y \;{f(y,z)\over [ m^2 y + \mu_0^2 (1-y) -M^2 y (1-y) + z]^2} \equiv \lim_{\epsilon\rightarrow 0} \nonumber \\ &&{1\over 2 \epsilon} \int dy \int dz f(y,z) \Bigg[{1 \over [ m^2 y + \mu_0^2 (1-y) - M^2 y (1-y) + z - \epsilon]}\nonumber \\ &&- {1 \over [ m^2 y + \mu_0^2 (1-y) - M^2 y (1-y) + z + \epsilon]}\Bigg]\end{aligned}$$ This prescription has the interesting consequence that the wave function normalization is infrared finite whereas it is infrared divergent in perturbation theory. The reason is that, with the prescription, the true singularity occurs at $M = m + \mu$; in perturbation theory, with $M = m$, this is at $\mu = 0$, which is the infrared singularity and the reason that the photon mass cannot be taken all the way to zero in perturbation theory. For the nonperturbative calculation, the physical photon mass can be taken to zero since $M \neq m$. The basic requirement of these prescriptions is that they preserve the Ward identities. We have not shown that the prescription preserves the Ward identities but the answers we get do depend on the prescription and we believe that it passes the test so far. FEYNMAN GAUGE ============= In this section we shall calculate the electron’s magnetic moment using Feynman gauge.
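Before turning to the Feynman-gauge calculation, the double-pole prescription above can be sanity-checked numerically on a one-dimensional model integral (the integrand $f(z) = z$ and the pole location are invented for illustration; the principal value of each simple-pole term is computed by the standard subtraction trick):

```python
import math

def pv_integral(f, a, b, c, n=20001):
    """Principal value of int_a^b f(z)/(z-c) dz for a < c < b, via the
    subtraction trick: (f(z)-f(c))/(z-c) is regular at z = c."""
    fc = f(c)
    def g(z):
        if abs(z - c) < 1e-12:            # removable singularity -> f'(c)
            h = 1e-6
            return (f(c + h) - f(c - h)) / (2 * h)
        return (f(z) - fc) / (z - c)
    h = (b - a) / n
    s = sum(g(a + (k + 0.5) * h) for k in range(n)) * h   # midpoint rule
    return s + fc * math.log((b - c) / (c - a))

def double_pole(f, a, b, c, eps):
    """The symmetric-difference prescription for int f(z)/(z-c)^2 dz:
    (1/2eps) [ pole shifted to c+eps  minus  pole shifted to c-eps ]."""
    return (pv_integral(f, a, b, c + eps)
            - pv_integral(f, a, b, c - eps)) / (2.0 * eps)

val = double_pole(lambda z: z, 0.0, 2.0, 1.0, 1e-3)
```

For this model the prescription converges, as $\epsilon \to 0$, to the Hadamard finite part of $\int_0^2 z\,(z-1)^{-2}\,dz$, which is $-2$.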
We shall regulate the theory by including one Pauli-Villars photon and one Pauli-Villars fermion, with flavor changing currents. The Lagrangian is thus $$\sum_{i=0}^1 -{1 \over 4} (-1)^i F_i^{\mu \nu} F_{i,\mu \nu} -{\lambda_i \over 2} (\partial_\mu A_i^\mu)^2 + \sum_{i=0}^1 (-1)^i \bar{\psi_i} (i \gamma^\mu \partial_\mu - m_i) \psi_i - e \bar{\psi}\gamma^\mu \psi A_\mu$$ where $$A^\mu = \sum_{i=0}^1 A^\mu_i \quad \psi = \sum_{i=0}^1 \psi_i \quad F_i^{\mu \nu} = \partial^\mu A_{i}^{\nu}-\partial^\nu A_{i}^{\mu}$$ Here, $i=0$ are the physical fields and $i=1$ are the Pauli-Villars (negative metric) fields. We must remark on two effects of including the Pauli-Villars fermi fields with the flavor changing currents, one good effect and one apparently bad effect. The first effect, the good one, pertains to the operator $P^-$. If one works out $P^-$ including only the physical fields one encounters the need to invert the covariant derivative $\partial_- - e A_-$ [@sb]. The same problem occurs in any gauge where $A_-$ is not zero. That complication is perhaps the main reason that gauges other than light-cone gauge have received relatively little attention in the light-cone representation. While the inverse of the covariant derivative can be defined by a power series in $e$, or, in a truncated space may be calculated exactly if the truncation is sufficiently severe, one has the feeling that $P^-$ has not been fully written down since there remains the nontrivial problem of inverting the covariant derivative. But with the inclusion of the Pauli-Villars fermions with the flavor changing currents we find that the problem does not occur: the inverse of the covariant derivative is replaced by the inverse of the ordinary derivative. The second effect of the flavor changing currents (the apparently bad effect) is that they break gauge invariance.
That would seem to require that we include counter terms in the lagrangian to correct for the breaking of gauge invariance. It turns out that that is not necessary. The reason is that we can take the limit of the Pauli-Villars fermion mass, $m_2$, going to infinity. One might properly worry that there might still be finite effects of the necessary counter terms but the counter terms go to zero like powers of $m_2$ while the only divergences we encounter are logs. We shall therefore proceed with the calculation using only the Lagrangian given above. We expand the wave function as: $$|\psi\rangle = b_{0,+}(1,\vec{0}) |0\rangle + \sum_{s,\mu,i,j} \int C^\mu_{s,i,j} b_{is}^\dagger(x,\vec{k}) a^{\mu \dagger}_j(1-x,\vec{k}) |0\rangle$$ where we have set the total +-momentum of the state to 1 and the total transverse momentum of the state to zero; we shall also take the physical mass of the electron to be 1. In principal, we should include a term which is a state of one Pauli-Villars fermion; but we find that if we do, the coefficient of that term goes to zero when $m_2$ goes to infinity (see also [@bhm4])so we will not include it here. We find that the eigenvalue equation takes the form $$1 - m_0^2 = 2 G \int dx\; dz \sum_{i,j}{(-1)^{i + j} \over x}\left[{(m_j^2 - 4 m_0 m_j x + m_0^2 x^2) -z \over x (1-x) - m_j^2 (1-x) - \mu_i^2 x - z}\right]$$ One indication that we have successfully implemented gauge invariance is that this integral is just that of ref. [@brs] with the perturbative denominator replaced by the nonperturbative denominator. When we complete the calculation of the wave function and then calculate the magnetic moment using the method of  [@Brodsky:1980zm], we find that, in units of the Schwinger term, ${\alpha \over 2 \pi}$, the anomalous moment, $\kappa$ is given by: $\kappa = .99$ at $\mu_1 = 3$, $\kappa =1.09$ at $\mu_1 = 10$, $\kappa = 1.13$ at $\mu_1 = 100$ and $\kappa = 1.49$ at $\mu_1 = 1000$. 
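The conversion from these $\kappa$ values (quoted in units of the Schwinger term $\alpha/2\pi$) to the full magnetic moment is simple arithmetic, which the short sketch below reproduces; the value of $\alpha$ is the standard fine-structure constant, and this is only our own check of the quoted range, not the paper's computation.

```python
import math

# Full moment in units of the Dirac moment mu_0:
#   mu / mu_0 = 1 + kappa * alpha / (2*pi)
ALPHA = 1.0 / 137.036
SCHWINGER = ALPHA / (2.0 * math.pi)  # ~0.0011614

def moment_in_units_of_mu0(kappa):
    return 1.0 + kappa * SCHWINGER

# The quoted (mu_1, kappa) pairs; kappa in the range ~0.99-1.13 maps to
# moments of roughly 1.0011-1.0013 mu_0, matching the text's estimate.
for mu1, kappa in [(3, 0.99), (10, 1.09), (100, 1.13), (1000, 1.49)]:
    print(mu1, moment_in_units_of_mu0(kappa))
```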
Thus, we find that the magnetic moment of the electron, $\mu$, is probably between 1.0011 $\mu_0$ and 1.0013 $\mu_0$, where $\mu_0$ is the Dirac moment. If we estimate the optimum Pauli-Villars mass scale using the method given above, we find that it is about 30 times the electron mass and obtain an estimate of about 1.0012 $\mu_0$. In the absence of the more accurate perturbative estimate that would be a useful calculation to compare with experiment, and we consider the outcome to be entirely satisfactory. LIGHT-CONE GAUGE AGAIN ====================== Although we did not succeed in our earlier light-cone gauge calculation based on regulating the standard light-cone method with Pauli-Villars fields, we can do a successful light-cone gauge calculation based on the method of [@Paston:2000fq]. In that reference the authors show that the regularization method gives perturbative equivalence with standard Feynman methods. The present calculations are the first use of the method in a nonperturbative calculation. The Lagrangian is $$\begin{aligned} L&=&-\frac{1}{4}\sum_{j=0,1}(-1)^j F_j^{\mu\nu} \left(1+\frac{\partial_-^2}{\Lambda^2_j}- \frac{\partial_\perp^2}{\Lambda^2}\right) F_{j,\mu\nu} \nonumber \\ &&+\sum_{l=0}^{3}\frac{1}{v_l}\bar\psi_l\left(i\gamma^\mu\partial_\mu-M_l\right) \psi_l -e A_\mu \bar\psi \gamma^\mu\psi\end{aligned}$$ where $$F_{j,\mu\nu}=\partial_\mu A_{j,\nu}-\partial_\nu A_{j,\mu},\quad v_0=1,\quad \sum_{l=0}^3 v_l=0,\quad \sum_{l=0}^3 v_lM_l=0,\quad \sum_{l=0}^3 v_lM_l^2=0$$ $$A_\mu=A_{0,\mu}+A_{1,\mu},\quad \psi=\sum_{l=0}^3\psi_l,\quad \frac{1}{\Lambda_j^2}=\begin{cases}1/\Lambda^2, & j=0,\\ 1/\Lambda^2+1/\mu^2, & j=1\end{cases}$$ There are two other regularization parameters which remove certain states from the Fock space: $\epsilon$ ($\epsilon>0$ cuts off the small-$p_-$ states according to $|p_-|\ge\epsilon$) and $v$ ($v>0$ cuts off the small-$p_\perp$ values according to $|p_\perp|\ge v$). The regularization parameters are removed in a strict order: $\epsilon\to 0$, then $\mu \to 0$, then $v \to 0$, then $M_l \to \infty$.
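The Pauli-Villars coefficient conditions quoted with the Lagrangian above ($v_0=1$, $\sum_l v_l=0$, $\sum_l v_l M_l=0$, $\sum_l v_l M_l^2=0$) fix the remaining coefficients $(v_1, v_2, v_3)$ by a $3\times 3$ linear (Vandermonde) system. The sketch below solves it for arbitrary illustrative mass values, which are our own choice and not taken from the paper.

```python
import numpy as np

# M_0 is the physical mass; M_1..M_3 are Pauli-Villars masses (illustrative values).
M = [1.0, 10.0, 20.0, 30.0]

# With v_0 = 1, the three constraints become a linear system for (v_1, v_2, v_3):
#   v_1 + v_2 + v_3             = -1
#   M_1 v_1 + M_2 v_2 + M_3 v_3 = -M_0
#   M_1^2 v_1 + ... + M_3^2 v_3 = -M_0^2
A = np.array([[1.0, 1.0, 1.0],
              [M[1], M[2], M[3]],
              [M[1]**2, M[2]**2, M[3]**2]])
b = -np.array([1.0, M[0], M[0]**2])
v = np.linalg.solve(A, b)
v_all = np.concatenate(([1.0], v))

for k in range(3):
    print(k, sum(v_all[l] * M[l]**k for l in range(4)))  # each sum ~ 0
```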
All these limits are finite and after taking them we are left with a theory regulated only by $\La$ which acts something like a photon mass. As seen in the Lagrangian, it is not exactly a photon mass but rather a parameter which controls the higher derivatives in the photon kinetic energy. In the final answer $\La$ appears in much the same way as the Pauli-Villars photon mass does in Feynman gauge. We have not quite completed the calculations in light-cone gauge using the more sophisticated regularization method but the calculations are nearly done and we can say that the final answers will be very close to those in Feynman gauge. [99]{} S.J. Brodsky, J.R. Hiller, and G. McCartor, Phys. Rev. D **58** (1998), 025005. S.J. Brodsky, J.R. Hiller, and G. McCartor, Phys. Rev. D **60** (1999), 054506. S.J. Brodsky, J.R. Hiller, and G. McCartor, Phys. Rev. D **64** (2001), 114023. S.J. Brodsky, J.R. Hiller, and G. McCartor, Ann. Phys. **296** (2002), 406. S.J. Brodsky, J.R. Hiller, and G. McCartor, Ann. Phys. **305** (2003), 266. S.A. Paston and V.A. Franke, Theor. Math. Phys.  **112** (1997), 1117 \[Teor. Mat. Fiz.  **112** (1997), 399\] \[arXiv:hep-th/9901110\]. S.A. Paston, V.A. Franke, and E.V. Prokhvatilov, Theor. Math. Phys.  **120** (1999), 1164 \[Teor. Mat. Fiz.  **120** (1999), 417\] \[arXiv:hep-th/0002062\]. A. Langnau and M. Burkardt, Phys. Rev. D **47** (1993), 3452. A. C. Tang, S.J. Brodsky and H.-C. Pauli, Phys. Rev. D **44** (1991), 1842. S.J. Brodsky and S.D. Drell, Phys. Rev. D **22** (1980), 2236. S.J. Brodsky, R. Roskies and R. Suaya, Phys. Rev. D **8** (1973), 4574. P. P. Srivastava and S.J. Brodsky, Phys. Rev. D **61** (2000), 025013.
--- abstract: 'Polymeric micelles are used in a variety of applications, with the micelle’s shape often playing an important role. Consequently, a scheme to design micelles of arbitrary shape is desirable. In this paper, we consider micelles formed from a single, linear, multiblock copolymer, and we study how easily the micelle’s shape can be controlled by altering the copolymer block lengths. Using a rational design scheme, we identify a few aspects of the multiblock composition that are expected to have a well-behaved, predictable effect on micelle shape. Starting from a reference micelle composition, itself already exhibiting a nonstandard shape having a moderately sized dimple, we alter these aspects of the multiblock composition and observe the regularity of the micelle shape response. The response of the shape is found to be somewhat smooth, but significantly nonlinear and sometimes nonmonotonic, suggesting that sophisticated techniques may be required to aid in micelle design.' author: - Brian Moths bibliography: - 'references/firstPaper.bib' title: 'Feasibility of rational shape design of single-polymer micelle using spontaneous surface curvature' --- Introduction
{#sec:introduction} ============ Micelles are self-organized aggregates occurring in a solvent, and consisting of two chemically incompatible regions: the exterior of the micelle, occupied by solvophilic material which is miscible with the solvent, and the interior of the micelle, containing solvophobic, immiscible material. The chemical dissimilarity between the micelle interior and exterior allows the micelle to transport material which would normally be immiscible in the solvent. This is what makes micelles effective in their perhaps best-known role as detergents. A related application is to use micelles as drug carriers, with the drug payload residing in the interior of the micelle. It has been found that a drug carrier’s shape affects aspects of the drug carrier’s performance, such as where in the body (e.g., into which organ) the drug payload is deposited [@Decuzzi10; @Devarajan09; @Muro08; @Patil08; @Gillies04]. A second application where micelle shape may be important concerns micelles aggregating together to form higher-order structures of various shapes such as cubes, pyramids, or long chains [@Bose2017; @dule2015; @Li2015; @Qiu2015]. The shape of these aggregates might be controlled through the shape of the constituent micelles. Because of the importance of micelle shape, it would be useful to have a rational design scheme to create micelles of a precisely tailored shape. A good rational design scheme would identify a few key control parameters governing how the micelle is synthesized, and these control parameters would have a well-behaved effect on the micelle shape. Ideally, the effect of the control parameters would be so regular that, given the observed shapes from a small number of control parameter values, the relationship between shape and control parameters could be accurately determined by a naive linear model.
In this work, we characterize the performance of such a scheme wherein the micelle consists of a single, linear, multiblock copolymer (i.e., a polymer containing solvophilic and solvophobic monomers segregated into multiple homogeneous blocks), and the number of these blocks and their lengths, which we collectively refer to as the “micelle composition", are used as control parameters to set the micelle shape. We study the design scheme by simulation, which, for simplicity, is performed in two dimensions, a choice we will justify in \[sec:discussion\]. In a previous paper [@firstPaper], we demonstrated that this scheme can indeed be used to produce a micelle of a nonstandard dimpled shape, and we showed, by varying two aspects of the micelle composition, that the micelle shape could be controlled. In this paper, we go beyond merely demonstrating that it is possible to control the micelle shape: we select several control parameters governing the micelle composition, and we assess the regularity of the micelle’s shape dependence on these parameters. We seek to determine if the micelle’s shape dependence can be explained by a straightforward rationale and whether this dependence is simple enough that it may be represented by a naive linear model. To better motivate which aspects of the micelle composition we vary in this work, we now give a more detailed description of the rationale underlying our shape-design scheme. The key idea is to view the multiblock copolymer not as a sequence of homopolymer blocks joined together, but rather as a sequence of diblocks. Thus two homopolymer blocks are joined at a diblock junction point, and adjacent diblocks are joined to each other in the middle of a homopolymer segment, as illustrated in \[fig:multiblockDiblock\]. ![Two different views of a linear multiblock copolymer composed of two species of monomer, shown in red (light gray) and blue (dark gray). 
In the first view, the multiblock is considered a collection of homopolymer segments. In the second view, it is considered a sequence of diblocks joined end to end. Figure reproduced from [@firstPaper].[]{data-label="fig:multiblockDiblock"}](multiblockDiblock){width="\linewidth"} With this view in mind, we now give an explanation, illustrated in \[fig:rationale\], of how the diblocks’ block lengths may affect the micelle shape. ![ Illustration of shape-design rationale. The multiblock, viewed as a collection of diblocks, exhibits a configuration where the junction points of the diblock lie on the micelle surface. Diblocks of different composition, and therefore different spontaneous curvatures, cause a nonuniform surface curvature, giving the micelle its desired shape. The relative positioning of the diblocks is enforced by joining them end to end. Figure reproduced from [@firstPaper]. []{data-label="fig:rationale"}](interfaceCurvatureWithMicelle){width=".7\linewidth"} In solution, the solvophilic blocks are located at the exterior of the micelle, while the solvophobic blocks compose the micelle interior. Thus, the diblock junction points occupy the boundary separating the two regions. It is well-known that such an interface containing diblocks has a spontaneous curvature depending on the diblocks’ compositions and chemical properties [@Wang91]. By this reasoning, we may judiciously choose the diblock lengths at each point on the micelle surface to imprint a spontaneous curvature profile giving rise to the desired shape. We have now explained why the multiblock is viewed as a collection of diblocks, but we have not explained why these diblocks must be joined together as opposed to simply being allowed to aggregate as in a typical self-assembled micelle composed of diblock amphiphiles. The diblocks are joined in order to prevent them from diffusing across the micelle surface, since such diffusion would erase the intended spontaneous curvature profile. 
Nevertheless, as will be indicated in \[sec:methods\], the multiblock structure of the polymer often fails to ensure the intended diblock arrangement on the surface. To eliminate these failures something further must be done, but an in-depth study of this issue is beyond the scope of the present work. Instead, we simply discard the problematic micelles. A justification for discarding the problematic micelles and proposals for how they may be completely eliminated in future work are given in \[sec:discussion\]. In the rest of this paper, we describe an assessment of the performance of this design scheme. In \[sec:methods\], we describe how we simulate single-polymer micelles: we identify a micelle composition that assumes a nonstandard shape; we select five aspects of the micelle composition as suitable control parameters; and we choose two features of the micelle shape whose dependences on the control parameters are to be assessed. In \[sec:results\], the results of varying our chosen aspects of micelle composition are presented, and the effect on the shape features is examined. In \[sec:discussion\], we discuss the implications of our results for the practicality of the shape-design scheme presented in this paper, and we revisit unresolved issues mentioned in \[sec:introduction\] and \[sec:methods\]. In \[sec:conclusion\], we conclude. Methods {#sec:methods} ======= To assess our shape-design scheme, we simulate micelles of various compositions and compare the resulting shapes. However, before simulations can be performed, a physical model for the micelles must be selected. A detailed description of our model and simulation method is given in [@firstPaper]. We now present the most relevant features starting with the model. 
We choose a simple coarse-grained bead-spring model with implicit solvent (similar to those of [@detcheverry2009; @Binder11; @Dimitrakopoulos04; @Hsieh06]) because our shape-design mechanism should not depend on details of the interactions of the polymer constituents. A polymer molecule is represented as a linear sequence of beads with consecutive beads joined by harmonic springs, as illustrated in \[fig:beadSpring\]. ![Schematic of a short diblock copolymer as represented in our model. This copolymer consists of seven beads: four solvophobic beads, shown in red (light gray), and three solvophilic beads, shown in blue (dark gray). Harmonic springs connect beads adjacent along the polymer. The light blue background represents the implicit solvent.[]{data-label="fig:beadSpring"}](beadSpring){width=".4\linewidth"} There are two species of beads: solvophilic beads, which interact with other beads through only a short-range repulsion, and solvophobic beads, which experience an additional longer-range attraction with other solvophobic beads because of their immiscibility with the solvent. The particular values of the interaction parameters and the simulation’s temperature are chosen to replicate macroscopic behavior of real polymer. This model is simulated at constant temperature using the LAMMPS molecular dynamics package [@lammps95]. At regular intervals of the simulated time, the simulation records the junction points’ positions, defined as the midpoint between adjacent beads of opposite species. Because we are interested only in the overall shape of the micelle and not the individual positioning of each bead, these junction points provide sufficient data for our purposes. We refer to each list of junction point positions as a “shape", denoting it by a blackboard bold symbol (e.g., ${\mathbbm{r}}$). 
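Extracting the junction points described above (the midpoint between adjacent beads of opposite species) is straightforward; the sketch below is our own minimal illustration of this bookkeeping, not the authors' analysis code.

```python
# Extract junction points from a bead-spring configuration: a junction point
# is the midpoint between adjacent beads of opposite species.
def junction_points(positions, species):
    """positions: list of (x, y) bead positions along the chain;
    species: parallel list of 'phobic' / 'philic' labels."""
    points = []
    for i in range(len(positions) - 1):
        if species[i] != species[i + 1]:
            (x0, y0), (x1, y1) = positions[i], positions[i + 1]
            points.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
    return points

# Example: a short diblock R R R B B has one junction, between beads 3 and 4.
pos = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
spc = ['phobic', 'phobic', 'phobic', 'philic', 'philic']
print(junction_points(pos, spc))  # [(2.5, 0.0)]
```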
Therefore, each shape ${\mathbbm{r}}$ has the form $$\left({\mathbf{r}}_1,{\mathbf{r}}_2,\dots,{\mathbf{r}}_i,\dots,{\mathbf{r}}_{N_j}\right), \label{eq:formOfR}$$ where $N_j$ is the number of junction points in the micelle and each ${\mathbf{r}}_i$ is a two-dimensional junction point position. The output of the simulation is a time sequence of such shapes: ${\mathbbm{r}}_\alpha$, $\alpha=1,2,\dots,N_s$, where $N_s$ is the number of sampled shapes. After the simulation runs are complete, the shape sequences are further analyzed. We summarize the resulting sequence ${\mathbbm{r}}_\alpha$ of shapes by its average ${\bar{{\mathbbm{r}}}}$, the shape variance matrix ${\mathbb{\Sigma}}$ of dimension $2 N_j \times 2 N_j$ characterizing the shape’s thermal fluctuations, and another $2 N_j \times 2 N_j$ variance matrix ${\bar{{\mathbb{\Sigma}}}}$ representing the uncertainty in the mean shape ${\bar{{\mathbbm{r}}}}$. For a given micelle composition, we run several simulations. Despite the bonds joining adjacent diblocks, roughly half of the simulations result in poorly formed micelles where the diblocks do not keep their intended positioning on the surface. Since we are interested in the behavior of our shape-design scheme, which depends on the diblocks maintaining their intended positioning, we exclude any poorly formed micelles from further analysis. Concretely, we reject any simulation run whose average micelle either has two neighboring junction points whose separation exceeds the median distance between adjacent junction points by more than forty percent, or whose shortest closed path connecting all the junction points does not have the intended ordering. In \[sec:discussion\], we explain why the exclusion of these poorly formed micelles is justified. We combine the results of the remaining simulations to make a best estimate of ${\bar{{\mathbbm{r}}}}$, ${\mathbb{\Sigma}}$, and ${\bar{{\mathbb{\Sigma}}}}$.
Having described how the micelles are simulated, we now describe which micelle compositions to simulate in order to examine their effect on micelle shape. We start with a reference micelle composition previously shown in [@firstPaper] to produce a micelle of nonstandard dimpled shape. Then we select several parameters of this micelle composition to be changed. ![ Schematic of multiblock bond architecture. Tan (top row) and red (light gray, bottom rows) disks represent solvophobic beads, while blue disks (dark gray, bottom rows) represent solvophilic beads. Black segments represent bonds between beads. The multiblock begins with a core segment composed of solvophobic beads shown in tan and occupying the top row. To this segment solvophobic-rich diblocks, outlined in black as in \[fig:endPicture\], are successively attached end to end. Lastly solvophilic-rich diblocks are attached end to end. The “$\bullet \bullet\bullet$” symbols represent further diblocks not shown. Figure adapted from [@firstPaper]. []{data-label="fig:multiblockdiblockwithcore"}](multiblockDiblockWithCore){width="\linewidth"} ![ A typical configuration of the reference micelle during the course of a simulation. The red (light gray) beads are solvophobic; the blue (dark gray beads on micelle exterior), solvophilic. The tan beads (dark gray beads in micelle interior), which constitute the micelle core, are also solvophobic. The micelle is constructed from two types of diblocks, termed solvophobic-rich and solvophilic-rich. The solvophobic-rich diblocks, outlined in black, are located near the dimple. []{data-label="fig:endPicture"}](endPicture){width="\columnwidth"} The basic design of the copolymers simulated in this work is shown in \[fig:multiblockdiblockwithcore\]. 
The key feature of the design is that the micelle contains two species of diblock having a common length but distinguished by their composition: a “solvophobic-rich” species of diblock, having relatively more solvophobic beads and therefore favoring a more negative, concave curvature, and a “solvophilic-rich” species of diblock, having relatively more solvophilic beads and therefore favoring a more positive, convex curvature. This contrast in preferred curvature is designed to cause the formation of a dimple. The linear copolymer begins with a long sequence of solvophobic beads, which forms a “core” to be situated in the micelle’s interior. To one end of this core segment is joined end-to-end a sequence of solvophobic-rich diblocks. To the free end of this sequence of solvophobic-rich diblocks, we attach a sequence of solvophilic-rich diblocks. The micelle has $700$ core beads, $12$ solvophobic-rich diblocks each containing $27$ solvophobic beads and $4$ solvophilic beads, and $55$ solvophilic-rich diblocks each containing $24$ solvophobic beads and $7$ solvophilic beads. A chemical formula representing this monomer sequence is $R_{700}(R_{27}B_4B_4R_{27})_6(R_{24}B_7 B_7 R_{24})_{27} R_{24}B_7$, where $R$ represents a solvophobic monomer and $B$ represents a solvophilic monomer. A typical thermal configuration of a micelle having the reference composition is illustrated in \[fig:endPicture\]. Next, we describe our chosen control parameters—parameters of the micelle composition that we alter to control the micelle’s shape. A good control parameter must have a well-behaved effect on the micelle shape, and, further, its effect on micelle shape ought to be predictable using a simple rationale. Accordingly, we will describe each parameter’s anticipated effect as it is introduced.
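The bead counts implied by the chemical formula above can be verified by expanding the formula directly; the sketch below does so (the helper name is ours, not the paper's).

```python
# Expand R_700 (R_27 B_4 B_4 R_27)_6 (R_24 B_7 B_7 R_24)_27 R_24 B_7 and
# check it against the stated composition of the reference micelle.
def reference_sequence():
    seq = 'R' * 700                                          # core
    seq += ('R' * 27 + 'B' * 4 + 'B' * 4 + 'R' * 27) * 6     # 12 solvophobic-rich diblocks
    seq += ('R' * 24 + 'B' * 7 + 'B' * 7 + 'R' * 24) * 27    # 54 solvophilic-rich diblocks
    seq += 'R' * 24 + 'B' * 7                                # 55th solvophilic-rich diblock
    return seq

seq = reference_sequence()
# Total beads: 700 core + (12 + 55) diblocks * 31 beads each = 2777.
# Solvophilic beads: 12*4 + 55*7 = 433; solvophobic: 700 + 12*27 + 55*24 = 2344.
print(len(seq), seq.count('R'), seq.count('B'))
```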
The first parameter we define is the number of core beads, ${N_\text{core}}$, having a value of $700$ for the reference micelle composition; it can be used to set the enclosed volume of the micelle without affecting the surface properties. Two additional parameters concerning the number of beads in the micelle are the numbers of solvophobic-rich (${N_-}$, the “$-$” reflecting that these diblocks prefer a relatively negative, concave curvature) and solvophilic-rich (${N_+}$, the “$+$” reflecting that these diblocks prefer a relatively positive, convex curvature) diblocks in the micelle, which we expect to set the preferred perimeter of their respective regions of the micelle surface without directly affecting either region’s preferred curvature. These parameters have the values of ${N_-}=12$ and ${N_+}=55$ for the reference micelle composition. The two final parameters concern the compositions of the solvophobic-rich and solvophilic-rich diblocks. We keep the length of either species of diblock fixed at $31$, changing only the relative amount of the two species of beads (that is, the asymmetry of the diblock). The asymmetry of a diblock containing $n_\text{phobic}$ solvophobic beads and $n_\text{philic}$ solvophilic beads is quantified by the “asymmetry ratio" $r$ defined by $$r=\frac{n_\text{philic} - n_\text{phobic}}{n_\text{philic} + n_\text{phobic}}. \label{eq:asymmetryRatio}$$ The asymmetry ratio of several model diblocks is illustrated in \[fig:asymmetryRatio\]. We denote the asymmetry ratio of the solvophobic-rich diblocks and solvophilic-rich diblocks by ${r_-}$ and ${r_+},$ respectively. By definition, solvophilic-rich diblocks have a larger asymmetry ratio, so that ${r_-}$ and ${r_+}$ satisfy ${r_-}< {r_+}$. These two parameters provide a way to control the spontaneous curvature of their respective regions of the micelle surface while only weakly changing the preferred perimeter. 
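For the reference composition, the asymmetry ratio of \[eq:asymmetryRatio\] can be evaluated directly for the two diblock species; the short sketch below confirms that the reference values satisfy ${r_-}<{r_+}$.

```python
# Asymmetry ratio r = (n_philic - n_phobic) / (n_philic + n_phobic).
def asymmetry_ratio(n_phobic, n_philic):
    return (n_philic - n_phobic) / (n_philic + n_phobic)

r_minus = asymmetry_ratio(27, 4)   # solvophobic-rich: 27 phobic, 4 philic beads
r_plus = asymmetry_ratio(24, 7)    # solvophilic-rich: 24 phobic, 7 philic beads
print(r_minus, r_plus, r_minus < r_plus)
```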
Specifically, we expect that the more positive a diblock’s asymmetry ratio, the more positive its associated preferred curvature. ![A graphical representation of the average shape ${\bar{{\mathbbm{r}}}}$, the shape fluctuations ${\mathbb{\Sigma}}$, and the uncertainty in the average shape ${\bar{{\mathbb{\Sigma}}}}$ of the reference micelle. The average shape ${\bar{{\mathbbm{r}}}}$ is represented as a green curve (passing through the midline of the shaded region) connecting the average positions of the junction points. The curve segments connecting solvophobic-rich diblocks are outlined in black. The shape fluctuations ${\mathbb{\Sigma}}$ are represented by large blue ellipses, one surrounding each junction point, indicating the $40\%$ confidence region (corresponding to one standard deviation) for the junction point’s position during the course of the simulation. The uncertainty in the mean shape is represented similarly with smaller red ellipses indicating the confidence region for the mean junction point position. []{data-label="fig:baseCaseAverage"}](sevenHundredsymmetricAveragePlot.pdf){width=".8\columnwidth"} Having described how the micelle compositions are changed, we now describe what features of the resulting thermal micelle shape distribution we study. A graphical representation of the average shape ${\bar{{\mathbbm{r}}}}$, shape sample variance ${\mathbb{\Sigma}}$, and variance in the mean shape ${\bar{{\mathbb{\Sigma}}}}$ characterizing the micelle shape distribution is shown in \[fig:baseCaseAverage\]. In previous work [@firstPaper], we have validated that the mean shapes are reproducible and that the errors in the mean shape are indeed consistent with the variability in the mean shape between simulation runs.
However, since the average shape ${\bar{{\mathbbm{r}}}}$, shape sample variance ${\mathbb{\Sigma}}$, and variance in the mean shape ${\bar{{\mathbb{\Sigma}}}}$ are high-dimensional objects, we choose, for the sake of concreteness, to look at only two scalar shape features summarizing these quantities, which we soon define: the curvature ratio, characterizing the strength of the average shape’s dimple, and the normalized fluctuation, characterizing the size of thermal shape fluctuations. The curvature ratio ${\frac{c_-}{c_+}}$, illustrated in \[fig:curvatureRatio\], is defined as the shape’s average signed curvature $c_-$ in the region occupied by the solvophobic-rich diblocks divided by the average signed curvature $c_+$ in the region occupied by the solvophilic-rich diblocks. Thus a circle has a curvature ratio of one, and negative curvature ratios indicate the presence of a concave dimple, with increasingly negative curvature ratios indicating stronger dimples. ![Illustration of curvature ratio definition. The region occupied by solvophobic-rich diblocks is shown in red, and its average signed curvature (having a negative value in this case) is denoted by $c_-$. By contrast, the region occupied by solvophilic-rich diblocks is shown in green, and its average curvature is denoted $c_+$. The curvature ratio is defined as the ratio $c_-/c_+$. Figure reproduced from [@firstPaper]. []{data-label="fig:curvatureRatio"}](curvatureRatio){width=".7\columnwidth"} The normalized fluctuation $\delta$ is defined by the formula $$\delta = \sqrt{\frac{\operatorname{Tr}{\mathbb{\Sigma}}}{2 N_j R_g^2}}, \label{eq:normalizedFluctuation}$$ where $R_g$ is the radius of gyration of the average shape ${\bar{{\mathbbm{r}}}}$. 
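The normalized fluctuation of \[eq:normalizedFluctuation\] can be computed straightforwardly from a sequence of sampled shapes. The sketch below is our own illustration under the definitions above (an $(N_s, N_j, 2)$ array of junction-point positions), not the authors' analysis code.

```python
import numpy as np

def normalized_fluctuation(shapes):
    """delta = sqrt(Tr(Sigma) / (2 * N_j * R_g**2)) for sampled shapes
    of array shape (N_s, N_j, 2)."""
    shapes = np.asarray(shapes, dtype=float)
    n_s, n_j, _ = shapes.shape
    flat = shapes.reshape(n_s, 2 * n_j)
    sigma = np.cov(flat, rowvar=False)             # (2*N_j, 2*N_j) shape variance matrix
    mean_shape = flat.mean(axis=0).reshape(n_j, 2)
    center = mean_shape.mean(axis=0)
    rg2 = ((mean_shape - center) ** 2).sum(axis=1).mean()  # squared radius of gyration
    return np.sqrt(np.trace(sigma) / (2.0 * n_j * rg2))

# Example: junction points on a unit circle with small Gaussian jitter; the
# normalized fluctuation comes out on the order of the jitter amplitude.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
base = np.stack([np.cos(theta), np.sin(theta)], axis=1)
samples = base + 0.01 * rng.standard_normal((500, 12, 2))
print(normalized_fluctuation(samples))
```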
Intuitively, the factor $\sqrt{\frac{\operatorname{Tr}{\mathbb{\Sigma}}}{2N_j}}$ may be interpreted as the root-mean-square length of the semi-axes of the blue ellipses shown, for example, in \[fig:baseCaseAverage\] (the blue ellipses being the one standard deviation confidence regions for the sampled position of the junction points). The normalized fluctuation is a scalar measure of the size of the shape fluctuations, normalized so as not to scale with the number of junction points or overall spatial extent of the micelle shape. The uncertainties in these two shape features can, like the values themselves, be estimated from the micelle shape distribution statistics ${\bar{{\mathbbm{r}}}}$, ${\mathbb{\Sigma}}$, and ${\bar{{\mathbb{\Sigma}}}}$. Since the curvature ratio depends only on the mean shape ${\bar{{\mathbbm{r}}}}$, its uncertainty can easily be inferred from the error in the mean ${\bar{{\mathbb{\Sigma}}}}$. However, our estimate for the uncertainty in the normalized fluctuation is more subtle; we refer the reader to [@firstPaper] for a description and validation of this uncertainty estimate. Results {#sec:results} ======= In this section, we show the dependence of the two shape features ${\frac{c_-}{c_+}}$ and $\delta$ on the five micelle composition parameters ${N_\text{core}}$, ${N_-}$, ${N_+}$, ${r_-}$, and ${r_+}$ introduced in \[sec:methods\]. To speak to the question we raised in \[sec:introduction\] of whether this observed dependence is explained by a straightforward rationale, we give simple arguments accounting for the observed behavior in terms of the micelle surface’s tension and bending energy. The adequacy of our proposed explanations, as well as what these results imply about the feasibility of a naive design strategy will be discussed in \[sec:discussion\]. We are not so much concerned with the exact numerical values of the composition parameters as we are with how the micelle shape qualitatively depends on them. 
Therefore, to simplify discourse, we normalize the composition parameters by their values for the reference micelle, and we denote the normalized values with a hat ($\hat{}$). For example, the normalized amount of core ${\hat{N}_\text{core}}$ is given by ${N_\text{core}}/700$, since the reference micelle composition has $700$ core beads. Similarly, the normalized number of solvophilic-rich chains is given by ${\hat{N}_+}={N_+}/55$, since the reference micelle composition has ${N_+}=55$, etc. To frame the explanation of our observed results, we first explain what one might naively expect. The shape dependence can be thought of as a function from the five-dimensional space of micelle composition parameters to the two-dimensional shape feature space. In this work, we start from a base micelle composition and change different aspects of the micelle composition (in other words, moving in different directions in micelle composition space) and observe the resulting change in the shape features (in other words, how the resulting shape changes in shape feature space). A naive expectation, which must be borne out for small changes in the micelle composition, is that the micelle shape change depends linearly on the change in micelle composition. In the typical case, we expect the map to have full rank so that it is possible to change the curvature ratio without changing the normalized fluctuation and vice-versa through appropriate changes to the micelle composition. Then by the rank-nullity theorem [@artin], there must be three directions in the micelle composition space (typically not corresponding to a change in any single composition parameter) that lead to no change in the shape features. Since we expect the three null directions to have no relationship to the axes defined by the five composition parameters, we expect the five composition parameters to each change the shape features in a unique direction in the two-dimensional shape feature space. 
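The rank-nullity argument above can be made concrete with a toy linear model (our own illustration): a generic linear map from the five-dimensional composition space to the two-dimensional shape-feature space has rank two, leaving a three-dimensional null space of composition changes that produce no change in the shape features.

```python
import numpy as np

# A generic 2x5 "shape response" matrix standing in for the linearized map
# from composition changes to shape-feature changes.
rng = np.random.default_rng(1)
J = rng.standard_normal((2, 5))

rank = np.linalg.matrix_rank(J)   # full rank for a generic map
null_dim = 5 - rank               # rank-nullity theorem
print(rank, null_dim)             # 2 3
```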
We will compare our results to these expectations after presenting the results. We begin by examining the shape features’ dependence on the number of core beads ${N_\text{core}}$ while holding the other four composition parameters fixed. \[tab:coreShapes\], whose columns correspond to ${\hat{N}_\text{core}}=$ 14%, 29%, 43%, 57%, 71%, 86%, 100%, 114%, 129%, 143%, 157%, 171%, 186%, 200%, 214%, 286%, and 357% (the shape images themselves are not reproduced here), shows the average shapes, fluctuations, and uncertainties in the average shapes resulting from varying the number of core beads ${N_\text{core}}$. It is apparent from these results that the effect of increasing ${N_\text{core}}$ is to make the shapes more circular and decrease their fluctuations. The character of these trends can be studied more precisely by plotting the shape features ${\frac{c_-}{c_+}}$ and $\delta$ against ${N_\text{core}}$, which we do in \[fig:corePlots\]. In this figure, it is clear that the shape features generally follow the trend apparent from \[tab:coreShapes\], and, excluding the largest values of ${N_\text{core}}$, the dependence of the shape features on ${N_\text{core}}$ is roughly linear. The largest two values of ${N_\text{core}}$ indicate smaller slopes than the other data, consistent with the expectation that for large ${N_\text{core}}$, the curvature ratio should approach unity and the normalized fluctuations should go to zero. Thus the data resulting from varying ${N_\text{core}}$ show a moderately sized domain of linearity. Next we present in \[fig:combinedResults\] the dependences of the shape features on each of the five micelle composition parameters.
There are two types of trends that result from varying a micelle composition parameter: the first type of trend, shown in \[fig:nPhilCorePlots\], is where the fluctuation increases as the dimple becomes more pronounced (i.e., curvature ratio becomes more negative); the second type of trend, shown in \[fig:rPhilnPhoberPhobePlots\], involves the opposite relationship between the shape features, with the fluctuation instead decreasing as the dimple becomes more pronounced. The first type of trend results from varying ${N_\text{core}}$ and ${N_+}$, while the second type of trend results from varying ${r_-}$, ${r_+}$, and ${N_-}$. We now propose an explanation for why varying ${N_\text{core}}$ and ${N_+}$ (the shapes resulting from varying ${N_+}$ are shown in \[tab:nPhilShapes\]) both cause the normalized fluctuation and the strength of the dimple to respond in the same direction. Since increasing ${N_\text{core}}$ tends to increase the size and therefore the perimeter of the micelle, and decreasing ${N_+}$ decreases the number of diblocks on the micelle perimeter, either of these changes tends to decrease the density of diblocks on the micelle surface. As the surface density of the diblocks is decreased, we expect their surfactant-like effect to be reduced so that the surface tension of the micelle would increase. This surface tension increase should have two effects. The first effect is to reduce fluctuations in the micelle shape, and the second effect is to make the micelle shape more circular, reducing the strength of the dimple. Thus we expect that increasing ${N_\text{core}}$ or decreasing ${N_+}$ both decreases the fluctuations (i.e., decreases $\delta$) and decreases the strength of the dimple (i.e., makes ${\frac{c_-}{c_+}}$ more positive). By this reasoning, changing either ${N_\text{core}}$ or ${N_+}$ would cause ${\frac{c_-}{c_+}}$ and $\delta$ to change in opposite directions, consistent with the negative slope in \[fig:corePlots\]. 
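The surface-density argument above can be made slightly more explicit with a rough scaling sketch (our illustration; the paper does not derive this). Assuming a nearly circular two-dimensional micelle whose core area is proportional to the number of core beads,

```latex
% Rough scaling sketch (assumption: nearly circular 2D micelle at fixed bead density)
A \propto N_\text{core}, \qquad
P \approx 2\sqrt{\pi A} \propto \sqrt{N_\text{core}}, \qquad
\rho = \frac{N_+ + N_-}{P} \propto \frac{N_+ + N_-}{\sqrt{N_\text{core}}} .
```

In this estimate, increasing $N_\text{core}$ or decreasing $N_+$ both lower the surface density $\rho$ of diblocks, consistent with the surfactant-dilution picture invoked in the text.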
[Table \[tab:nPhilShapes\]: mean micelle shapes, thermal fluctuations, and uncertainties in the mean shapes for ${\hat{N}_+}$ = 45%, 55%, 64%, 73%, 82%, 91%, 100%, 109%, 118%, 127%, and 136% of the reference value.]

Having discussed the two micelle composition parameters which affect the normalized fluctuation and the dimple strength in the same way, we now discuss the remaining three composition parameters, for which the responses of the shape features are opposite to each other, as shown in \[fig:rPhilnPhoberPhobePlots\]. First we propose explanations for the results of varying the asymmetry ratio ${r_-}$ of the solvophobic-rich diblocks, those designed to sit at the micelle’s dimple. The resulting shapes are shown in \[tab:rPhobeShapes\].

[Table \[tab:rPhobeShapes\]: ${\hat{r}_-}$ = 74%, 83%, 91%, 100%, 109%, 117%. Mean micelle shapes, thermal fluctuations, and errors in mean micelle shapes (illustrated in the manner of \[fig:baseCaseAverage\]) as a function of ${\hat{r}_-}$, the asymmetry ratio of the solvophobic-rich diblocks expressed as a percentage of the reference micelle value. As the solvophobic-rich diblocks become more asymmetric, making a sharper contrast with the solvophilic-rich diblocks, the shapes become less circular and the fluctuations decrease.]

For values of ${r_-}$ closer to zero (i.e., more symmetric diblocks), the solvophobic-rich diblocks are very similar in composition to the solvophilic-rich diblocks, and so their preferred curvatures are similar, which we expect to result in a weak dimple (i.e., ${\frac{c_-}{c_+}}$ should become less negative).
If the dimple is weak, then the shape should be nearly circular, so that less perimeter is required to enclose the same amount of volume; indeed, we expect that the volume enclosed by the micelle depends only weakly on the diblock composition, so the micelle perimeter does decrease. A decrease in perimeter causes a higher density of diblocks, and since we expect the diblock composition to only weakly affect the preferred density of diblocks, we therefore expect a lower surface tension, leading to greater shape fluctuations (i.e., an increase in $\delta$). We conclude that ${r_-}$ should change ${\frac{c_-}{c_+}}$ and $\delta$ in the same direction, as observed. The key point of the above argument was that the difference between the asymmetries of the micelle’s two species of diblock determines how circular the micelle shape is. In the case we discussed, this asymmetry contrast was controlled by changing ${r_-}$, but it could just as well have been controlled by changing ${r_+}$ (results shown in \[tab:rPhilShapes\]).

[Table \[tab:rPhilShapes\]: ${\hat{r}_+}$ = 135%, 124%, 112%, 100%, 88%, 76%, 65%, 53%, 41%, 29%. Mean micelle shapes, thermal fluctuations, and errors in mean micelle shapes (illustrated in the manner of \[fig:baseCaseAverage\]) as a function of ${\hat{r}_+}$, the asymmetry ratio of the solvophilic-rich diblocks expressed as a percentage of the reference micelle value. As the solvophilic-rich diblocks become less asymmetric, making a sharper contrast with the solvophobic-rich diblocks, the shapes become less circular and the fluctuations decrease.]

Therefore, as the solvophilic-rich diblocks are made more asymmetric, they become more similar to the solvophobic-rich diblocks, and so we expect the shapes to become more circular and to fluctuate more.
Thus, like ${r_-}$, ${r_+}$ should affect ${\frac{c_-}{c_+}}$ and $\delta$ in the same direction. The results of varying the number of solvophobic-rich diblocks ${N_-}$ (see \[tab:nPhobeShapes\]) mostly follow the same trend as the results of varying the diblock asymmetries ${r_+}$ and ${r_-}$, but a different explanation is required. In this case we expect decreasing ${N_-}$ to decrease the density of diblocks on the micelle surface, thereby increasing the surface tension and decreasing the fluctuations as measured by $\delta$. However, decreasing ${N_-}$ also decreases the length of micelle perimeter that has to deform in order to achieve its preferred curvature. Therefore, we expect that micelles with small ${N_-}$ may have a more strongly curved dimple, so that ${\frac{c_-}{c_+}}$ becomes more negative. By this reasoning ${N_-}$, like ${r_+}$ and ${r_-}$, would affect ${\frac{c_-}{c_+}}$ and $\delta$ in the same direction.

[Table \[tab:nPhobeShapes\]: mean micelle shapes, thermal fluctuations, and uncertainties in the mean shapes for ${\hat{N}_-}$ = 33%, 42%, 50%, 58%, 67%, 75%, 83%, 92%, 100%, 117%, 133%, 150%, 167%, 183%, and 200% of the reference value.]

While the results of varying the number of solvophobic-rich diblocks ${N_-}$ do mostly follow a smooth trend, we note one nonmonotonic feature of this data. The simulated micelle with the smallest ${N_-}$ (four solvophobic-rich diblocks), which according to the trend of the data should have the most extreme dimple, actually has a less pronounced dimple than even the base micelle. One might hypothesize that there is a minimum number of solvophobic-rich diblocks needed to nucleate a dimple. Whatever the case, this nonmonotonicity suggests that a naive linear model explaining the micelle shape may not be sufficient for shape design.
In the preceding results, we have varied a single parameter of the micelle composition to observe the effect on two micelle shape features and have seen that the data lie on only two trend lines. This contradicts our expectation that each composition parameter changes the shape features in a unique direction in shape feature space. Instead, we find that three composition parameters change the micelle shape in the same direction, meaning that at the level of a linear approximation, there are two independent combinations of these three parameters which have no effect on the micelle shape. The other two composition parameters also change the micelle shape features in a common direction, so that there is one combination of those parameters which has no effect on the micelle shape features. To produce a micelle shape not falling on either of the two trends, it is necessary to change multiple micelle composition parameters at once. For ease of shape design, we would hope that the effect of simultaneously changing two composition parameters could be naively inferred by linearly extrapolating from the individual effects of the parameters. While the effects of varying individual micelle composition parameters were not independent as we expected, they were indeed often roughly linear. If linearity of the shape dependence is assumed, then the effect of varying any combination of micelle composition parameters can be inferred from the data presented above. One can then determine precisely how to change the composition parameters to produce a desired shape change (e.g., to reduce normalized fluctuations while holding the curvature ratio fixed). Additionally, by determining which composition parameters have no effect on the shape, one gains freedom in picking the composition parameters. This freedom may be used to choose the most convenient parameters resulting in a desired shape.
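This linear design logic can be illustrated with a short numerical sketch (the slope values below are made-up placeholders, not the measured data from the figures): given an assumed linear map from composition changes to shape-feature changes, the pseudoinverse yields a minimum-norm composition change producing a desired shape change, and adding null-space vectors yields the alternative compositions mentioned above.

```python
import numpy as np

# Assumed linearized map: rows are d(curvature ratio, fluctuation) per unit
# change in the normalized parameters (N_core, N_plus, N_minus, r_minus, r_plus).
# These slope values are illustrative placeholders only.
J = np.array([[ 0.30, -0.20, -0.15, -0.25,  0.20],
              [-0.25,  0.15, -0.10, -0.18,  0.14]])

# Desired shape change: reduce the fluctuation by 0.05 at fixed curvature ratio.
target = np.array([0.0, -0.05])

# Minimum-norm composition change achieving the target; any null-space
# vector of J may be added without altering the predicted shape change.
dp = np.linalg.pinv(J) @ target

print(dp)        # composition changes, in normalized units
print(J @ dp)    # recovers the target to numerical precision
```

This is the quantitative version of "determine precisely how to change the composition parameters to produce a desired shape change," valid only within the domain of linearity discussed above.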
To test if things are this simple in practice, we have performed simulations where two micelle composition parameters are varied simultaneously. In the first set of simulations, ${N_\text{core}}$ and ${N_-}$ are varied to interpolate between the ${\hat{N}_\text{core}}=214\%$ and ${\hat{N}_-}=33\%$ data points of \[tab:coreShapes\] and \[tab:nPhobeShapes\]. The simulated shapes are shown in \[tab:coreNPhobeShapes\].

[Table \[tab:coreNPhobeShapes\]: mean micelle shapes for simultaneously varied composition parameters, with (${\hat{N}_\text{core}}$, ${\hat{N}_-}$) = (100%, 33%), (114%, 42%), (129%, 50%), (143%, 58%), (157%, 67%), (171%, 75%), (186%, 83%), (200%, 92%), and (214%, 100%).]

To get a closer look at the effect on the shape features, we plot in \[fig:coreNPhobe\] the curvature ratios and normalized fluctuations for the shapes in \[tab:coreNPhobeShapes\] as well as the results of individually varying ${N_\text{core}}$ and ${N_-}$. ![ Scatter plot showing the curvature ratios and normalized fluctuations of the data shown in \[tab:coreNPhobeShapes\] (magenta stars). The scatter plot and the fit lines are made in the style of \[fig:combinedResults\], and the data sets from varying only ${N_\text{core}}$ (yellow circles) and only ${N_-}$ (orange upward-pointing triangles) are reproduced in this figure. Notice that for the data set where only ${N_-}$ is changed, the micelle with the smallest value of ${\hat{N}_-}$, namely $33\%$, has a weaker dimple than, and therefore appears to the right of, the reference micelle (downward-pointing black triangle). By contrast, most other shapes from micelles with ${\hat{N}_-}< 100\%$ have stronger dimples than the reference micelle and therefore appear to its left.
Despite this variation in micelle shape caused by changing ${N_-}$, the magenta points, representing a simultaneous variation in both ${N_\text{core}}$ and ${N_-}$, follow the same trend as the yellow points representing a variation only in ${N_\text{core}}$. []{data-label="fig:coreNPhobe"}](coreNPhobe){width="\linewidth"}

We see that the interpolating micelle compositions produce shape features that mainly lie along the trend of the data set where just ${N_\text{core}}$ is varied, contrary to the naive expectation that these shape features should be a linear combination of both the shape features resulting from varying ${N_\text{core}}$ and those resulting from varying ${N_-}$. To explain this, we hypothesize that the micelles with a large value of ${N_\text{core}}$ have a surface tension so large that the change in spontaneous curvature profile caused by changing ${N_-}$ does not have a noticeable effect on the micelle. In the second set of simulations, ${N_+}$ and ${r_+}$ are varied to interpolate between the ${\hat{N}_+}=136\%$ and ${\hat{r}_+}=53\%$ data points of \[tab:nPhilShapes\] and \[tab:rPhilShapes\]. The simulated shapes are shown in \[tab:nPhilRPhilShapes\].

[Table \[tab:nPhilRPhilShapes\]: mean micelle shapes for simultaneously varied composition parameters, with (${\hat{N}_+}$, ${\hat{r}_+}$) = (100%, 53%), (109%, 65%), (118%, 76%), (127%, 88%), and (136%, 100%).]

For a more quantitative view of the effect on the shape features, we plot in \[fig:nPhilRPhil\] the curvature ratios and normalized fluctuations of both the shapes in \[tab:nPhilRPhilShapes\] and the previously discussed shapes of \[tab:nPhilShapes\] and \[tab:rPhilShapes\], which resulted from individually varying ${N_+}$ and ${r_+}$. Ideally, the shape features would linearly, or at least monotonically, interpolate between the two extreme cases.
Taking error bars into account, the data are nearly consistent with a monotonic increase in fluctuations from the ${\hat{r}_+}=53\%$ data point to the ${\hat{N}_+}=136\%$ data point. However, the curvature ratio dependence is unambiguously nonmonotonic. To see how the dependence might not be monotonic, consider first the ${\hat{r}_+}=53\%$ micelle composition, which has the largest asymmetry contrast of the simulated data shown in \[tab:nPhilRPhilShapes\] and the fewest diblocks on the micelle perimeter. On the one hand, the strong asymmetry contrast should lead to a strong dimple, but on the other hand, the decrease in the number of diblocks should lead to a higher surface tension and consequently a smaller dimple. Next consider the micelle composition at the other extreme, having the largest number of diblocks with ${\hat{N}_+}=136\%$. In this case, there should be a low surface tension, as evidenced by this micelle’s large normalized fluctuation, which allows for a larger dimple, but also a low asymmetry contrast, which would lead to a smaller dimple. The nonmonotonicity we observe is that there are intermediate micelles showing a stronger dimple than both of the extreme cases. We hypothesize that as we move from the first extreme, with a large asymmetry contrast and fewer diblocks, to the opposite extreme, there comes a critical micelle composition where the increasing preferred perimeter set by the number of diblocks becomes large enough to completely accommodate the preferred curvature of the dimple. Micelles with more diblocks and less asymmetry contrast than this critical composition do not benefit from the increased perimeter from the diblocks, and instead have a decreasing dimple strength set by the decreasing asymmetry contrast. On the other hand, micelles with fewer diblocks and more asymmetry contrast than the critical composition experience both an increased surface tension and an increased preferred curvature.
As evidenced in \[fig:coreNPhobe\], the surface tension has a larger effect on the dimple strength, and so we expect the dimple strength to diminish. By this logic, there should be a maximum dimple strength near the critical composition, and so the curvature ratio dependence should be nonmonotonic. In any event, this example shows that a linear interpolation is insufficient to approximate the behavior of the shape features between two micelle compositions, since nonmonotonic behavior is possible.

Discussion {#sec:discussion}
==========

In \[sec:results\], we showed how the micelle shape features depended on the composition parameters. In this section, we discuss what implications these results have for the central questions of our work, namely whether a micelle may feasibly be designed using the rationale presented in \[sec:introduction\]. We begin by discussing whether the micelle shape dependence is sufficiently regular to allow for arbitrary shape features to be designed using only a naive strategy. Next, we address a shortcoming of this work mentioned in \[sec:introduction\], which is that the micelle shapes are only metastable. We explain why the statistics of the metastable shapes examined here are meaningful and discuss what might be done to stabilize the micelles in practice. Lastly, we justify why the two-dimensional simulations considered here are relevant to practical applications, which are necessarily three-dimensional.

In the introduction, we set a goal of identifying good control parameters to design the shape of the micelle. Such control parameters ought to have a simple, easily understandable effect. Indeed, in \[sec:results\], we found a smooth variation in the micelle shape features, and we were able to give plausible physical explanations of the observed behavior involving the volume enclosed by the micelle, the surface tension resulting from extension of the micelle perimeter, and the bending energy associated with the curvature of the micelle perimeter.
The explanations were not explicitly verified because it is difficult to define and independently measure the bending energies and surface tensions of the fluctuating, asymmetric micelles considered in this work. However, we found that the shape dependence was significantly nonlinear and produced nonmonotonicities in some cases, and therefore the dependence cannot be quantitatively explained, for the purposes of shape design, by simple physical arguments or a naive linear model. Therefore, if an accurate model of the relationship between the control parameters and the micelle shape is desired to facilitate shape design, something more must be done. One approach is to create a Hamiltonian whose degrees of freedom are the junction points and which contains terms for the bulk compression of the micelle interior and the stretching and bending of the micelle surface. It would be necessary to perform a series of simulations to determine a mapping between the micelle composition and the parameters of the simplified Hamiltonian. Once this mapping is determined, the simplified model, having far fewer degrees of freedom, would give a much simpler and less computationally intensive way of understanding how the micelle shape depends on the micelle composition. Alternatively, if one desires to design a single specified micelle shape, the required micelle composition could be found by some nonlinear optimization strategy, such as a genetic algorithm. Such machine learning algorithms have been applied to the design of material properties in a number of contexts [@le2015; @liu2017; @patra2016; @miskin2013]. So far, we have considered only micelles which exhibited the intended positioning of diblocks on the surface, which we call well formed, even though, as noted in \[sec:methods\], micelles resulting from our simulation often did not have this property. We now give a justification for considering this seemingly biased sample.
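To sketch how such a genetic algorithm could search composition space, the following uses a toy nonlinear surrogate for the composition-to-shape map (every functional form and number here is an invented placeholder; in practice each fitness evaluation would be a full simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def shape_features(p):
    """Toy nonlinear surrogate for the composition-to-shape map.
    This stands in for a full simulation and is NOT the paper's model."""
    ncore, nplus, nminus, rminus, rplus = p
    curv = -0.5 * (rminus - rplus) * nminus / (0.5 + ncore)
    fluc = 0.1 / (0.1 + ncore / (nplus + nminus))
    return np.array([curv, fluc])

def fitness(p, target):
    # Negative squared distance to the target shape features.
    return -np.sum((shape_features(p) - target) ** 2)

def genetic_search(target, pop_size=60, generations=200, sigma=0.05):
    # Random initial population of composition vectors (normalized units).
    pop = rng.uniform(0.2, 2.0, size=(pop_size, 5))
    for _ in range(generations):
        scores = np.array([fitness(p, target) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]            # selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0.0, sigma, children.shape)  # mutation
        pop = np.vstack([parents, np.clip(children, 0.05, 3.0)])
    return max(pop, key=lambda p: fitness(p, target))

best = genetic_search(target=np.array([-0.3, 0.05]))
print(shape_features(best))   # close to the requested (-0.3, 0.05)
```

The expensive step in a real application is `shape_features`, so practical use would hinge on the cost of each simulation and on standard GA refinements (crossover, adaptive mutation) omitted from this minimal sketch.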
Our justification is based on the fact that the well-formed micelles are metastable, meaning that the micelles have a significant chance of surviving the length of a simulation without forming a defect in the diblock arrangement, but there is a finite probability of forming a defect from which the micelle would never recover. With this in mind, it is natural to investigate the thermodynamic statistical properties of the well-formed micelles, and the appropriate statistical weighting of each micelle configuration in this subensemble is found by simply giving equal weight to each well-formed micelle while discarding the others. However, if this statistical analysis is to be meaningful for practical applications, something must be done to enforce that the micelles be well formed. We view this problem as separate from the question of how the micelle composition affects the shape of the well-formed micelles, but we believe there are a few promising approaches to solving it. One approach is to change the interaction parameters of the system. The choice of parameters used in this work was motivated by the desire to have a lower energetic barrier for bead rearrangements, allowing shorter simulation times, but this has the drawback of facilitating diblock rearrangements on the micelle surface. Stronger interactions may increase the energetic penalty for micelle defects, greatly reducing their occurrence. Another approach is to make the two species of monomers composing the solvophobic-rich diblocks different from the two species composing the solvophilic-rich diblocks. Such a difference between the two types of diblocks could promote their segregation on the micelle surface, thereby enforcing their intended positioning. Beyond changing the interaction parameters of the system, a further approach is to alter the polymer architecture, with the idea that a different bond topology would better stabilize the well-formed micelles.
Whatever approach is taken to solve this problem, we do not expect it to significantly alter the shape dependences observed in this work, as these are a basic result of the polymer nature of the micelle. In practical applications, the design problem considered in this work must be solved in three dimensions. However, we have chosen to conduct two-dimensional simulations, as has often been done [@larson88; @miller92; @chan93; @huopaniemi06; @siepmann06]. We argue that since the physics affecting micelle shape (compressibility, surface tension, spontaneous surface curvature) is qualitatively unchanged, the dependence on the micelle composition of a similar three-dimensional shape, such as a dimpled sphere, should be similar. In general, one can imagine many more target micelle shapes beyond a dimpled sphere. In contrast to two-dimensional shapes, three-dimensional target shapes are described by two principal curvatures at each point on the surface. The diblock composition on the surface, however, specifies only a mean curvature at each point to lowest order [@helfrich73]. Therefore we expect that the profile of diblock compositions over the micelle surface is not in general sufficient to completely control the micelle shape in three dimensions, so that full shape control would be harder or perhaps impossible in three dimensions. However, some shape control must be possible, and studying the extent of this shape control is an interesting direction for future research.

Conclusion {#sec:conclusion}
==========

We have described a micelle shape-design scheme and shown its capacity to control the average shape and fluctuations of a micelle in thermal equilibrium. We began with a reference micelle composition producing a moderately dimpled micelle, and varied, one by one, several aspects of the reference micelle composition to examine the effect on the thermal micelle shape.
We studied two features of the micelle shape in particular, and found that the dependences were somewhat smooth, but significantly nonlinear and sometimes nonmonotonic. Additionally, simulations were conducted where two aspects of the micelle composition were changed simultaneously, with the result that the combined effect of changing two parameters could not easily be deduced by looking at the individual effects on the micelle. Plausible rationales were given to explain these results. Even though the relationship between the micelle composition and shape may not satisfactorily be characterized by a naive linear relationship, we believe more sophisticated methods to characterize the relationship are nonetheless possible, and we proposed examples. We expect the principles that govern our simple two-dimensional model to extend to three dimensions, and therefore that our results provide evidence that a similar design scheme should work to produce three dimensional shape-designed micelles. The author thanks Ishanu Chattopadhyay for discussing the applicability of machine learning algorithms to this work. The author also thanks T. A. Witten for reviewing a manuscript of this paper. This work is part of a Ph.D. thesis under the supervision of T. A. Witten at the University of Chicago. This work was completed in part with resources provided by the University of Chicago Research Computing Center. This work was principally supported by the University of Chicago Materials Research Science and Engineering Center, which is funded by the National Science Foundation under award No. 1420709.
--- author: - 'Longbiao Mao, Yan Yan, Jing-Hao Xue, and Hanzi Wang' title: 'Deep Multi-task Multi-label CNN for Effective Facial Attribute Classification' ---

Introduction {#sec:introduction}
============

During the past few years, Facial Attribute Classification (FAC) has attracted significant attention in computer vision and pattern recognition, due to its widespread applications, including image retrieval [@b1; @b2], face recognition [@b3; @b4], person re-identification [@b5; @b6], micro-expression recognition [@b7], image generation [@b8] and recommendation systems [@b9; @b10]. Given a facial image, the task of FAC is to predict multiple facial attributes, such as gender, attractiveness and smiling (some facial attributes are shown in Fig. \[fig:sample\]). Although FAC is only an image-level classification task, it is not trivial, mainly because of the variability of facial appearances caused by significant changes in viewpoint, illumination, etc. Recently, due to the outstanding performance of the Convolutional Neural Network (CNN), most state-of-the-art FAC methods take advantage of CNN to classify facial attributes. Roughly speaking, these methods can be categorized as follows: (1) single-label learning based FAC methods [@b11; @b12; @b13] and (2) multi-label learning based FAC methods [@b14; @b15; @b16; @b17; @b18]. The single-label learning based FAC methods usually extract the CNN features of facial images and then classify facial attributes with a Support Vector Machine (SVM) classifier. These methods, however, predict each attribute individually, thus ignoring the correlations between attributes. In contrast, multi-label learning based FAC methods, which can predict multiple attributes simultaneously, extract the shared features from the lower layers of CNN and learn attribute-specific classifiers on the upper layers of CNN.
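The shared-feature multi-label setup just described can be sketched in a framework-free way (the dimensions below, 40 attributes and a 128-dimensional feature, are illustrative placeholders, not the paper's configuration):

```python
import numpy as np

# Minimal numpy sketch of multi-label FAC: one shared feature vector
# (standing in for the lower CNN layers) feeds 40 per-attribute sigmoid
# classifiers (standing in for the attribute-specific upper layers).
rng = np.random.default_rng(0)

n_attrs, feat_dim = 40, 128
shared = rng.standard_normal(feat_dim)            # shared CNN feature (stub)
W = rng.standard_normal((n_attrs, feat_dim)) * 0.01
b = np.zeros(n_attrs)

logits = W @ shared + b
probs = 1.0 / (1.0 + np.exp(-logits))             # one sigmoid per attribute

labels = rng.integers(0, 2, n_attrs)              # 0/1 ground truth per attribute
eps = 1e-12
bce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
loss = bce.mean()                                 # joint multi-label loss

print(probs.shape, float(loss))
```

Because all 40 binary losses backpropagate through the same shared feature, correlations between attributes are implicitly exploited, which is the advantage over training 40 independent classifiers.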
![Examples of different facial attributes. (a) Objective attributes: Eyeglasses, Bangs and Wearing Hat; (b) Subjective attributes: Smiling, Pointy Nose and Big Lips.[]{data-label="fig:sample"}](fig1.pdf){width="8cm" height="6.5cm"}

Typically, the above methods first perform face detection/alignment and then predict facial attributes. In other words, these closely-related tasks are trained separately. Therefore, the intrinsic relationships between these tasks are not fully and effectively exploited. Moreover, some multi-label learning based FAC methods (such as [@b19; @b20]) are developed to simultaneously predict facial attributes by using a single CNN. These methods treat the diverse attributes equally (using the same network architecture for all attributes), ignoring the different learning complexities of these attributes (for example, learning to predict the “Wearing-Eyeglasses" attribute may be much easier than identifying the “Pointy Nose" attribute, as shown in Fig. \[fig:sample\]). In particular, some attributes (e.g., “Big Lips", “Oval Face") are very subjective, and they are more difficult to recognize and may sometimes even confuse humans. Even worse, the training set often suffers from the problem of imbalanced labels for some facial attributes (e.g., the “Bald" attribute has very few positive samples). Re-balancing multi-label data is not a trivial task. To alleviate the above problems, we propose a novel Deep Multi-task Multi-label CNN method (DMM-CNN) for effective FAC. Two closely-related tasks (i.e., Facial Landmark Detection (FLD) and FAC) are jointly optimized to boost the performance of FAC based on multi-task learning. As a result, by exploiting the intrinsic relationship between the two tasks, the performance of FAC is effectively improved.
Considering the diverse learning complexities of facial attributes, we divide the facial attributes into two groups: objective attributes and subjective attributes, and further employ two different network architectures to extract discriminative features for these two groups, respectively. We also develop a novel dynamic weighting scheme to dynamically assign the loss weights to all facial attributes during training. Furthermore, in order to alleviate the problem of class imbalance in multi-label training, we develop an adaptive thresholding strategy to effectively predict facial attributes. Similar to our previous MCFA method [@b18], the proposed DMM-CNN method also adopts the framework of multi-task learning. However, there are several significant differences between MCFA and DMM-CNN. Firstly, MCFA focuses on solving the problem of extracting semantic attribute information by using a multi-scale CNN, while DMM-CNN aims to overcome the problem of diverse learning complexities of facial attributes (by designing different network architectures for objective and subjective attributes, and proposing a dynamic weighting scheme). Secondly, MCFA uses a fixed decision threshold for all attributes, while DMM-CNN leverages an adaptive thresholding strategy to alleviate the problem of class imbalance. Thirdly, MCFA jointly learns the tasks of face detection, Facial Landmark Detection (FLD) and FAC, while DMM-CNN simultaneously performs FLD and FAC. The reason why face detection is not adopted in DMM-CNN is that using the auxiliary task of face detection only slightly improves the performance of FAC, but significantly increases the computational burden. Moreover, FLD explicitly plays the role of face localization. Finally, the FLD module in MCFA only gives five off-the-shelf facial landmarks (left and right eyes, the mouth corners, and the nose tip).
In contrast, the FLD module in DMM-CNN outputs 72 facial landmarks, which can provide more auxiliary information beneficial for FAC. The main contributions of this paper are summarized as follows:

- We divide the diverse facial attributes into objective attributes and subjective attributes according to their different learning complexities, where two different levels of Spatial Pyramid Pooling (SPP) layers (i.e., a 1-level SPP layer and a 3-level SPP layer) are used to extract features. To the best of our knowledge, this paper is the first work to learn multiple deep neural networks to enhance the performance of FAC by considering the different learning complexities of facial attributes (objective and subjective attributes).

- A novel dynamic weighting scheme, which capitalizes on the rate of validation loss change obtained from the whole validation set, is proposed to automatically assign weights to facial attributes. In this way, the training process concentrates on classifying the more difficult facial attributes.

- We develop an adaptive thresholding strategy to accurately classify facial attributes for multi-label learning. Such a strategy takes into account the imbalanced data distribution of facial attributes. Thus, the problem of class imbalance for some attributes in FAC is effectively alleviated at the decision level.

The remainder of this paper is organized as follows. In Section 2, we review related work. In Section 3, we introduce the details of the proposed method. In Section 4, we evaluate the performance of the proposed method and compare it with several state-of-the-art methods on the challenging CelebA and LFWA datasets. Finally, the conclusion is drawn in Section 5.

Related Work
============

Over the past few decades, great progress has been made on FAC. Traditional FAC methods [@b3; @b21] rely on hand-crafted features to perform attribute classification.
With the development of deep learning, current state-of-the-art FAC methods employ CNN models to predict the attributes and have shown remarkable improvements in performance. Our proposed method is closely related to CNN-based multi-task learning, multi-label learning and attribute grouping. In this section, we briefly introduce related work based on CNN.

Multi-task Learning
-------------------

Multi-task Learning (MTL) [@b22] is an effective learning paradigm to improve the performance of a target task with the help of some related auxiliary tasks. MTL has proven to be effective in various computer vision tasks [@b23; @b24; @b25]. The CNN model can be naturally used for MTL, where all the tasks share and learn common feature representations in the deep layers. For example, Zhang et al. [@b26] perform FLD together with several related tasks, such as gender classification and pose estimation. Tan et al. [@tan] jointly learn multiple attention mechanisms (including parsing attention, label attention and spatial attention) in an MTL manner for pedestrian attribute analysis. Appropriately assigning weights to different loss functions plays an important role in multi-task deep learning. Kendall et al. [@kendall] propose to weigh loss functions based on the homoscedastic uncertainty of each task, where the weights are automatically learned from the data. Chen et al. [@chen] develop a gradient normalization (GradNorm) method which performs multi-task deep learning by dynamically tuning gradient magnitudes. The loss weights are assigned according to the training rates of different tasks. Recently, Liu et al. [@Liu] develop a multi-task attention network, which automatically learns both task-shared and task-specific features in an end-to-end manner, for MTL. They develop a novel weighting scheme, Dynamic Weight Average (DWA), which learns the weights based on the rate of loss changes for each task.
Multi-label Learning
--------------------

On one hand, traditional CNN-based FAC methods mainly rely on single-label learning to predict facial attributes. For example, Liu et al. [@b27] propose to cascade two Localization Networks (LNets) and an Attribute Network (ANet) to localize face regions and extract features, respectively. They use the features extracted from ANet to train 40 SVMs to classify 40 attributes. The single-label learning based FAC methods consider the classification of each attribute as a single and independent problem, thereby ignoring the correlations among attributes. Moreover, these methods are usually time-consuming and cost-prohibitive. On the other hand, multi-label learning based FAC methods predict multiple facial attributes simultaneously in an end-to-end trained network. Because each face image is naturally associated with multiple attribute labels, multi-label learning is well suited for FAC. For example, Ehrlich et al. [@b28] use a Restricted Boltzmann Machine (RBM) based model for attribute classification. Rudd et al. [@b19] introduce a Mixed Objective Optimization Network (MOON) to address the multi-label imbalance problem. Huang et al. [@Huang] propose a greedy neural architecture search method to automatically discover the optimal tree-like network architecture, which can jointly predict multiple attributes. Existing multi-label learning based FAC methods, which use the same network architecture for each attribute, usually learn the features of facial attributes on the upper layers of a CNN. However, different facial attributes have different learning complexities. Therefore, it is more attractive to develop a new CNN model that considers the diverse learning complexities of attributes rather than treating the attributes equally during the training stage.

Attribute Grouping
------------------

Facial attributes can be divided into several groups according to different criteria. For example, Hand et al.
[@b20] divide the facial attributes into 9 groups according to the attribute location, and explicitly learn the relationships among attributes from similar locations in a face image. Han et al. [@b29] group the face attributes into ordinal and nominal attributes, and holistic and local attributes, in terms of data type and semantic meaning. Accordingly, four types of sub-networks (having the same network architecture) corresponding to the holistic-nominal, holistic-ordinal, local-nominal and local-ordinal attributes are defined, where a different loss function is used for each sub-network for FAC. Cao et al. [@Cao] split the facial attributes into four attribute groups (upper, middle, lower, and whole image) according to the corresponding locations, and design four task-specific sub-networks (corresponding to the four attribute groups) and one shared sub-network for FAC. In this paper, different from the above attribute grouping methods, we propose to divide the attributes into two groups, objective attributes and subjective attributes, based on their different learning complexities. Accordingly, we design two different network architectures, which are able to extract different levels of features beneficial to classifying objective and subjective attributes, respectively.

Methodology
===========

In this section, we introduce in detail the proposed DMM-CNN method, which takes advantage of multi-task learning and multi-label learning, for effective FAC.

Overview
--------

![image](framework7.pdf){width="5in"}

The overview of our proposed method is shown in Fig. \[fig:framework\]. In this paper, to extract the shared features, we adopt ResNet50 [@b30] and remove the final global average pooling layer. Based on the shared features, we further perform multi-task multi-label learning, where task-specific features for two related tasks (FAC and FLD) are extracted.
Specifically, for the task of FAC, in order to deal with the diverse learning complexities of facial attributes, we divide the facial attributes into two groups (objective attributes and subjective attributes) and design two different network architectures for these two groups (Section \[sec:group\]). In particular, two different spatial pyramid pooling (SPP) layers, which extract different levels of semantic information, are respectively exploited for objective and subjective attributes in the network (Section \[sec:spp\]). For the task of FLD, 72 facial landmark points are detected (Section \[sec:fld\]). Hence, the whole network has three kinds of outputs (predicted outputs for objective attributes, subjective attributes and facial landmark regression). During the training stage (Section \[sec:train\]), the whole framework combines the losses from the two tasks into the final loss, where a novel dynamic weighting scheme is developed to automatically assign the loss weight to each facial attribute, such that the training concentrates on the classification of more difficult facial attributes. Furthermore, to alleviate the problem of class imbalance, an adaptive thresholding strategy is developed to accurately predict the label of each attribute.

CNN Architecture {#section}
----------------

In the following subsections, we respectively introduce the two groups of facial attributes, the SPP layer, and the task of facial landmark detection in detail.

### Objective Attributes and Subjective Attributes {#sec:group}

To effectively exploit the intrinsic relationship and heterogeneity of facial attributes, the attributes can be divided into different groups [@b20; @b29]. In this paper, we propose to classify facial attributes into two groups: objective attributes (such as “Bald”, “Male”) and subjective attributes (such as “Attractive”, “Big Nose”). See Fig. 3 for more details.
Our design is based on the observation that state-of-the-art FAC methods often show much lower accuracy for predicting subjective attributes than objective attributes (for example, it is usually easier to classify the “Wearing Hat” and “Wearing Eyeglasses” attributes than the “Smiling” and “Young” attributes). This is mainly because subjective attributes often appear in a subtle form, which makes it more difficult for the CNN model to learn the decision boundary. In other words, objective and subjective attributes show different learning complexities. Therefore, it is preferable to design different network architectures for these two groups of attributes. In our implementation, the branch for learning the objective attributes consists of a 1-level SPP layer (see Section 3.2.2) and two fully connected layers with the output features of $1,024$ and $22$ (the number of objective attributes) dimensions, respectively. The branch for learning the subjective attributes consists of a 3-level SPP layer and three fully connected layers with the output features of $2,048$, $1,024$ and $18$ (the number of subjective attributes) dimensions, respectively. In this manner, the network designed for the subjective attributes encodes higher-level semantic information (which is beneficial to predict the subjective attributes) than that designed for the objective attributes.

### The SPP Layer {#sec:spp}

The Spatial Pyramid Pooling (SPP) layer proposed by He et al. [@b31] is introduced to deal with the problem of the fixed image size requirement of a CNN. The SPP layer pools the features on top of the last convolutional layer and is able to generate fixed-length outputs regardless of the input size/scale. SPP aggregates the information from the deeper layers of the network, which effectively avoids the need for cropping or warping of the input image.
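As a rough sketch (our own minimal NumPy implementation, not the authors' code), an $n$-level SPP layer max-pools the $C \times H \times W$ feature map over $l \times l$ grids for $l = 1, \dots, n$ and concatenates the results into one fixed-length vector, regardless of $H$ and $W$:

```python
import numpy as np

def spp(feature_map, levels):
    """Spatial pyramid pooling: for each level n, max-pool a (C, H, W)
    feature map over an n x n grid, then concatenate all pooled vectors."""
    C, H, W = feature_map.shape
    pooled = []
    for n in levels:
        # integer grid boundaries, so the n x n blocks tile the map exactly
        hs = [i * H // n for i in range(n + 1)]
        ws = [j * W // n for j in range(n + 1)]
        for i in range(n):
            for j in range(n):
                block = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(block.max(axis=(1, 2)))  # (C,) max per channel
    return np.concatenate(pooled)

x = np.random.rand(2048, 7, 7)          # e.g., ResNet50 output without global pooling
print(spp(x, levels=[1]).shape)         # 1-level SPP:  (2048,)
print(spp(x, levels=[1, 2, 3]).shape)   # 3-level SPP: (28672,)
```

With a $2{,}048$-channel feature map, the 1-level output is $2{,}048$-dimensional and the 3-level output is $2{,}048 \times (1 + 4 + 9) = 28{,}672$-dimensional, matching the branch sizes stated in the text.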
In this paper, we use the $1$-level SPP layer to extract features for objective attributes, and the $3$-level SPP layer to extract features for subjective attributes (an $n$-level SPP layer pools the feature map over $l \times l$ grids for $l = 1, \dots, n$, performing the max pooling operation in each block and concatenating the results). The sizes of the output feature maps for the $1$-level SPP layer and the $3$-level SPP layer are $2,048\times1$ and $28,672\times1$, respectively. Therefore, we can input face images of any size to the networks by taking advantage of the SPP layers. As mentioned previously, the high-level semantic features are exploited to predict the subjective attributes, while the low-level appearance features are used to classify the objective attributes. The different levels of features are advantageous for classifying the two groups of attributes.

### Facial Landmark Detection (FLD) {#sec:fld}

In this paper, two different but related tasks (i.e., FLD and FAC) are jointly trained by leveraging multi-task learning. Here, FAC is the target task while FLD is the auxiliary task. Under the paradigm of multi-task learning, the inherent dependencies between the target task and the auxiliary task are exploited to effectively improve the performance of FAC. The landmark information of facial images is beneficial to improve the accuracy of FAC. For instance, the landmarks around the mouth can provide auxiliary information to help predict the “smiling” attribute. Different from our previous work [@b18], which considers only 5 facial landmarks, we use the dlib library [^1] to obtain more facial feature points (72 facial landmarks in total) that outline the eyes, eyebrows, nose, mouth and facial boundary. Note that different facial attributes are usually related to different facial landmarks. Therefore, using more facial landmarks is beneficial to improve the performance of FAC.
The FLD branch takes a 2,048-dimensional feature vector obtained by the 1-level SPP layer as the input and consists of two fully connected layers with the output features of $1,024$ and $144$ dimensions, respectively.

Training {#sec:train}
--------

As mentioned previously, different facial attributes have different learning complexities. To deal with the diverse learning complexities of facial attributes, in addition to adopting different network architectures for objective and subjective attributes, we further propose a novel dynamic weighting scheme to automatically assign the loss weights to different attributes. Moreover, to alleviate the problem of class imbalance for multi-label training, an adaptive thresholding strategy is developed to predict the label of each attribute. In this paper, we use the mean square error (MSE) loss functions for simplicity in different tasks. 1\) Facial landmark detection (FLD): The MSE loss for FLD is given as $$\textit{L}_{FLD} = \frac{1}{N} \sum_{i=1}^{N}||\hat{\textit{\textbf{y}}}_{i}^{FLD} - \textit{\textbf{y}}_{i}^{FLD}||^{2}, \label{con:mse3}$$ where $N$ is the number of training images. $\hat{\textit{\textbf{y}}}_{i}^{FLD} \in R^{2T}$ denotes the outputs (i.e., coordinate vector) of the facial landmarks ($T$ is the number of facial landmarks, and we use 72 facial landmarks in this paper) obtained from the network. $\textit{\textbf{y}}_{i}^{FLD}\in R^{2T}$ represents the ground-truth coordinate vector. 2\) Facial attribute classification: The MSE loss for FAC is given as $$\textit{L}_{FAC}^{j} = \frac{1}{N} \sum_{i=1}^{N}(\hat{\textit{y}}_{i,j}^{FAC} - \textit{y}_{i,j}^{FAC})^{2}, \label{con:mse1}$$ where $\hat{\textit{y}}^{FAC}_{i,j}$ and $\textit{y}^{FAC}_{i,j}$ ($\in \{1,-1\}$) represent the predicted output and the label corresponding to the $j$-th attribute of the $i$-th training image, respectively.
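Under the stated notation, the two losses of Eqs. (\[con:mse3\]) and (\[con:mse1\]) are plain mean squared errors; a minimal NumPy sketch (function and variable names are ours) is:

```python
import numpy as np

def fld_loss(pred_landmarks, gt_landmarks):
    """Eq. (con:mse3): mean over N images of the squared L2 distance between
    predicted and ground-truth landmark coordinate vectors (each in R^{2T})."""
    diff = pred_landmarks - gt_landmarks            # (N, 2T)
    return np.mean(np.sum(diff ** 2, axis=1))

def fac_loss_per_attribute(pred, labels):
    """Eq. (con:mse1): per-attribute MSE between predicted outputs and the
    {+1, -1} labels; returns one loss L_FAC^j per attribute j."""
    return np.mean((pred - labels) ** 2, axis=0)    # (J,)

# toy check: N = 4 images, T = 72 landmarks, J = 40 attributes
N, T, J = 4, 72, 40
pred_lm, gt_lm = np.zeros((N, 2 * T)), np.ones((N, 2 * T))
print(fld_loss(pred_lm, gt_lm))                     # 144.0 (= 2T * 1^2)
print(fac_loss_per_attribute(np.ones((N, J)), np.ones((N, J))).shape)
```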
3\) The joint loss: The joint loss consists of the losses for FLD and FAC, which can be written as $$L = \sum_{j=1}^{J} {{\lambda}_{t}^j}{L}_{FAC}^{j}+ \beta \textit{L}_{FLD}, \label{con:jointloss}$$ where $J$ is the total number of facial attributes. $\boldsymbol{\lambda}_t = [\lambda^1_t, \lambda^2_t, \cdots, \lambda^J_t]^T$ represents the weight vector corresponding to the $J$ facial attributes during the $t$-th iteration. $\beta$ is the regularization parameter (we empirically set $\beta$ to 0.5). 4\) Dynamic weighting scheme: In this paper, we propose a dynamic weighting scheme to automatically assign weights to all facial attributes. The loss weights are dynamically assigned according to the validation loss trend [@b27]. Specifically, the weights are defined as $${\lambda}_t^{j} = \left| \frac{{{L}_{FAC}^{j,VAL}}(t) -{{L}_{FAC}^{j,VAL}}(t-1) }{{{L}_{FAC}^{j,VAL}}(t-1)} \right|, \label{con:lambda}$$ where ${{L}_{FAC}^{j,VAL}}(t)$ is the validation loss (computed according to Eq. (\[con:mse1\]) for each attribute on the validation set) during the $t$-th iteration of the training. In this way, the weight corresponding to a facial attribute is assigned a low value if its validation loss does not change much, and a high value if its validation loss significantly drops. During the initial training process, the easily-classified attributes are assigned large weights so that their corresponding MSE losses can be quickly reduced. As the iteration proceeds, the MSE losses for the hardly-classified attributes become relatively larger and drop slowly, while those for the easily-classified ones become smaller. Therefore, in the later stage of the training process, the network focuses on classifying the hardly-classified attributes (note that the loss for each attribute is composed of the multiplication of the weight and its corresponding MSE loss). Note that weighting schemes are also developed in [@Liu] and [@b32].
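A minimal sketch of the dynamic weighting scheme of Eq. (\[con:lambda\]) and the joint loss of Eq. (\[con:jointloss\]), assuming the per-attribute validation losses have already been computed (names are ours):

```python
import numpy as np

def dynamic_weights(val_loss_t, val_loss_prev):
    """Eq. (con:lambda): weight each attribute by the absolute relative
    change of its validation loss between two consecutive updates."""
    return np.abs((val_loss_t - val_loss_prev) / val_loss_prev)

def joint_loss(fac_losses, fld_loss_value, weights, beta=0.5):
    """Eq. (con:jointloss): weighted sum of per-attribute FAC losses
    plus beta times the FLD loss."""
    return float(np.dot(weights, fac_losses) + beta * fld_loss_value)

# two attributes: the first one's validation loss dropped, the second stalled
prev = np.array([0.8, 0.5])
cur = np.array([0.4, 0.5])
w = dynamic_weights(cur, prev)
print(w)  # the attribute whose loss dropped gets the larger weight
print(joint_loss(np.array([1.0, 2.0]), 4.0, w))
```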
However, the differences between the proposed dynamic weighting scheme and those in [@Liu; @b32] are significant. In [@Liu], the weights are computed based on the rate of training loss changes. In [@b32], the weights are computed according to the validation loss and the mean validation loss trend. Note that the validation loss itself may not be appropriate for determining the weight. In contrast, the proposed dynamic weighting scheme computes the weights only based on the validation loss trend. Moreover, the weights in [@Liu] are obtained according to the average training loss (in the training set) in each epoch over several iterations. Different from [@Liu], the weighting scheme in [@b32] and our proposed one take advantage of the validation set, which is beneficial to improve the generalization ability of the learned model (since the validation set is not directly used to compute gradients during the back-propagation process). In [@b32], the validation loss is computed on a small batch (containing only 10 validation images) during each iteration, while it is computed on the whole validation set every $P$ iterations in our method. Therefore, the proposed dynamic weighting scheme shows a more stable loss reduction. 5\) Adaptive thresholding strategy: We predict the label of the $j$-th facial attribute $\hat{l}_j$ according to the final output of the network: $$\hat{l}_j=\left\{ \begin{aligned} 1 & , & output > \tau_j \\ -1 & , & output \leq \tau_j \end{aligned} \right., \label{con:score}$$ where $\tau_j$ is the threshold parameter. If the predicted output is larger than the threshold $\tau_j$, a positive label is assigned. Existing FAC methods usually set the threshold $\tau_j$ to 0. However, due to the problem of class imbalance (i.e., the number of samples for one class is significantly larger than that for the other class for one attribute), using a fixed threshold is not an optimal solution, especially for some highly imbalanced facial attributes.
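Combining the prediction rule of Eq. (\[con:score\]) with the adaptive threshold update introduced next in Eq. (\[con:t\]), a minimal NumPy sketch (names, shapes and toy values are ours) looks like:

```python
import numpy as np

def predict_labels(outputs, tau):
    """Eq. (con:score): assign +1 if the network output exceeds the
    per-attribute threshold tau_j, and -1 otherwise."""
    return np.where(outputs > tau, 1, -1)

def update_thresholds(tau, labels_pred, labels_true, epoch, gamma=0.01):
    """Eq. (con:t): shift each threshold by
    gamma * epoch * (#false positives - #false negatives) / V."""
    V = labels_true.shape[0]
    fp = np.sum((labels_pred == 1) & (labels_true == -1), axis=0)
    fn = np.sum((labels_pred == -1) & (labels_true == 1), axis=0)
    return tau + gamma * epoch * (fp - fn) / V

# toy validation set: V = 4 samples, J = 2 attributes
outputs = np.array([[0.2, -0.3], [0.6, 0.1], [-0.5, 0.4], [0.1, -0.9]])
tau = np.zeros(2)
pred = predict_labels(outputs, tau)
true = np.array([[1, -1], [1, 1], [-1, 1], [-1, -1]])
tau = update_thresholds(tau, pred, true, epoch=1)
print(tau)  # the threshold of the attribute with a false positive increases
```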
In this paper, we introduce an adaptive thresholding strategy, which adaptively updates the threshold as follows: $${\boldsymbol{\tau}_{t} = \boldsymbol{\tau}_{t-1} + \gamma l(\textbf{N}^{FP}_{t}-\textbf{N}^{FN}_{t})/V}, \label{con:t}$$ where $\boldsymbol{\tau}_{t} \in R^{J}$ is the threshold for the $t$-th iteration. $V$ is the number of samples in the validation set. $l$ is the current epoch. $\textbf{N}^{FP}_{t}\in R^{J}$ ($\textbf{N}^{FN}_{t}\in R^{J}$) represents the number of false positives (false negatives) in the validation set for the $t$-th iteration. The larger the value of $\textbf{N}^{FP}_{t}$ is (or the smaller the value of $\textbf{N}^{FN}_{t}$ is), the higher the value of the threshold should be. Hence, the difference between $\textbf{N}^{FP}_{t}$ and $\textbf{N}^{FN}_{t}$ can be used to adjust the threshold. We also consider the current epoch in Eq. (\[con:t\]), since more attention should be paid to false predictions as the training epoch increases (the threshold is adapted to a larger value). $\gamma$ is a fixed parameter (we experimentally set it to $0.01$ in this paper). The training stage of the proposed DMM-CNN method is summarized in Algorithm 1.

Algorithm 1: Training stage of DMM-CNN.

Input: training data and validation data; initialized parameters $\boldsymbol{\theta}$ of the CNN; the maximum number of iterations $M$; the updating interval $P$.
Output: the model parameters $\boldsymbol{\theta}$ of the trained CNN model.

1. Initialize $loop=0$, $t=1$;
2. While $loop < M$:
3. If $loop$ is a multiple of $P$: calculate the validation loss of facial attributes according to Eq. (\[con:mse1\]); update $\boldsymbol{\tau}_t$ according to Eq. (\[con:t\]); update $\boldsymbol{\lambda}_t$ according to Eq. (\[con:lambda\]); $t = t + 1$;
4. Calculate the joint loss $L$ according to Eq. (\[con:jointloss\]);
5. Update the parameters $\boldsymbol{\theta}$ using the stochastic gradient descent technique;
6. $loop = loop + 1$.

Experiments
===========

In this section, we first introduce two public FAC datasets used for evaluation.
Then, we perform an ablation study to discuss the influence of each component of the proposed DMM-CNN method. Finally, we compare the proposed DMM-CNN method with several state-of-the-art FAC methods.

Datasets and Parameter Settings
-------------------------------

CelebA [@b33] is a large-scale face dataset, which provides labeled bounding boxes and the annotations of 5 landmarks and 40 facial attributes. It contains 162,770 images for training, 19,867 images for validation and 19,962 images for testing. The images in CelebA cover large pose variations and background clutter. LFWA [@b34] is another challenging face dataset that contains 13,143 images with 73 binary facial attribute annotations. We select the same 40 attributes from LFWA as CelebA. For LFWA, we fine-tune the model trained on CelebA and use both the original and the deep funneled images of LFWA as the training set to prevent over-fitting. As a result, 13,144 images are used for training and 6,571 images for testing on LFWA. Since LFWA does not provide a validation set, we directly update the dynamic weights and apply the adaptive thresholding strategy on the training set. The proposed method is implemented based on the open source deep learning framework PyTorch, where one NVIDIA TITAN X GPU is used to train the model for 15 epochs with a batch size of 64. The base learning rate is set to 0.001 and we multiply the learning rate by 0.1 when the validation loss stops decreasing. The model size is about 360 MB.

Ablation Study
--------------

In this subsection, we give an ablation study to evaluate the effectiveness of different components of the proposed DMM-CNN on the CelebA and LFWA datasets. ![image](1.pdf){height="49mm" width="1\linewidth"} ![image](2.pdf){height="49mm" width="1\linewidth"} ![image](1lfw.pdf){height="49mm" width="1\linewidth"} ![image](2lfw.pdf){height="49mm" width="1\linewidth"} \[tab:dividing\] We evaluate several variants of the proposed DMM-CNN method.
Specifically, Baseline represents that we only use ResNet50 (with 40 output units) to extract features and classify the attributes. DMM-FAC represents that we only perform the single task of FAC without using the auxiliary task of FLD. DMM-EQ-FIX represents that we use equal loss weights (i.e., 1.0) for all the attributes without relying on the proposed dynamic weighting scheme, and the fixed threshold (i.e., 0.0) to predict the label of each attribute instead of using the adaptive threshold. DMM-EQ-AT represents that we use equal loss weights for all the attributes and the proposed adaptive thresholding strategy. DMM-DW-FIX represents that we use the dynamic weighting scheme and the fixed threshold. DMM-SPP represents that we use the 3-level SPP layer and three fully connected layers to predict all the attributes (using the same network architecture as the subjective attributes branch) without attribute grouping. DMM-CNN is the proposed method. The details of all the competing variants are listed in Table 1. The performance (i.e., the accuracy rate) obtained by the different variants is shown in Fig. \[fig:ablation\]. We have the following observations:

- Compared with the Baseline, all the other variants achieve better performance (especially on the “Arched Eyebrows”, “Big Lips” and “Narrow Eyes” attributes), which demonstrates the importance of using task-specific features for FAC.

- By comparing DMM-FAC with DMM-CNN, we can see that multi-task learning is beneficial to improve the performance of FAC by exploiting the intrinsic relationship between FAC and FLD.

- DMM-DW-FIX achieves higher classification accuracy compared with DMM-EQ-FIX in terms of average classification rate, which shows the superiority of using the dynamic weighting scheme.

- The average classification rate obtained by DMM-EQ-AT is higher than that obtained by DMM-EQ-FIX, which shows the effectiveness of using the adaptive thresholding strategy.
- Compared with the Baseline, the improvements of DMM-DW-FIX and DMM-EQ-AT on LFWA are more evident than those on CelebA. Specifically, DMM-DW-FIX achieves a 5.52% (0.91%) improvement in accuracy, while DMM-EQ-AT obtains a 3.98% (0.95%) improvement in accuracy on LFWA (CelebA). The improvements on CelebA are marginal. Such a phenomenon is also observed in some papers [@He2018; @Huang; @Lu2017]. This may be because the discrepancy between the distributions of the training set and the test set of CelebA is large, and there exists some noise in the CelebA labels, especially for the subjective attributes [@Hand2018], leading to the difficulty of achieving significant improvements on the test set of CelebA.

- Compared with DMM-SPP, DMM-CNN achieves better accuracy (i.e., 0.30% and 1.81% improvements on CelebA and LFWA, respectively). Therefore, designing different network architectures, which take into account the diverse learning complexities of facial attributes, is beneficial to improve the performance of FAC.

- Among all the variants, DMM-CNN achieves the best accuracy, which can be attributed to the multi-task learning and multi-label learning framework that exploits the different learning complexities of facial attributes.

The loss weighting scheme plays a critical role in the performance of FAC. Therefore, we compare the performance of different weighting schemes.
Specifically, we evaluate the following four representative weighting schemes: 1) the Uniform Weighting (UW) scheme, where all the weights corresponding to different attributes are set to 1.0; 2) the Dynamic Weight Average (DWA) scheme proposed in [@Liu], where the rate of loss change in the training set is used to automatically learn the weights; 3) the Adaptive Weighting (AW) scheme proposed in [@b32], where both the validation loss and the mean validation loss trend in a batch are used to obtain the weights; 4) the proposed dynamic weighting scheme, which takes advantage of the rate of validation loss changes in the whole validation set. Table 2 gives the experimental results of the different weighting schemes on the CelebA and LFWA datasets. We can see that our method with the proposed dynamic weighting scheme achieves the best performance compared with the other weighting schemes, which validates the effectiveness of the proposed scheme. \[tab:dynamicweights\] In Fig. \[fig:validationloss\], we further visualize the changes of the mean validation loss and two representative attribute losses (i.e., for the objective attribute “MouthOpen” and the subjective attribute “Young”) on the validation set during the training stage. Here, the proposed dynamic weighting scheme and the fixed weighting scheme (i.e., the weight is set to 1.0 for each attribute) are respectively employed. We can observe that the mean validation loss based on the dynamic weighting scheme decreases faster than that based on the fixed weighting scheme. ![Changes of the validation loss with the number of iterations using the proposed dynamic weighting scheme and the fixed weighting scheme during the training stage.
Here, Mean-FIX, MouthOpen-FIX and Young-FIX denote the mean validation loss and the two attribute losses under the fixed weighting scheme, while Mean-DW, MouthOpen-DW and Young-DW denote the corresponding losses under the dynamic weighting scheme.[]{data-label="fig:validationloss"}](validationloss.pdf){width="0.85\linewidth"} The training of the objective attribute (i.e., “MouthOpen”) converges much faster than that of the subjective attribute (i.e., “Young”). During the initial training stage, the loss of the “MouthOpen” attribute quickly drops and converges after about 15,000 iterations. In contrast, the loss of the “Young” attribute slowly drops and converges after about 30,000 iterations. As the training proceeds, the network focuses on classifying the difficult subjective attributes. In general, the loss using the dynamic weighting scheme drops further and faster than that using the fixed weighting scheme. This reveals that dynamic weights are of vital importance when optimizing multi-label learning tasks with different learning complexities. We visualize the changes of the dynamic weights and the adaptive thresholds during the training stage in Fig. \[fig:weights\] and Fig. \[fig:adaptivethresholding\], respectively. ![Curves of dynamic weights during the training stage.[]{data-label="fig:weights"}](weights){width="0.85\linewidth"} ![Curves of adaptive thresholds during the training stage.[]{data-label="fig:adaptivethresholding"}](adaptivethresholding2.pdf){width="0.8\linewidth"} Firstly, in Fig. \[fig:weights\], the curves of two dynamic weights corresponding to two representative facial attributes (i.e., “MouthOpen” and “Young”) during the training stage are given. We can observe that the changes of the dynamic weights corresponding to the two attributes are unstable. This is mainly because the proposed weighting scheme dynamically assigns the weight to each attribute according to the rate of the attribute loss changes (see Eq. (4)).
In other words, when the loss of an attribute significantly drops, a large weight will be assigned to this attribute (since the learning process of this attribute has not converged). Therefore, the dynamic weights reflect the learning rates of different attributes, which may vary significantly. However, note that the losses of these two attributes keep decreasing and converge stably (see Fig. 5). Secondly, in Fig. \[fig:adaptivethresholding\], the curves of adaptive thresholds corresponding to ten randomly-chosen facial attributes during the training stage are given. We can observe that the changes of the thresholds are stable. This is mainly due to the fact that the difference between the number of false positives and that of false negatives is used to adjust the threshold. As the iterations proceed, this difference becomes more stable. \[tab:CelebA\]

Comparison with State-of-the-art FAC Methods
--------------------------------------------

In this subsection, we compare the performance of the proposed DMM-CNN method with several state-of-the-art FAC methods, including (1) PANDA [@b11], which uses part-based models to extract features and SVMs as classifiers; (2) LNets+ANet [@b27], which cascades two localization networks and one attribute network, and uses one SVM classifier for each attribute; (3) MOON [@b19], a novel mixed objective optimization network which addresses the multi-label imbalance problem; (4) NSA (with the median rule) [@b14], which uses segment-based methods for FAC; (5) MCNN-AUX [@b20], which divides 40 attributes into nine groups according to attribute locations; (6) MCFA [@b18], our previous work which exploits the inherent dependencies between FAC and auxiliary tasks (face detection and FLD). Note that the accuracy obtained by MOON is not given on the LFWA dataset, since MOON does not report results on LFWA.
(7) GNAS [@Huang], which proposes an efficient greedy neural architecture search method to automatically learn the multi-attribute deep network architecture; (8) AW-CNN [@b30], which develops a novel adaptively weighted multi-task deep convolutional neural network to predict person attributes; (9) PS-MCNN-LC [@Cao], which introduces a partially shared multi-task network by exploiting both identity information and attribute relationships. Table \[tab:CelebA\] shows that DMM-CNN outperforms most of the competing methods and achieves a mean accuracy of 91.70% (86.56%) on CelebA (LFWA). Compared with PANDA and LNets+ANet, which use per-attribute SVM classifiers, DMM-CNN achieves superior performance by taking advantage of multi-label learning. Our DMM-CNN also achieves better performance than MCNN-AUX, NSA and MOON. It is worth pointing out that our method leverages only two groups of attributes (i.e., objective and subjective attributes) while MCNN-AUX employs nine groups of attributes. DMM-CNN is able to achieve higher accuracy than MCNN-AUX, even with fewer attribute groups. DMM-CNN outperforms MCFA by large margins, which validates the effectiveness of using more facial landmark information and our attribute grouping mechanism. The proposed DMM-CNN method achieves accuracy similar to that of MCNN-AUX on LFWA. DMM-CNN achieves the highest accuracy for 20 attributes among all the 40 attributes, where the performance on subjective attributes (such as “Pointy Nose”, “Smiling” and “Bushy Eyebrows”) is significantly improved compared with the competing methods. The proposed DMM-CNN method achieves better performance than GNAS in terms of average recognition rate on both the CelebA and LFWA datasets. This can be ascribed to the effectiveness of the proposed multi-task multi-label learning framework, where two different network architectures are respectively designed to extract features for classifying objective and subjective attributes.
Unlike DMM-CNN, which manually designs the network architectures, GNAS automatically discovers a tree-like deep neural network architecture for multi-attribute learning; as a result, the training process of GNAS is relatively time-consuming. Compared with AW-CNN, the proposed DMM-CNN method obtains similar accuracy. Different from AW-CNN, which predicts multiple person attributes using the framework of multi-task learning (identifying each attribute is viewed as a single task), the proposed method jointly learns two closely-related tasks (i.e., FLD and FAC). Note that the proposed DMM-CNN method achieves worse performance than PS-MCNN-LC on both the CelebA and LFWA datasets. PS-MCNN-LC designs a shared network (SNet) to learn the shared features for different groups of attributes, while adopting a task-specific network (TSNet) for each group of attributes from low-level layers to high-level layers. However, PS-MCNN-LC takes advantage of the Local Constraint Loss (LCLoss), which requires the face identity as an additional attribute. Moreover, the numbers of channels in the SNet and TSNets also need to be carefully chosen to ensure the final performance. On the whole, the comparison among all the competing methods demonstrates the effectiveness of the proposed method.

Conclusion
==========

In this paper, we propose a novel deep multi-task multi-label CNN method (DMM-CNN) for FAC. DMM-CNN effectively improves the performance of FAC by jointly performing the tasks of FAC and FLD. Based on the division into objective and subjective attributes, different network architectures and a novel dynamic weighting scheme are adopted to deal with the diverse learning complexities of facial attributes. For multi-label learning, an adaptive thresholding strategy is developed to alleviate the problem of class imbalance.
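As an illustration of these two mechanisms, the following minimal sketch (our own simplified reading, not the released implementation; the function names, the normalization, and the fixed step size are hypothetical) shows how per-attribute weights can track loss drops and how a decision threshold can be nudged by the false-positive/false-negative imbalance:

```python
def dynamic_weights(prev_losses, curr_losses, eps=1e-8):
    """Assign larger weights to attributes whose loss is still dropping
    sharply, i.e., whose learning has not yet converged (hypothetical sketch)."""
    drops = [max(p - c, 0.0) for p, c in zip(prev_losses, curr_losses)]
    total = sum(drops) + eps
    # Normalize so that the weights sum to the number of attributes.
    return [d / total * len(drops) for d in drops]

def update_threshold(threshold, n_false_pos, n_false_neg, step=0.01):
    """Nudge a per-attribute decision threshold so that the numbers of
    false positives and false negatives move toward balance."""
    if n_false_pos > n_false_neg:
        threshold += step   # too many positives predicted -> raise the bar
    elif n_false_neg > n_false_pos:
        threshold -= step   # too many negatives predicted -> lower the bar
    return min(max(threshold, 0.0), 1.0)
```

Under this reading, once the FP/FN difference stabilizes the threshold merely oscillates tightly around its equilibrium, which is consistent with the stable threshold curves observed during training.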
Experiments on the public CelebA and LFWA datasets have demonstrated that DMM-CNN achieves superior performance compared with several state-of-the-art FAC methods. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the National Key R&D Program of China under Grant 2017YFB1302400, by the National Natural Science Foundation of China under Grants 61571379, U1605252, 61872307, by the Natural Science Foundation of Fujian Province of China under Grants 2017J01127 and 2018J01576. [1]{} B. Siddiquie, R.S. Feris, and L.S. Davis, “Image ranking and retrieval based on multi-attribute queries,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2011, pp. 801-808. Z. Wu, Q. Ke, J. Sun, and H.-Y. Shum, “Scalable face image retrieval with identity-based quantization and multireference reranking,” *IEEE Trans. Pattern Anal. Mach. Intell.*, vol. 33, no. 10, pp. 1991-2001, 2011. N. Kumar, A.C. Berg, P.N. Belhumeur, and S.K. Nayar, “Attribute and simile classifiers for face verification,” in *Proc. IEEE Int. Conf. Comput. Vis.*, 2009, pp. 365-372. N. Kumar, A.C. Berg, P.N. Belhumeur, and S.K. Nayar, “Describable visual attributes for face verification and image search,” *IEEE Trans. Pattern Anal. Mach. Intell.*, vol. 33, no. 10, pp. 1962- 1977, 2011. S. Khamis, C.H. Kuo, V.K. Singh, V.D. Shet, and L.S. Davis, “Joint learning for attribute-consistent person re-identification,” in *Proc. Eur. Conf. Comput. Vis.*, 2014, pp. 134-146. L. Wu, C. Shen, and A.V.D. Hengel, “Deep linear discriminant analysis on fisher networks: a hybrid architecture for person re-identification,” *Pattern Recog.*, vol. 65, pp. 238-250, 2017. X.H. Huang, S.J. Wang, X. Liu, G. Zhao, X. Feng, and M. Pietikainen, “Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition,” *IEEE Trans. Affective Comput.*, vol. 10, no. 1, pp. 32-47, 2017. X.C. Yan, J.M. Yang, K. 
Sohn, and H.L. Lee, “Attribute2image: Conditional image generation from visual attributes,” in *Proc. Eur. Conf. Comput. Vis.*, 2016, pp. 776-791. G. Qi, C. Aggarwal, Q. Tian, H. Ji, and T.S. Huang, “Exploring context and content links in social media: A latent space method,” *IEEE Trans. Pattern Anal. Mach. Intell.*, vol. 34, no. 5, pp. 850-862, 2011. G. Qi, X. Hua, and H. Zhang, “Learning semantic distance from community-tagged media collection,” in *Proc. 17th ACM Int. Conf. Multimedia*, 2009, pp. 243-252. N. Zhang, M. Paluri, M. Ranzato, T. Darrell, and L. Bourdev, “Panda: Pose aligned networks for deep attribute modeling,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2014, pp. 1637-1644. Y. Zhong, J. Sullivan, and H. Li, “Leveraging mid-level deep representations for predicting face attributes in the wild,” in *Proc. IEEE Int. Conf. Image Process.*, 2016, pp. 3239-3243. S. Kang, D. Lee, and C.D. Yoo, “Face attribute classification using attribute aware correlation map and gated convolutional neural networks,” in *Proc. IEEE Int. Conf. Image Process.*, 2015, pp. 4922-4926. U. Mahbub, S. Sarkar, and R. Chellappa, “Segment-based methods for facial attribute detection from partial faces,” *IEEE Trans. Affective Comput.*, doi: 10.1109/TAFFC.2018.2820048, 2018. F. Wang, H. Han, T. Almaev, and S. Shan, “Deep multi-task learning for joint prediction of heterogeneous face attributes,” in *Proc. IEEE Conf. Autom. Face Gesture Recog.*, 2017, pp. 173-179. H. Guo, X. Fan, and S. Wang, “Human attribute recognition by refining attention heat map,” *Pattern Recog. Lett.*, vol. 94, pp. 38-45, 2017. M. Xu, F. Chen, L. Li, C. Shen, P. Lv, B. Zhou, and R. Ji, “Bio-Inspired deep attribute learning towards facial aesthetic prediction,” *IEEE Trans. Affective Comput.*, doi: 10.1109/TAFFC.2018.2868651, 2018. N. Zhuang, Y. Yan, S. Chen, and H. Wang, “Multi-task learning of cascaded CNN for facial attribute classification,” in *Proc. Int. Conf. Pattern Recog.*, 2018, pp.
2069-2074. E.M. Rudd, M. Günther, and T.E. Boult, “Moon: A mixed objective optimization network for the recognition of facial attributes,” in *Proc. Eur. Conf. Comput. Vis.*, 2016, pp. 19-35. E.M. Hand and R. Chellappa, “Attributes for improved attributes: A multi-task network for attribute classification,” in *Proc. Thirty-First AAAI Conf. Artif. Intell.*, 2017. N. Kumar, P. Belhumeur, and S. Nayar, “Facetracer: A search engine for large collections of images with faces,” in *Proc. Eur. Conf. Comput. Vis.*, 2008, pp. 340-353. R. Caruana, “Multitask learning,” *Mach. Learn.*, vol. 28, no. 1, pp. 41-75, 1997. T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, “Robust visual tracking via structured multi-task sparse learning,” *Int. J. Comput. Vision*, vol. 101, no. 2, pp. 367-383, 2013. Q. Zhou, G. Wang, K. Jia, and Q. Zhao, “Learning to share latent tasks for action recognition,” in *Proc. IEEE Int. Conf. Comput. Vis.*, 2013, pp. 2264-2271. J. Yim, H. Jung, B. Yoo, C. Choi, D. Park, and J. Kim, “Rotating your face using multi-task deep neural network,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2015, pp. 676-684. Z. Zhang, P. Luo, C.C. Loy, and X. Tang, “Facial landmark detection by deep multi-task learning,” in *Proc. Eur. Conf. Comput. Vis.*, 2014, pp. 94-108. Z. Tan, Y. Yang, J. Wan, H. Hang, G. Guo, and S.Z. Li, “Attention-based pedestrian attribute analysis,” *IEEE Trans. Image Process.*, vol. 28, no. 12, pp. 6126-6140, 2019. A. Kendall, Y. Gal, and R. Cipolla, “Multi-task learning using uncertainty to weigh losses for scene geometry and semantics,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2018, pp. 7482-7491. Z. Chen, V. Badrinarayanan, C.Y. Lee, and A. Rabinovich, “GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks,” in *Proc. Int. Conf. Mach. Learn.*, 2018, pp. 793-802. S. Liu, E. Johns, and A.J. Davison, “End-to-end multi-task learning with attention,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2019, pp. 1871-1880. Z.
Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in *Proc. IEEE Int. Conf. Comput. Vis.*, 2015, pp. 3730-3738. M. Ehrlich, T.J. Shields, T. Almaev, and M.R. Amer, “Facial attributes classification using multi-task representation learning,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2016, pp. 47-55. S. Huang, X. Li, Z.Q. Cheng, Z. Zhang, and A. Hauptmann, “GNAS: A greedy neural architecture search method for multi-attribute learning,” in *Proc. ACM Conf. Multimedia*, 2018, pp. 2049-2057. H. Han, A.K. Jain, F. Wang, S. Shan, and X. Chen, “Heterogeneous face attribute estimation: A deep multi-task learning approach,” *IEEE Trans. Pattern Anal. Mach. Intell.*, vol. 40, no. 11, pp. 2597-2609, 2018. J. Cao, Y. Li, and Z. Zhang, “Partially shared multi-task convolutional neural network with local constraint for face attribute learning,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2018, pp. 4290-4299. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2016, pp. 770-778. K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” *IEEE Trans. Pattern Anal. Mach. Intell.*, vol. 37, no. 9, pp. 1904-1916, 2015. K. He, Z. Wang, Y. Fu, R. Feng, Y.G. Jiang, and X. Xue, “Adaptively weighted multi-task deep network for person attribute classification,” in *Proc. ACM Conf. Multimedia*, 2017, pp. 1636-1644. Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in *Proc. Neural Inf. Process. Syst.*, 2014, pp. 1988-1996. G.B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” Technical Report, University of Massachusetts, Amherst, 2007. K. He, Y. Fu, W. Zhang, C. Wang, Y.G. Jiang, F. Huang, X.
Xue, “Harnessing synthesized abstraction images to improve facial attribute recognition,” in *Proc. Int. Joint Conf. Artif. Intell.*, 2018, pp. 733-740. Y. Lu, A. Kumar, S. Zhai, Y. Cheng, T. Javidi, and R. Feris, “Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification,” in *Proc. IEEE Conf. Comput. Vis. Pattern Recog.*, 2017, pp. 5334-5343. E.M. Hand, C. Castillo, and R. Chellappa, “Doing the best we can with what we have: multi-label balancing with selective learning for attribute prediction,” in *Proc. Thirty-Second AAAI Conf. Artif. Intell.*, 2018, pp. 6878-6885. [^1]: http://dlib.net/